OpenDroneMap – Transcription

There’s hardly a more natural transition imaginable than to Dakota talking about OpenDroneMap, so let’s go right into that. All right. So I’m Dakota. I’m a geospatial developer for the Cleveland Metroparks. Stephen Mather, who you probably know (he’s more active in this community), started OpenDroneMap, and the two of us have been building it for the past five-ish years. There’s something covering up. Sorry to interrupt. I think that works. That works for me if it works for you guys. So you can find both of us on Twitter, and I’ll explain what OpenDroneMap is. Thanks for adding another drone to the Cleveland Metroparks fleet; I’ll be using this as a prop. How do I get this to go next? I can’t find my cursor. There we go. Okay.

So what is OpenDroneMap? In a short sentence, it’s an open source toolkit for processing aerial drone imagery. What that means is, it’s modern photogrammetry, in the sense that it’s built for modern technology: drones and other low-altitude image collection, balloons, kites, what have you. It does fully automated image matching, digital surface modeling, and mosaicking, in preparation for the robot takeover. [ Laughter ]

So what are we looking at in the future in society? Are we looking at something like this, or like this? Maybe? Or like this? Maybe. We saw some talks about automated vehicles, so maybe it will look something like this. It’s clicking a little slow. But complete organized chaos, maybe, to some. This is already happening. [ Laughter ]

As crazy as it might sound. But this is sort of what people are thinking about in the future. What does this enable, technology-wise? It has enabled really high-tech sensors for automated cars, and that applies to drones as well. They can go on a flight like this, do some cool stuff, know where they are, have GPS and sensors, take photos, and build a sense of the world. So like I said, OpenDroneMap is an open source toolkit for processing the imagery that drones collect. And where does that start? With computer vision and photogrammetry; it’s a blend of the two. How do we get computers to navigate and understand the world? That depends on stereo vision, much like it does for us. We have two eyes, and we can see in 3D because we have two eyes. That’s one of the core concepts in OpenDroneMap for developing the 3D space from which we can create our maps.

You’ve already seen from Nate’s talk how drones can be used, and how drone mapping specifically can be used, so I’m not going to go over it too much: ecology, vegetation mapping and species monitoring, habitat mapping, things like that; humanitarian response, of course; and, in the case of OpenAerialMap, participatory creation of data for the public good.

So what does OpenDroneMap do? How does it work? There’s a lot of jargon on here, and I’ll go through it step by step. Like I said, it’s a process, a pipeline almost. It starts with structure from motion, which I’ll explain when I get to it, but basically it matches images together using the motion of the drone and creates a sparse point cloud of tie points that tie all the images and cameras together. Then we densify that into a three-dimensional point cloud, do a surface reconstruction, texture over that surface, and build a mosaicked orthophoto from that textured surface. So, for structure from motion, we have the drone. This is actually taken from a kite.
But I’m going to lie here and say it’s a drone. The drone goes over image one, takes a picture; image two, takes a picture. Similar spaces, but they’re different because they’re from different angles, or what have you. Even though the two images might look similar, they are two points in time. Then you can find points, features, and this is where the computer vision comes into play: the computer is looking for unique identifiers in each image and matching those together. That’s what structure from motion does. And thanks to Mapillary, they have a structure-from-motion library, OpenSfM, that they’ve released open source, and it’s really great.

So you get all these points. Here’s a picture of this sparse point cloud, and it actually looks pretty good. You can kind of understand what’s going on in there: there’s a road, and a parking lot, and maybe a building, and lots of trees. Then we have to densify, to make that more of an understandable three-dimensional space. There are some algorithms involved that I’m not going to go into, but we get another, denser point cloud, and this is just a picture of it, where you can really tell what’s going on. And this is just points. I don’t actually know what the density of this is. The next step in the process is to take those points and turn them into a surface that we can texture over. We use a Poisson surface reconstruction method available in the Point Cloud Library, and we create something that looks like this. This is the same area as before, and you can see now that there’s a little bit more of a surface there, although it’s not textured; it has no color values, it’s just faces. And if you look in closer, you can see, yeah, it throws away some of that detail as well, and we’re working on that. So the next step is to texture that. This is the 3D mesh.
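As a quick aside before the texturing step: the feature matching at the heart of structure from motion can be illustrated with a toy example. This is not what OpenDroneMap actually uses (real matchers rely on scale- and rotation-invariant feature descriptors); it is a minimal normalized cross-correlation patch search, with all names and numbers invented for illustration:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def best_match(patch, image):
    """Exhaustively slide `patch` over `image`; return the top-left
    corner (row, col) of the window with the highest NCC score."""
    ph, pw = patch.shape
    h, w = image.shape
    best_score, best_pos = -2.0, (0, 0)
    for row in range(h - ph + 1):
        for col in range(w - pw + 1):
            score = ncc(patch, image[row:row + ph, col:col + pw])
            if score > best_score:
                best_score, best_pos = score, (row, col)
    return best_pos, best_score

# A patch cut out of a synthetic "image" is found again at its true spot.
rng = np.random.default_rng(0)
image = rng.random((20, 20))
patch = image[7:12, 9:14].copy()
pos, score = best_match(patch, image)
```

Repeating this for many distinctive features across overlapping photos yields the tie points described above; the offset between a feature's matched positions in two photos is the disparity from which its 3D position can be triangulated.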
The mesh has now had some images overlaid, and it uses some algorithms to determine which image is best for which area, for which face of the mesh. Then it mosaics and does color blending, making sure the color is even on the global and the local scale. And finally, the most important part for many, many people: generating an orthorectified mosaic.

A question you may be asking: this is great, Dakota, but how do I get started in collecting the necessary data? That’s for the robot takeover. Yeah. You’ve got to get a drone, and specifically a drone that can do automated flights, because you have to be able to fly in a grid pattern to get enough overlapping photos to match them. Doing it manually is annoying at best and useless at worst. So you have to get a drone with a GPS unit, a camera, and a brain inside it. This one, unfortunately, will not work; it’s a little small. And then you get all these images, you know, a hundred to 200,000 images in some cases, and offload them to your computer. And then you have to use this software. Well, you can use a software; I will say you should use OpenDroneMap, because it’s open source and I built it.

Here is the installation process. There are three main ways to install it. If you understand and use Docker, I would say that’s the easiest method to get it running. We have a Docker Hub repository, so you just pull the image and you can run OpenDroneMap, and that works cross-platform. We also, as of this past year, have a web interface. It’s a deployable server; you can build your own service, do whatever you want. It’s called WebODM. There’s a Windows WebODM installer that one of our contributors and core team members has built, and it works pretty seamlessly. And obviously you can find us at… So if you install it natively, you have to use the command line tool, so there’s a little bit of a tech barrier to that, which is why… Oh, and, of course, we’ve integrated with Portable OpenStreetMap (POSM).
If you’re using that. I don’t know too much about that project, but I know it’s been pretty useful for the team. And then WebODM; I’ll go through that process a little bit. I think it’s by far the most user-friendly, and it has an API, so if you want to build on it or hack it, you can do that. It’s a simple interface. You create a user account and you can upload the images you’ve got. Here we’ve got just a few images, but, you know, the more images you have, the longer it’s going to take to process, right? And then it builds this task, and you can save it. You can also specify different options; there are a lot of parameters you can tweak to get a better product out of it, and that’s all available in our wiki and on our website. And then you just start processing. Like I said, it builds this task, starts running, and goes through each of the steps I explained before to build each of those products. And the cool thing with WebODM is that when it’s done, you can view the orthophoto and the 3D model online. You can click orthophoto, and I believe you see it in a Leaflet interface on the website: here’s your map. You can check it: how accurate is it? Am I looking at everything right? And then you can download it when you’re happy. Or you can look at the 3D product, and this is really cool: you can look at the 3D product in the browser, scan around it, and see both the point cloud and, here, the textured model.

So what are we looking at in the future for OpenDroneMap? Where are we? What are we developing next? What are our long-term and near-term goals? First of all, funding: we’re funded by the Cleveland Metroparks, and through the American Red Cross through the integration with POSM.
And we’re three quarters or so of the way through a grant from the Humanitarian Innovation Fund, which we have been using to make OpenDroneMap easier to use, and to make the products from it better and more accurate by improving some of the algorithms and processes we have internally.

Where do we want to see more improvements? Obviously, we want to get better maps. We want to expand the boundary of how decent a dataset has to be. We want to scale up: we want to be able to process 200,000 images, which is simply impossible with the current approach; you’d need a million gigabytes of memory or something. And we want to be able to do data partitioning, chunking out data to process it, which goes into the scalability. And the footprint, and more analysis. This is actually where we’re seeing a lot of changes right now: automated DTM extraction, doing classification of the point cloud to get ground, non-ground, et cetera, and then producing digital surface models and digital elevation models. We currently have both of those, digital surface models and digital elevation models, in the pipeline; you could run it right now and get those, but we’re working on improving them as we speak. As well as mesh improvements: you saw the meshes, they were a little bulky, a little blobby, and we want flat surfaces to actually be flat, and things like that. So with DTM extraction, for instance, you’ve got this nice point cloud dataset, and we want to remove the trees and the buildings to get a ground elevation. Here’s some of the process we have been playing with to get that, and then we can get this DTM out of it, which is quite nice. Let’s see. I already went over these. Quality reporting, yeah. We want to add a reporting system; this is already in process. How accurate are the matches? How many matches? What are the error rates? Why? Et cetera. And improving robustness: classifying failures.
Finding out where it’s failing and where improvements are needed. And, obviously, having more test datasets is awesome; we always need more. And then I’ll talk about the large scale: how do you get 20,000 or 200,000 images to run? By breaking the job into bite-size pieces: splitting a large number of images into smaller groups, running the structure-from-motion reconstruction on each subgroup, and then aligning all of those together into a single product, or having them all connected through some other means. Here’s an example of that. I believe this is Zanzibar. You can see these are all separate datasets, and they’re massive; I think this is from 20,000 images. This is in progress; hopefully by the end of the year we’ll have it integrated. And then other incremental optimizations: improving our processor use or memory use through various means.

Yeah, so, 20,000 images. This is the Zanzibar Mapping Initiative from before. They processed all 20,000 of those images on a single 32-core machine using the improved scalability pipeline. But you could also segment that out, process it on different machines, and then put it all together. That’s something we’re hoping to have as well: sort of a federated processing pipeline. And with WebODM, we want to integrate those features into WebODM too: display a model and do volumetric measurements, and control the mission planning itself, that is, how you fly the drone, from start to finish. That would head off a lot of the dataset-gathering errors people run into, like not having enough images or not having good enough overlap. Integrating with QGIS would be great as well. We need something like Entwine and Greyhound for hosting or viewing that. And we want to make it really easy: you’re happy with your dataset, with that orthophoto, and you’re in WebODM.
One click, or integrate your user account with OpenAerialMap, and shoot it up, send it to OpenAerialMap; have an easy process to get it up there. So with that, let’s get flying. I hope your ears don’t break from this. Okay. Good. Any questions? [ Laughter ]

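[Editor's note: the WebODM interface demonstrated in the talk is also scriptable through its REST API (token auth, projects, tasks). The sketch below only builds the URLs and payloads; the host, credentials, project ID, and option values are placeholders, and the exact endpoints should be verified against the WebODM documentation.]

```python
import json
from urllib.parse import urljoin

BASE = "http://localhost:8000/api/"  # placeholder: your own WebODM instance

def auth_request(username, password):
    """URL and payload for POST /api/token-auth/, which returns a JWT token."""
    return urljoin(BASE, "token-auth/"), {"username": username, "password": password}

def new_task_request(project_id, options=None):
    """URL and form data for POST /api/projects/<id>/tasks/.
    In the real request, the photos are attached as multipart 'images' files."""
    url = urljoin(BASE, "projects/{}/tasks/".format(project_id))
    data = {"options": json.dumps(options or [{"name": "dsm", "value": True}])}
    return url, data

# Example flow (not executed here), roughly, with the `requests` library:
#   url, payload = auth_request("alice", "secret")
#   token = requests.post(url, data=payload).json()["token"]
#   url, data = new_task_request(1)
#   requests.post(url, data=data, files=[...],
#                 headers={"Authorization": "JWT " + token})
url, data = new_task_request(1)
```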
[ Applause ] AUDIENCE: Yes, I was wondering how we can use this in OSM? How do we integrate this into our editing workflow right now? Particularly with the products that are not just 2D but involve side angles and things like that, which are currently not available from other sources? I would say we’re a little far off from a direct integration or direct involvement with OpenStreetMap. But, you know, you can get that imagery into OpenAerialMap, or get those images into humanitarian hands, people who are using OpenDroneMap and then doing the digitization work with Portable OpenStreetMap, for example. AUDIENCE: That was also part of one of my questions: can you handle oblique imagery in your image processing? And a couple of other questions. Yeah, we recommend angling the camera between 7 and 15 degrees off nadir for optimal camera angles for matching. If you can avoid any sky, anything in the image without any sky is probably fine. And as a supplement to nadir images, obliques are great. AUDIENCE: And then the next question is, can you incorporate ground control points? Or are you just taking whatever the geotag quality is in the EXIF header? We can do both, or either, rather. We do have a ground control process where you can insert a text file, a CSV or something; you have to format it correctly, and you have to go into each image and mark the pixel coordinates. AUDIENCE: In the city that I live in, most of it is controlled airspace, so you can’t use drones. But the tool chain and the products that you get from this are still of interest for mapping and other reasons. Do you see it as possible to use ground-level cameras, like handheld cameras, with this tool chain, or is it not going to work? We have done some ground-level imagery and processed it with this pipeline. There are some weird issues that happen sometimes, but yeah, you can do ground-level imagery.
And I think, at least in the U.S., and I don’t know about other countries’ laws, if you have a balloon or a kite, there’s little regulation. Obviously, check your local laws. I’m not a lawyer. [ Laughter ]
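On the ground control point answer above: ODM reads GCPs from a plain text file (commonly gcp_list.txt) with a projection header line followed by one line per measurement, tying a ground coordinate to a pixel in a specific image. The sketch below writes that rough shape; the coordinates and image names are made up, and the exact field order and header syntax should be verified against the ODM documentation before use:

```python
import os
import tempfile

def write_gcp_file(path, projection, gcps):
    """Write a GCP list: a projection header line, then one line per point:
    geo_x geo_y geo_z pixel_x pixel_y image_name."""
    with open(path, "w") as f:
        f.write(projection + "\n")
        for geo_x, geo_y, geo_z, px, py, image in gcps:
            f.write("{} {} {} {} {} {}\n".format(geo_x, geo_y, geo_z, px, py, image))

# Hypothetical example: one surveyed target seen in two photos.
gcps = [
    (544256.7, 5320919.9, 5.0, 3044, 2622, "IMG_0525.jpg"),
    (544256.7, 5320919.9, 5.0, 4193, 1552, "IMG_0526.jpg"),
]
path = os.path.join(tempfile.mkdtemp(), "gcp_list.txt")
write_gcp_file(path, "WGS84 UTM 32N", gcps)
```

Marking the same target in several images, as in the example, is what lets the reconstruction be pinned to surveyed coordinates rather than relying only on the camera's EXIF geotags.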

AUDIENCE: While we’re waiting for the drone planning software, what are you using right now for planning these flights? It kind of depends on the drone; it’s generally platform-dependent. There’s an open source tool called Mission Planner, which works with Pixhawk flight controllers. And then DJI has their own, I think. And the drone here is the senseFly eBee; senseFly has their own mission flying software. That’s all included in the software I mentioned, and they generally try to make it really easy to get you the data that you need. AUDIENCE: So, two questions, and maybe this is also partly a question for Nate. Similar to the ground-level cameras, have you tried this methodology with satellite imagery? Is it applicable for that? And then, on the 3D and the DTM and DEM: I know you’re still exploring this, but I would love to be thinking about ways those could be made available, and thinking about OpenAerialMap as well, because that would be extremely valuable for 3D buildings, for understanding not just footprints but the geometry of buildings, opening up a whole new set of applications. Is that in the plan for OpenAerialMap and how you’re working with them on that?

For the first question: I have heard people on the GitHub issues queue asking about using satellite imagery, but I have never tried it myself, so I don’t know how to answer that; I don’t know if I can answer that. And then, yeah, the digital elevation models, the digital surface models and terrain models, those are available in the current master branch of OpenDroneMap, and you can do that in WebODM; you’ll get those products. What we’re not getting right now is classified point clouds; those aren’t being saved. AUDIENCE: Thanks for the talk. So I’ve got a Mavic and I fly. The workflow that we figured out was to shoot one square kilometer: you know, at noon you go out and shoot a single square kilometer and process it, and then the next day you shoot and process the next square kilometer. The issue with that is that the edge of the image, the edge of that square kilometer, is duplicated between the two squares, and it’s not being reused between the sessions, because it doesn’t have enough imagery for the one, and it doesn’t have enough imagery for the other, but across both of them you have great overlap. We want to shoot the whole of Minsk, but it will take a year. So is there a plan for incremental updates of a single mosaic, where you can just throw more photos into it? We’ve talked about it. We want to do it. Nothing concrete yet. Yeah, I can’t say any more about that. Thanks.

I see there are more questions. I suggest that we take those to the break, because we also need to leave enough time for our final speaker of the session. So thanks, Dakota. [ Applause ]
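[Editor's note: the chunking idea behind the talk's split-merge work, and behind this last question about overlapping squares, is that groups which deliberately share images give each sub-reconstruction common tie points to align on later. The toy partitioner below splits a photo list sequentially; ODM's real split-merge partitions spatially, and all names and parameters here are illustrative.]

```python
def split_with_overlap(images, group_size, overlap):
    """Split `images` into consecutive groups of `group_size`, each sharing
    `overlap` images with the previous group, so that the per-group
    reconstructions can later be aligned into one product."""
    if not 0 <= overlap < group_size:
        raise ValueError("need 0 <= overlap < group_size")
    step = group_size - overlap
    groups = []
    start = 0
    while start < len(images):
        groups.append(images[start:start + group_size])
        if start + group_size >= len(images):
            break  # last group reached the end of the list
        start += step
    return groups

# Ten photos in groups of four, each sharing two photos with its neighbor.
groups = split_with_overlap(["IMG_{:04}.jpg".format(i) for i in range(10)], 4, 2)
```

Each group can then be reconstructed independently (even on separate machines), and the shared images provide the common geometry needed to merge the pieces.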