Detect Missing Buildings Using Imagery – Transcription

Rustem, you’re up next. Your presentation is right here, I think. This is you, right? Yes. Okay. Looking good.

Yeah. Hi, everyone. My name is Rustem, and today I am going to tell you how we can use medium-resolution imagery to detect missing buildings. I work at Astro Digital. We operate small satellites and develop a platform for easy access to the data and analytics. We also provide data to OpenAerialMap, which you saw before.

The picture here is Los Angeles, just to give you an example that even in a developed country, a popular city can sometimes be undermapped. We know that OSM is the biggest source of geospatial data, but a lot of areas are still undermapped. It can happen because some cities and towns are just unpopular and there is no recent imagery, or because there are not enough mappers who are interested in the area. Sometimes the problem is a natural disaster or rapid change, and you have to have the most recent image. And sometimes buildings are simply missing from user contributions.

Here on the picture you see Colorado University. On the right you see a DigitalGlobe image, and on the left, Sentinel-2 at 10-meter resolution. We can agree that on the right it’s a lot easier to distinguish buildings, but there are advantages to using the medium-resolution imagery on the left to detect buildings. First of all, we receive a new image every five days, meaning that we always have the most recent image with the most recent changes. Second, and more important, we can use this kind of frequency to build a time series of patterns and analyze those patterns to distinguish buildings. Next, we have very good coverage with Sentinel imagery, meaning that even small towns and cities in developing countries are covered by a recent image. And finally, we have a good historical archive of data, meaning that we can compare patterns between years to understand what has changed. Let me tell you a bit more about how this works.
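Comparing a pixel's seasonal pattern between two years, as described above, can be sketched with a simple correlation-based change score. The function name, the threshold semantics, and the use of a single pixel's series are illustrative assumptions, not Astro Digital's actual method:

```python
import math

def change_score(year_a, year_b):
    """1 minus the Pearson correlation of two equally sampled seasonal curves
    (e.g. a vegetation index for the same pixel in 2016 and 2017).
    Near 0: the seasonal pattern repeated. Near or above 1: the pattern
    broke, e.g. vegetation was replaced by construction."""
    n = len(year_a)
    mean_a = sum(year_a) / n
    mean_b = sum(year_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(year_a, year_b))
    norm_a = math.sqrt(sum((a - mean_a) ** 2 for a in year_a))
    norm_b = math.sqrt(sum((b - mean_b) ** 2 for b in year_b))
    if norm_a == 0 or norm_b == 0:
        return 1.0  # a flat series has no seasonal signal to match
    return 1 - cov / (norm_a * norm_b)
```

A pixel in the park would score near 0 between 2015 and 2016 (same low-high-low curve) and well above 0.5 between 2016 and 2017, flagging it for closer inspection.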
Here you can see an airport in Korea, and there is a park near it. If we analyze it, comparing June of 2016 to 2017, we can see it has a standard pattern during 2016: no vegetation at the beginning of the year, high vegetation in the middle of the year. But starting with 2017, even during March, we can see that something is different. This kind of comparison leads us to believe that the park was destroyed and some construction started in this area. And this kind of pattern analysis can be done automatically, without any human input; it’s very easy to compare.

In general, the algorithm we used works in the following way. We build a time series for every pixel and remove random noise, clouds, and so on. Then we use this data across the whole spectrum, plus additional features, to group the pixels into segments. Based on these segments we create candidates, and based on how similar these candidates are to already-known OSM data, we label them.

The problem is that even with this kind of model, we won’t get a universal algorithm that works everywhere. For example, the algorithm I just mentioned worked in Las Vegas but didn’t work as well in Guangzhou, China. Part of it was cloudiness and a different number of images per year, and part of it was a different type of area.

At this point, this kind of model produces two types of output. First, ranked building candidates, which is practically vector data; the rank expresses how sure we are that there is a building in the area. Second, areas of change: we know that something changed. We may not know whether it’s a building or not, but we know the change is significant. One of the best ways to handle this is to add a human in the loop. A human analyst allows us to control accuracy, to look at ranked candidates, or to find out exactly what happened in a changed area.
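The per-pixel pipeline described above (time series, denoising, stable-pixel candidates, ranking against known OSM buildings) could be sketched roughly as follows on synthetic data. All function names, thresholds, and the choice of features are assumptions for illustration, not the production algorithm:

```python
import numpy as np

def despike(series, window=3):
    """Suppress single-date spikes (e.g. undetected clouds) with a running median."""
    pad = window // 2
    padded = np.pad(series, pad, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(series))])

def candidate_mask(stack, stability_thresh=0.2, level_thresh=0.3):
    """stack: (time, rows, cols) vegetation-index values per pixel.
    Built-up pixels tend to stay low and flat across the season, while
    vegetation follows a strong seasonal curve."""
    smoothed = np.apply_along_axis(despike, 0, stack)
    seasonal_range = smoothed.max(axis=0) - smoothed.min(axis=0)
    mean_level = smoothed.mean(axis=0)
    return (seasonal_range < stability_thresh) & (mean_level < level_thresh)

def rank_candidates(features, candidates, osm_mask):
    """Rank candidate pixels by feature-space similarity to pixels already
    mapped as buildings in OSM (closer to the OSM prototype = higher rank)."""
    prototype = features[:, osm_mask].mean(axis=1)
    dist = np.linalg.norm(features - prototype[:, None, None], axis=0)
    return np.where(candidates, 1.0 / (1.0 + dist), 0.0)
```

The output matches the two products the talk mentions: the boolean mask marks areas of change or stability, and the scores are the ranked building candidates that a human can then validate.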
This kind of analysis allows us to build a feedback loop. The model produces results; a user validates the results and passes them back to the model; the model updates and shows new results. That way the user doesn’t have to validate hundreds of results. They can validate ten or fifteen with the same effect: overall, the model produces much better results than it would without human supervision.

Here’s an example of how we did it with Dubai. Basically, we created a grid over Dubai, overlaid it with OSM data, and asked users to check cells in the grid where the OSM data is complete. By validating just 10-15 tiles out of this grid, we were able to produce this mask. As the next step, the user could validate the mask itself, i.e., the tiles where the mask worked well. This kind of feedback loop is enabled by our analytical pipeline, where we use the imagery and labeled but incomplete data to train the model, and then retrain the model by having the user validate its results. The upside is that we were able to get the whole cycle of interaction with the user under a minute, meaning that for the user the process still remains interactive.

In conclusion, I want to say that medium-resolution imagery can be used for building detection and for finding areas of change. Building detection can be automated with a human in the loop, specifically in terms of active learning: we don’t just pass validation data to the user, we take the second step and pass that data back to the model. This kind of approach can significantly improve OSM data and editing workflows. Thank you very much for your attention. If you have any questions, I will be glad to answer.

[ Applause ]

AUDIENCE: Thank you. So, a couple of questions, actually. First of all, you mentioned training. What kind of machine learning are you using here? What framework?
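The validate-and-retrain cycle described in the talk can be sketched as a toy active-learning loop. The nearest-prototype "model", the uncertainty rule, and the 1-D features are all illustrative assumptions standing in for the real pipeline:

```python
def train(labeled):
    """Toy 'model': the mean feature value per class (a nearest-prototype classifier)."""
    protos = {}
    for label in {y for _, y in labeled}:
        vals = [x for x, y in labeled if y == label]
        protos[label] = sum(vals) / len(vals)
    return protos

def predict(protos, x):
    return min(protos, key=lambda label: abs(x - protos[label]))

def uncertainty(protos, x):
    """Small margin between the two nearest class distances = uncertain sample."""
    d = sorted(abs(x - p) for p in protos.values())
    return d[1] - d[0] if len(d) > 1 else 0.0

def active_learning_loop(pool, oracle, seed_labeled, rounds=3, per_round=5):
    """Each round: train, pick the most uncertain candidates, ask the human
    oracle (the validating user) for labels, fold them back in, retrain."""
    labeled = list(seed_labeled)
    for _ in range(rounds):
        protos = train(labeled)
        pool.sort(key=lambda x: uncertainty(protos, x))  # smallest margin first
        queries, pool = pool[:per_round], pool[per_round:]
        labeled += [(x, oracle(x)) for x in queries]
    return train(labeled)
```

The point the talk makes is visible in the structure: the user only labels `rounds * per_round` samples (the most uncertain ones), not the whole pool, yet every answer flows back into the next round of training.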
For the first step, where we just wanted to group pixels based on their values, we basically use existing OSM data to produce these kinds of clusters, and then we divide all the candidates into those clusters.

AUDIENCE: Thanks. And then how do you transform these into polygons?

Yeah. So once the candidates are validated, we can basically fit the building and fit the polygon. Based on the values of each pixel, we can calculate the whole polygon.

AUDIENCE: Okay. So have you considered using deep learning, for instance?

Yes. Basically, this is the part where we use deep learning. The problem with deep learning is that it doesn’t work the same way with medium-resolution imagery as it does with high-resolution imagery; it’s less connected to geometric features at medium resolution. But it works well when you basically have to classify, because overall what you get is a time series for each pixel, and after that it’s just an additional feature you want to assign to the pixel. And you can use these techniques to segment the whole data in the picture.

AUDIENCE: Thank you.

AUDIENCE: Hi. Thanks for your presentation. How can we get access to your data? I mean, the satellite imagery and these masks for changed areas and changed buildings? Or is that private data that we can’t get right now?

We’re a commercial company, so that’s something we are open to discuss. But the whole process, the whole approach, is free. The imagery is accessible and free, it’s very easy to download, and the whole approach is recreatable, because nothing commercial is happening here: you have Sentinel data, which is open, and you have OSM data, which is obviously open.

AUDIENCE: Yeah. But the imagery, is that also open?

The satellite imagery is open.
Basically, one of the people here is responsible for putting a lot of the Sentinel data on AWS. It’s very accessible.

AUDIENCE: So, yeah, it’s accessible on Amazon and free to download. Thanks.

Well, also on Google Cloud. Okay. So if you have any questions, I will be glad to answer during the break. And you can also ask David, because we worked on this project together. So, yeah, thank you very much.

[ Applause ]

Thanks, all. Hey, we have a little bit of time left now, and I think the other speakers are still around. There were some more questions, so we could extend it a little bit and do a sort of little panel question session, if people are interested? Otherwise I’m happy to break early. Can you raise your hand if you have any additional questions for any of the speakers? Or has it all been resolved? There’s a few more, okay. Let’s do another round.

AUDIENCE: So this is a continuation of the question for Nate. Is this on? For Nate: I’m curious to hear, we’ve heard about two other datasets that aren’t pure imagery datasets that, you know, could be available for aiding mappers. I know you said you haven’t gotten there yet, but is that part of the plan for OpenAerialMap? To have digital topography models? To have things like estimated building footprints? Things like that?

I think so. If it’s a need and it fits well with a workflow, I think we could figure out how to work that in there. We’re not working on it right now in terms of active development, but, you know, over the next six months, once we’ve gotten to a certain point, we can pick our heads up: what are the other workflows, and how do we work those in and solve some needs for users?

AUDIENCE: So, a question for Dakota. It’s about putting textures on the images and picking the colors. You mentioned that you’re trying to make them uniform. So how do you pick the colors? And, for instance, your drone is obviously taking pictures at different times of day.
So does that get reflected in the final 3D products or not?

Yeah. So there’s both local and global color leveling, and we pulled in external software for that; it’s called MVS texturing. I don’t know the specifics of it, but I think it does a pretty good job of making sure that if you’ve got two images and one was somehow darker, there was a cloud or whatever, then across the whole scene, and locally between images, there’s a good amount of color coordination, I guess.

AUDIENCE: And this one’s for Nate, about OpenAerialMap, and just sort of about open imagery in general, which I’m a huge fan of. I took Nathaniel Raymond’s humanitarian remote sensing course with the Signal Program, and it hit on a lot of the questions of the ethics of releasing data. Open imagery and open information are awesome, but what happens to the people the information is being collected about when they’re put at risk by bad actors who want to consume that data in a bad way? So I guess my question is: is that sort of part of the thinking around OpenAerialMap? And are there any mechanisms to address the ethical considerations of releasing specifically high-resolution imagery on the platform?

We have thought about this in two ways. One is a technical solution: downsampling, you know, where maybe 10 centimeters is fine and you actually don’t need 2-centimeter data. The other, on the human side, is training: providing guidelines, maybe a registration process where you have to go through a little bit of training, so that if you’re collecting imagery, you’re doing it in an ethical way, working with the community and making sure they understand what’s happening.
And so I think that’s an important part that we want to make sure is integrated: any time you’re uploading, you have gone through these things, you have consulted with the area where you’re collecting, and you’ve thought about these things. Right now there isn’t any of that in there, but those would be the two ways we could go forward with it.

AUDIENCE: Interesting. And I would recommend reaching out to that group, because they’re thinking about this a lot and have great ideas.

Yeah. No, I agree.

All right. Well, I think that concludes the aerial imagery session. We are going into a coffee break now, and then the final sessions for the afternoon: that’s going to be lightning talks here and two panel discussions in the other room. That’s all I have for logistics. So enjoy the coffee break and see you back here.

[Break]

Anyone who has a raffle ticket for the drawing: that is at 4:30 at our booth. If you want any chance to win that camera, you’d better head up there now, because if you’re not there, you’re not going to win.