Use Machine Learning to Create Building Heights in OSM – Transcription
If it wasn’t obvious, Facebook is taking OpenStreetMap to the next level. So speaking on behalf of the community, thank you for everything that the Facebook team has done, and we want to see you be successful going forward. So thank you.
Up next, we have Sean Gorman, Chris Helm, and Pramukta Kumar, who are also going to be talking about machine learning.
Maybe, maybe not, I don’t know.
Hi, thanks for the turnout.
Before we get started, I wanted to give an announcement that we are looking for OSM Foundation members; we need more to keep the community going. If you are interested in the OSM Foundation, please see Mikel, and the more members we have and the more we participate, the better. He is in the back waving his arms. I will tee this up and get out of the way for the folks.
And we wanted to open up the globe for satellite imagery access, for machine learning and other analytical techniques that can help fuel OpenStreetMap. And myself, Pramukta, and Chris will be ambitious, tag-team wrestling and swapping in and out.
And I wanted to open up with a couple of bigger-picture questions: do machines help or hurt OSM? As we build these things, do we run the risk of disrupting the community, how do we keep people engaged and participating, and what is the right balance of machines and humans on the ground to make these things happen?
Obviously, we are big advocates of machine learning. Oops, nothing is up there. That’s weird.
Do you know why it is not showing up? Sorry, thanks for the flag.
But, yeah, this question of what is the right balance between machines and humans when it comes to OpenStreetMap specifically. In past sessions, there were questions about whether satellites in the sky are machines, and there were questions, when we were remote mapping and tracing satellite imagery, about whether that was good or bad for the community. It was a good thing for the community, but we had to find the right balance in pulling those things together.
So we are looking at whether the machines can do better: if we can fuel machine learning algorithms and other analytic techniques with satellite imagery, can we move the community further along, beyond tracing imagery?
And we have an example that we worked on to hopefully get folks excited about what you can do with machines and analytics on top of satellite imagery to add more goodness to OpenStreetMap, and one thing that we are pushing for is a community of learning. Can we make simple access patterns that allow folks to access imagery easily? It is easy to get images of cats for deep learning and machine learning stuff; satellite imagery is harder, and there are not a lot of folks who have figured out how to do it well. Can we reduce the barrier so people can solve these problems and grow the community, while finding the right balance of how humans and machines interact on these things?
I will turn it over to Chris first; he will talk about the work we are doing to open up the access patterns in the community, and Pramukta will talk about what we are doing on building heights.
Thanks, Sean. I will talk about the work that we are doing at DG, less technical, but growing more technical with every word, I will say.
I’m going to be looking back a little bit, cannot see the slides.
And on our team, we worked at companies where we built communities to share data. We worked on GeoCommons, open data, and Timbr, a data science platform. And we joined DG last year; we are working on a community of creativity, trying to enable users of all skill levels to come to DG's platform and write algorithms against satellite imagery, and it is about being creative. And our platform is built around the concept of Jupyter notebooks, sharing them with each other, and putting them into the platform to run at scale.
This is what we are doing, extending Jupyter notebooks to scale quickly. We have hooks into OSM services and other vector and imagery services. And users come and write algorithms, do whatever they want to do, and share them with each other.
What we want to talk about, and we don't have a lot of time to go into the details, is the concept of trying to enable global access to immediately computable imagery. It is a little bit more than just interfacing with a tile map service that serves you a PNG; it is about serving users in the formats they are familiar with and comfortable interacting with. For our users, we are thinking about Python users, and when we think about delivering imagery to a Python user, we are taking stacks of data and making them available as an array. So we are bringing the data to the skill level of the user, in the format they are thinking about using it in. Instead of them downloading a TIFF and accessing it locally, we are trying to provide a means of taking lots of imagery and providing analytical chains that can access the data at different levels of the analytical process. I will see if I can explain.
So when we interact with satellite imagery, we interact with PNGs through a TMS; we are tracing imagery, drawing on it, and we are getting the end result of a deep chain of analytics, or transformations of that data, from the raw Level 1B imagery, from the raw pixels to what you would look at on a map. That is going from 1B, to orthorectification, to pan sharpening, and that is the end of it. We can train models to derive data from those things. But the end result is always this pan-sharpened image; we have RGB, and we are not getting access to the multi-spectral data. And we are trying to think about how people can get at all aspects of this processing chain in an array or a notebook.
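A rough sketch of the kind of "immediately computable imagery" being described here, assuming a cloud-hosted GeoTIFF rather than DG's actual platform: the point is that a Python user gets a windowed NumPy array, including the multi-spectral bands, instead of downloading a whole file or tracing PNG tiles. The URL, window, and band count below are placeholders.

```python
# Sketch only: window-read a chip of imagery straight into a NumPy array.
# The URL is a placeholder for any cloud-optimized GeoTIFF reachable over HTTP.
import rasterio
from rasterio.windows import Window

url = "https://example.com/imagery/scene_multispectral.tif"  # hypothetical scene

with rasterio.open(url) as src:
    # Read all bands of a 512x512 chip as a (bands, rows, cols) array,
    # fetching only the bytes needed for this window.
    chip = src.read(window=Window(col_off=2048, row_off=2048, width=512, height=512))

print(chip.shape, chip.dtype)  # e.g. (8, 512, 512) uint16 for an 8-band sensor
```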
So what I'm trying to convey is this concept of friendly access: how do we take access to satellite imagery, which is typically heavy, where one image is 20GB in its raw form, and make it friendly for a user at whatever expertise level they have, whether that is taking the 1B data and going through the process on their own, or entering at a step where they are comfortable?
And what we are doing is employing a concept of deferred graphs. If you are familiar with TensorFlow or other libraries, we can write a graph that allows users to access those nodes. And so we take raw data stored as chunks of huge files on the back-end, and create a graph that tells us how to process each bit of information.
So these are individually HTTP-addressable chunks of data. These are virtually represented as a graph that tells us how we fetch the 1B, how we fetch the ortho, and things like that. And on the client side, we mirror that as a deferred graph that allows the user to build up a massive chunk of data without fetching it until they need it. You can mutate the graphs, index, work, and do things without fetching that down. I feel this is a talk on its own, and we're hoping to give more talks about it.
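The deferred-graph idea can be sketched with dask, which is not DG's implementation but follows the same pattern: each node knows how to fetch one HTTP-addressable chunk, slicing and composing only rewrites the graph, and nothing is downloaded until the user asks for the result. The fetch function and chunk layout below are hypothetical.

```python
# Sketch of a deferred graph: nodes describe HTTP-addressable chunks,
# and no data moves until .compute() is called.
import numpy as np
import dask
import dask.array as da

@dask.delayed
def fetch_chunk(url):
    # Placeholder: a real client would HTTP-range-read and decode the chunk here.
    return np.zeros((256, 256), dtype="uint16")

chunk_urls = [[f"https://example.com/chunks/{r}_{c}" for c in range(4)]
              for r in range(4)]

# Lazily assemble a 1024x1024 mosaic out of 256x256 deferred chunks.
mosaic = da.block([
    [da.from_delayed(fetch_chunk(u), shape=(256, 256), dtype="uint16") for u in row]
    for row in chunk_urls
])

subset = mosaic[100:200, 100:200]   # slicing only mutates the graph; still no fetches
result = subset.compute()           # only the chunks actually needed are pulled
```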
It is a really cool thing, we start thinking about opening up access in different ways.
So, I think we are going to shift to the application of this concept, about what problems can we solve with this pattern.
I get to be the guinea pig for the access pattern this time around. We will try something ambitious and measure building heights. And despite the fact that I work at DG also, we are going to try to generate digital surface models.
This is not something you can do with the PNG tile-service imagery; it is the kind of thing the access patterns highlighted in the previous work could support.
And the issue is, even though I have been at DG for a year, I don't have a photogrammetry background; I'm an amateur. I can relate to you in that way, or to some of you, or something (laughter).
So immediately when we try to do this thing, there’s a variety of challenges inherent to the problem. And there is more that we could put on this slide, but there is one really big one, and that is that I just don’t know what I’m doing.
(Laughter).
But there's one notable missing piece on this slide, a missing challenge, and that is easy access to the kind of data we need in order to get this done. Because we have that, there is something for me to start from, and I can start to explore.
If I walk around the office at DG, I can talk to people in that community and find out that the general methodology for doing this is something like this: first, you have to select a pair, and there are a million different ways of doing that that I still don't know.
And you orthorectify these things, and a buddy of mine from graduate school would be like, I know some of those words. But that is something that other people do.
Sorry.
I should use the microphone better.
(Laughter).
But what we do is orthorectify the pair at a range of heights and see where they match the best, and "match the best" is another thing that we've gotta figure out.
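As a very rough illustration of the "try heights and see where they match best" recipe, and only as described in this talk: the sketch below brute-forces candidate heights and keeps, per pixel, the one where the two orthorectified views agree best. The orthorectify callable is a placeholder the caller would have to supply; producing it is the hard photogrammetry part.

```python
# Sketch: brute-force candidate heights for a stereo pair and keep, per pixel,
# the height at which the two orthorectified views agree best.
import numpy as np
from scipy.signal import fftconvolve

def local_similarity(a, b, win=11):
    # Crude agreement score: box-smoothed negative absolute difference.
    diff = -np.abs(a.astype("float32") - b.astype("float32"))
    kernel = np.ones((win, win), dtype="float32") / (win * win)
    return fftconvolve(diff, kernel, mode="same")

def estimate_dsm(left, right, heights, orthorectify):
    # `orthorectify(image, height)` must be supplied by the caller; it stands in
    # for the real photogrammetric resampling step at a fixed terrain height.
    shape = orthorectify(left, heights[0]).shape
    best_score = np.full(shape, -np.inf, dtype="float32")
    dsm = np.zeros(shape, dtype="float32")
    for h in heights:
        score = local_similarity(orthorectify(left, h), orthorectify(right, h))
        better = score > best_score
        dsm[better] = h
        best_score[better] = score[better]
    return dsm
```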
When you work through this by talking to people and Googling for the 10 percent of academic papers that are not behind a paywall, you end up with something that actually kind of works.
So this is a two-meter DSM around the National Mall in DC. And, from that, what we can do is look at that surface, use OSM-based building footprints to do some stats on it, and try to get some building heights out of it that maybe we can contribute back to the community, assuming they're good, which is still an open question right now.
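The footprint-stats step can be sketched with rasterstats over an OSM building extract; the file names are placeholders, and turning roof elevations into heights would also need a ground elevation model (DTM) that this sketch skips.

```python
# Sketch: summarize the DSM over OSM building footprints to get rough roof elevations.
# Paths are placeholders; a real workflow would also subtract ground elevation (a DTM).
import geopandas as gpd
from rasterstats import zonal_stats

footprints = gpd.read_file("osm_buildings_dc.geojson")  # hypothetical OSM extract
stats = zonal_stats(footprints.geometry, "dsm_2m_national_mall.tif",
                    stats=["median", "percentile_90"])

footprints["roof_elev_m"] = [s["median"] for s in stats]
# Building height would be roof elevation minus local ground elevation (not done here).
footprints.to_file("buildings_with_roof_elev.geojson", driver="GeoJSON")
```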
But, the point is that it is getting somewhere, and I started from zero. But the combination of easy access to computable imagery and the beginning of, kind of, a community around this stuff, allows me to go from zero to something that is a pretty decent result as a first-cut.
And that’s kind of the message I wanted to get to at this point.
I guess, I’m going to trail off a little bit, because I don’t really have anything else to say.
(Laughter).
But, I think that, you know, satellite imagery is interesting. It comes from a deep pipeline that is valuable in a variety of different stages, and I think more and more of us can access information from that by learning and stuff.
So, that’s it.
Oh, yeah, and we didn’t use machine learning for this, but there’s a lot of other talks on that, so that’s okay, and that is super valuable as well.
(Laughter).
And I guess I’ll – yeah, sure.
So what we are hoping to do with this, outside of playing with it ourselves at work, is to get this launched: we are working really hard to convince the DG lawyers and business folks to make an open, global, free layer available for all of you to work with, not just the TMS, but full imagery. We are negotiating what to get out there, but there will be something. You will get free notebooks and free access to imagery; the trade-off is that you open source and share your notebooks and methods with the community. We will hopefully have it out next year if the arm wrestling with the lawyers works out. Feel free to grab us for questions; we would love your ideas and feedback as we get this out the door. Thank you.
Don't go too far, thanks, they are trying to bail out. This is the team that makes DigitalGlobe cool. 10 years ago, if DG was here, we would be in suits and ties and have a million acronyms on slides. Any questions? There is a traveling mic; it would be great if there was another one. But any questions?
Any questions for Drishtie? Just kidding, (laughter).
Quick question, how do you handle registration with the OSM data, or have you tried to align your ground truth to theirs on these buildings?
I mostly just drew them and then hope for the best at this point.
(Laughter).
But one nice thing is that, since a lot of the community has worked on tracing Digital Globe imagery, there’s a decent chance that they work out okay for us. But we are special like that, I guess.
Hi.
Um, I was wondering if you could speak to the involvement that you have seen with the OpenStreetMap community, who previously sponsored the work.
For the – for the responses to OSM from previous work?
Yeah, just generally, like, what – what you have presented here, how you are working with the OpenStreetMap community. I mean, Drishtie touched on the successes and challenges, bridging the two. I was wondering what your own presence in that path looks like.
Yeah, it is super early, we looked closely at the work that Kelso and Simmons did on calculating building heights in San Francisco and getting that in as a bulk import. We got the results at 4:00AM this morning, or at least Pramukta did. So we are trying to meet our own threshold for quality, and seeing if that is worthwhile to get it pushed in. I think it is more of a process – we would like to create a method and make it open and available for folks to be able to run it in areas they are interested in, and then do that at community level to see if that makes sense to put it into the community as the folks are mapping individually.
So, instead of trying to do it as a bulk import, really the concept is to create the method, make it open source, make it available, make some imagery available, and then allow people to individually decide if that is something that is cool and that they would like to push into their community as well.
So, it has been more focused on the method and process, and opening it up, than on actually trying to get the specific building heights put into OpenStreetMap. And hopefully we are going to test that out as a collaboration model going forward: instead of trying to wrestle around bulk imports, making the methods available and leaving it up to folks whether that is worthwhile to contribute. It is early; we are playing around with it, seeing if there is value. If there is, we can open up the dialogue with the community, contribute, and take it in different directions.
Thank you. Another round of applause for our presenters.
(Applause).
Live captioning by Lindsay @stoker_lindsay at White Coat Captioning @whitecoatcapx.