Enhancing Maps With Mapillary – Transcription
Our next speaker is Christopher Beddow from Mapillary. And he’s going to talk to us about how photo mapping can help improve the quality of the map. Chris?

Good morning. My name is Chris. I’m from Mapillary. And today I’m going to talk to you about enhancing maps with Mapillary. You can see in the background image on the screen an example of what our computer vision does with street-level imagery. We use artificial coloring here just to highlight the differences between things like crosswalks, stoplights, vehicles, humans, buildings. And this is something we do with all the photos that we have currently on our platform, contributed from users just like you. And one of the basic tenets of Mapillary is that we empower OpenStreetMap users to do more mapping on a larger scale as well as with more detail and quality. So the images that you contribute allow you to very quickly survey an area and then, from the armchair, go ahead and map in far more detail without having to manually walk the streets and log every detail you’re seeing there. You can also share your imagery with people across the world who can use it to contribute to mapping in your area. And finally, you can use other people’s imagery from across the world to help with other tasks, including things like HOT OSM and Missing Maps, as well as specific projects for different organizations and causes.

So a little more background about Mapillary if you’re not familiar with it. We are a collaborative, street-level imagery platform powered by computer vision. So breaking down that mouthful: what we do is take imagery from our contributors, and most importantly, besides providing that imagery as street-level imagery, we actually turn it into map data. So we analyze the photos, identify what’s in the photos, and then we place that using the geotag to show you where photos containing different objects are.
And then we work on actually triangulating the location of objects in the photos to produce more accurate geographic data. So we have iOS and Android applications that enable anyone to download Mapillary and start mapping right away. We have integrations with the iD and JOSM editors, as well as diverse integrations across other geospatial platforms. Here on this map you can see our coverage across the U.S. as well as Canada and a slice of Mexico there. We’re device agnostic, meaning more specifically that we take all kinds of cameras and all kinds of images as long as they’re geotagged and have timestamps. You can see here GoPros, a Ricoh camera, mobile phones, Garmin. Even mounting a 360 camera on a bicycle helmet and a forward-facing mobile phone to get everything. You can upload through your mobile phone, on our website, or with Python scripts. And in the future, it will be comprehensive. These images belong to you on Mapillary. And you can tag the photos: the entire photo, marking it with something that matches an OSM tag, or pieces of photos with OSM tags. And we know that it’s a challenge making maps accurate as well as deciding what kind of detail to show at what level. So here you can see Boulder, Colorado, on different mapping platforms. And you can see obviously OpenStreetMap has a greater amount of detail than any of the others. And much of this is due to the way that contributors put in so much time and effort so that detail is there. What Mapillary offers is another level of detail that’s possible. It also allows for ground-based evidence. So there’s often friction trying to take OpenStreetMap data and really prove that it’s authoritative, prove that it’s as good as government data. And now the average OpenStreetMapper also has some credentials and can contribute to an official map of the world, rather than just gatekeepers such as government and big businesses. So with Mapillary your toolset is expanded.
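The upload requirement mentioned here, that images are geotagged and have timestamps, can be sketched as a simple pre-upload check. This is an illustration in Python with hypothetical metadata keys (`latitude`, `longitude`, `captured_at`), not Mapillary’s actual upload tooling:

```python
from datetime import datetime

def is_uploadable(metadata: dict) -> bool:
    """Check that an image record carries the geotag and timestamp
    required before it can be uploaded."""
    has_geotag = (
        metadata.get("latitude") is not None
        and metadata.get("longitude") is not None
        and -90 <= metadata["latitude"] <= 90
        and -180 <= metadata["longitude"] <= 180
    )
    has_timestamp = isinstance(metadata.get("captured_at"), datetime)
    return has_geotag and has_timestamp

photos = [
    {"latitude": 45.78, "longitude": -108.50, "captured_at": datetime(2017, 10, 21, 9, 30)},
    {"latitude": None, "longitude": None, "captured_at": datetime(2017, 10, 21, 9, 31)},
]
ready = [p for p in photos if is_uploadable(p)]
print(len(ready))  # 1
```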
And taking a deeper look here, we can see on the screenshots. On top you see my hometown, Billings, Montana. And I put quite a few Mapillary images up in the center there. And now on our website, you’re able to see our computer vision detections. So I filtered for crosswalks, and you can see the location of photos containing crosswalks. Porting this in the near future into the OpenStreetMap iD editor, you’ll be able to have suggestions on what to map and where. So I’ll take you to Santa Monica, California, just briefly. This is OpenStreetMap roads there, just showing what Santa Monica looks like with no other detail added. And then we can do what we call going out to Mapillary it: capture the images and put them online on Mapillary. So you have the green lines indicating where coverage now exists. And like one of the users here, you might take a walk along the beach. I think this is actually a bike ride along the bicycle lanes there. And what we’re able to provide, for example, is we pinpoint where the street signs are located and classify what type they are. We even tell you the difference between a 35-mile-an-hour speed limit sign and a 45-mile-an-hour one. You can also click on these street signs on our website and we’ll show you which images we identified them in, such as this stop sign here, and give you a more close-up look at that so you can confirm that it’s correct. This is then available in OpenStreetMap editors. Here you’ll see the OpenStreetMap iD editor. And all these street signs are overlaid, as well as Mapillary coverage, on OpenStreetMap, allowing you to use that as reference. So, for example, you can find bicycle lane signs, share-the-road signs, any of these that indicate where bike lanes exist, and use that to aid you in mapping that route. So worldwide we have, as of about 30 minutes ago, 207 million photos. And also 30 minutes ago we hit 100 million photos just this year. So over half of what we have has really grown in the past 12 months.
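Filtering detections for a single object class, as in the crosswalk example, amounts to selecting features by their detection value. A minimal sketch, assuming GeoJSON-style features and a hypothetical `value` property; the class name below follows Mapillary Vistas-style naming but is illustrative:

```python
# Hypothetical detection features in the style of a GeoJSON FeatureCollection.
features = [
    {"type": "Feature",
     "properties": {"value": "construction--flat--crosswalk-plain"},
     "geometry": {"type": "Point", "coordinates": [-108.50, 45.78]}},
    {"type": "Feature",
     "properties": {"value": "object--street-light"},
     "geometry": {"type": "Point", "coordinates": [-108.51, 45.78]}},
]

def filter_detections(features, value):
    """Keep only the detection features of one object class."""
    return [f for f in features if f["properties"].get("value") == value]

crosswalks = filter_detections(features, "construction--flat--crosswalk-plain")
print(len(crosswalks))  # 1
```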
And we expect to see that keep growing. And because of this, the networking effects are that all this data, all this imagery overlaps and combines. And it just makes it more useful for everyone, the more that people contribute. Some examples from around the world of how larger groups have contributed to this publicly available data and imagery. In Lithuania, the transport authority there just donated over 25,000 kilometers of road imagery, mainly on highways. They use this internally for the government, and the intention is to make it available for all their citizens as well. In Australia, VicRoads, a transportation agency in the State of Victoria, takes an annual survey of over 22,000 kilometers of photos along their arterial roads. And each year they plan to continue uploading these onto Mapillary as well. Very importantly, this allows comparing photos over time, which is a feature we also offer on our website and are looking to put into more integrations. In the Netherlands, the city of Amsterdam captured their own street-level imagery of the entire city: over 800,000 high-definition 360° panorama photos. We have these available in the JOSM editor to use with mapping. And we were also able to pull data from these, which, again, we’re going to add into the OpenStreetMap iD editor to see what kind of objects you would want to map and where. Just this fall in Canada, the Bike Ottawa organization, who is here, did a great job. A very great show of community activism. They uploaded about a quarter of a million photos over about a one-month period. And much of this was targeting bicycle infrastructure. And it’s going to support ongoing projects in 2018 for making Ottawa more bicycle friendly, as well as having better data and maps that are useful for cyclists. And, again, this is all available on OpenStreetMap and will aid an upcoming mapathon that’s going to take these photos and transform them into quality map data.
Finally, in the U.S., Microsoft recently donated about 2 million images from their Streetside dataset. And these were in support of disaster recovery in Florida and Texas. And so we have extremely dense coverage now that’s, again, available specifically on OpenStreetMap in support of the users who are doing disaster recovery mapping. And this is all pre-disaster imagery. So many users are able to contribute post-disaster imagery and now see the change over time as far as what’s there, what’s gone, what’s changed. And in the future, we’ll also be able to see rebuilding and other important changes. So with Mapillary itself, we’re going to go over some more specific new upcoming features. The first I want to talk about is placement tools. What we’re doing here is we have actually taken every pixel of the images submitted and we attempt to match these up with geographic coordinates. So we’ve currently developed this tool that exists, allowing you to click in the photo. You place a marker. This marker appears in the photo on street lamps as you see here. And it also appears on the map where it should be, relative to the observer location in the photo. So this is something also upcoming in the iD editor. I specifically want to thank Bryan at Mapbox for past, present and future work with this and making it possible. And this is something that will save many people from an age-old problem. Going back to the days before Mapillary, people have been using Google Street View alongside GIS tools as a reference, but constantly looking back and forth between images and the map and trying to eyeball where to place things. This entirely erases that problem and allows you to work much more quickly, more precisely, and just without the stress level that that can bring. So our intelligent detections, as I mentioned: you can see these on our website now by turning on the AI detection setting. Here we’re in Portland, Oregon. And I’m specifically looking at crosswalks again here.
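The placement idea, where a marker clicked in the photo lands on the map relative to the observer, reduces to intersecting sight lines from known camera positions. Here is a flat-earth sketch of that geometry, assuming positions in local meters and compass bearings in degrees toward the same object; this is an illustration of the principle, not Mapillary’s actual algorithm:

```python
import math

def triangulate(p1, bearing1, p2, bearing2):
    """Intersect two bearing rays (degrees clockwise from north) cast
    from two camera positions given in local x/y meters."""
    d1 = (math.sin(math.radians(bearing1)), math.cos(math.radians(bearing1)))
    d2 = (math.sin(math.radians(bearing2)), math.cos(math.radians(bearing2)))
    # Solve p1 + t*d1 == p2 + s*d2 for t using Cramer's rule.
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(denom) < 1e-9:
        return None  # parallel sight lines: no unique intersection
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - ry * (-d2[0])) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two photos 10 m apart, both sighting the same street lamp.
x, y = triangulate((0.0, 0.0), 45.0, (10.0, 0.0), 315.0)
print(round(x, 2), round(y, 2))  # 5.0 5.0
```

With more than two observations the same idea becomes a least-squares problem, which is what makes dense, overlapping coverage so valuable.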
So we’re able to pull up this photograph. We’re able to see the color coding we’ve placed, outlining the crosswalk locations. So porting this into the iD editor in the future as well, combined with the placement tools from the previous slide, it will take you probably less than 5 seconds to add a single crosswalk onto OpenStreetMap. And most importantly, it is validated with geotagged photo evidence. So it’s very easy to know exactly what you’re mapping, as well as having very little struggle putting that on a map. We also have a new feature this month called tagging. You can tag an entire photo. This one we can tag as “Park.” Or, you can see, specifically in here I’m tagging the trash can. And I’m next able to tag the bridge here. And this works in multiple ways to improve everyone’s experience. One, we match these tags up with many OSM tags, so it makes it very easy for you to do this tagging on your own, just keeping in mind what you then want to add to OpenStreetMap. This also is stored in a database that forms training data for us, for our computer vision algorithms to better learn what it is that’s in the images. And eventually we’d like to allow you to put in custom tags that may support very specific activities and missions that you’re working on. Even if you’re tagging Boulder, as you see in this photo, or something else extremely specific to your region, it’s something we want to enable you to do, and then enable you to benefit from computer vision, receiving automatic detections and suggestions. And finally, just a quick look at what we’re doing behind the scenes. We form a point cloud from images by matching pixels. And here, in the longer term, we’re looking to pinpoint all objects in the photo on the map. So here you can see trash cans, you can see manholes, you can see the color coding that we’re doing on these 3D models.
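Matching photo tags up with OSM tags, as described here, can be thought of as a small lookup table. The mapping below is a hypothetical sketch, not Mapillary’s actual table, though the OSM values shown (`leisure=park`, `amenity=waste_basket`, and so on) are real OSM tags:

```python
# Illustrative mapping from photo tag names to OSM key/value pairs.
TAG_TO_OSM = {
    "park": {"leisure": "park"},
    "trash can": {"amenity": "waste_basket"},
    "bridge": {"man_made": "bridge"},
    "crosswalk": {"highway": "crossing"},
}

def to_osm_tags(photo_tags):
    """Translate a list of photo tags into OSM key/value pairs,
    skipping tags with no known equivalent."""
    osm = {}
    for tag in photo_tags:
        osm.update(TAG_TO_OSM.get(tag, {}))
    return osm

print(to_osm_tags(["park", "trash can"]))
# {'leisure': 'park', 'amenity': 'waste_basket'}
```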
And all of this ties back into just locating what’s in your images with very little effort from you, besides on-the-ground mapping, to support building a better map. Some other mentions here: we have a plugin coming for 3.0, so it will enable a lot of the same editing on that platform. We integrated with the OsmAnd Android application, so you can see point-of-interest images based on Mapillary being present there, as well as just see the Mapillary coverage in your area and use this to aid you when you’re out mapping, to already see where coverage exists and fill in the gaps. So overall the idea with Mapillary and OpenStreetMap is to promote a smarter method of mapping. What’s most important that we offer for the community isn’t just the photos and the applications to capture those, but the tools to turn it into something meaningful, as well as to allow communities to have on-the-ground efforts that actually support detailed mapping. We use machine learning, but this does not replace the on-the-ground efforts of the communities. It instead enables communities to go out collaboratively to take photos. It requires a community presence. And then share that for the community at large worldwide, to then form an even greater collaboration on adding detail to the map. And we also have an open source viewer. We call this MapillaryJS. It is a basic JavaScript library; it takes probably about 12 lines of code, I think, at the most basic, to insert that into any of your web projects. And this allows you to view your own images as well as those by others. Again, CC BY-SA. And we encourage people to make use of that and use this in your projects across the board. So overall, I want to thank our community for taking great images such as these you see up here. This is what’s made it all possible to grow at such a great rate.
And it makes it really possible for us to continue making tools that make your mapping easier, your life easier, as well as collaboration worldwide more tight-knit than ever before. So thank you.

[ Applause ]

We have four quick minutes for questions. Anyone have questions for Chris?

AUDIENCE: Howdy. So it seems a lot of the images that were contributed were roads. Are you looking for other imagery? If I mapped my backyard, would you appreciate that? Or walking in the woods? Or mostly detection on streets in order to contribute to OSM?

We want everything. The thing with Google Street View is it’s road-centric. We can get away from that. You can take it your own direction. Do what you want. I do trail mapping and enjoy mapping things in different areas. We have people that have mapped from boats. We encourage you to share it with others and show how unique you can get.

AUDIENCE: Are you guys using off-the-shelf algorithms? Or mostly in-house?

I believe everything is very much in-house as far as SLAM goes. We have a research team, mostly based in Austria, working every day with this. So a lot of work has gone into making it fit our own purposes.

AUDIENCE: Quick question about conflict resolution. The signs... well, there are several instances I can think of on the U.S. 36 bikeway where the signs point in the wrong direction. So if you follow the line, you know this way is Boulder and that way is Denver, yet the sign directional arrows are wrong. So the image doesn’t match reality. How do you deal with that?

A lot of this depends on user validation. We have images with the wrong or no compass angle, so users check their own or others’ imagery. We have validation tools we have just developed and released that allow you to see the detected signs that we have and confirm whether they’re accurate or not. So we’re encouraging opening this up for people to submit their feedback, and reintegrating that into a more intelligent solution.
AUDIENCE: So with the amount of pictures that people input every day on Mapillary, do you guys maybe think about a way of filtering, you know, maybe the last three months instead of just having the last five years there? And also, I saw that you guys were able to tag trash cans and everything. What about surface types, number of lanes and stuff on that road? Is that available as well? Or would that be something that you guys would do?

Sure. To your first question: a lot of the filtering, we just leave that up to the user and encourage integration in all ways. It’s easy in JavaScript maps with Mapbox or Leaflet. You can do that yourself if it fits a specific need you have. For our website, we generally leave it at the default setting of everything there, unless you specify differently with filters. As for surface types and lanes, we do have some tags for just very vague surface types. We recognize sand, we recognize earth, pavement. And you can see these on our website; it’s in the list of detections. We also recognize lane markings. So it’s a little more of a project in the future to intelligently turn that into lane markings indicating the number of lanes. It’s something where we can work very well together with satellite imagery, combining a top-down view with the ground level to confirm: yes, this is a road; yes, there are lane markings. And then we can start to indicate how many lanes are there.

All right. Thanks so much for that presentation. It’s a very useful tool. Thank you.

[ Applause ]

So Kevin’s going to tell us how to get out there safely. And I’ll just quickly say, if you’re tall, kind of go more toward the mountain side. All right. In case you missed my opening comment this morning, we’re doing our group photo right out on the terrace where the reception was last night. Everyone can come this way. We have a WorldView-3 DigitalGlobe satellite scheduled to take a picture at exactly 12:28 and 35 seconds.
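The client-side date filtering the answer leaves to integrators, keeping only imagery from roughly the last three months, could look like this sketch; the `captured_at` field name is an assumption for illustration:

```python
from datetime import datetime, timedelta

def recent_images(images, now, months=3):
    """Keep only images captured within roughly the last N months
    (approximated here as 30-day months)."""
    cutoff = now - timedelta(days=30 * months)
    return [img for img in images if img["captured_at"] >= cutoff]

images = [
    {"key": "a", "captured_at": datetime(2017, 10, 1)},
    {"key": "b", "captured_at": datetime(2015, 6, 15)},
]
fresh = recent_images(images, now=datetime(2017, 10, 21))
print([img["key"] for img in fresh])  # ['a']
```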
So we have 18 minutes and 30 seconds to get out there. The satellite doesn’t care how tall you are, and it will be over the mountain this way. So wave, smile. This is a super high-resolution satellite, the highest resolution in the world. And afterwards, Justin, our photographer, will arrange us from tall to short for the group photo. So thank you. Thanks.