
How to Grow the Map: Watching the Map Grow – Transcription

Our next speaker is Jennings Anderson, and he is from Mapbox. And he changed the title of his talk on me. I believe it is now called “How to Grow the Map: Watching the Map Grow.” There it is. Jennings, thank you.

Hello, I’m Jennings. I’m with the University of Colorado Boulder and a research fellow at Mapbox, and today I’m going to talk about work that’s done collaboratively between the two. So let’s jump right in. Let’s start with this rendering of the global road network in OSM. What can this tell us about the map? We know broadly that we have global coverage. The differences in colors here show us that we have very different levels of consistency and completeness across the globe. These are two very common measures of map quality.

So let’s start an assessment. Is this map complete? Or is this map complete? The mapper in us all knows this is a pretty loaded, problematic term that requires much further definition. How about good quality? Is the map of good quality? That’s open to even more interpretations than completeness. Quality can always improve, and assessing map quality can take many forms, as we just saw. So I propose a different question here, a reframing: how has the map grown and evolved over time? Completeness and quality are contained somewhere inside this complex relationship, so let’s try to explore that.

Quality assessment is complicated. What’s special about OSM, however, is that it’s not a single static geospatial dataset. It’s constantly evolving and growing, and with time comes improvement in quality at a large scale. This is especially fun with OSM data because it’s kind of the opposite of other geospatial datasets, which grow old and stale with time and need to be updated separately. This rendering shows what we see, and what we expect to see, in the community as the map grows. So to this end, we’re performing an intrinsic quality analysis of OSM data with respect to growth over time.
We do not rely on external datasets; we assess OSM data in relation to itself and with regard to this growth over time. These are some of the attributes of the map we’ll be exploring. Contributor information here refers to the specific user ID associated with an edit to an object on the map. We can then count the users, see when users are active, and even identify specific types of edits per user. For example, did they change the object’s name? Did they add a speed limit? For map objects, we ask questions about their type, count, and relative density.

So let’s dive right into the guts here. These assessments are based on vector tiles. This allows us to use the tile-reduce framework for efficient parallel processing of the map data. It’s an open-source framework that’s based on vector tiles. First we turn OSM data into vector tiles, creating a global tileset at zoom level 12 of the entire OSM database. This is the same process that generates the daily quality assurance tiles. These sets contain about 2.5 million tiles for the world. We create a tileset with a snapshot every three months over 11 years to achieve a quarterly resolution of the history.

Here’s Boulder at this quarterly resolution. The top row shows the rapid growth during the U.S. TIGER import, and this bottom row shows growth in the last five years as buildings have been added to the map, filling it in. Each map object in these tiles contains the following attributes. These attributes give us an idea of when it was last edited, by whom, and most importantly, tell us what it is. So this particular object is that field there: Folsom Field.

One thing that quarterly snapshots cannot tell us, however, is how specific objects evolve over time. We know Folsom Field has been edited three times, but for that we need a historical vector tileset. So we create a new tileset that has the complete tag history for map objects with a version greater than one.
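Conceptually, the per-version tag history described here boils down to diffing the tag sets of consecutive versions of an object. The following is a hypothetical minimal sketch, not the project’s actual tooling; the function name `tag_diff` and the example data are invented for illustration:

```python
# Hypothetical sketch: computing the per-version tag changes that a
# history attribute could store (added / deleted / modified tags).
def tag_diff(old_tags, new_tags):
    """Compare the tag dicts of two consecutive versions of an object."""
    added    = {k: v for k, v in new_tags.items() if k not in old_tags}
    deleted  = {k: v for k, v in old_tags.items() if k not in new_tags}
    modified = {k: (old_tags[k], v) for k, v in new_tags.items()
                if k in old_tags and old_tags[k] != v}
    return {"added": added, "deleted": deleted, "modified": modified}

# Folsom Field, version 1 -> version 2: a name tag is added.
v1 = {"leisure": "stadium", "sport": "american_football"}
v2 = {"leisure": "stadium", "sport": "american_football",
      "name": "Folsom Field"}
print(tag_diff(v1, v2))
# → {'added': {'name': 'Folsom Field'}, 'deleted': {}, 'modified': {}}
```

Running the diff in the other direction would report the tag as deleted, which is how a history can record removals such as the `created_by` cleanup mentioned below.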
There’s an additional attribute, history, that tracks changes to tags per version. Here’s the history of the stadium: added to the map in 2013 with two tags, sport and leisure. A couple of months later a different user came in and added a new tag for the name, Folsom Field. And last year another tag went on to show it’s an actual grass field.

So we developed a scalable method to create these histories and generate a tag history tileset. It’s currently running for all of North America, representing about 20% of the full-history OSM database, and it’s able to run in a few hours on a fairly modest server, so we’re able to scale it. This process uses the Osmium library and RocksDB, Facebook’s key-value store, to create an index, and it reconstructs the histories from that index while we render the tileset. I’d be happy to talk about this after the session; we have put a lot of thought into it and I know a lot of others in here share this interest.

Here’s another example of a history, this time a tennis court in San Francisco. In the middle, in April 2010, the tag describing the number of courts was added, and the created_by=Potlatch tag was deleted from the object itself. So these historical tilesets track new, deleted, and modified tags across all previous versions of objects.

Now we have multiple tilesets to feed our analysis: object histories and quarterly snapshots. We first aggregate the data at zoom 15. This means roughly one-square-kilometer resolution, represented by point geometries as you can see, and rendered at zoom level 12, which gives 64 individual features per tile. This is done to optimize the process for tile-reduce.

So now for the analysis pipeline. Using tile-reduce, we specify multiple tilesets as the input. Currently we have the object histories and the quarterly snapshots. We could scale this to incorporate more sources, such as population data for normalization, or an authoritative source for a comparative quality assessment. And for each tile, we calculate various attributes.
Such as: kilometers of named highway, number of contributors, and the distribution of work among those contributors within these tiles. And this is where we could normalize by population or compare to reference data if we incorporated those datasets. For each tile that’s processed, results are returned at the zoom 15 resolution and continually aggregated down to zoom level 8. A lot goes into this process to make it memory efficient; that’s another detail I’ll be happy to get into offline. The output is a series of GeoJSON files at the various zoom levels, which are finally fed back into Tippecanoe to build the tileset of statistics for the given quarter.

We repeat this process for every quarter since 2007, generating tilesets describing every edit in the quarter and the state of the map at the end of the quarter. This is imperative to tell the complete story of how the map has come to be. These are not specific objects, but rather geometries that represent the tile, organized by zoom level.

I want to mention here that we chose this route with vector tiles for a couple of reasons. They’re single files, so we’re not maintaining a database and tracking that. And at any point in this pipeline we can easily render the data, visualize it, and have an idea of what we’re looking at, seeing the data in spatial context. So now we use Mapbox GL to render these tilesets in the browser to do visual analysis. This enables us to both style and query these data by the calculated attributes.

So here’s a visualization of where new kilometers of highway were added to the map, as opposed to edits to kilometers of existing highway; this is just creation of highways. This is the fourth quarter of 2007, which was the bulk of the TIGER import. The small graph on the right-hand side shows the average of each rendered tile over each of the quarters; we’re looking at the spike here. Referring to this graph, we know this quarter was the busiest.
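The roll-up from zoom 15 to zoom 8 can be pictured as repeatedly summing each tile’s statistics into its parent tile, where tile (z, x, y) has parent (z−1, x//2, y//2). This is a hypothetical sketch of that idea, not the actual tile-reduce pipeline; the function name, keys, and numbers are made up for illustration:

```python
from collections import defaultdict

# Hypothetical sketch of the aggregation step: per-tile statistics at
# zoom 15 are summed into parent tiles, one level at a time, down to zoom 8.
def aggregate_down(stats, min_zoom=8):
    """stats: {(z, x, y): {metric: value, ...}} at a single zoom level.
    Returns stats for that level plus every parent level down to min_zoom."""
    out = dict(stats)
    level = stats
    z = next(iter(stats))[0]          # zoom level of the input tiles
    while z > min_zoom:
        parents = defaultdict(lambda: defaultdict(float))
        for (tz, x, y), s in level.items():
            for key, val in s.items():
                # parent tile coordinates: halve x and y, drop one zoom level
                parents[(tz - 1, x // 2, y // 2)][key] += val
        level = {tile: dict(s) for tile, s in parents.items()}
        out.update(level)
        z -= 1
    return out

# Two neighboring zoom 15 tiles sharing one parent at zoom 14:
z15 = {(15, 10000, 12000): {"km_new_road": 3.2},
       (15, 10001, 12000): {"km_new_road": 1.8}}
agg = aggregate_down(z15)
print(agg[(14, 5000, 6000)])  # → {'km_new_road': 5.0}
```

Summing works for additive metrics like kilometers of road; counts of unique users would need set unions rather than sums to avoid double counting, which is one reason the real pipeline needs the memory-efficiency care mentioned above.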
A few states were not added during this quarter. This rendering reiterates the well-known story of the TIGER import, and we can use our familiarity with that story to validate this approach; now we know it’s working. Zooming in and out, we see the different levels of aggregation all the way down to the zoom 15 tile. Each of the features is packed with statistics about the individual editing activity for the quarter, such as kilometers of new road, as well as the broader statistics of the state of the map at the end of the quarter.

If we now change the map to instead show the number of active users per quarter, we should not be surprised that during the final quarter of 2007 there were very few users, mostly one or two. But the bar graph on the right shows the number of users has grown over time, and we can see that in this room as well. We can then compare over time to see the specific differences across the map. Here we see the editing community’s growth and their relative locations over the last five years. This could be greatly improved by normalizing these results by population, something we’re currently working on. But in general we can see that we have growth.

If we instead just count all map edits up to the last quarter, we see decent national coverage of the continental U.S. And on the right, it’s highlighting the edits and mass imports with each of the spikes. Just remember, these are all calculated in the browser based on zoom level 8 aggregations that were originally calculated at zoom 15. And if we zoom in, we can access all that zoom 15 data.

Now for the edits, let’s look specifically at buildings. Looking at the number of new buildings added to the map, as calculated from the full history, the bar graph on the right shows the first significant spike in the U.S. map to be 2013, with relatively sustained activity since. So we render the map for that quarter of 2013 and we see that dark spot.
And we zoom in and find this was the Massachusetts import. Recalculating for this region, we see that most of the new building activity was here in 2013, and we’ve seen a new resurgence in 2017. So now if we instead focus on the last two years of just new buildings, we can see a few major building imports taking place. Notably, the L.A. building import stands out dramatically. There. Zooming in and then switching to building density, we can do a comparison at that zoom 15 resolution for the first half of 2016, the time of that L.A. building import. Querying over time, we then see what we would expect: dramatic growth in L.A. building density in the last two years.

So let’s switch gears now and look a little more closely at users. One day I’m going to have to stop using this example, but this particular dataset and the familiarity of the story make this example such a good case study for these types of analyses. Here we’re comparing the number of users active per quarter in Haiti in the last quarter of 2009 and the first quarter of 2010, when the earthquake struck. And here is the resulting improvement in building density over that time.

Zooming back out, rather than per quarter, let’s look at the total number of unique contributors to date in 2012 compared to the total number of unique contributors to date as of last quarter. I like this one especially because 2012 does not look too bad; there are a lot of unique contributors across the U.S. to date. And then 2017 just blows that out of the water.

So what’s next? Where do we go with this? Two things here: a gold standard tileset and historical geometries. So far, we have shown relative growth and computed specific completeness attributes over time. We can show relative differences between regions and identify areas that need to be improved, or show areas that have progressed faster than others.
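The two contributor views just described, active users per quarter versus unique contributors to date, differ only in whether quarterly user-ID sets are counted individually or accumulated as a running union. A hypothetical sketch with made-up data:

```python
from itertools import accumulate

# Hypothetical sketch: user IDs seen editing within a tile, per quarter.
# The names and data are invented for illustration.
quarterly_editors = [
    {"alice", "bob"},     # quarter 1
    {"bob", "carol"},     # quarter 2 (bob is active again)
    {"dave"},             # quarter 3
]

# Active users per quarter: size of each quarter's set.
active_per_quarter = [len(q) for q in quarterly_editors]

# Unique contributors to date: running union of the sets.
to_date = list(accumulate(quarterly_editors, lambda a, b: a | b))
unique_to_date = [len(s) for s in to_date]

print(active_per_quarter)  # → [2, 2, 1]
print(unique_to_date)      # → [2, 3, 4]
```

The running union is why the to-date view can only grow, which matches the 2012-versus-2017 comparison in the talk.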
The next step is to qualify what these differences might be. To do that, we’re developing a gold standard dataset: tiles across the map assessed by expert users for data comparison. With it we can perform better feature extraction and learn which characteristics correspond to completion over time. Yes, I am hinting at machine learning. For each zoom 15 tile like the one shown here, the mapper assesses the following dimensions on a five-point scale.

It’s assessed by experts, and here’s an example of building coverage. On the left, two tiles rated as five stars with regard to coverage. Building density is obviously different between the two given their urban and rural landscapes, and therefore building density is probably going to be a bad indicator of coverage on its own. This is something we want to understand more about in this complex relationship. The two tiles on the right are rated at two stars because we can clearly see buildings that are not mapped.

In sum, there is no single attribute that can predict completeness or quality for the map. We see how the balance between attributes changes between regions. The understatement of the day is that the relationship between these attributes and map quality is complex. The gold standard set takes us one step toward developing that relationship. More specifically, this supervised dataset will help us train machine classifiers to make these assessments.

And historical geometries. Here are version-one buildings added to the map in Nepal following the 2015 earthquake. Here’s the same map ten days later. Can you see the change? These buildings are all still version one in the database. Let’s look closer. Ten days before, keep an eye on these buildings specifically. Ten days later, they’re all square. Still version one, but all square. So the geometries were changed because the nodes were moved.
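Because moving a way’s nodes does not bump the way’s version, a geometry-only edit like this can only be caught by comparing node coordinates of the same version across snapshots. A hypothetical sketch of that check (the function name, tolerance, and coordinates are invented for illustration):

```python
# Hypothetical sketch: detect a geometry-only edit by comparing the node
# coordinates of the same object version at two points in time.
def geometry_changed(coords_before, coords_after, tolerance=1e-7):
    """coords_*: list of (lon, lat) tuples for the way's nodes, in order."""
    if len(coords_before) != len(coords_after):
        return True               # nodes were added or removed
    return any(abs(ax - bx) > tolerance or abs(ay - by) > tolerance
               for (bx, by), (ax, ay) in zip(coords_before, coords_after))

# A rough building outline, later "squared up" by another mapper
# (made-up coordinates near Kathmandu):
rough   = [(85.3240, 27.7172), (85.3241, 27.7173),
           (85.3242, 27.7171), (85.3240, 27.7172)]
squared = [(85.3240, 27.7172), (85.3241, 27.7172),
           (85.3241, 27.7171), (85.3240, 27.7172)]
print(geometry_changed(rough, squared))  # → True
```

The same coordinate comparison is what lets these edits be surfaced even though the building’s version number never changes.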
Another user came in and squared up all these buildings. This was something we saw happen in Nepal, and it created some frustration among the more experienced community that the new users were not creating square buildings. Our tools have to work to track this change in geometry, because it doesn’t increment the version number of the building; rather, it just moves the nodes around. We can look at these changes and create tilesets to explore along this dimension as well. We’re currently building on that. Thank you. I’ll take hopefully one or two questions. And here’s a screen capture of interacting with this full-history geometry tileset that we’re working with.

[ Applause ]

We have time for five questions.

AUDIENCE: Great. How do we distinguish between buildings being added to the map and buildings being added to the world? In the sense of capturing the difference between something that existed but wasn’t mapped before, versus something not being there previously and now being created there?

That’s a really good question. One piece that would be neat to compare there is whether this is being traced from satellite imagery, and seeing when that satellite imagery was made available. Boy, that is a good question, and the answer is going to depend entirely on region. I’d love to talk more about that. I think there’s a big focus here because there’s so much of the world that exists but is not mapped yet, and we have been focused a lot on just getting that data onto the map. But then I think there’s something to be said about the turnover in tag histories in urban areas especially, where we live in an evolving world and buildings change, and people are editing to reflect that. That’s something we can learn from the multi-edited objects. Temporality becomes the big quality indicator there: well, this had the wrong name, but it was super-high quality last week when it had the right name. Now the name is wrong; that’s stale data. That’s complex.
That’s kind of fun.

AUDIENCE: Hi. Can you use machine learning to help quality control for things like building footprint identification? So, for example, most buildings would be expected to be at right angles, 90 degrees, not angles greater or less than that. And if so, can you at least flag them and have your developers go in and find where those abnormalities are?

There could be tools that specifically look for non-square buildings. I’m not familiar with anyone writing one yet, but I think that could definitely be done, or it’s something that could be incorporated into tools to flag some of these geometries. There’s not been a lot of work there, because adding the geometry dimension becomes complex, and there’s so much in the tagging that needs to be resolved first. But I think that, yeah, the more we can compare on this geometry dimension, the better our results are going to be.

Time for one more.

AUDIENCE: Are the tools you’ve shown available publicly?

Yes. I don’t have the URL up right here; it’s a soft launch and we’re working on a couple of bugs. But yes, all the tilesets for North America that were rendered just this last week are publicly available, and I’ll tweet the link out later. I need to work out a bug or two that I identified, unfortunately, last night. But then I’m happy to share that and have people play with it. Yeah.

All right. Let’s give Jennings another round. And also a round for helping put this conference together, because he did a lot of work for that.

[ Applause ]