
OpenStreetMap Quality Control – Transcription

Do I have my other two presenters in the room as well? Matt, Jiaqi? Okay, not yet. So we will go ahead and start in this room with our first presentation. And if I get the right video in, we'll be able to see it. So our first presenter is Rasagy Sharma from Mapbox, and he is going to share tools that he's been working on to help with the quality of OpenStreetMap. So, Rasagy, welcome. [ Applause ]

Cool. Thank you so much. Let's get started. Hi. Let me just get this up. Okay. Awesome. So, hi, my name is Rasagy. I work in the Mapbox data team as a designer, and one of the things I have been focusing on is designing OSMCha, something you might have heard thrown around yesterday a bit. So that's what I want to focus on today.

So let's quickly get started. We are talking about quality control, and it's always important to first get the context of what OSM is today. Just quick numbers: 4 million members, 4,400 of whom were active yesterday, which meant that out of 52 million changesets, around 32,000 were made yesterday. This is a big project. With a project at this scale, and so many changes every day, there's far more need to figure out the quality of changes. So moving forward, how do we ensure that high-quality data is always coming in from a crowdsourced project like this?

This is not a new topic. It's something that's been discussed over and over for a couple of years in the OSM community. Specifically, a bunch of folks from Mapbox have been talking about and sharing how they do validation: essentially looking at individual changesets and trying to figure out whether a changeset is good or not, and their different workflows. I don't want to repeat what they have been saying, but in case you're wondering how the data team works and what they have been doing, these are some of the other talks on OSM that my colleagues have given.

Being a designer, I want to focus on designing tools for OSM, and specifically on designing OSMCha, an OSM changeset analyzer. Today I want to focus on patterns in contributions that we feel need validation, essentially things that require another set of eyes from the OSM community to ensure that the data is good, and then talk about what we needed to design and what's next.

So let's get started. One of the most common things you would expect from a new user, someone new to the OSM community and the whole platform, is the question: is this a private map or a public map? That tends to happen quite often. Here is an example where somebody mapped a building and added a name like "this is my home" or "this is my school," et cetera. This is a very simple sort of error that somebody would make at the start. For changes like this, you want to figure out how to welcome new users to the community and make sure their first few edits are good. If you can get them going well in the first few edits, there's a higher chance of them doing well in the edits that follow.

Another error that might happen is not knowing the tag schema. There are tons of different ways you could tag different objects. There's a wiki that's exhaustive and helpful, but not everyone who is new to the community is aware of everything. In this case, this person mapped 27 post offices which aren't post offices. They couldn't figure out the right tags to add, so they tagged everything as a post office and then added names:
this is a grocery store, this is a supermarket, this is a gym, et cetera. That's a common sort of error somebody might make: I know there's something here, but I'm not quite sure what the right tags are, so I'm going to add some tags and see what comes of it. For cases like this, we need tools to examine individual features, and the individual tags that were changed or added on a feature, and figure out whether that's a good change or not.

And then there are some very, very rare intentional and harmful changes made on OSM. These could be someone spamming, or just doing something for fun, like graffiti, et cetera. Here is a typical example: someone went to this building and added this. If you have been following Pokémon Go, you know they use OSM. Some people thought: I could add things to OSM that will make more Pokémon spawn in my area. These are examples of changes that are harmful, not for the good. In cases like this, it's really important to have a repository, a shared body of knowledge, of the different harmful changes. If different communities can pull together different types of changes and we can share that knowledge, that would be a great base to build this sort of validation process on.

And here's another one. In this case you would notice that the change is in the geometry, and you could detect it by looking at the geometry in the changeset: the features are on top of one another, so there's a pretty high probability that someone is messing with it. And here is another example, where somebody decided to rename a road to say "chicken little was a good movie." That person is not making changes to the geometry; he or she is changing the metadata, the tag information. And these are roughly the two major ways in which we can look at a changeset and see whether what somebody has done is good or not: changes to the geometry, and changes to the tags.
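[Editor's note: to make that distinction concrete, here is a minimal Python sketch of how a validator might classify an edit along those two axes. It is an illustration of the idea only, not OSMCha's actual code; the dict-based feature representation and the example values are assumptions.]

```python
# Minimal sketch (not OSMCha's code): classify whether an edit to a feature
# touched its geometry, its tags, or both. Features are plain dicts here.

def classify_change(old, new):
    """Return the set of change kinds between two versions of a feature."""
    kinds = set()
    if old["geometry"] != new["geometry"]:
        kinds.add("geometry")
    # Tags added, removed, or modified all count as metadata changes.
    if old["tags"] != new["tags"]:
        kinds.add("tags")
    return kinds

def tag_diff(old_tags, new_tags):
    """List (key, old_value, new_value) for every tag that changed."""
    changed = []
    for key in sorted(set(old_tags) | set(new_tags)):
        before, after = old_tags.get(key), new_tags.get(key)
        if before != after:
            changed.append((key, before, after))
    return changed

if __name__ == "__main__":
    # Hypothetical example modeled on the road-rename case from the talk.
    old = {"geometry": [(-105.27, 40.0)],
           "tags": {"highway": "residential", "name": "Main St"}}
    new = {"geometry": [(-105.27, 40.0)],
           "tags": {"highway": "residential",
                    "name": "chicken little was a good movie"}}
    print(classify_change(old, new))           # {'tags'}
    print(tag_diff(old["tags"], new["tags"]))  # name changed, geometry intact
```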
So I'm going to quickly talk about how we at Mapbox are validating. We started with OSMCha, built in 2015, then made it a global version and kept improving it in collaboration. It started off with a basic set of filters: you would see the search results as a list, and you would see the details on a single page. Along the way, some of us spent time talking to the data team at Mapbox to get internal feedback.

One of the common things people wanted was a common, or favorite, set of filters: I regularly go and check the same sort of stuff, so why can't I just save it and come back to it again and again? Or: can I get notifications for a specific set of changes made in my area, or in a specific area? Can I get more context on the discussions that are actually happening? You don't want to go look at a changeset and figure out whether it's good or not when somebody has already replied on it, asking the user, "why have you done this?", et cetera. So: adding context and giving feedback on the changes. One thing OSMCha was already doing was using very simple rule-based detectors to automatically flag changesets. A changeset might be correctly flagged, which is fine, or incorrectly flagged: somebody made a possible import, which got flagged, but it's a good import. We wanted to add a manual feedback loop over there. And then other things: can it track the status and actions taken on different changesets through the whole process?

In case you're not aware, the Mapbox team has been focusing on improving OSM data quality. They do this throughout the day. But there are community members who do it over the weekend, or at community mapping parties. So we chatted with a few community members as well; some of them are actually here, so it's nice to see them. They gave us some insights: most community members focus on their own local region, they work within small communities, they use mailing lists to discuss specific changes, and they focus a lot on welcoming new users. So we wanted to figure out ways our tool could help support this validation process.

I'm going to skip through some of the sketches, but this is essentially how we got started: brainstorming, thinking about the different options. Should we show the changesets on a map? Should it be a list? Click here and here and a box comes up. Talk to me if you're obsessed with this kind of stuff; it's what I like to do. Overall the workflow became simple: filter, look for changesets, and evaluate some or all of them. You look at them on a map, give a thumbs up or thumbs down, and comment on the change.

So this is what the new OSMCha looks like. We have grouped the filters so that people who are new to the tool can easily get started. There are basic filters: on which date was the changeset added, in which area was it added, and were there any flags raised? In this case, as you can see, I am basically looking for new mappers: any mapper with fewer than five edits. Seeing those changesets makes them easier to highlight. There was also a cool feature added this year by Bryan on the iD team that allows users to request feedback. When you do that while saving a changeset in iD on OpenStreetMap, you can see in OSMCha which folks have asked for feedback. That could be another filter you use. And there are tons of similar flags that we have been growing and adding in osm-compare, another repository we contribute to. We figured out a Pokémon rule. Has somebody modified a tag in the wrong way? Has somebody edited a feature that's mature? That might be a flag. Or has someone requested review? I think there are more than 40 flags right now, and it's an open system, so other folks can contribute more flags and the community can share knowledge of what could be there.

Quickly moving forward, there are other things we heard about and wanted to add. One of the obvious ones is saving a filter. In this case you can see that I set the date to September, set the bounding box to Boulder, and I am looking at any changeset that was flagged, for any automatic reason suggesting a changeset needs validation. I can save this as a filter called "Boulder." It shows up in the user panel and I can share it with anybody else. If I'm doing a community party in Boulder and I want everyone to get together and start validating changesets, I can share a link and they can go through them together. And you get an RSS feed: instead of a feed of every possible change in your area or across the world, you can look at specific types of changes being made and get notified as soon as they happen. That's great for folks who are actively involved in their community and want to make sure the changes are really good.
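[Editor's note: as a rough sketch of what a saved filter like "Boulder" boils down to, the Python below represents one as a plain dict and applies it to changeset metadata. The field names, bounding box, and thresholds are assumptions for illustration, not OSMCha's real schema.]

```python
# Illustrative saved filter: date range + bounding box + flags + new mapper.
# All field names are hypothetical; OSMCha stores these server-side.

BOULDER_BBOX = (-105.30, 39.96, -105.18, 40.09)  # (min_lon, min_lat, max_lon, max_lat)

saved_filter = {
    "name": "Boulder",
    "date_from": "2017-09-01",
    "bbox": BOULDER_BBOX,
    "new_mapper_max_edits": 5,   # "new mapper" = fewer than five edits
    "must_be_flagged": True,     # at least one automatic detector fired
}

def in_bbox(point, bbox):
    lon, lat = point
    min_lon, min_lat, max_lon, max_lat = bbox
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

def matches(changeset, f):
    """Check one changeset's metadata against a saved filter (AND semantics)."""
    if changeset["date"] < f["date_from"]:          # ISO dates compare as strings
        return False
    if not in_bbox(changeset["centroid"], f["bbox"]):
        return False
    if f["must_be_flagged"] and not changeset["flags"]:
        return False
    return changeset["user_edit_count"] < f["new_mapper_max_edits"]

changesets = [
    {"id": 1, "date": "2017-09-10", "centroid": (-105.25, 40.01),
     "flags": ["possible_import"], "user_edit_count": 2},
    {"id": 2, "date": "2017-09-12", "centroid": (-105.25, 40.01),
     "flags": [], "user_edit_count": 300},
]
print([c["id"] for c in changesets if matches(c, saved_filter)])  # [1]
```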
The next thing we did was redesign the interface. We had the list of changesets on the left, and when you clicked, the details would show up on the right. Essentially, we surfaced information so you could gauge whether this is a changeset you want to see. Sometimes there are names that are familiar in the community: this person is doing great changes, I don't need to bother looking. Or sometimes there's a weird comment that puts you off and makes you wonder if somebody is doing something malicious. Things like that let you decide whether you want to look at it, and then drill down into the view.

We spent time figuring out the ideal way to visualize changes: how complex changesets like this could be shown so you could understand what actually happened here. Another thing I obsess over is making sure the interface is colorblind-friendly, because that's a common mistake a lot of people make: pairing red and green, like a traffic light, which is not always easy to differentiate. That's the stuff we have done on the visualization front.

Then we made sure you could click on each and every feature. Sometimes people make huge changesets; in this case, over 200 objects were added. How do you figure out which one you want to see? You want the ability to move around, select different objects, and see what the change was. In this case, somebody removed the natural=wood tag and added a leisure=park tag, and this might be another attempt to get, like, Pokémon spawns. And there are other things we wanted to look at: can we let a user focus on different information related to the changeset at one time? You can look at the user and see how many of that user's changesets have been flagged as good or bad by the community. You can look at a feature flagged by others. And you can look at the ongoing discussions happening on the changeset as well.

Moving on a bit, the most important thing we've added recently, I think, is the feedback loop. A detector detects, say from the size, that somebody made a possible import. Is it actually an import or not? That is something every human verifying it can answer with a thumbs up or thumbs down. And they can highlight whether this is a critically severe change that needs to be reverted or fixed immediately, or a small modification where, instead of going ahead and removing it, you want the original mapper himself or herself to fix the mistake and learn. That's a much better way for the community to grow. Similarly, you would want to report how many changesets were sent to the DWG and what came out of that. This data is useful for anyone later trying to improve these detectors, or to do something more interesting in the machine learning space; this is great human-generated test data.
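[Editor's note: the review record behind such a feedback loop could be as simple as the sketch below, using an in-memory list. The field names and severity levels mirror the thumbs up/down and critical-versus-minor distinction from the talk, but are otherwise made up.]

```python
# Sketch of a flag-review record, assuming a simple in-memory store.
# OSMCha's real data model will differ; this only illustrates the idea.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Severity(Enum):
    CRITICAL = "critical"   # needs to be reverted or fixed immediately
    MINOR = "minor"         # let the original mapper fix it and learn

@dataclass
class FlagReview:
    changeset_id: int
    flag: str                          # e.g. "possible_import"
    confirmed: bool                    # thumbs up (True) / thumbs down (False)
    severity: Optional[Severity] = None
    reviewer: str = ""

reviews = [
    FlagReview(52017, "possible_import", confirmed=False, reviewer="alice"),
    FlagReview(52018, "edited_mature_feature", confirmed=True,
               severity=Severity.CRITICAL, reviewer="bob"),
]

# Confirmed-vs-rejected counts per flag are exactly the human-generated
# labels that could later feed detector tuning or machine learning.
for flag in sorted({r.flag for r in reviews}):
    hits = [r for r in reviews if r.flag == flag]
    confirmed = sum(r.confirmed for r in hits)
    print(f"{flag}: {confirmed}/{len(hits)} confirmed")
```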
The last thing I want to focus on is the fact that designing tools is very easy if you sit in your own room and think you are the best person to solve the world's problems. But that's never the case. One of the things we realized is that a lot of folks in the data team use a whole set of tools built by the OSM community, and it's not right to replace them by building everything from scratch. We want to make sure our tool lets someone move across different community tools based on the level of evaluation they're doing. So at the changeset level, apart from looking at a changeset in JOSM or iD, you can open it in other changeset visualization tools, or in Level0. Or you can look at a specific user and figure out: is this a power user? Have they been mapping for over a decade, or has this person just started mapping recently, and where have they been mapping? Things like that come with HDYC, which is a great tool. And if you select a particular feature, you can also look at the history of that feature over time with another tool, Deep History. These integrations let you leverage the user and changeset tools already in the community ecosystem, so we don't have to recreate them. These are the ones we have, and we keep getting requests to add more, which we are up for. If you know of a tool that's interesting, or a tool you love to use and would love to see as part of this workflow, we would love to add it as well.

And in case this changeset seems familiar: this is actually this particular place, edited very recently to add this particular building. Stuff like this is easy to spot if you look at all the changes in this area near the University of Colorado; you can quickly find this change. It was flagged as a primary tag added to a mature feature, which is interesting. It's a mature and well-mapped area, and if somebody tries to modify it, you would want to get a flag. In this case there was no malicious intent, but it's a great example of what could happen with OSMCha.

I want to wrap this up with a bit of the internal feedback we received. The overall feedback was pretty positive. Folks became more efficient at distributing tasks: there are ten folks sitting throughout the day trying to validate the whole world, and instead of being completely lost, you can pick different detectors, split the work up, and go. You can keep track of severity, which is super important: if someone finds a change very harmful to OSM, they can flag it, and someone else can keep an eye out for critical changes. That makes cross-team validation, doing this validation process together, easy.

We also got some great external feedback. We have had users from different countries. Apart from India and Peru, which are Mapbox team members, it's great to see folks from the U.S., Brazil, Germany, the UK, and Russia using OSMCha. And you heard yesterday about different companies and projects trying to use the tool: Facebook, the Tasking Manager, et cetera. It's great to see that feedback.

While that's overall positive, validation is still not a solved problem. One of the major things we're trying to figure out is how to scale these efforts. If 30,000 changesets are added every day, you can't have someone look through all of them; maybe the work has to be distributed among more people and more communities. How can we specifically find the changesets we are interested in, the ones that can be harmful? And what can we learn from the changeset reviews? This is what some of us at Mapbox are trying to do, but we need more folks in machine learning doing much more analysis on changesets. The whole API is open: you can look through everything that's been manually tagged by all the verifying folks and know what each changeset was about.

I want to end by saying that the only way forward I see is to engage the OSM community to not just map, which is great, but to also validate the changes in the community. That would be the right way to move in this direction. So with that, thank you so much.
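[Editor's note: pulling that open review data for analysis could look roughly like this sketch. The endpoint URL and query parameters here are assumptions for illustration; check the OSMCha API documentation for the real ones before relying on this.]

```python
# Hedged sketch of fetching human-reviewed changesets for offline analysis.
# Endpoint and parameter names below are assumed, not verified.

import requests

OSMCHA_API = "https://osmcha.org/api/v1/changesets/"  # assumed endpoint

def fetch_reviewed_changesets(page_size=50):
    """Fetch one page of changesets that reviewers have already checked."""
    resp = requests.get(
        OSMCHA_API,
        params={"checked": "true", "page_size": page_size},  # assumed params
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Each record pairs the automatic flags with a human good/bad verdict:
# the kind of labelled data the talk suggests feeding into machine learning.
```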
I mean, we could take some questions. Yep. Thank you. [ Applause ]

While we take some questions, I want to quickly mention a few things. Tomorrow we have a validation jam which is open for everyone. If you're new to validation, come. If you have been doing validation, still come and let us know what you have been doing and what we can learn. It's at 9 p.m. You can tweet at me to share feedback on OSMCha or the different repositories. And a big shoutout to the contributors who are keeping this project really active. Thank you. Questions?

AUDIENCE MEMBER: You said that for this building it was a primary tag added to a mature feature.

Yep.

AUDIENCE: How is a feature determined to be mature, or a tag primary?

These are simple rules that we as a data team have come up with; they aren't standard ways of saying what is mature and what is not. But in a sense, you can go through the whole history of a particular feature and figure out when the feature was first created, and use that as a proxy for the feature having been there a really long time, either untouched, or reviewed and improved by the whole community. So if someone goes ahead and changes that feature completely, that might be worrying. It might be a good thing, it might be a bad thing, but we want to at least get to the level where we can look at some of these features. One of the things that happened recently was that a major city in Europe had its name changed, and that was immediately reverted. That's great; the community is looking at this stuff. But if somebody wants to make sure this doesn't happen in less populated areas, some sort of flag would really help to figure it out. In this case, let me bring this up again: what happened is that the field was not really edited; it was the additional building added next to the field. You can see, in this light bluish color, where the building was added. That was flagged.

AUDIENCE: Is there a way to query out mature feature changes?

Query out in which sense?

AUDIENCE: Like a change that's been made to a major feature. A mature feature, sorry.

So OSMCha looks at changesets. You can look at all changesets where somebody edited a mature feature. It's as simple as going to the filters and selecting "change to a mature feature," or setting a bounding box if you're interested in a specific area. That gets you going. And you can add multiple things: not only a mature feature, but also a change by a new mapper, and that becomes more interesting and useful. You can combine filters with ands and ors as well.

AUDIENCE: Very cool. Thank you.

Thank you.
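[Editor's note: to make the maturity heuristic just described concrete, here is a sketch of an age-plus-reviewers proxy. The thresholds and the history format are assumptions, not the actual rule used in osm-compare.]

```python
# Sketch: "mature" as a proxy for age plus community review. A feature
# created long ago and touched by several mappers deserves a flag when
# somebody changes it sweepingly.

from datetime import datetime, timezone

def is_mature(history, min_age_days=730, min_editors=2):
    """history: chronological list of versions with 'timestamp' and 'user'."""
    created = datetime.fromisoformat(history[0]["timestamp"])
    age_days = (datetime.now(timezone.utc) - created).days
    editors = {version["user"] for version in history}
    return age_days >= min_age_days and len(editors) >= min_editors

# Hypothetical feature history: created in 2010, later improved by another mapper.
history = [
    {"timestamp": "2010-05-01T12:00:00+00:00", "user": "mapper_a"},
    {"timestamp": "2014-03-11T09:30:00+00:00", "user": "mapper_b"},
]
print(is_mature(history))  # True: old, and reviewed by more than one mapper
```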
AUDIENCE: I want to thank you for this great tool.

Thank you.

AUDIENCE: I have used it a fair amount. I do quite a bit of HOT editing, and in one country I have over 1,800 edits with an average of 1,200 features per edit. So virtually every edit I do shows up as a possible import. I have a separate import account and I have never imported using my personal account, so this volume of edits that I put in, I think, just adds noise in there. Where I have been mostly editing there is no local community; it was just me. So is there some sort of way to filter so that we really do get at the possible problems, rather than just a mechanical "this is so many features, therefore it's a possible import"?

Yes. I think possible import is the most debated filter. We keep getting feedback, and we are still not sure what the smartest way would be to filter out people who have been doing imports for years and have been completely good, versus somebody who might be doing it for the first time, for example. One of the things you can do, if you are validating these changesets, is that you start noticing a few folks like you, and you can add them to a whitelist or filter them out so you don't get their changes. But I would be totally interested to know if you have a better idea of how to decide what an import is, and a good import versus a bad import, from just the changeset metadata. In your case, you are adding hundreds of objects that are fairly well mapped; somebody else might add a thousand objects that are ridiculously badly mapped, and we have struggled to figure out a way to fix this. These flags are just flags: they're not saying a changeset is bad, they're helping shortlist the number of changesets that somebody should try and validate. And you start getting a knack of: okay, it seems this person is doing good work, no need to ask them about anything. Feel free to chat with me.

Time for one more really quick question in the back.

AUDIENCE: So, OSM definitely has a standard of mapping, but as you go to different countries, you don't necessarily have people who follow that standard. So you could have local users reverting changesets or saying: we have a way that you should map here, therefore we are targeting your changesets and flagging them as bad changesets. So is there a way to check that you're not just being targeted? Because you want to connect a gas station to a road, but they don't think that is right?

Right. I think that's a typical example of how the OSM community is, and I think that's how it should remain: it's a friendly community, and if you see something happening that you feel is not correct, you can discuss it in the local groups. But the advantage, even if OSMCha is used in a wrong way, is that everything will still be open. In that case you can flag it, saying: look, this person has been actively reverting my changesets and marking them as harmful where that is not the case; how do I get more people involved? I don't think we are yet at a stage where you need to be a certain type of user to validate or use a tool like OSMCha, and I feel the OSM community doesn't want to create more barriers or different levels. But it would be interesting to see if that starts happening. Maybe we wait, I don't know; maybe that's not the right answer. Wait until people start using validation tools a lot more often, and figure out a way around this. Someone new to the community, or someone with malicious intent, can go around reviewing changesets, and we can't prevent that. But you can filter for a specific user who is reviewing changesets, you can call out a particular user in the community who is going around saying a changeset is harmful, and you can talk about it on the OSM mailing list.

All right. Great. Thank you for that great tool. [ Applause ]