Peer-to-peer mapping for disconnected environments – Transcription
Hello, everybody. Hi. Who is having fun? All right. Okay. We have an awesome panel here. And a quick back story: a lot of the organizers, like Martine, were worried, but luckily we have two awesome panel moderators today. [Captioner attempting to reconnect audio] So welcome our panelists. [ Applause ] The way that we've structured this, we would like to have a somewhat open-ended discussion. We only have 30 minutes. The questions that you see on the slides behind me are the immediate questions that popped into our minds when we started putting this panel together. And in fact there's one very important one missing at the very top: what is an autonomous vehicle, and how does it operate? So I would like to get started on discussing what exactly AV tech is. Ethan, you work with Carmera; you guys are deep down in the trenches. Would you like to discuss that?

Sure. You can imagine that the autonomous vehicle ecosystem is trying to replicate what you do as a driver of a car. You need lidar and cameras around the vehicle, and a nervous system back to the brain. That includes the Robot Operating System, ROS, which you may have heard of. And a core brain making decisions about what it's interpreting in real time, moving the tires, turning the wheels, and making the blinkers go. And then you need some sort of memory, just like you have a memory to know where the vehicle is going. That memory is where map data comes in. And it gets used for two things. One is localization: being able to very accurately understand where the vehicle is in 3D space, down to a sub-10-centimeter level of accuracy. The other is path planning: where does the vehicle need to go? These need to be registered to one another so they're accurate together, as opposed to localization from one source and path plans from a different source.

Okay. Thank you.
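The two map uses described above, localization and path planning registered to the same frame, can be sketched roughly like this. This is a toy illustration, not any vendor's actual stack; the class, function names, and numbers are all invented for the example.

```python
# Toy sketch (not any vendor's real stack): one map tile holds both the
# localization landmarks and the lane geometry in the SAME coordinate
# frame, so the pose estimate and the planned path stay registered.

class MapTile:
    def __init__(self, landmarks, lane_centerline):
        self.landmarks = landmarks              # [(x, y)] of signs, poles, etc.
        self.lane_centerline = lane_centerline  # [(x, y)] path-planning waypoints

def localize(observed, tile):
    """Average offset between landmarks as observed (in a rough GPS frame)
    and the same landmarks in the map: a correction to apply to the pose."""
    n = len(observed)
    dx = sum(m[0] - o[0] for o, m in zip(observed, tile.landmarks)) / n
    dy = sum(m[1] - o[1] for o, m in zip(observed, tile.landmarks)) / n
    return dx, dy

tile = MapTile(landmarks=[(0.0, 5.0), (10.0, 5.0)],
               lane_centerline=[(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)])
# Sensors saw both landmarks shifted by (+0.2, -0.1) due to GPS error,
# so the pose correction comes out as (-0.2, +0.1):
correction = localize([(0.2, 4.9), (10.2, 4.9)], tile)
```

Because the centerline lives in the same frame as the landmarks, applying `correction` to the pose keeps the vehicle registered to the path it is following, which is the point the panelist makes about using one source for both.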
That pretty much leads us to the next question, which is: what do autonomous vehicles need from a map platform? I'm going to put some talking points up here. These are mostly for you to read and think about, and potentially come up with questions. We'll take questions from the audience, maybe one or two at the end of each of these big slides, so the audience can participate as we go. So the next step is to really discuss what autonomous vehicles need from a map platform. Philip?

Yes, absolutely. I think there are a lot of different schools of thought on that. But basically, as has been covered before: why do you use a map in an autonomous vehicle at all? An autonomous vehicle has all the sensors to orient itself in real time. As a human being, you can orient yourself without a map. But the reality is that most autonomous vehicles right now have sensors with a lot of loss. You hear about the Tesla crash on Autopilot: it interpreted the truck crossing the street as a sign over the road. And the idea is that you use a map to cheat, because you know what to expect. When you have an expectation of reality, when you know that at that point there is a crossing, then if you see something, you know it might be a truck and not a sign. If you know you have recorded cars going on a certain road and there's a panel above you, then most likely there's supposed to be a panel there and it's not a truck. So you use the map to orient yourself, and to know what other objects to expect, so that you know you can go safely [audio cutting out] and then basically whenever [Captioner attempting to get audio back] there are also things that are important to humans. Street names are important to humans, and things like that. Autonomous vehicles don't care about that very much. But there are other things that are very important to them: the whole localization question, the accuracy, and some of those things.
You know, one of the things we get into: many people, when we talk about mapping, think that autonomous vehicle mapping is a linear progression from what the community thinks of as a map. I don't know if it's linear or discontinuous. It's a very different thing. And I think that's one of the primary things these guys have been saying. Okay. Yeah. Do you have any additional points you want to discuss here?

In terms of the living map, one of the other things to think about is construction events. It's good enough for you as a human to know there's a construction event on the road; with your eyes, you can figure out where to go. An autonomous vehicle driving down the road can get there, have no idea how to proceed, and need a person to take over. So you need a living map that's able to update in minutes, not years or months: detect that construction event, understand what the new path around it is, and understand the width of that path, so if you're a big autonomous semi, you know whether you can fit or not. It's important. It's not good enough to say we're going to keep this fresh on a daily or hourly basis. You need to be able to detect the event, understand how the underlying data changes, and get that change out to an autonomous vehicle in minutes so that the car can actually continue its trip.

Just a question for the panel. Beyond all the things about driving down a block and avoiding things, there is also the point of getting from point A to point B. At some point that navigation doesn't go away. I'd be interested in your thoughts, but that seems like almost a parallel system. There's one part which is: I'm going from A to B, turn on that street, go on that street. And there's another part that guides the path. Do you see those as parallel systems?

I definitely agree you will have the traditional navigation.
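The living-map point above, detect a construction event, publish the detour geometry, and let a wide vehicle decide in advance whether it fits, can be made concrete with a small sketch. This is purely illustrative: the field names, the 0.5 m clearance margin, and the 15-minute staleness cutoff are invented, not from the panel.

```python
from dataclasses import dataclass

# Illustrative sketch of a "living map" update for a construction event:
# the update carries the remaining drivable width and how fresh it is.

@dataclass
class SegmentUpdate:
    segment_id: str
    open_width_m: float      # drivable width remaining around the work zone
    detected_at_min: float   # minutes since the event was detected

def can_proceed(update, vehicle_width_m, max_staleness_min=15.0):
    """Trust the update only if it is fresh (minutes, not months) and the
    remaining width fits the vehicle plus an assumed 0.5 m safety margin."""
    fresh = update.detected_at_min <= max_staleness_min
    fits = update.open_width_m >= vehicle_width_m + 0.5
    return fresh and fits

u = SegmentUpdate("seg-42", open_width_m=3.1, detected_at_min=4.0)
car_ok = can_proceed(u, vehicle_width_m=2.0)   # passenger car fits
semi_ok = can_proceed(u, vehicle_width_m=2.9)  # wide semi does not fit
```

The point of the sketch is that the decision depends on both geometry and latency: if `detected_at_min` were hours instead of minutes, the vehicle could not trust the detour at all.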
Because it's the same as before you call a ride today. If you think about an autonomous car, it's like an Uber without a dude in front. You enter the route, enter the destination, see how much it costs. This is traditional. Before you call the autonomous vehicle, you have a traditional routing and navigation experience: show where the car is going and how long it's going to take you. Even with an autonomous vehicle, that's not going away. That's one point. The second point is that humans don't trust autonomous vehicles. Even if it works perfectly, has a perfect 3D map, localizes you, and handles construction sites. Whenever we've studied that (we have an autonomous driving license in California), when we take people in there, they get freaked out. They have no idea what the machine is thinking. So you want to visualize what the machine is thinking. You want it to say: I'm going to turn right at the next street. I'm seeing the two cars beside you. There's a construction site in front, so I'm braking, not driving you into a wall. For a lot of that you need a traditional map. The car doesn't care about the street name it's turning on; the person in the car cares about the street name. The car has to translate what it understands into human-understandable things, even if the machine wouldn't need that for its own operation. You need the human-level map for an autonomous car to interact, because there's going to be a human in the car even if they're not driving.

I also like to think about how you need a graph of graphs. Imagine you're trying to go from here to New York City. Trying to do that [audio cutting out] that level of computation at lane-level accuracy across the entire [audio cut out] but I'm saying it's a different kind of map. The map in the car doesn't need to have things like street names. It doesn't need to have those kinds of things. But it needs to have the ability to localize itself, and to find a path that's appropriate.
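The "graph of graphs" idea, route long distances on a coarse road graph and only expand lane-level detail for the segment being driven, can be sketched as two levels. This is a hedged toy: real systems do hierarchical routing over enormous graphs, and the cities and lane IDs below are made up.

```python
# Toy two-level "graph of graphs": a coarse city-level road graph for the
# long-range plan, plus lane-level sub-graphs loaded only where needed.

coarse = {"SF": ["Sacramento"], "Sacramento": ["Denver"], "Denver": ["NYC"], "NYC": []}
lane_detail = {  # lane-level detail, only for the segment currently driven
    ("SF", "Sacramento"): ["lane-1", "lane-2"],
}

def coarse_route(graph, start, goal, path=None):
    """Depth-first search for a city-level route (toy stand-in for a real
    shortest-path algorithm)."""
    path = (path or []) + [start]
    if start == goal:
        return path
    for nxt in graph.get(start, []):
        found = coarse_route(graph, nxt, goal, path)
        if found:
            return found
    return None

route = coarse_route(coarse, "SF", "NYC")          # long-range, cheap plan
current_lanes = lane_detail[(route[0], route[1])]  # lane-level only here
```

The design point is that lane-level computation across the whole country would be intractable, so the expensive, high-accuracy layer is only consulted for the edge the vehicle is on.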
Because otherwise it's very, very hard. If the car comes to a construction site and doesn't know what the other option is to get to the destination, if it doesn't have a route map in there, it's very hard. There are different maps for different purposes. Think about it: right now you have a pedestrian navigation map and a car navigation map; they have some intersections and common areas, and then some areas that are very specialized. So what I see there is a common area. Let's say the street network is the common part, right? As a person you want to understand where the path of a street goes, and as a machine you want to understand where the path goes. But one needs human-readable things like names on top of it, and the other needs things like 3D objects for localization. If, to tell you where you are as a human, I said there's a traffic light 2 meters in front, a guard post on the right, and a rail on the right, that wouldn't help you at all. It's useless for you, but useful for the machine to localize itself. So there are the basics and the specialized applications. I think it can be one map ultimately.

Do we have additional questions?

AUDIENCE: You described the map as a form of cheating for the car, which is a really interesting way of putting it. Makes a lot of sense. I was wondering if any of you thought that at some point the car would not need to cheat. Like, if you're able to detect construction sites and figure out how to route around them, presumably you have to do that fairly quickly, like now. But could you get to a point where you don't need such high-definition maps for the car to be safe and whatnot?

I think "need" is a strong term. At the end of the day, you can drive a car without those things. There are startups today just using standard-definition maps and real-time sensing on the road to drive. But looking at the optimal user experience for someone riding in that car, you need the map data.
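The "one map, many layers" idea above, a shared street-network base with a human layer (names) and a machine layer (3D localization objects), can be pictured as opt-in overlays on a common record. The keys and layer contents below are hypothetical, invented just for the sketch.

```python
# Illustrative layered map: one shared base record, with specialized
# layers merged in depending on who is consuming the map.

base = {"way/101": {"geometry": [(0, 0), (100, 0)]}}
human_layer = {"way/101": {"name": "Main Street"}}
machine_layer = {"way/101": {"landmarks": [("traffic_light", 2.0),
                                           ("guard_post", 5.5)]}}

def view(way_id, layers):
    """Merge the base record with whichever layers the consumer opted into."""
    record = dict(base[way_id])
    for layer in layers:
        record.update(layer.get(way_id, {}))
    return record

driver_view = view("way/101", [human_layer])    # gets the street name
robot_view = view("way/101", [machine_layer])   # gets localization landmarks
```

Both consumers share the same geometry, which is the "common area" the panelist describes; only the overlays differ.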
An example of a place we're working on a differentiated experience (again, this is really something that is five or ten years out): we drive vans all around New York capturing video. We identify people in the video and localize them, and we get a pedestrian density map for the city. We layer that onto the cost model for the vehicle driving down the road. For an autonomous vehicle, choosing a street with less pedestrian traffic over one with heavy pedestrian traffic is a huge cost, time, and risk saver. Could you navigate the vehicle without that information? Sure. But it's a differentiator: you would rather ride in the Uber or GM car with that information than with someone who didn't have it.

One quick thing to add to that. Say there's a complete road closure 250 meters out. The car could brake in that distance, but it's not an optimal user experience if your self-driving car has to brake rapidly. If it knows from a real-time map a kilometer in advance that it needs to slow down, it can slow down then and do it much more smoothly. I think the map will never go away, because the sensors will never be able to see a kilometer out. It can be perfectly safe without it, but not optimal. The map adds advantages beyond the sensor range that the car will never be able to cover by itself.

I think one more question for this section and then we'll move on to the next one. Go ahead.

AUDIENCE: Hi. Questions. So obviously we're at an OSM conference, so there's some concept of maybe bringing autonomous maps into OSM. Is there a strategy for dealing with the legality? Right now maybe everybody is building walled gardens around 3D data sets. If there's a map error that leads to a human accident, are there any strategies to bring HD map concepts into OSM that could protect us? [ Laughter ]
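The pedestrian-density layer on the routing cost model can be illustrated as a per-edge weighting. The weight of 2.0 and the density scale below are invented for the sketch; a real cost model would be far richer than distance times a single factor.

```python
# Illustrative sketch: layering a pedestrian-density map onto a routing
# cost model, so a street with heavy foot traffic costs more to traverse.

def edge_cost(length_m, pedestrian_density, ped_weight=2.0):
    """Base cost is distance; pedestrian density (assumed in [0, 1]) adds
    time/risk cost via an invented weighting factor."""
    return length_m * (1.0 + ped_weight * pedestrian_density)

# Two candidate streets of equal length:
busy = edge_cost(500, pedestrian_density=0.8)   # 500 * 2.6 = 1300.0
quiet = edge_cost(500, pedestrian_density=0.1)  # 500 * 1.2 = 600.0
preferred = "quiet" if quiet < busy else "busy"
```

With equal distances, the router now prefers the quiet street, which is exactly the cost/time/risk trade-off described above.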
Come on, Mark. There are so many questions about that versus OpenStreetMap. So there's the legal side. A lot of times what comes up is: who do you get to sue, you know? And how does that fit into OpenStreetMap? I'm the chairperson of the OpenStreetMap Foundation, so I guess: please don't. You could presumably do that, but you wouldn't get any money, because there wouldn't be anything to get. So it's about balancing where the liability fits in. And then, going away from the legal part, because I always feel like OpenStreetMap is the legal part plus the people and how they interact: what does the community want as well? How does that all fit together?

That's right. One sentence on the legal part. Open source technology is powering things like fighter machines, really machines that kill people. It's the same with autonomous cars: open source technology is used in legally critical environments. And typically what happens is you have a company in between. In the Linux space, an IBM or a Red Hat makes it bulletproof and puts quality processes in between. So typically you have somebody take the open source project, harden it, and then take on the liability, so that nobody sues Kate, which we want to avoid.

First, a public service announcement: whoever has the mic, could you give it back? But the other question, I think, goes to the slide back here. We sort of looked at this, if you're taking the positive approach: why is OSM a good model for autonomous vehicles? One of the things that's being hotly debated is data sharing. Right now you have a lot of companies that are collecting, using, and developing data in silos. This group has their data and that group has their data. And one of the things is, just from a safety perspective, the better the data, the safer people are going to be. That's a huge issue.
And so one of the questions floating around is: what about the concept of sharing data? We were talking earlier about a concept where cities that want to become autonomous vehicle hubs (because autonomous vehicles will probably happen in municipal hubs) will offer incentives and whatnot, but may require that people share data so that the overall product becomes safer. As we were saying, that has technical and political issues. But there are interesting things about data sharing that I think carry over into autonomous vehicles.

Yes. And safety is not the only goal. For competitive advantage, it's really hard for one company to map the whole world and collect all this data, and it needs to be fresh. So how do you do that? Are you able to do that by yourself? Or will you collaborate with other companies, so you all have the same data, and then what you build on top of that is your competitive advantage?

Absolutely. And that begs the question: if the data is shared, what does the OSM community do with that data? Do you have any comments on that? How can data that's collected by autonomous vehicles be useful for the community? What gets fed back to OSM and what doesn't?

I mean, we all want OSM to improve. And if we have data that can improve the map, why not share it with the community to use, to actually make the map better?

Yeah. We actually partnered with Mapbox and gave them a lot of our street-level imagery, which we run machine vision processes on to do automated updates. We gave the actual raw imagery to a bunch of Mapbox annotators, who updated turn restrictions in New York City, and got a 30% improvement in turn restrictions in Manhattan. That's a pretty good example of how the collateral that needs to be made to power autonomous vehicles can be used to help OSM.
But the actual machine vision technology that makes those automated updates may not be. Do you guys have anything else to say about that? No.

So I think in our previous comments we've actually covered quite a bit of why OSM is potentially the right platform for autonomous vehicles, and we also talked about why it might not be, because of the requirements that autonomous vehicles have today. Are there any additional comments on why OSM might not be the right platform for autonomous vehicles?

Yeah. One of the things we were talking about is that if you outline what autonomous vehicles require in maps and compare that with OSM, there are obvious gaps. OSM isn't built to do this machine-based guidance thing. The data types are different. Could it evolve there? Conceptually it could, but I think there are a lot of gaps. This morning there were some great sessions on machine learning, and on using machine learning as input into OSM. And as you know, there's some tension in the community about too much machine learning: how does that replace the community? But you haven't seen anything in terms of machine learning compared to this, because this is going to be very rapid, low-latency machine learning pumped into the map. So one of the questions for OSM is: is that a good thing or a bad thing? It may be that the community comes to a place where it says that's not what we want to do.

I think the OpenStreetMap community's intersection and collaboration with technology is kind of interesting. There's a rule in OpenStreetMap we call "the on-the-ground rule": you're supposed to be able to easily observe what you're mapping. That's one of the tensions with autonomous vehicles, where a lot of the information needed isn't easily observable. But we think it's okay to use a GPS to observe latitude and longitude, and that's not human-observable either. You're using a machine.
What is okay in the community and what is not? You won't find one opinion there.

I mean, one possible path I see is that in the future you distinguish automatically generated data as a separate layer. Things like movement speeds on segments and so on might not be easily observable ground truth, but you could add them in a separate layer, and people could choose, when they download OSM, whether to include those layers or not. They don't need to change the base map.

One thing that I certainly hope is that as a community we figure out how to use this data. Because I think the worst-case outcome is that for autonomous vehicles, everybody uses proprietary data. Then you're back where you started. Given the progress that OSM has made in making it possible to build a lot of things in the open, if we don't adapt the data models to accommodate autonomous vehicles, we take a massive step back. Right now every single autonomous company uses proprietary data. And for me, personally, that's not where I want this community and this experience to go. I think autonomous vehicle technology is fundamentally going to change our lives. We don't want it to be completely closed. That's kind of my perspective.

So I think we're getting close to running out of time here. Obviously the purpose of this panel is to start a conversation, right? We have a lot more questions than we have answers, or than we can cover in 30 minutes. We want you to take away the talking points and start the conversations in smaller groups and potentially continue afterwards. We have a few minutes left, so we would like to open it up for more questions, if the audience has any.

Okay. AUDIENCE: Several of you have brought up safety several times with the autonomous vehicles. Every time you bring up safety, one thing people think of is security, right?
So if I have some data, where did it come from? How do I know it actually came from there? That kind of thing. In a data-sharing platform for autonomous vehicles, which you've kind of theorized OpenStreetMap could potentially become, how do you see that working?

Does that one come to me? I'm not sure this is a great answer. But my impression for a lot of these technologies is that people are focusing on first things first, and there are a lot of downstream things that people may or may not have fully processed or thought of. I think right now the number one question (some of these panelists may agree or disagree) is this whole question of localization: how do you do that using a variety of sensors? My impression of a lot of the people working on that is that it is problem number one, two, and three, and everything else is problem last. So I don't know, maybe someone else has a different opinion, but my opinion is there are many issues like that. Liability is another one; distribution is another. There are a number of issues that aren't front and center in the development when I talk to people. What they're really working on is localization. I don't know if you guys have different thoughts.

I guess my general thought is, in terms of all of the problems that we have in this space, that's a largely solved problem. Secure communication between client and server is something lots of companies are working on today. I used to work at Amazon; it's a big problem for them. It's a big problem for banks. It's a big problem for lots and lots of different companies. Once the core problem of how to get a robot car to drive down a road safely is solved (which you can honestly do without any server-to-client communication; you can store all of that information locally on the vehicle when you're doing R&D), solving security is something we can do.
It's a known problem.

Thank you. Do we have additional questions?

AUDIENCE: On safety, we talked about precision earlier. How do you think precision could be improved with some of this data? Would you mind sharing one or two approaches? Thank you.

One thing that I'm really excited about: next year a new GPS chip is coming out. It has an L5 band, which gets you down to about a meter of localization in some areas. Even in urban jungles, a meter level of accuracy. Qualcomm, maybe, somebody launched a chip. You are going to see a humongous improvement in localization on phones. If you collect from cars, you have other sensors. You have a 3D gyro in a car. Over a 2-kilometer distance, at the end of that distance, you can position yourself to within 3 to 4 meters using just the 3D gyro. So one of the things is combining sensors: combining GPS, a gyro, steering wheel angle, speed, and so on. With production-quality sensors you can get to meter or sub-meter accuracy. And the second part is that it doesn't matter so much how accurate you are in absolute terms; what matters is relative accuracy. It doesn't matter exactly what your coordinates are. Do you know where you are relative to where you need to turn? Where the building is in relation to you? That's more important than the absolute position.

I wanted to add, in terms of gyros and those technologies: even though they're powerful, they work way better in X and Y than in Z. There are many cases where I see a vehicle with a really high-quality GPS drive around a block and get 2 meters of inaccuracy in the Z dimension. That's really where using other data sources for localization becomes important.

We are pretty much out of time. Do we have time for one more question? [ Laughter ]
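The sensor-combining idea can be sketched as a variance-weighted fusion of an absolute GPS fix with a drifting dead-reckoned (gyro plus wheel speed) estimate, a toy one-dimensional version of what a Kalman filter does. The positions and variances below are made up for illustration.

```python
# Toy 1-D sensor fusion: weight each source by its assumed variance, so
# the lower-variance (more trusted) source dominates the estimate.

def fuse(gps_pos, gps_var, dr_pos, dr_var):
    """Variance-weighted average of a GPS fix and a dead-reckoned position."""
    w_gps = dr_var / (gps_var + dr_var)  # GPS weight grows as DR drifts
    return w_gps * gps_pos + (1.0 - w_gps) * dr_pos

# GPS says 103 m with ~3 m^2 variance; dead reckoning says 100 m, but it
# has drifted over a long run, so its variance is larger (~9 m^2):
est = fuse(gps_pos=103.0, gps_var=3.0, dr_pos=100.0, dr_var=9.0)
# est lands between the two sources, closer to the GPS fix (102.25)
```

This is also where the relative-versus-absolute point matters: the fused estimate can be offset in absolute terms and still be good enough, as long as it is consistent relative to the turn or landmark ahead.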
Any other questions? I think we had one more question in the audience. And then I'm going to post the contact information for our panelists; feel free to come up and ask them questions afterwards. The conversation just needs to continue.

AUDIENCE: Just a quick question on how you think about managing change detection across all these different data sources and conflating them together. And are there opportunities to learn from that, to think about how we can better update OpenStreetMap?

Yes, definitely. Once you have the capability to build a real-time map platform, you can feed those changes to the OSM community too. Ultimately, since most people in the OSM community don't really like automatic edits, what you would do is create a changeset for a human to review. Once somebody reviews it, it gets into the map. That's probably the best you can get at the moment.

And I don't know if I would say it's accurate that most people don't like automated edits. I think it's impossible to figure out what people in OpenStreetMap actually think, because we have so many people, and our methods of communicating are relatively old-fashioned. So I think things will change as we get used to things. Like how I mentioned GPS: we're used to GPS, it's in all of our phones, and some of these other autonomous or technical tools will become less strange.

I don't know if this is a closing, but I think it's an interesting question. One of the things, as I've gone to the autonomous vehicle events, is that there's a huge hunger in the industry for more data to do training for driving and all that. A lot of that data is visual, a lot is telemetry, a lot is point cloud data. It's highly accurate; some of the stuff in computer vision around accuracy blows your mind. What we're seeing is that a lot of it has a collateral benefit, as Ethan was saying: it directly produces information that can improve the accuracy of OSM or improve its feature richness.
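The "changeset for a human to review" flow described here can be sketched as: detect differences automatically, then apply only the reviewer-approved edits. The feature tags are illustrative, and this glosses over OSM's real changeset format; it is the review gate, not the data model, that the sketch shows.

```python
# Illustrative review flow: automated change detection proposes a
# changeset, but only human-approved edits land in the map.

def detect_changes(old_map, new_obs):
    """Diff newly detected features against the current map."""
    return {k: v for k, v in new_obs.items() if old_map.get(k) != v}

def apply_reviewed(old_map, changeset, approved_keys):
    """Apply only the edits a human reviewer approved."""
    merged = dict(old_map)
    for k in approved_keys:
        merged[k] = changeset[k]
    return merged

osm = {"node/1": "stop_sign", "node/2": "crosswalk"}
observed = {"node/1": "traffic_signal", "node/2": "crosswalk",
            "node/3": "stop_sign"}
proposal = detect_changes(osm, observed)             # node/1 changed, node/3 new
updated = apply_reviewed(osm, proposal, ["node/3"])  # reviewer approved node/3 only
```

Rejected proposals (here, the `node/1` change) simply never reach the map, which matches the "best you can get at the moment" compromise between automation and community review.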
So I think there's a lot of collateral benefit just from the huge amounts of data collection. Okay. I think that can count as our closing statement. I want to thank our panel for a very rich conversation today. I know you probably have a lot of questions; feel free to approach our panelists afterwards and continue. So give them a round of applause, please. Thanks. [ Applause ]

Okay, everyone, stick around. That was a great panel. By the way, if people want to keep the discussion going, that's why we have birds of a feather. It's closed for today because they're setting up the reception, but tomorrow we have birds of a feather all day long. If you want to talk more with our panelists or anyone, feel free to sign up.