Google’s Driverless Cars Have Spent 2 Years Mastering the City Streets
Google Inc.’s (NASDAQ:GOOG)(NASDAQ:GOOGL) driverless cars are on the road to mastering a new obstacle: city streets. Thus far, the cars have been able to navigate interstates and highways comfortably, but city driving has proved much more challenging for the software engineers behind the autonomous vehicles, the project’s director, Chris Urmson, said.
Google published a blog entry on Monday suggesting that the company’s self-driving vehicles are progressing quickly, though. In the post, Urmson says the team has improved the cars’ computer software so that it can deal with some of the more unexpected challenges that come with driving in the city.
“We’re growing more optimistic that we’re heading toward an achievable goal — a vehicle that operates fully without human intervention,” Urmson said in the blog post. “We’ve improved our software so it can detect hundreds of distinct objects simultaneously — pedestrians, buses, a stop sign held up by a crossing guard, or a cyclist making gestures that indicate a possible turn. A self-driving vehicle can pay attention to all of these things in a way that a human physically can’t — and it never gets distracted.”
Urmson says he’s hoping that Google will be able to unveil the technology to the public by 2017. The tech giant continues to assert that one day computers will drive more safely and efficiently than humans, and the company says that while initial models will need to have a human supervisor ready to take control if the computer fails, eventually, people should be able to read, work, or even sleep in the autonomous vehicles, according to USA Today.
So how does Google’s robot car work? Sensors, including radar and lasers, create a 3-D image of the car’s surroundings in real time, USA Today reports. Google’s software then sorts each detected object into one of four categories: moving vehicles, pedestrians, cyclists, and static objects like road signs, parked cars, and curbs.
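To make that sorting step concrete, here is a minimal sketch of how detected objects might be mapped to the four broad categories the article describes. Every name below is an illustrative assumption; this is not Google’s actual software, which works from sensor data rather than text labels.

```python
# Illustrative sketch only: mapping hypothetical object labels to the
# four coarse categories described in the article. Real systems classify
# from raw sensor data, not strings.

MOVING_VEHICLES = {"car", "truck", "bus", "motorcycle"}
PEDESTRIANS = {"pedestrian", "crossing_guard"}
CYCLISTS = {"cyclist"}

def categorize(label: str) -> str:
    """Map a detected object label to one of four coarse categories."""
    if label in MOVING_VEHICLES:
        return "moving vehicle"
    if label in PEDESTRIANS:
        return "pedestrian"
    if label in CYCLISTS:
        return "cyclist"
    # Everything else (road signs, parked cars, curbs) is treated as static.
    return "static object"

if __name__ == "__main__":
    for label in ["bus", "cyclist", "stop_sign", "pedestrian"]:
        print(label, "->", categorize(label))
```

The point of the coarse grouping is that each category implies different behavior: a static curb can be planned around once, while a cyclist must be re-predicted continuously.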
Engineers have helped the software become more sophisticated and more competent at anticipating possible scenarios since the project’s initial attempts. Now, after logging more than 10,000 miles, the cars are able to drive autonomously on the city streets of Mountain View, California — with a few hitches.
“We still have lots of problems to solve, including teaching the car to drive more streets in Mountain View, before we tackle another town, but thousands of situations on city streets that would have stumped us two years ago can now be navigated autonomously,” Urmson said. Among those remaining problems are driving in rain and fog, knowing when it’s OK to turn right on red, and, perhaps most complicated of all, understanding the gestures and hand signals drivers use to tell one another it’s OK to merge or change lanes, USA Today says.
Despite all that, the company’s co-founder, Sergey Brin, said in 2012: “You can count on one hand the number of years until people, ordinary people, can experience this.”