[dupe] Google's Self-Driving Car Can't Navigate Heavy Rain or Most Roads (autoworldnews.com)
36 points by siavosh on Aug 31, 2014 | 56 comments




This is just blog-spam regurgitation of the only slightly longer (but at least actually original) Technology Review article from a few days ago: http://www.technologyreview.com/news/530276/hidden-obstacles...


I bet these blog spammers are making a LOT of money as that was really hard to detect.


The key question is what technology the competitors have. I don't know about the Japanese manufacturers, but in Germany Audi and VW have been working heavily on self-driving and assistive technology for the last decade and only incorporated communication technology at a later point (I was working at Vodafone R&D then and our team participated in joint tests starting six years ago). It is safe to assume that Google's approach was comms- and data-first, car physics secondary, while the manufacturers' approach is the other way around, with a focus on mechanics and physics.


I love the "road train" ideas some manufacturers are pursuing.

Here's one video, but there are plenty of others: https://youtube.com/watch?v=tQnVGOoVvVk


Google acquiring a car company would be an interesting move.


The Swedish SAAB was for sale[1] not long ago if I recall correctly.

[1] http://en.wikipedia.org/wiki/Saab_Automobile#Spyker.2FSwedis...


For people who think this is some kind of 'hit piece' on Google cars, please go read the original source, where a lot of the information comes from the director of the google car project.

http://www.technologyreview.com/news/530276/hidden-obstacles...


The real question is whether Google's self-driving car is better (it doesn't have to be perfect) than humans in heavy rain (humans also have difficulties in heavy rain).


That was an argument made in the "Humans need not apply" video.

The question I care about is whether Google's self-driving car is better than me in heavy rain.

Not all humans drive equally well. I think Google's self-driving car might be a bigger improvement for some people than for others.


Most people think that they are better drivers than other people. Even the terrible drivers think this.

And, really, you want other drivers to be better, so Google's cars are probably what you want.


Humans have difficulties, but they manage decently by driving slower and more carefully. And the car apparently is unsafe to use in such conditions.


No, the real question is can it drive in heavy rain. I've seen humans do it.


It really depends what is meant by heavy rain. Heavy enough rain, snow, or fog means near-zero visibility (significantly less than 100 m), in which case the only option is to stop the car on the shoulder and wait until the weather conditions improve.


When humans do it, are they just taking a risk that the autonomous car could take, but won't?


That seems like it might be an overgenerous treatment of autonomous vehicles' (current) risk evaluation. The article reads like humans are making the determination that the vehicle can't drive in the rain, not the machines.


What I mean is that the autonomous vehicles probably just get very noisy signals in the rain. Isn't that also what happens to humans driving cars in heavy rain? The rain limits long-range visibility, fog limits short-range visibility, wet road increases stopping distance, and yet humans think it is safe to drive at regular speed limits. Perhaps humans are (for now) better equipped to deal with those very noisy signals, but my point is that it's also dangerous for humans to drive in heavy rain.


True, but self-driving cars have to be much better drivers than the average human, for multiple reasons.

If a human has bad driving habits, you can fine him or revoke his driving license without affecting the rest of the drivers. A bug in self-driving software would require stopping all the self-driving cars at the same time. Obviously, that would be a problem.

Bad drivers only drive one car at a time, and thus they have limited potential impact. A bug in deployed driving software would affect many more cars at once, with consequences multiplied by the number of cars. This could have disastrous human consequences, so more care must be taken.

Finally, humans are legally responsible: if something bad happens, they can compensate for the damage caused. It's unclear who would be responsible in the case of self-driving cars.


We aren't talking about bugs, though; we're talking about cars not being able to navigate as well. That doesn't mean they're useless; it probably just means they would have to drive very slowly in those conditions.


I think the scary parts come when we have to define the "max risk" threshold of the self-driving car.

No matter how good a self-driving car is at estimating risk, it is only able to do that: estimate risk. It cannot know, beforehand, if an accident will happen. So the question becomes: if the car estimates a 0.001% risk of accidents happening at 40 mph on a certain stretch of road, and a 0.01% risk of an accident happening at 60 mph, how fast do we drive?

Or do we just make the person who sets the risk tolerance of the self-driving car liable for any accidents that might happen, and let them set whichever risk tolerance they prefer?
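As a toy illustration of that speed/risk tradeoff (every number here is invented for the example), you could frame the choice as minimizing expected cost:

```python
# Toy sketch: choose the speed whose expected cost (accident probability
# times accident cost, plus the value of time spent driving) is lowest.
# All numbers are invented for illustration.
def expected_cost(speed_mph, p_accident, trip_miles=10,
                  accident_cost=1_000_000, time_value_per_hour=30):
    time_cost = trip_miles / speed_mph * time_value_per_hour
    return p_accident * accident_cost + time_cost

cost_40 = expected_cost(40, 0.00001)  # the 0.001% risk case
cost_60 = expected_cost(60, 0.0001)   # the 0.01% risk case
best_speed = 40 if cost_40 < cost_60 else 60
```

With these made-up numbers the slower speed wins, but a different accident cost or value of time flips the answer, which is exactly why setting the tolerance is a policy question, not just an engineering one.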


Sensors and cameras can't really see in such conditions, but the human eye and brain can guess, with all senses working at full capacity (which is why humans are tired after driving in such conditions). Making an AI with the same performance and judgment must be really tricky.


It's probably fair to say that the lidar systems they use can see things. Lidar generates a real-time 3D map of the world, which is a lot easier to sort into obstruction vs. road than 2D sensor data.

The problem is that it doesn't work very well in the rain.
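For a rough sense of why 3D data is easier to classify, here's a naive sketch (not Google's actual pipeline; the threshold and the flat-ground assumption are invented) that splits lidar points into road vs. obstruction by height:

```python
# Naive ground segmentation: anything within `clearance` meters of the
# assumed ground plane counts as "road", everything higher is an
# obstruction. Real systems fit the ground plane instead of assuming z=0.
def segment_points(points, ground_z=0.0, clearance=0.3):
    road, obstructions = [], []
    for x, y, z in points:
        (road if z - ground_z < clearance else obstructions).append((x, y, z))
    return road, obstructions

# three points: two near the ground, one 1.2 m up (e.g. a car bumper)
road, obstructions = segment_points([(1.0, 0.0, 0.05),
                                     (2.0, 0.0, 0.10),
                                     (3.0, 1.0, 1.20)])
```

Rain breaks this upstream of any segmentation: droplets return spurious lidar points, so the point cloud itself gets noisy.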


Would it be possible for someone to just address the complaints in the article?

"While Google's fleet has safely driven more than 700,000 miles, the autonomous model relies so heavily on maps and detailed data that it can't yet drive itself in 99 percent of the country, according to an MIT Technology Review report."


http://www.technologyreview.com/news/530276/hidden-obstacles...

> Google often leaves the impression that, as a Google executive once wrote, the cars can “drive anywhere a car can legally drive.” However, that’s true only if intricate preparations have been made beforehand, with the car’s exact route, including driveways, extensively mapped. Data from multiple passes by a special sensor vehicle must later be pored over, meter by meter, by both computers and humans. It’s vastly more effort than what’s needed for Google Maps.


We'd still be banging rocks together for fire if we quit every time we developed a technology that wasn't sufficient for production in its earliest stages.


Even if it can't, it can still be incredibly valuable in the many climates where it operates well most of the time (i.e. most of California at this point). Most people don't travel very far, and where they do go is probably well mapped. Google could solve a small percentage of the problem and still win.


If the cars cannot be used for all journeys, they are worth less to people who would purchase rather than rent them. If they are going to be rental-only, it matters less.


From the source: "Data from multiple passes by a special sensor vehicle must later be pored over, meter by meter, by both computers and humans. It’s vastly more effort than what’s needed for Google Maps."

In other words, the mapping required is very specific and doesn't exist. Current maps don't help.


I took a couple of other things away from that:

"We know how to make these maps, we just haven't made them yet"

"We can still see ways to improve this process"


I think the goal would be not to need those special maps at all. (In the long run).

In any case, no-one is disputing that driverless cars are coming. Just reminding the over-zealous that there are real obstacles to overcome.


Huh? In the long run, having the maps is trivially easy. Roads aren't naturally occurring. If they can be helpful (which seems certain, since human navigators also rely heavily on having the area they're driving through memorized), why wouldn't we use them?


The specialised maps are far too noisy for human navigators to use.


So? I'm just pointing out that we don't expect anyone to navigate without maps -- there's no reason for automated cars to be different. amirmc said "the goal would be not to need those special maps at all", but that's contrary to navigation as practiced since the dawn of time. Given the ease of creating the maps, it makes infinitely more sense to have maps and better navigation than no maps and worse navigation.


Does anyone remember that awful Schwarzenegger movie, The 6th Day? It seems like the cars could work like the ones from the movie at the current stage: they take over for the long, boring part of the ride and then hand control over for the trickier local bits. That way, when I make the long trip to my parents' house or my in-laws', for example, I can hand over control for the parts where I might get tired, distracted by my son, etc. That capability would be a huge win for me.


That's a great interim step but ultimately people still have to learn to drive. There may even be an increase in risk as you become less skilled at the things you don't do very often (when the car hands you control, the cognitive load could be overwhelming).

It'll be great when we can do away with the need for learning how to drive at all.


All self-driving technology I am aware of is based on visual input (cameras, lasers, etc.). I'm not sure self-driving cars can get as good as human drivers without any audio input. Noises are an extremely important source of information, especially in critical situations. I guess that a startup focusing on converting audio input into 3D-data (e.g. "truck approaching from behind") could become very valuable.


What are examples of situations in which audio input provides relevant information that 360 degree video input doesn't? Humans rely on hearing because our visual arc is only about ~120 degrees.


The squeal of brakes and the honking of a horn. For the truck approaching from behind, sound could maybe notify the computer that the truck's brakes are failing.

Yes, the 360-degree camera would see the truck, but it won't notice the honking, so it cannot be aware that the truck is trying to "tell" it something.
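Detecting a honk from a microphone isn't exotic, for what it's worth. A minimal sketch (the frequencies and synthetic signal are invented) using the Goertzel algorithm to measure energy at a horn-like tone:

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    # Goertzel algorithm: energy of a single frequency bin,
    # cheaper than a full FFT when you only care about one tone.
    n = len(samples)
    k = round(n * target_hz / sample_rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# one second of a synthetic 400 Hz "horn" tone at an 8 kHz sample rate
rate = 8000
tone = [math.sin(2 * math.pi * 400 * t / rate) for t in range(rate)]
horn_energy = goertzel_power(tone, rate, 400)
other_energy = goertzel_power(tone, rate, 1000)
```

The hard part isn't detection, though; it's localization and interpretation: which vehicle honked, at whom, and why.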


Emergency vehicles with sirens.


This is a good point, though ultimately emergency vehicles would also be automated and would signal to the other automated cars electronically.


There might be blind spots.


Deaf people can drive well.


> plans to develop a temporary brake and steering wheel system for its fleet of test cars

How do you tell an autonomous car exactly where to park or at what exact spot in a parking lot to pick up a friend without a steering wheel?


The common sentiment has been that self-driving cars are just around the corner. The reality is quite different.

The remaining problems to solve such as navigating the elements or obeying construction signs or interacting properly with pedestrians or police are orders of magnitude more difficult than the problems that have been solved thus far. These problems are fundamentally different in that they can't be solved by current AI techniques and vision algorithms. The progress made so far has been quick, but it relies on technology that Google has already mastered. We'd be mistaken to think that the remaining challenges will be solved as easily.

It's a repeat of the classic mistake that has plagued the field of AI since the beginning: we underestimate the difficulty of problems that humans solve easily. We simply aren't aware of the incredible complexity involved in our simplest decisions, such as pulling over to allow an ambulance to pass. This is simple right? Just slow down and move off to the side of the road. But when is it OK to move off to the side -- what if there is something in the way, what if a pedestrian didn't expect you to move there, what if the car behind you suddenly gets in the way while it's pulling over, what if you're on a bridge, what if the ambulance behind you turned already and no one expects your car to suddenly pull over? Similar or more difficult problems arise when there's debris or potholes in the road, other poor drivers, bad weather, jaywalkers, policemen giving orders, road work, detours, etc.

What you find is that the last 5 or 10 percent of the capability required to make self-driving cars feasible represents a category of problems which we don't know how to solve, requiring a level of sophistication far beyond the current state of the art and perhaps approaching general intelligence in some cases (such as interpreting signs).

Better approaches involve shooting for more modest goals instead of full autonomy. Car companies are making investments in these more practical, incremental improvements, like automated parking and advanced cruise control.

But unlike car companies, Google isn't in this game because it thinks it can make a profitable and successful product. Instead, it's obvious that the main function of developing self-driving cars is as a PR tool (and the same goes for the rest of the Google-X projects). Google has gotten a lot of positive press for their self-driving cars, and they even use it to attract new employees.

However, I predict this positive press won't last (this article being an early example) because people's expectations are way too high. As years and years go by without much progress, Google's self-driving cars will increasingly become a PR liability and will be compared to the promised flying cars of yesteryear.


Some of the things you mentioned as not being handled by Google's cars actually are handled, and you can find videos on the Google Self-Driving Car Google+ page that provide visualizations of the car encountering some of those scenarios or similar ones. (some of the others do seem to be unresolved so far, though)

You assert that Google can't make this profitable. Why do you think that? If they can make it work well enough, they can be cheaper and more efficient than (probably) any other form of transport, probably drive all taxi companies out of business, and be raking in some revenue every time someone wants to go somewhere, worldwide. (think Uber, if only it were used for every single trip)

This could be far bigger than their current businesses.


If self driving is otherwise compelling, police and construction can use electronic signs (radio, networked notifications, whatever).

The robot drivers will do better than humans in that case, they will reliably obey the signs (imagine how easy it would be to detour a car that communicates with a regional traffic management system, compared to a human that thinks they know better).


While I agree in principle, in practice this idea sounds like a security and authentication nightmare.


The simplest form I can think of is a beacon broadcasting its location along with a stay back distance (with the routing system in the vehicle left to deal with the details of compliance).

That doesn't address security or authentication, but it should be fairly easy to track down a broadcasting radio, and it should also be easy to log and aggregate the active beacons that have been spotted by vehicles. With that context, traditional enforcement tools should probably work well enough.
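A minimal sketch of what such a beacon message and the vehicle-side compliance check might look like (the message fields, function names, and flat-earth distance shortcut are all my own invention):

```python
import math

def beacon_message(lat, lon, stay_back_m):
    # hypothetical broadcast payload: a location plus a keep-out radius
    return {"lat": lat, "lon": lon, "stay_back_m": stay_back_m}

def must_reroute(vehicle_lat, vehicle_lon, beacon, meters_per_degree=111_000):
    # crude flat-earth distance; roughly fine at city scale, but it
    # ignores the cos(latitude) correction for longitude
    dlat = (vehicle_lat - beacon["lat"]) * meters_per_degree
    dlon = (vehicle_lon - beacon["lon"]) * meters_per_degree
    return math.hypot(dlat, dlon) < beacon["stay_back_m"]

# roadwork beacon with a 100 m keep-out radius
roadwork = beacon_message(37.4220, -122.0841, stay_back_m=100)
```

The routing system would then treat any position inside the radius as blocked, which keeps the beacon itself dumb and cheap.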


“First they ignore you, then they ridicule you, then they fight you, and then you win.”

I guess this means we're moving into the 'ridicule' phase.


Where in the article is it being ridiculed? I read it as a dose of reality, which is useful for providing some perspective on the challenges that remain (challenges many people are probably not even aware of).

The original article might be of interest.

http://www.technologyreview.com/news/530276/hidden-obstacles...


Focusing on the 99% of roads Google cars don't serve is unnecessarily ridiculing the technology, in my view.

Alternatively, had the article said that 99% of US work commutes can't yet be served by Google cars, then it's something to mention. But the 1% of roads that Google cars cover could very well be enough for the majority of US commutes (assuming it were to cover most metropolitan areas).

It's as if someone said cellphones aren't ready for primetime because they don't work over most of the world's surface area. It may be true, but it's irrelevant to the average consumer.


Exactly what I was thinking. The article seemed to me like needless bashing; the product isn't on the market, it's still in development. How scared of Google is the author?


Or maybe the author heard Google hype like 'no steering wheel' and 'no accidents in 700k miles', did the reporter thing of doing research, found out that it's closer to a streetcar than to a fully driverless car, and wrote a story on it.


And by "research", you mean the reporter read the Technology Review article...


I'd expect a longer article if research was involved, to be honest.




