I really want to see fully autonomous vehicles, but I don't really believe we'll see them on the road in the next 50 years. I think Google's concern about reliably passing control between the driver and the computer is warranted. I think the technology will get stuck in the equivalent of CGI's uncanny valley: they'll find that it has to leave more control with the driver than it's capable of in order to keep the driver engaged, yet it never becomes capable of full control.
My guess is that these systems will evolve into assisted-driving technologies that use force feedback to suggest the sanest path to the driver, but won't take full control until the driver is outside the envelope. They'll predominantly be used to extend the window during which baby boomers can drive, and to save inattentive and unsafe drivers from themselves. In other words, a drunk behind the wheel is still going to look like a drunk behind the wheel, just less likely to kill someone.
I expect the technology will displace drivers in military convoys, but I don't think it will be good enough for general use. Even if the technology gets close, I don't think its safety will be as good as the "augmented human" model, which will also rapidly improve, and so insurance and regulation will continue to hamper rollout.
I would love for those in the know to tell me I'm wrong. I want to believe!
>> I really want to see fully autonomous vehicles but I don't really believe we'll see them on the road in the next 50 years.
I agree with you 100 percent. DAS (driver assist systems) are getting to be very common. I've worked at a couple companies that produce them, though I did not work on those products. I rode on the highway in a car outfitted with prototype lane keeping, and it was not quite as steady as I'd hoped. They also had an option that would apply a light force to keep you in the lane but could be easily overcome (and I think went away if you used a turn signal). It felt like a sort of speed bump between lanes. It wasn't trying to be a really complex system, and probably has some real world value - in reasonable situations I'd like to be able to use both hands to eat some food from the drive through, while keeping my eyes on things.
I've also seen video (circa 2004) of an autonomous car driving 100kph down a winding dirt road, staying on its side of the road and automatically stopping. But again, the engineers wanted to do so much more with it, but the rational guys in safety would not allow it.
I also competed in the AUVS autonomous ground vehicle competition in 1994. I wrote a lane follower by taking 20-30 lines of pixels off an NTSC frame grabber on an Intel 486. The core algorithm was on the order of 100 LoC, and we took second place. Super simple algorithm, hardly intelligent.
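A follower in that spirit fits in a few lines. Here's a hypothetical reconstruction in Python (names and thresholds invented; the original was presumably C on the 486): scan a few rows of a grayscale frame, find the bright lane-marking pixels, and steer toward their centroid.

```python
def steer_from_rows(rows, threshold=200):
    """Toy lane follower: for each scanned row of grayscale pixels, find
    bright (lane-marking) pixels and take their centroid; the steering
    command is the offset of the mean centroid from the image center.
    Hypothetical reconstruction, not the actual 1994 code."""
    centroids = []
    for row in rows:
        bright = [i for i, p in enumerate(row) if p >= threshold]
        if bright:
            centroids.append(sum(bright) / len(bright))
    if not centroids:
        return 0.0  # no markings seen: hold course
    center = len(rows[0]) / 2
    mean = sum(centroids) / len(centroids)
    return (mean - center) / center  # normalized steering in [-1, 1]
```

With a single 8-pixel row whose markings sit at indices 3 and 4, the centroid is 3.5 and the output is a small left correction. No intelligence involved, which was the point.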
There is a huge difference between a PID controller maintaining position in the lane with radar-assisted cruise control and a fully autonomous vehicle fit for unaided driving on public roads. There is a whole range of system capabilities, and the public has no idea what's in any given car. Comparing highway fatalities per mile to national statistics covering all roads and conditions is bullshit, and a certain company that's recently killed some people knows it.
But hey, we all want to be able to read a book on the road or have truckers take a nap on long hauls, so let's keep deluding ourselves that this stuff will be ready for prime time in the next couple years.
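For reference, the "PID controller maintaining position in the lane" mentioned above really is simple control code. A sketch (the gains and the toy plant model are invented, not from any real system):

```python
class PID:
    """Textbook PID: output = Kp*e + Ki*integral(e) + Kd*de/dt.
    Gains here are illustrative, not tuned for any real vehicle."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# toy simulation: drive a 1 m lateral offset back toward lane center
pid = PID(kp=1.5, ki=0.1, kd=0.2, dt=0.05)
offset = 1.0  # meters off lane center
for _ in range(200):
    offset += -pid.update(offset) * 0.05  # steering output reduces offset
```

The hard part was never this loop; it's everything that decides what the error signal should be.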
"I wrote a lane follower by taking 20-30 lines of pixels off an NTSC frame grabber on an Intel 486. The core algorithm was on the order of 100 LoC, and we took second place. Super simple algorithm, hardly intelligent."
How is this relevant to self driving cars in 2016 and the following decades?
I think he/she is arguing that what people are oohing and aahing about is not really that amazing or novel; it's the infinite number of edge cases that will kill people regularly that is difficult and will not be solved for quite some time.
Yes, that reminds me of the stretch on the 101 near Palo Alto that has two sets of lane markings (one old and slightly faded out but still very visible, and one newly repainted). Of course, drivers should only follow the newly painted set of lane markings.
Even as a human, I get confused while driving on this stretch. The old lane markings aren't quite faded out enough to easily tell them apart. As I'm driving, I see every other car slowing down and "dead reckoning," too.
How will the run-of-the-mill self-driving car's lane following handle this corner case? If it's a Canny edge detector (like most self-driving cars), it's definitely going to fail and find false positives.
Why do you assume most self-driving cars use Canny edge detection? And even if they did, you can simply have the algorithm choose the higher threshold (with some other sanity checks, such as whether the lines are the expected distance apart) and ignore the lower threshold.
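That sanity check is itself only a few lines. A toy sketch (the candidate representation and numbers are invented for illustration): keep only line pairs whose spacing matches the expected lane width, then prefer the brighter, freshly painted pair.

```python
def pick_lane_pair(candidates, expected_width, tol=0.5):
    """Given candidate lane-line pairs as (left_x, right_x, mean_paint_intensity)
    tuples, discard pairs whose spacing doesn't match the expected lane width,
    then prefer the brightest (freshest) paint. Toy illustration only."""
    plausible = [c for c in candidates
                 if abs((c[1] - c[0]) - expected_width) <= tol]
    if not plausible:
        return None
    return max(plausible, key=lambda c: c[2])
```

Given one faded pair, one freshly painted pair, and a spurious narrow pair, this keeps the two plausible ones and picks the brighter; whether that heuristic survives the overlapping-markings case above is exactly the open question.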
The algorithms for lane following are quite a bit more sophisticated than that though. Here's a paper from 2008, and there are other approaches as well.
Hough lines/RANSAC are definitely on the same level of ad-hoc sophistication as Canny edges (in fact, Canny edges are often done as a preprocessing step for the voting scheme used by the candidate Hough lines).
Take a look at Figure 12 for an exemplary look at its failure cases -- cases that are much less forgiving than the aforementioned Palo Alto stretch 101 lane markings.
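For readers unfamiliar with the term: RANSAC is short enough to sketch. A minimal, illustrative line fitter (not taken from the paper under discussion): repeatedly fit a line through two random points and keep the hypothesis with the most inliers.

```python
import random

def ransac_line(points, iters=200, inlier_tol=0.1, seed=0):
    """Minimal RANSAC: repeatedly hypothesize a line through two random
    points and keep the model with the most inliers.
    Returns (slope, intercept). Illustrative sketch only."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip vertical hypotheses in this toy version
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = sum(1 for x, y in points if abs(y - (m * x + b)) <= inlier_tol)
        if inliers > best_inliers:
            best, best_inliers = (m, b), inliers
    return best
```

Fed ten collinear points plus a couple of gross outliers, it recovers the true line; fed two overlapping sets of lane markings, nothing stops it from voting for the old ones.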
I'd have thought road-marking interpretation would nowadays be based on something more elaborate, such as HOG, without the need for an initial edge-detection pass.
(But in this case, I'll add that HOG doesn't solve the problem of overlapping old/new road markings)
That's why I disagree with Elon Musk and others who say "autonomous driving systems only need to be like 50% or 2x better than the average driver".
What if I drive 5x better and safer than the "average driver"? Doesn't that mean that these cars would now put me in more danger than I would normally be in with my own manual car?
Self-driving cars should be 10x better than the best drivers on Earth (safest, say, by number of incidents per decade). Then I could begin trusting them ... if only there weren't that pesky security issue, too, with self-driving cars getting hacked and then driving you off the highway at 70 mph and killing you.
We see Teslas getting remotely hacked. If that happens with Tesla, a Silicon Valley car company with a lot of hired top tech talent, how can we ever trust a company like GM or Volkswagen to build safe self-driving cars?
> What if I drive 5x better and safer than the "average driver"?
Then don't use the system?
You are giving thresholds - 50% to 2x better, 10x better. Before what? Before we ban manual driving? "10x better than the best driver" might make sense. Before we allow it on the road? Why not "better than the worst drivers we allow on the road"?
The most relevant point for Musk is probably "before the marketing works out". The market of people who consider themselves "substantially below average" drivers is much smaller than the market of people who consider themselves "not all that much better than average".
> What if I drive 5x better and safer than the "average driver"? Doesn't that mean that these cars would now put me in more danger than I would normally be in with my own manual car?
Because you are unlikely to always be that.
Are you tired? Do you have a cold? Are you hungry? Are you playing with the radio? Are you having a conversation with a passenger?
The huge advantage that self-driving cars will have is integrated systemic control from multiple sensor sources and multiple transducer outputs--and they will never get distracted.
Human drivers have mostly one input: vision.
Human drivers have mostly two outputs: hands and feet.
The lag in that system from input to output is >100ms even if you are perfect (I will grant you an exception if your name is Michael Schumacher).
Automated systems will have far more inputs. Multiple vision sources. Multiple radars. Temperature readings. RPM readings. etc.
Automated systems have far more outputs. Steering is the same. Brakes on individual tires is the big starter. With electric, individual suspension becomes feasible. Transmissions change form and become electrically controlled.
And the lag in the system is probably <10ms. That's huge. That's the difference in reacting in roughly 3 meters (2.7 meters) vs 1/3 of a meter at 100kph (9 feet vs 1 foot roughly for 60mph).
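The arithmetic behind those figures is just distance covered during the control loop's latency:

```python
def reaction_distance_m(speed_kph, latency_s):
    """Distance traveled (in meters) before a control system reacts:
    speed converted to m/s, multiplied by the loop latency."""
    return speed_kph / 3.6 * latency_s

# at 100 kph: ~2.8 m of travel at 100 ms of lag vs ~0.28 m at 10 ms
human = reaction_distance_m(100, 0.100)
machine = reaction_distance_m(100, 0.010)
```

The exact latency numbers above are the commenter's estimates, but the tenfold gap in distance follows directly from the tenfold gap in lag.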
Finally, automated systems will take subtle preventative actions all the time. They will "see" the child on bicycle on the sidewalk (the idiot detector as Google called it) and "know" that they need to find him again when they attempt to turn right or slow down significantly. They will see the dude on a skateboard and know that they need to give him an extra couple feet if they can in order to deal with him falling. I know very few drivers that are that diligent.
>> How is this relevant to self driving cars in 2016 and the following decades?
My point is that basic lane following can be done quite trivially, but it's nowhere near something one should entrust with the safety of human beings. Your question implies that you think all modern systems being deployed involve some really sophisticated AI. You don't know that.
But you stated that anything using LIDAR was not capable of robust tracking. Perhaps you meant that anything using LIDAR was doing so because camera-based tracking is not very robust. But that's not the straightforward interpretation of your statement.
>> But that's not the straightforward interpretation of your statement.
Yes, that would be a communication problem of mine. It's something I'm working on ;-) But I would counter that LIDAR isn't going to fully make up for the problems of a camera-based system, with the result being a simple-sounding statement like "if they're using LIDAR it's not good enough", or grossly oversimplified to "LIDAR sucks".
I will admit that LIDAR could conceivably enhance a really good vision system by offering better depth perception, but I don't believe that's why people are using it today.
Nothing in your comment makes me question the value of upcoming autonomous cars. It seems you're basing this on your experience with obsolete tech (i.e., the tech that's currently on the road, built by the non-tech companies). Even Tesla's assisted driving is being outmatched by a guy in his garage [1] doing full-data collection + deep learning. The more data collected on each ride, the more the systems can improve their driving ability.
This stuff is limited right now, but the sooner these (actually stage-4 autonomous) cars start hitting the road and being used in real life, the faster the tech will improve. I expect some dramatic improvements once Google-car-quality sensors are combined with full-data collection on hundreds of thousands of daily rides.
Of course, it won't become totally autonomous until the vast majority of the cars on the road are using assisted driving and potentially interfacing/coordinating with each other. At that point the speeds can be increased and the driver can be disconnected from the backup driver role.
With kits already coming out with the idea of "back-porting" old cars to use new automated driving tech, I don't think there's a huge barrier to adoption. It won't require everyone to go out and buy new mid-level cars to participate. People can purchase insurance- (and personal-safety-) incentivized kits to make their cars automated.
>> aka the tech that currently on the road built by the non-tech companies
The level of arrogance in that is pretty staggering. Remember, I mentioned a system driving on (unmarked) dirt roads at highway speeds 10 years ago. I'm sorry, but advanced tech does exist outside of California.
> But advanced tech does exist outside of California.
And tech companies and AI experts also exist outside of California? What's your point?
I specifically addressed your pessimistic comment that dismissed upcoming driverless tech due to your experience testing the current DAS (driver assist systems) built by non-tech companies. I question the relevancy when tiny companies like Comma.ai have demonstrated that widely available deep learning algorithms allow a single guy in his garage to surpass Tesla's own tech. Therefore, if that obsolete tech (which is itself using 3-year-old sensors) is your point of reference, plus some 10-year-old videos... I'm not sure how relevant that is to what tech we can anticipate in the next 3 years.
George Hotz's talk opens with the claim that "Tesla's and other companies' current DAS tech sucks. They don't even capture the video/sensor data from the rides." Which I fully believe. It shows the wide disconnect between the limited DAS tech being rolled out by car manufacturers (glorified cruise control with lane switching) and the true capabilities of current technology from people working on real autonomous vehicles.
The real questions and risks seem to be around price and implementation details, not whether the technology will actually work as intended within the next few years.
The system handling dirt roads was an order of magnitude more impressive than anything else I've seen, including George Hotz's little toy project. Can geohot's system navigate dirt roads? Intersections? Can it read signs? No, no, and no. But hey, it can record video... I gave several examples of different levels of tech, all developed by non-"tech" companies.
I'll take the other side of that bet. My view is that it will come down to the insurance companies. They are going to price policies such that only a very wealthy person can consider self driving. This will be in less than 15 years.
Why would insurance cost more than it does today? Presumably new safety systems will trigger discounts to the degree that they cut the number of dollars that insurance companies need to pay out on their policies. But I see no reason that a profitable policy today wouldn't be profitable in the future just because other cars have more advanced safety systems.
Scarcity? If 99% of the cars are autonomous, and the only people left manually driving are driving sports cars, everything from Mazda Miatas to Ferrari Enzos, then that will be a very small, very expensive market to insure.
Don't forget 4x4 and vintage vehicles. The number of miles driven is one of the biggest predictors for claims and I imagine that most of these enthusiast manually driven vehicles will have low annual mileages. The insurance rates will almost certainly go up, but I do not think they will be unaffordable.
If 99% of cars on the road are autonomous, it will probably be very difficult to hit one with a self-driven car, so liability costs won't be high. And unless Miata parts and labor skyrocket in price for some reason, comprehensive and collision costs aren't likely to be high either, and one can always skip both.
It's been available for military convoys for years. Oshkosh offers it as an option on their large military trucks.[1] There's usually a manned vehicle in the convoy, but it doesn't have to be in front, and usually isn't.
I agree with you, but I think the bridge will be drone drivers as the assisted driver, not an in-car driver. I think there will be warehouses full of drone drivers, ready and alert, to jump in at a moment's notice to a car that raises an alert about being in a less-than-ideal situation.
For all those "truck driving simulator" games, maybe there would be a market for "drone driver simulator" where you are dropped into random bad situations and have to successfully navigate out of them.
It surprised me to learn how reliant self-driving cars are on GPS. That makes me wonder how they'll perform in places like D.C., where roads will randomly be shut down or turn into one-ways (going in different directions at different times of day), lefts are legal or illegal depending on the time of day, etc. Are we really going to have to count on all of that being entered correctly into some database?
Accuracy also seems like a problem: the GPS often has no idea, e.g., that I'm on an access road versus the main road, or precisely where the next turn is.
... or in rural areas where the GPS position of roads is just plain wrong. Hwy 3 north of Steele, ND was off by about 100 yards on the GPS for a couple of miles. Nothing like a GPS yelling at you to get back on the road.
I get the feeling we are going to have to add some "beacons" to the current roads to deal with local conditions and just provide a decent verification.
I don't want to sound elitist, but a lot of us will consider the driverless car successful if we get it working in places like California and NY and completely ignore rural areas.
I don't want to sound pragmatic, but I will consider the driverless car successful if we get it working in places like Kansas or eastern Colorado, and completely ignore highly unpredictable, densely populated urban areas.
If you are taking a long trip, or hauling freight, then I'd rather get out on the open road, turn on the AutoPilot and take a nap, or do something more productive with my time.
You do realize this is an article on freight hauling? Perhaps you might want to look up where a lot of freight originates or passes through (or even the picture in the article).
What I've heard a lot of people say is that human drivers would still drive in rural and suburban areas, but most of the time spent driving is on large, well-established highways which are accurately mapped and could easily be handled by automation.
For example, all interstates in the US. I'd guess a significant proportion of trucks spend more than half their time on interstates. And if the driver can relax there, they are more alert again when driving in the city for the last miles.
I wouldn't be so certain that "all interstates in the US" are uniformly easy to drive. A specific counterexample: the BQE (Brooklyn Queens Expressway) is technically signed as I-278, but poor maintenance, frequent and shortened ramps and interchanges, and idiosyncratic local driving culture would IMHO make it quite a bit more challenging for automated driving than the proverbial four lane interstate through a midwestern cornfield in the middle of nowhere that you probably were thinking of...
For trucks it's still easier to drive there automatically, as they change lanes less than other cars. And your example is from a city. I think the time drivers spend in a city is not what makes the job so hard; it's the endless hours driving through the middle of nowhere. Automating that could already be a huge gain.
As an example, consider freight from LA to SF. The driver would still drive until he leaves LA, the truck would drive up the I-5, and the driver would take over again south of SF. In the end, the driver only had to drive for 30% of the time and could be more alert during that time (and spend the rest of it on something useful in the meantime).
"I guess a significant proportion of trucks spends more than half their time on interstates."
I would have to look at actual figures, but my gut feeling given the truck routes around us is that might be suspect. I'll echo galdosdi on the rest, the interstates aren't exactly easy.
Yes, I do. But I believe we can handle those. Handling the rural areas between LA and SF is very manageable; it's essentially just driving straight on the 5 freeway. Also, all the tech companies are here and would probably be willing to hammer out the details for the cars to work only in California, as it's a lot more manageable. Just like Tesla has Supercharger stations all over California so you can drive anywhere; I don't think it's quite the same in other states.
With sensor fusion incorporating signals about position and velocity from devices like accelerometers, optical flow, lidar, and magnetometers, you can get better precision than the GPS alone and this is common to do in practice.
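A minimal sketch of that idea: a one-dimensional Kalman filter that dead-reckons position from a known velocity and corrects with each noisy GPS fix. All noise figures here are invented for illustration.

```python
def fuse_gps(gps_fixes, velocity, dt=1.0, gps_var=25.0, process_var=0.5):
    """1-D Kalman filter: predict position from a known velocity, then
    correct with each noisy GPS fix. Returns smoothed position estimates.
    Noise variances are illustrative, not real sensor specs."""
    x, p = gps_fixes[0], gps_var          # initialize on the first fix
    estimates = [x]
    for z in gps_fixes[1:]:
        x, p = x + velocity * dt, p + process_var    # predict (dead reckoning)
        k = p / (p + gps_var)                        # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p          # correct with GPS fix
        estimates.append(x)
    return estimates
```

Because the gain stays below one, a single wildly off GPS fix only pulls the estimate partway toward it; real systems do the same thing with more state dimensions and more sensors.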
>I confess that in 1901, I said to my brother Orville that man would not fly for fifty years… . Ever since, I have distrusted myself and avoided all predictions.
— Wilbur Wright, in a speech to the Aero Club of France, 5 November 1908.
I'm willing to believe that there could be full autonomy under some circumstances (e.g. highway driving) significantly sooner. Which would probably be a significant safety win if nothing else. (Presumably, it could also increase vehicle utilization for trucking.)
Of course, what that doesn't enable is eliminating the need for a licensed, competent human to be present. So it doesn't allow for any of the shared automated car scenarios that get people excited.
I do know a lot of smart people who are convinced that we're on this exponential technology improvement path and nirvana really is only 5 or 10 years away. But I listen to other experts and just look around busy, congested cities and getting computers to the last 5% looks really hard.
It's hard to tell exactly when fully autonomous vehicles will be ready, but I definitely think it's safe to say that there is not going to be any such thing as a computer-assisted human driver in any significant numbers.
If the intended users are anything other than specialists, it's very rare that anything that is not either fully automated or not noticeably automated works out. In fact, even if the user is a specialist, such as a pilot, it often doesn't work out very well. People will misunderstand the system, trust it too much, trust it too little, or rely on it too much.
The only solution is full automation. Nothing else is going to work.
All new vehicles in the US have computer assisted traction control. For several years now. So a huge portion of drivers are already being assisted by a computer.
It was mandated because it improves safety. Automatic braking is scheduled to be a standard feature in the next 5-6 years.
I'm happy to concede that these are sort of uninteresting in the context of autonomous vehicles, but they are very much computer systems that assist drivers.
You also said, "It's hard to tell exactly when fully autonomous vehicles will be ready, but I definitely think it's safe to say that there is not going to be any such thing as a computer-assisted human driver in any significant numbers."
I just didn't realize that "I definitely think it's safe to say that there is not going to be any such thing as a computer-assisted human driver in any significant numbers" was not intended to be a categorical statement...
I did not write what I wrote in a vacuum; it was a response to a post that should make it more than obvious what I was talking about. I'm going to quote it here for you:
>"Confession time...
I really want to see fully autonomous vehicles, but I don't really believe we'll see them on the road in the next 50 years. I think Google's concern about reliably passing control between the driver and the computer is warranted. I think the technology will get stuck in the equivalent of CGI's uncanny valley: they'll find that it has to leave more control with the driver than it's capable of in order to keep the driver engaged, yet it never becomes capable of full control.
My guess is that these systems will evolve into assisted-driving technologies that use force feedback to suggest the sanest path to the driver, but won't take full control until the driver is outside the envelope. They'll predominantly be used to extend the window during which baby boomers can drive, and to save inattentive and unsafe drivers from themselves. In other words, a drunk behind the wheel is still going to look like a drunk behind the wheel, just less likely to kill someone.
I expect the technology will displace drivers in military convoys, but I don't think it will be good enough for general use. Even if the technology gets close, I don't think its safety will be as good as the "augmented human" model, which will also rapidly improve, and so insurance and regulation will continue to hamper rollout.
I would love for those in the know to tell me I'm wrong. I want to believe!"
Well, how about automatic transmissions? ABS? Traction control? Cars have had those things for decades and they're not a big deal.
A lot of newer cars have lane assist and other features. I think a half-way there approach works fine. The same way pilots have the option to use auto-land and auto-pilot.
I'm sure that if you read what I wrote again, you'll see I wrote "not noticeably automated" rather than "not automated", specifically for that reason. People don't know it's there; it's just the way the car naturally handles for them.
The transmission is completely automated. You never have to handle anything that has to do with it.
ABS and TVS change how the car functions all the time. You have no control over them whatsoever, and they never hand control over to you.
In other words, none of those contradicts anything I said, and I have no doubt that more things like that will come. However, they will still fall under the same two categories: full automation or non-noticeable automation.
You make a very good point that we've been making driving easier for a long time now.
But those things don't uniformly work fine, unless you ignore human factors. The big danger here is https://en.wikipedia.org/wiki/Risk_compensation especially with regards to encouraging drivers to feel safe about not paying attention temporarily.
Things like ABS and traction control seem to work great in this light, providing a benefit at little cost -- because (unless they're goofing around on a track or parking lot or something) the driver rarely notices it outside of a rare emergency and thus cannot adjust their behavior to compensate.
Automatic transmissions, lane maintenance, cruise control, "tesla auto pilot mode" etc on the other hand clearly make the car feel easier to drive and harder to crash every moment they're being operated, and make the driving task less demanding of attention, providing plenty of opportunity for the driver to get used to the safety feature and start using it as a crutch that allows the driver to pay less attention. And inattention is the main cause of wrecks.
Maybe these features save more lives than they doom, but it's certainly not a given, so as they are introduced they really need to be rigorously analyzed to ensure they have the hoped-for effect.
> Even if the technology is close I don't think the safety will be as good as the "augmented human" model
Except allowing humans control is, by definition, less safe. Google's self-driving cars have had a total of 1 partially at-fault accident after 1.5 million miles of driving. While their accident rate is about the same as the average driver, what isn't the same is that humans caused all but 1 of the 17 accidents the cars have been involved in.
Google has been testing their cars in conditions that are more ideal for existing technology than the vast majority of the world has to offer, and they do it in a way that avoids directly relying on computers for handling edge cases that normal drivers can't avoid. Their progress is impressive, but it's nowhere close to being able to handle major inclement weather, bad roads, obstructions, detours, and experimental traffic calming or control measures. Graduating from college takes 16 times longer than it does to graduate from kindergarten...it won't be any different here.
No, not really. Watch Chris Urmson's talk at SXSW. See their vehicle faced with someone in a powered wheelchair chasing a chicken with a broom. See the system recognize construction crews directing traffic.
I would be careful to extrapolate their capabilities from that. Nobody is going to talk about the edge cases that they don't handle, but they will talk about the ones they do. It is a pretty safe assumption that anything they can't demonstrate publicly is something they can't currently handle. In other words, if they didn't show the car driving in the snow, it's pretty safe to assume it can't.
And even for the examples they did show, it would be more interesting to know the rate at which they handle them successfully. So sure, they handled the guy with the reflective vest and a huge ass sign directing traffic that one time...will they do it the next time? Will they handle the case where the construction crews aren't doing everything the way they should?
The edge cases they don't handle show up in their DMV accident reports. Google has been very upfront about their sideswipe by a bus. They mis-predicted what the bus driver would do.
The edge cases that they don't handle don't really show up in their accident reports, because they avoid testing them publicly until they have a reasonable chance of succeeding in controlled environments. The stuff you see showing up in accident reports are their Release Candidate bugs, not their Alpha bugs.
I think the main point is not that the system can handle all edge cases, but that it can handle more edge cases more reliably than humans can at the aggregate level. That is what really matters. A computer never gets drunk, tired, sleepy, sick, etc.
Furthermore, when autonomous vehicles become common, human drivers will also end up in fewer accidents because they won't have to worry about the unpredictable behaviors of other human drivers (such as suddenly changing lanes without signaling).
Edge cases are important, but at the end of the day, they're just that: edge cases. By definition, they will be very rare.
How they handle them is far more important than if they handle them. Human drivers aren't exactly great at handling edge cases either, but they certainly handle them better than the computers do at the moment and for the near future. Existing technology is by far better at the non-edge cases, but nowhere enough to make up for how bad they are at edge cases. They have better reaction times for people crossing the street, but throw some snow at it and it'll drive off a cliff.
I'm sure they've got some reasonably safe behavior for when it doesn't know what to do (do nothing!), but that doesn't exactly make for a great transportation device. Staying home is safer than that and just as effective at not getting you anywhere.
And while edge cases are by definition rare, how rare are they? I'd be willing to estimate that when I drove trucks for a living, I'd handle upwards of 100 scenarios per day that were not "by the books". Remember that 99.9% uptime still means roughly 1.5 minutes of downtime a day. If they are really bad at handling the edge cases at the 0.1% level of rarity, that's still enough of a chance to fuck up at least once a day.
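The back-of-the-envelope numbers in that paragraph, using the commenter's own figures:

```python
# Figures from the comment above: ~100 off-script scenarios per day,
# and a hypothetical 0.1% failure rate ("99.9% uptime" by analogy).
edge_cases_per_day = 100
failure_rate = 0.001
seconds_per_day = 24 * 60 * 60

# 0.1% of a day expressed as minutes of "downtime"
downtime_min = seconds_per_day * failure_rate / 60   # ~1.4 minutes

# expected mishandled edge cases per day at that rate
expected_failures_per_day = edge_cases_per_day * failure_rate  # ~0.1/day
```

At 0.1 expected failures per day, a single truck would blow an edge case roughly every ten days, which is the commenter's point about why "rare" isn't reassuring at fleet scale.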
>>Human drivers aren't exactly great at handling edge cases either, but they certainly handle them better than the computers do at the moment and for the near future.
This is a ridiculous claim. It's like saying someone is a star baker because they were able to reliably make cookies when given all the ingredients and recipe and are only asked to do easy ones. I don't care what a company claims to have done in lab conditions, and neither should you.
The too cheap to meter phrase was from one guy's after dinner speech in 1954 - "It is not too much to expect that our children will enjoy in their homes electrical energy too cheap to meter..." People get things wrong occasionally.
If Google/Tesla/Apple/$company_working_on_self_driving_vehicles offered to defray/pay your insurance bill and operational expenses, would you let them kit out your car(s) with a gajillion sensors and recording equipment, and send all that information to them? Or if someone said if you did it for X years, you get $Y towards a brand-new car, where Y is 50K+ USD. You still drive manually like you normally do, but everything possible about your driving is recorded, and fed into their trainers. If they could get an orders of magnitude larger training corpus, I wonder if they will start to cover more edge cases.
This was my thinking too. Highways first. Then high-density city areas become certified, so maybe the car drops you off and goes and parks itself at a station after you do the initial drive into the city. As it rolls out into the suburbs, driverless coverage becomes a property-value selling point, like schools or good internet today.
I agree with you that "fully autonomous" is a very distant dream. However, I would like to point out the secondary or tertiary effects of such breakthroughs, which people often fail to predict in the beginning. The invention of the light bulb did more than replace candles: it helped humans be productive even after sunset, leading to unimaginable gains in productivity that have propelled us forward ever since.
I see "self-driving cars" not just as a replacement for our Camry or Prius - the technology could be adapted to do a lot of things we currently haven't imagined.
One angle I can think of is building environments designed specifically for such vehicles. Currently, most of the challenges come from the fact that the car has to run in an extremely unpredictable environment. What if we could build isolated lanes, or even exclusive roads, for such vehicles? What if we could build housing complexes, airports, parking lots, etc. around them?
It seems to be mostly a question of human comfort more than technology: e.g., there have been fully autonomous trains (UTO/GOA4) since the 80s [0], yet most metros today (even fairly recent ones) have limited to no automation.
[0] Kobe New Transit in 1981 and Lille Metro in 1983
I don't know why you're getting downvoted. At least in London that's certainly the case. Automating metros is technically easy, but since drivers can virtually halt the whole city with a strike, no one dares to do it.
Because it's a kneejerk political assertion of a common bogeyman, provided with no evidence whatsoever, which fails to explain why there are GOA4 systems in production, including in countries with a history of adversarial labor relations?
I don't know anything special, but I think you're wrong. Three reasons: convenience, cost, and safety. These are just too nice and compelling to let this technology drop.
Driver handoff is an optional feature. Cars could just stop instead. New cars have to have autonomous braking now (or very soon). A timeout on handoff can just have the car slow down to stop (which will trigger cars behind it to stop too). That's a compelling reason to have the driver want to pay attention enough to take the wheel before apologizing to everyone behind them.
That's a very engineering-oriented viewpoint. But it won't matter. As soon as the $$$ factor tips in automation's favor, including the potential for lawsuit payouts, it will happen.
This is my thinking as well. As a Chicago driver, I have to admit there's a great deal of fuzzy logic in driving. Consider that the last high-profile crash was Tesla saying "Whoops, I guess we never tested white trucks against a bright sky!" If they can't handle simple cases like that, how can they handle pedestrians jaywalking, cyclists doing dangerous end runs around your car, black ice, inches of snow, etc.? I have little faith that this is a simple computer science problem that will be solved "any day now." Hell, looking at what passes for production code generally is scary. I really don't want my car to crash as often as the software on my PC does.
I think the typical low-information futurist brigade is driving this narrative, and they haven't critically looked at what fully autonomous really means. The hype will wear off soon as we keep seeing crashes and other issues. The idea that cars will go from dumb to autonomous overnight is foolish. We'll have years/decades of smarter driver assist and maybe someday allow autonomous driving after a very long process of working out the kinks, certification, regulation, etc. Even then it'll probably only be highway driving or low-congestion areas far from pedestrians, strollers, and cyclists.
There are probably other use cases that'll be approved sooner than later like slow moving garbage trucks with autonomous robot arms picking up trash cans. Or autonomous street cleaning cars. That's assuming the crony public sector unions allow them.
I'm a regular Boston driver that enjoys driving here. On any average 15 minute drive, you will see numerous examples of cars, pedestrians, and cyclists breaking laws in sometimes shocking and life threatening ways. The drivers are also extremely smart and competitive, looking for any edge they can to get where they are going faster.
If self driving cars start driving here in large numbers, they are going to be the most slow, weak, taken-advantage-of cars on the road. If tailgating a self driving car makes it pull over, everyone on the road will do it. If a honk makes it pull over, everyone on the road will do it. You just can't understand the mindset of drivers here until you experience it.
This is just Boston, but wait until forums, YouTube channels, etc. spring up where people trade tips on how to fuck with the people sleeping in their cars. In a tech utopia where everyone has a self-driving car and human operators are banned, maybe it can work. But in the real world, where not everyone is a college-educated rich HN user, self-driving cars are going to require a very difficult-to-accomplish change in behavior to succeed.
And then there will be laws that make this illegal. Plus the car will probably be taking video of the infraction.
Hell, if everyone driving self-driving cars reduces the 38k deaths and 4.4 million injuries, then it's possible that eventually it will be illegal to drive yourself. At the very least, I would expect there to be autonomous-only roads - and you'll have to pay higher insurance premiums to drive a car manually.
If every self driving car drives like a grandma (which they will be legally required to do, I assume), they will not be allowed to ruin the commute of Bostonians. Why do you think there are zero red light cameras in Boston, despite an epidemic of running red lights? The voters here are very particular regarding driving.
Vehicles driving like grandmothers would probably be an improvement. Lower speeds can increase total throughput, as can fewer rapid changes in velocity.
I'm not arguing about the objective benefits in safety. I'm arguing that your average Joe is not going to let the nerds create a huge hassle for everyone too poor / not wanting a self driving car. This is a social issue.
> Joe is not going to let the nerds create a huge hassle for everyone too poor / not wanting a self driving car.
CalRobert's point is that aggressive driving isn't rational, because cooperative behavior at lower speeds often increases total throughput in congested areas. That point is completely independent of autonomous vs. non-autonomous driving.
> This is a social issue.
Social issues aren't insurmountable. Driving behavior can be changed. For example, I've noticed over the past few years that people tend to be far more likely to behave in the optimal way when merging at a closed lane when there's a sign explaining that everyone should use both lanes until the last moment.
Signage that promotes late merging is sort of a special case though. You're basically telling people that it's OK to do what everyone wants to do anyway but don't because 1.) They think they'll look like an asshole if they do and/or 2.) They'll likely promote various dangerous and aggressive behaviors in others against the person "cutting in line."
While I generally agree that driving behaviors are social issues they can also be pretty deeply rooted. It's hard to turn Manhattan or Boston drivers into laid back drivers somewhere in the rural South.
Rural Southern drivers are hardly safer than Boston drivers though; jacked up bro trucks with ignorant, inattentive drivers aren't exactly a recipe for safety.
Cooperative behavior is the best for throughput, however it can lead to deadlocks. Competitive behavior slows overall throughput, but helps eliminate deadlocks in the system.
If most cars on the road are self-driving, traffic will move a lot faster (assuming that laws allow them to go fast). Stop and go traffic is the result of human latency, over-acceleration, over-braking, and inattention.
> tailgating a self driving car makes it pull over, everyone on the road will do it. If a honk makes it pull over, everyone on the road will do it.
Tailgating is illegal in most of the US. Same for non-emergency honking.
Those laws are never enforced, of course, which I think is a shame. Especially non-emergency honking, and especially in cities.
Hopefully assholes exploiting the conservative driving behavior of autonomous vehicles will lead to law enforcement regularly enforcing these sorts of safety&civility laws.
> where not everyone is a college educated rich HN user
Anecdotally, I haven't noticed that dangerous and aggressive driving is strongly associated with education level or income. I pretty regularly encounter dangerous assholes with "university of blah" stickers on their car.
"Hopefully assholes exploiting the conservative driving behavior of autonomous vehicles will lead to law enforcement regularly enforcing these sorts of safety&civility laws."
The Cloud will take care of that. Autonomous vehicles will submit video, LIDAR, and radar data of human drivers being assholes to their manufacturer. The manufacturer will analyze the data, summarize it, extract clean still images of the worst behavior, the car license plate, and the driver's face, and send a report to a bad-driver clearinghouse. The clearinghouse will look up license plates, do face recognition on drivers against their Facebook profile, identify the policy-issuing company involved, and send them a data update. The policy-issuing company will send a note to the asshole, with pictures, telling them not to do that and that their rate just went up.
Most of this is already deployed. Half the drivers in the Bay Area have already been rated by Nexar's AI from dashcam data in Ubers.[1]
Have you ever driven on a busy roadway before? Police cannot, and arguably should not have the ability to, monitor drivers 100% of the time. For the most part they are forced to resort to what amount to scare tactics, just to place in drivers' heads the idea that they might be caught.
The idea that doubling, tripling, quadrupling, or whatever is necessary, police activity will enable autonomous drivers is absurd.
Have you ever tried reporting a dangerous or aggressive driver to the police?
> Police cannot, and arguably should not have the ability to, monitor drivers 100% of the time
They should, and ability is only a matter of time.
There is no right to being an asshole in a large, loud, dangerous machine.
> For the most part they are forced to resort to what can be considered scare tactics just to place the idea in driver's heads that they might be caught.
For the most part, they do nothing at all.
> The idea that doubling, tripling, quadrupling, or whatever is necessary, police activity will enable autonomous drivers is absurd.
You don't need to increase police activity at all. Police just need to care and process evidence provided by autonomous vehicle companies (or, hell, by citizens).
> You don't need to increase police activity at all. Police just need to care and process evidence provided by autonomous vehicle companies (or, hell, by citizens).
We can give them (especially the rural and small-town ones) one hell of an incentive by banning speed limit enforcement, or at least confiscating all revenue from speeding tickets, while leaving enforcement of other infractions unencumbered.
>Police cannot, and arguably should not have the ability to, monitor drivers 100% of the time.
Police can't catch 100% of any violation or crime, and they don't have to. We can't completely prevent any behaviour, but we can reduce it to the desired level with the right combination of risk of punishment and severity of punishment.
Right now the risk of punishment for non-emergency honking is so close to zero that most drivers seem to have forgotten that it's illegal at all.
A year or so back John Leonard at MIT--who has done a lot of autonomous vehicle research--took dashcam video of his commute into work from a Boston neighborhood over the course of a few weeks and edited them down. He did this to demonstrate a number of problems (such as police directing traffic around an accident) that, in his view, will be very difficult for self-driving cars to address.
They are already incorporating "unwritten rules" of driving into self-driving cars; things like rolling forward early at a 4-way stop to keep your place in line.
Boston is one of the worst cities to drive in (IMO much of Rhode Island is worse though). The "Boston Left" (also heard it called "Mass. left") is particularly jarring to non-locals.
I actually had to look that up. When I did, I said to myself, "Well, sure, what else are you going to do when you need to make a left-hand turn at a non-arrowed light when there's constant traffic the other way?" :-) (Well, that or sneak left after the light has turned red but before the cross traffic has started.)
It is more likely that a critical mass of well-behaved self-driving cars will make other drivers better-behaved. They will also make cycling and walking much safer and more pleasant. Critical mass is likely to be lower than most people think. When self-driving vehicles reach 10% the perception will be that they are everywhere.
The first self-driving cars will probably drive for Uber (or similar). That means 10% of total number of cars will be way more than 10% of total miles driven.
People also used to not care much about drunk driving, but I think the cost in lives lost made that culture change.
I'm hoping if people start harassing automated vehicles the public sentiment would be: "Don't you want a future where we minimize traffic fatalities? Stop making this harder".
I think you are right, it will likely not be an obviously good move until you are at > 70% autonomous.
I think there would be a big market for trucking though to have augmented autonomous, so you can keep going for stretches while sleeping. Plenty of routes in the midwest would allow for that, so cross country travel could be improved.
Also, driving at night. After traveling back and forth to my in-laws a few times late at night, I noticed many more triple-trailer trucks on the road. A lot more. It finally occurred to me that they likely drive at night to avoid traffic, given such a difficult-to-maneuver load.
The future of long-haul trucking is not automated trucks - it's rail. As it is now, a majority of long-haul freight goes piggyback on a train. The only things that go on a truck end to end are freight that is more expedited than the train can handle (or that misses the train's window), and freight traveling under a certain distance (where the extra day to get in and out of the rail yard pushes it out of time). It's easy to automate the long-haul portion of the journey; not so much the bumping-the-dock portion.
Warren Buffet agrees with you [1], even though I think Otto is trying to automate the rail-to-warehouse segment, not long distance trucking. Kind of like "last-mile" for freight.
I personally see very limited returns in automating the last mile - local drivers often make only 15-25 dollars an hour, and often it's drop-and-hook (drop one trailer, pick up another) to maximize tractor utilization. In addition, the hours-of-service rules for intra-state drivers are often much more generous than the federal standard.
In short, it makes sense to spend a lot of money to eliminate the big cost - but not to eliminate the little cost.
Really? I thought Teamster unions made that cost higher. My father made much more than that in the early 90s when he drove a truck temporarily for a salary. Unless local drivers aren't unionized?
The average salary for private drivers is much higher:
>> The median annual wage for a trucker that works for a private fleet, such as a truck driver employed by Walmart, is $73,000, according to ATA. The Labor Department pegs the median annual salary for all truck drivers at around $40,000. But it isn't an easy job to fill. There's 1.6 million truck drivers in America. Oct 9, 2015 [0]
Also, long-haul trucking is the real goal of Otto [1], not just the last mile. The benefits go well beyond just replacing the driver's salary with robots. For example, an automated truck fleet could be heavily optimized by algorithms to take optimal routes, utilize time better, and work for multiple warehouses at once by operating as a 'floating' fleet instead of on fixed routes (this is happening already with human drivers, but it's a natural extension of automating the vehicles and would make implementation/optimization far easier). Plus speeds could eventually be increased, with fewer accidents, fewer breaks (washrooms, food, etc.), less human management required, zero turnover, no training, no hiring, etc.
AFAIK there are no national union TL (truckload) carriers remaining (I believe CF was the last one); the LTL guys (Yellow, Overnite (now UPS)) are often, but not always, union.
Private fleet is FWIW a very very different ballgame - Walmart is considered by drivers "best in the industry" to work for - you need IIRC 5 years of driving experience before they'll even look at you.
Most TL drivers do not drive a dedicated route, and operate as a 'floating' fleet - even if you're on a dedicated board, while you might be hauling one customers crap - you're still likely to go to different places every time. Though as a non-dedicated driver, working out of the terminal I was out of, I regularly hauled Gatorade to Phoenix, or Coke to Phoenix, or Sports Authority to the Pacific Northwest. An example week for me was leave out of LA with Sports Authority (from Ontario area, CA) head up to Seattle then return with rolled paper out of Tacoma, or Coastal Oregon.
All of this freight was stuff that needed to move faster than the rail could take it, or where the destination was too far from a railhead, or the run was too short for the rail - or where simply, the company had the business from the customer and it could choose to route it via the railyard or via a truck, and it had an idle truck that needed to move to someplace else so it could haul freight from there.
While I do see a labor savings in automated driving, I don't see it as practical for most drivers. The biggest advantage I see is with expedited team drivers - you could replace one member of the team with the automation and save some labor from the truck. So long as the automation is driving the open-road portions, especially at night, that could work out as a win-win - you still need a driver to fuel the thing, check tires, open the trailer doors, etc., but perhaps the easier portion of the driving could be handled by the computer.
Well, Otto is Anthony Levandowski, who is very good. He did the self-driving, self-balancing motorcycle for the 2005 DARPA Grand Challenge while an undergrad at UC Berkeley. I met him back then. At least it's not the Cruise crowd.
Indeed, it seems they have some real talent, excellent financing, and political backing here to really accomplish something. Or at least to make a strong, pioneering go at commercializing this.
HN's usual negativity-generating machine is in full force in this thread. It's sad, because we should be supportive of bold companies making difficult plays. But it seems we're all in a race to reward the people who can most effectively dismiss the ideas as impossible.
Yet people constantly criticize Silicon Valley for no longer investing in bold ideas and instead obsessing over the next photo-sharing emoticon app. Then they wonder why VCs/angels are hesitant to capitalize entrepreneurs taking real risks, when this is the public reception they get - especially from a site full of technical people and entrepreneurs.
The more money the startup has, the more the hyper-critical audience shows up to show the world how much smarter they are than these guys - people who actually went out into the world, built something, and got $640m to implement the idea, and who haven't even yet made a public release or test available that could be analyzed and fairly criticized. But that doesn't seem to stop anyone.
It's amusing reading the comments on automatic driving. In Tesla-related discussions, Tesla fanboys dominate and are annoyed at criticism of Tesla. This discussion, on the other hand, is rather negative.
There's been a lot of work on self-driving trucks. Mercedes has demoed self-driving trucks. Volvo has some heavy trucks in operation in mines. Otto and Volvo have some kind of a deal on self-driving. If Volvo's involved, safety is being considered. Their CEO has said that if one of their cars crashes in auto mode, it's Volvo's fault.
Volvo says they will deploy 100 self-driving cars in one city in Sweden in 2017. These will only work on certain mapped roads, but they will be driven by customers, not Volvo employees, and the driver will not be expected or required to pay attention. This will be the first large deployment of self-driving without human backup. (Volvo also leads in self-driving marketing videos.[1])
> It's sad because we should be supportive of bold companies making difficult plays
It's not wrong to ask questions in general, although you can avoid a negative tone.
This is a special case though. They make mistakes here and people will die. Asking questions and pointing out every issue is, at some point, necessary.
The same problem exists for every person who enters a vehicle. That very same risk is why there is an economic incentive to do this in the first place. If they succeed far fewer people will die. And it will drive down the costs of commerce improving the quality of life for everyone.
So I take issue when the default perspective is pessimism and dismissiveness. This site is so hostile to founders who aren't doing totally safe projects or gasp those who get millions of dollars without publicly launched products.
Maybe this is just a self-selecting audience of bored pessimistic people who have time to spend commenting on their vague notions of other peoples projects on HN while the optimistic bunch is too busy building their own interesting stuff.
I find Otto a particularly interesting case because they've been pretty quiet with the contents of their tech, so we have very little idea of what they have accomplished yet other than partnering with Volvo and having lots of $$ in the bank. But I'm sure most people here would be fine commenting dismissively just off the headlines they read without really knowing any specific details of the project or the people behind it.
> "while the optimistic bunch is too busy building their own interesting stuff."
I am also dismayed by the general pessimism in this thread, though I think there is a third (and IMO most likely) scenario:
Anyone who has actual direct, inside knowledge of this field will not write publicly about it for a litany of incredibly obvious competition/secrecy reasons. The people most qualified to speak on these matters are staying silent.
He also was the main technical guy at Google for self-driving cars, and in fact his company 510 Systems was bought by Google because he was the one who actually built the hardware that went on the cars.
Uber has, by a huge margin, the best talent working on self-driving cars.
> 2) The robot helps you drive, you start to zone out, and plow into, let's just say as a random example, a truck crossing your lane of traffic.
This is my concern. How many days of sitting around for 12 hours not doing anything (because the computer is pretty good) does it take before the truckers who are supposed to be alert and ready to take over are completely zoned out. I would probably last about an hour.
That's the thing: by the point the computer is already that good, missing that one accident isn't optimal, sure, but it's definitely better than the current state of vehicle operation.
No it's not. People here have a really exaggerated vision of how bad human drivers are. Even the most pessimistic views of how common accidents are suggest that there is one contact collision every 75k to 100k miles or so.
An autonomous driver that was twice as dangerous as a human, then, would still go 37k to 50k miles between collisions. A human trying to backstop that robot's fallibilities would be required to pay close attention for weeks between interventions. Which is inhuman.
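A quick back-of-envelope sketch of that interval arithmetic. The 500 miles/day figure is my own assumption for a full-time long-haul driver; it is not from the thread:

```python
# Humans: roughly one contact collision per 75k-100k miles.
human_interval_miles = (75_000, 100_000)

# A robot twice as crash-prone halves the interval.
robot_interval_miles = tuple(m / 2 for m in human_interval_miles)
print(robot_interval_miles)        # (37500.0, 50000.0)

# Assumed daily mileage for a full-time long-haul driver.
miles_per_day = 500
days_between = robot_interval_miles[0] / miles_per_day
print(days_between)                # 75.0 days, i.e. ~11 weeks
```

Even at heavy professional mileage, the backstop human would go months between moments that actually require them, which is exactly the vigilance problem described above.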
I admit that I have no idea what the accident rate is in Uruguay or Brazil.
In the US:
The Department of Transportation gets reports of one accident per 250k miles (roughly). It is broadly agreed that many accidents are unreported, with estimates of the true rate ranging from 1/200k miles to about 1/75k miles.
For example, this document from the US Department of Transportation:
Wow, yes, there are a LOT fewer accidents in the U.S. per mile driven. According to other statistics, there are 5 million accidents for a population of 300 million. In Uruguay there are 50,000 accidents for a population of 3 million, and with a LOT lower average mileage per driver.
That's something we've discussed a lot here - there's NO way self-driving cars can go around South American streets - unless they learn to be very aggressive, beep the horn, cross streets whenever they can, shout and otherwise interact with other drivers.
And we mostly don't have highways. Americans drive a lot on highways, which must skew the per-mile accident figures.
I don't know how often accidents like fender-benders go unreported in the U.S. though.
Before anyone chimes in: yes, 5 million in 300 million is the same rate as 50,000 in 3 million.
What I wanted to point out is that there are a LOT more cars in the U.S., and the average US driver drives a LOT more than the average Uruguayan driver, so the per-mile accident rate is much lower. (I'd have to look up hard numbers, but that's the gist of it.)
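For what it's worth, the per-capita arithmetic behind that observation:

```python
# The headline numbers give identical per-capita rates, so any real
# difference has to come from miles driven per person, which these
# figures don't capture.
us_rate = 5_000_000 / 300_000_000    # accidents per person per year
uy_rate = 50_000 / 3_000_000
print(us_rate == uy_rate)            # True: both 1 in 60
```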
The step where you need a human supervising an imperfect AI is filled with landmines, though. All it takes is a semi-autonomous truck or two taking out a family car, and it's over.
Especially if it's a cute young WASP female - you would see a moral panic driven by the tabloids/press (and don't forget the old-school print media are not exactly friends with Google, etc.).
Airplanes have had autopilot forever. In that context we don't expect that phrase to mean "autonomous flying". Anyone who knows anything about airplane autopilot knows to expect the pilot to engage autopilot at certain non-critical moments and always be ready to take over for the difficult parts. Tesla's autopilot is no different. They clearly state it's intended for certain parts of highway driving. Like a glorified cruise control, much like autopilot is used in other fields. I don't believe they've ever tried to market their tech as a totally self-driving vehicle.
So why then should Tesla not call their assisted-driving "autopilot"? When our current common-sense understanding of "autopilot" in other industries has worked just fine?
Maybe because it's not marketing by a single trendy firm, so you can't attack it with your superior intellect? Or maybe you also expect airplane manufacturers and pilots to stop calling their tech "autopilot" because it might mean something else now that autonomous cars are becoming a thing... and we wouldn't want to confuse anybody who can't grasp context and the commonly accepted meanings of phrases...
"More than double the fleet" . . . from 6 to 12 trucks in 2017
I have a hard time believing that Otto was more than an opportunity to pick up engineering talent and that their interest in freight - for now - is more than an ongoing PR opportunity.
In a previous HN discussion, there was talk about the last mile being the most expensive and enduring part of the process to automate.
This being posted today makes me imagine truck transfer stations where humans board trucks at city entry points and drive them in and out.
At the moment there's a guy in the truck to take over. A little further in the future you could have a video link to a control center. Full autonomy for that stuff is probably a ways off.