Apollo – An open autonomous driving platform (github.com/apolloauto)
246 points by KKKKkkkk1 on July 18, 2017 | 72 comments


These open source driving platforms are an interesting way to test out the limits of liability disclaimers on software. This license has:

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.


I notice the website contains a separate disclaimer that begins:

"The Apollo Open Platform (“Platform”) data, software and code provided or developed on the Platform (collectively, “Platform Code”), may be licensed to you strictly for testing purposes."

http://apollo.auto/docs/disclaimer.html

IANAL, but is it permissible to say that Apache-licensed code is licensed only for the purposes of "testing"? Wouldn't that conflict with clause 2 of the Apache license?


The Apollo Open Platform seems to be something separate not licensed under the Apache License. It appears to be the actual data used to design a robot, and the Apollo software is just the UI.


Otherwise nobody would want to work on open source projects if they were always liable for the damage their software causes its users...


Count me out on ever contributing to a project like this. I really don't want to be a test case.


This isn't because it's unreliable or because you (the developer) would be a test case. I mean, I don't know anything about this project so I can't vouch for it, but pretty much every open source project in this area is obligated to have this kind of verbiage and it has nothing to do with the quality of the project.

This is so you can contribute to an open source project without worrying about some random person suing you. This is to protect the contributors, which includes anyone who submits changes, which would include you. If you see an autonomous driving project that doesn't have language like this, that's the project you want to stay away from, not this one.


Maybe I wasn't clear. I don't want to be a test case for the disclaimer.

Just because you write it down doesn't make it so. Even if a lawyer wrote it.


But if there's no disclaimer at all, then someone can sue you, even if the whole case is BS and the software you contributed to wasn't at fault. It's still something you'd have to deal with. Never contribute to something that doesn't have this sort of disclaimer: it means both that you would be leaving yourself open to lawsuits (justified or not), and that the people running the project don't know what they're doing.

If an open source project in this area doesn't have a disclaimer like this, the people running that project are clueless, and if they're clueless about that, they're almost certainly clueless about the much more complicated technical stuff, so it wouldn't be a desirable project to contribute to in the first place.


Most open source projects have a disclaimer like this. You don't want to be held responsible for what some random programmer does with your code, either.


I hope there is a difference in liability between a self-driving car library and its left-pad dependency.


There isn't when you're the one using the software on your hardware.


Autonomous driving based purely on machine learning from vision is scary. Machine learning is, after all, a statistical method. It's going to do really well most of the time, and really badly once in a while.


The exact same can be said about human drivers. Why do we demand that autonomous driving be 100% safe? I would be completely fine with autonomous cars that sometimes crash, but are still less likely to do so than human drivers.


We accept that human drivers kill thousands upon thousands of people. We won't tolerate deaths from automation even in the single digits.

That may not be rational but it's also unlikely to change.


The rational fear is of automation introducing systemic failures. Imagine a "flash crash"[1] type event applied to widely deployed self-driving vehicles.

[1]: https://en.wikipedia.org/wiki/2010_Flash_Crash


I never understand why it has to be one or the other. Either we accept deaths caused by human drivers, or we go full autonomous and accept deaths by software errors.

The way I see it, AI-assisted driving is the way to go. Fully autonomous driving is, at least in my opinion, a pipe dream. No amount of testing will ever be able to prove an AI is safer than a human in all situations. They might be safer in some, though, e.g. highway drives, quiet roads at night, etc. So instead of pouring vast amounts of money into developing fully autonomous vehicles, why not spend it on improving the AI assistance features that already exist, so we can prevent most human errors without introducing new AI failure modes.

Note that this means we should all forget about the dream of not having to pay attention on the road because 'the car drives itself', and that we should actually dial back things like Tesla Autopilot, which ventures too close to 'car drives itself'. The driver should be in control until he/she makes a mistake; then the AI should step in to prevent worse.

On a side note: people comparing plane autopilots to point out the merits of fully autonomous driving are really missing the point, there is no AI in avionics whatsoever.


I do think we'll get to fully autonomous for some significant useful subset of driving, such as paved public roads, and we can probably do things to help out the AIs with corner cases (beacons, etc.). That said, I actually agree that the last 10% that gets things to a reliable Johnny-cab level is probably a lot further off than many are assuming. Given the level of investment and the advances that have been made, I'm probably even more optimistic than I was, but I still think it could be 30-40 years before you can order a robo-Uber in Manhattan.


Yet robots and automated machines (not autonomous vehicles) have still killed or injured a few dozen people since 1984, according to the U.S. Department of Labor. [1]

[1]: https://www.osha.gov/pls/imis/AccidentSearch.search?acc_keyw...

Worldwide stats will be far higher than that. We've accepted it, haven't we?


They're freak accidents. We wouldn't tolerate deaths day in and day out to the tune of 40,200 in the US in 2016.

...too lazy to find and paste the stat from mobile


You said we won't accept those deaths "even in the single digits", not 40k+ deaths, which is a few orders of magnitude higher.

Also, I don't understand why we would accept freak accidents but not other accidents. An accident is an accident. A death is tragic regardless of what caused it.


We don't accept the robot deaths. Any time one happens people scramble to "make sure this never happens again". When someone dies in a car accident, we yawn.


I think it's a matter of sussing out liability.


Would you find a plane crash due to technical failure acceptable?

Determinism rules for aeronautical engineers (software and hardware), and thus, today, flying is the safest mode of transportation.


> Would you find a plane crash due to technical failure acceptable?

Oh, but they did and still do; it has just improved as the technology matured, as every technology does. My great-grandparents traveled a lot by airplane in the '50s, and always traveled separately so the kids wouldn't lose both parents in the event of a crash, but guess what happened the one time they traveled together...

> It is estimated that approximately 22 percent of aviation accidents are caused by mechanical failures.

Source: http://www.trial-law.com/aop/mechanical-failures/


Whoa, did they both really die together in a plane crash?! How tragic if true.


> The exact same can be said about human drivers

Maybe not. There are cautious drivers and there are rash drivers. The probability of an accident is wildly different for the two. However, the probability of an accident due to a bug/miscalculation in autonomous driving is the same for everyone.


And when that bug is fixed, assuming modern update practices, it is fixed for everyone. In effect, human drivers can only learn from their own mistakes, whereas autonomous systems learn from every mistake made anywhere in the fleet.


But can you really fix a bug in a machine learning algorithm? Sure, you can retrain it on the data which caused the crash, but couldn't that create another bug somewhere else?
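One common guard against exactly that risk (a hypothetical sketch in Python, not anything this project documents) is a held-out regression suite: a retrained model must fix the new crash cases without failing any scenario the old model already handled.

    # Hypothetical scenario IDs a model is tested against.
    regression_suite = {"merge", "cut_in", "pedestrian", "night_rain"}

    def failures(handles, scenarios):
        # Scenarios the model fails; `handles` is a hypothetical predicate.
        return {s for s in scenarios if not handles(s)}

    def accept_retrained(old_handles, new_handles, suite, new_cases):
        # The retrained model must fix the new crash cases...
        if failures(new_handles, new_cases):
            return False
        # ...without regressing on anything the old model already handled.
        return failures(new_handles, suite) <= failures(old_handles, suite)

    old = lambda s: s != "cut_in"       # old model failed on "cut_in"
    new = lambda s: s != "night_rain"   # the "fix" broke "night_rain"
    print(accept_retrained(old, new, regression_suite, {"cut_in"}))  # False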


That's exactly why I am not looking forward to self-driving cars. Rash drivers in most cases know they are doing something stupid and dangerous (like going way over the speed limit, overtaking where visibility is limited, etc.). AI, on the other hand, is prone to "honest mistakes", which can be deadly.


They shouldn't be, because even as a cautious driver you are exposed to non-cautious drivers and, more importantly, no matter how cautious you are, a well-designed AI is almost certainly better.


Both cautious & rash drivers face the same external factors, but the latter are more likely to wreck themselves, sometimes without involving others. So the probability of an accident is higher for the latter.

AI may be good at reducing the average number of accidents, but when a tricky situation occurs, the probability of an accident could be ordered "rash > AI > cautious".


To be honest, I had the same assumptions about cautious and rash drivers. But then I got to know my new boss (he drives rally race cars) and, to be honest, I feel much safer in a rally car taken to its physical limits by someone who knows the car than in the car of a cautious driver who is only cautious because he/she is aware of how little they know/control the actual physics of driving a car.

I know such a thing is not quantifiable, and thus the law of min(ability) applies; nonetheless, one should not judge from appearance here, but rather from actual incident lists.


Liability for an accident not related to equipment malfunction generally stops with the human driver. Liability for accidents related to autonomous systems will not be so limited.


To err is human. We can punish erring drivers by law, but we cannot punish a robot. Unless there are laws that hold the owners of autonomous vehicle companies accountable for the erring automatons they unleash :-) and those laws are enforceable.


Do you punish a human out of payback, or so that they don't commit the same error again?


Of course it's revenge. An eye for an eye, a tooth for a tooth. You are not talking with a civilized species here, although the philosophical decoration can sometimes lead one astray.


We humans aren't rational beings; we're rationalizing beings. I would even suggest we've always felt ourselves to be the epitome of enlightenment, even whilst committing atrocities.


There are results showing that machine learning can be completely fooled by images that are modified so slightly that the human eye cannot perceive any difference. Have a look at the images in this paper on "adversarial examples": https://arxiv.org/abs/1412.6572
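For intuition, here's a minimal sketch of the fast gradient sign method from that paper, applied to a toy logistic-regression "classifier" rather than a real vision model (all names and values are made up for illustration):

    import numpy as np

    # Toy classifier: p(y=1|x) = sigmoid(w.x + b).
    rng = np.random.default_rng(0)
    w = rng.normal(size=16)
    b = 0.1
    x = rng.normal(size=16)  # stand-in for a (flattened) input image

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Fast gradient sign method: nudge the input by epsilon in the
    # direction that increases the loss for the true label.
    y = 1.0                         # true label
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w            # d(cross-entropy)/d(input)
    epsilon = 0.05                  # a small, "imperceptible" step
    x_adv = x + epsilon * np.sign(grad_x)

    print("clean prediction:      ", sigmoid(w @ x + b))
    print("adversarial prediction:", sigmoid(w @ x_adv + b))

The perturbation is tiny per pixel, but because it is aligned with the gradient everywhere at once, it can move the prediction a long way.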


Let's assume that neither humans nor machines are perfect drivers. But there is a non-zero probability that the machine will do something nonsensical that a sane human would never do (due to some unknown edge case). Who do you trust more to be your driver?


Humans also use statistics, of course. You'll have to elaborate on other differences to convey how scary machine learning really is.


How many more projects are going to get launched with the name "Apollo"?


Only until we have an ApolloGate scandal.


Or until the Apollo 1 release....


Hey guys, we need a new name. Start calling out names from mythology and we'll pick the coolest sounding/most applicable one! So creative!



Millions.


It'd be cool to have some sort of independent "Weissman score"-style benchmark for these systems. Maybe just RMSE or similar against ground-truth steering/throttle over a battery of different environments/weather/terrain. It looks like it uses LIDAR, and Baidu has a pretty impressive AI team, so it'd be really interesting to see how they stack up against, say, Comma AI's openpilot or Tesla's Autopilot.
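The core of such a benchmark could be as simple as this (a sketch, assuming you've logged per-frame human controls as ground truth; the names are hypothetical):

    import numpy as np

    def control_rmse(ground_truth, predicted):
        # RMSE between logged human controls and the model's
        # predictions over the same frames.
        gt = np.asarray(ground_truth, dtype=float)
        pred = np.asarray(predicted, dtype=float)
        return float(np.sqrt(np.mean((gt - pred) ** 2)))

    # Hypothetical per-frame steering angles (radians) from a test drive.
    human_steering = [0.00, 0.02, 0.05, 0.04, 0.01]
    model_steering = [0.01, 0.03, 0.04, 0.06, 0.00]
    print(control_rmse(human_steering, model_steering))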


This is by the Baidu AI team.


I'd imagine this could work on a small fleet of miniature cars (RC size?). A model town/city could be built with various obstacles for training much of the AI, I would think.


The autonomous racing league that meets in Oakland is doing exactly that. Typically the R/C car uses wifi to connect back to an AWS instance for the heavy lifting.


> Typically the R/C car uses wifi to connect back to an AWS instance for the heavy lifting.

Wow, and the latency is manageable for the task?


Well, the average human reaction time is 215ms, which would be considered unacceptable in online gaming, where 50ms is already on the high side; I normally see 30ms or so.

Let's assume 50ms, or 20 updates per second, and a car driving at 40m/s, about 145 km/h / 90 mph. The computer reacts after the car has moved 2 meters (about 6.5 feet); a human would only react after about 8.5 meters (about 28 feet) - quite the difference.
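Sanity-checking that arithmetic (distance covered during a delay is just d = v * t):

    # Distance traveled before a reaction: d = v * t.
    def reaction_distance_m(speed_mps, latency_s):
        return speed_mps * latency_s

    speed = 40.0  # m/s, about 145 km/h / 90 mph
    for label, latency_s in [("network round trip", 0.050),
                             ("average human", 0.215)]:
        print(f"{label}: {reaction_distance_m(speed, latency_s):.1f} m")
    # network round trip: 2.0 m
    # average human: 8.6 m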



Why do that, when you could train in a computer simulation?


Cooler, cuter, funner.


The list of partners looks quite interesting, including Ford, Bosch and Delphi.

It is apparently based on ROS, like Autoware.


And so many Chinese automotive companies I'd never heard of!


Interesting; surprised this wasn't submitted sooner. This is the project by Baidu. Thanks OP!


Maybe I'm missing something: is this a platform strictly for developing ML techniques? Or is it intended to actually run in a vehicle... on Ubuntu, in a Docker container?

I'm no expert, but I would think you'd want a realtime OS for this. Right?


More likely, the car would have a Linux box running this software that connects to one or more microcontrollers. The microcontrollers handle any sensors/actuators with real-time requirements, and may be running an RTOS or just bare metal.
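A common safety pattern for that split (a sketch of the general idea, not Apollo's actual design) is a watchdog on the microcontroller side: apply the latest command only while it's fresh, and fall back to a safe state if the Linux box misses its deadline.

    import time

    COMMAND_DEADLINE_S = 0.1  # hypothetical: expect a command every 100 ms

    class ActuatorWatchdog:
        # Microcontroller-side sketch: use the last command only while
        # it is fresh; otherwise fall back to a safe stop.
        def __init__(self):
            self.last_command = None
            self.last_command_time = 0.0

        def on_command(self, command):
            self.last_command = command
            self.last_command_time = time.monotonic()

        def control_step(self):
            age = time.monotonic() - self.last_command_time
            if self.last_command is None or age > COMMAND_DEADLINE_S:
                return {"throttle": 0.0, "brake": 1.0}  # safe fallback
            return self.last_command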


Hmmm, thanks for the additional info. Still seems like a bad idea. Based on your assertion, what happens when the RTOS/microcontroller does not receive a "decision" before the scheduling deadline? Many things can hang a Linux box... the reason RTOSes exist is that nothing can stop the scheduler. I would be very curious to hear from the Apollo developers: what is the intended platform and developer audience? Is this intended to actually run in a vehicle?


There are ways to make Linux an RTOS, including work on pushing an RT deterministic scheduler into the mainline[1].

[1] https://rt.wiki.kernel.org/index.php/Main_Page
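For example, with the PREEMPT_RT patches applied, a user-space process can request the real-time FIFO scheduling class (Linux only, needs root or CAP_SYS_NICE; a minimal sketch):

    import os

    # Request the real-time FIFO scheduling class for this process.
    # Priorities run 1-99; higher preempts lower. Hard real-time
    # behavior still depends on the kernel (e.g. the PREEMPT_RT
    # patch set linked above).
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
    print(os.sched_getscheduler(0) == os.SCHED_FIFO)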


Ahhh... Found it... The project has its own kernel. I stand corrected. This is super cool. https://github.com/ApolloAuto/apollo-kernel


Is it functional as of today? The perception directory is almost empty (only a skeleton).


Awesome. Now to just get a couple LIDAR cameras...


The cheapest LIDAR I know of with reasonable quality is in a Neato vacuum cleaner. Go to Target, get a vacuum, give it a lidar-ectomy, throw away the sucky parts, and you still have the cheapest LIDAR in town. 350 points, 5 Hz scan rate. Indoor range about 5m, outdoor 2+m with a sun shade, accuracy at 5m about +/- 2cm. Great for hobby robots. Totally not sufficient to drive a car at highway speeds.


$500 lidar toy platform https://www.seeedstudio.com/SDP-Mini-RPlidar-Experimental-Pl...

using the "RPLidar A2" https://www.slamtec.com/en/Lidar

looks interesting.


Would this work on an electric bicycle?

Edit: or a more stable tricycle/quadricycle


A team at the University of Washington Bothell (UWB) is already working on such a thing:

https://readwrite.com/2016/10/19/autonomous-tricycle-tl4/

http://newatlas.com/uwb-autonomous-trikes/45946/


There have been various autonomous 2-wheeled vehicles, though riding an autonomous vehicle that uses lean steering, like a bike, would seem to entail a little bit more excitement than I'd want to have.


Imagine a driverless bicycle riding back to its base after being used by the renter. A flywheel keeps it balanced while stopped at traffic lights; the electric motor propels it.


An open autonomous driving platform - seems great.



