Discrediting work being done by others in the realm of self-driving cars is rarely a good segue into introducing a company that automates checkout-free stores powered by machine learning / image recognition (which the article builds up to). Sure, the developments have been quite disappointing for the cars, but that is a far bigger challenge, and one that already has working examples on the road. And what is the source of that disappointment? "X said it should be there in 3 years and it's not, so it's a fail"? That strikes me as dismissive and at best counterproductive. Marketing and developing are two different things; don't use the shortcomings of the former to discredit the latter. It doesn't help anyone, especially new people joining the field.
> "The deep learning hype is now deflating pretty strongly too, with even the biggest "enthusiasts" now admitting that things ain't as rosy as they were hoping."
This comes across as bad faith, so a reference would be needed... Andrew Ng was pointing at the use of wrong/bad data, not the models or the approach per se...
So yeah, crapping on a trendy topic (deep learning) to market your venture, which ironically itself uses that trendy topic (I quote: "artificial intelligence"), is at best totally uncool...
I read it in quite the opposite direction: the cautionary note here is that if you overpromise and under-deliver, you might be able to secure early funding, but eventually you precipitate yet another AI winter.
The author, as someone in the space, is calling for more realism, which is arguably lacking in public communications within that field.
I think his last paragraph makes that very clear and echoes a conversation I was having with someone last week. There are a lot of practical if often boring wins to be had. We're just probably not going to get some of the things like "never need to own/drive a car" that had some people so excited.
As the piece articulates, there has been a big deflation of many high expectations from "just around the corner" to "maybe in 40 or 50 years."
Look at the actual numbers, though. I think the clearest writing on the wall that the hype is starting to be priced into valuations is Waymo.
Waymo is arguably the most technologically advanced of all the self-driving efforts. However, it was initially valued at $175 billion, Morgan Stanley later downgraded that to $105 billion, and its last external funding round valued it at $30 billion.
You don't need to be a VC fund to realize that is a hell of a drop (most probably because the initial valuations were hyped by the 'just around the corner' momentum).
To be clear, it's doubtless brought benefit to assistive driving systems. And we may even see full autonomy under limited conditions (e.g. specific limited access highways in good weather) in a significantly shorter period of time.
But door to door robo taxis? Seems highly unlikely.
There have been big pullbacks. And I wouldn't be the slightest bit shocked if Google/Alphabet pull the plug on Waymo one of these days--especially if they have to start tightening their belt for some reason.
Articles like this have been written for 5 years now, but the money keeps flowing into these projects. Argo, Aurora and Aptiv etc. are all still burning mountains of cash.
I am shocked the industry didn't take Uber/Lyft selling off their self-driving divisions as the signal to stop funding these efforts.
I'm genuinely curious how the various self-driving efforts broke down among:
- We can do it!
- Maybe we can't do it but even partial success makes it worthwhile
- What a pipe dream but it would be morally wrong not to separate greedy VCs from their cash
I do think it's different from 5 years ago. Yes, some researchers were throwing cold water on the idea then, but they were seen as contrarians. And, on boards like this one, you'd have no shortage of people going "but Waymo is going to have a taxi service next year!" Today, it's closer to being accepted wisdom.
Ha! I kinda stopped paying attention to most consumer-oriented startups a few years back when some kooky poetry delivery service got a few million in funding. (I am trying to find a link to it). Then there was that "pizza robot" company ...
I get that VCs will fund stupid things like that to grow entrepreneurs and build a portfolio of companies that maybe grow enough to get acquired and produce some profit. But funds/corps dumping $250 million+ annually into self-driving seems crazy. The opportunity costs of that alone are outrageous especially when there are ample and ripe opportunities to disrupt "traditional" businesses and business models across all kinds of market verticals.
> I do think it's different from 5 years ago. Yes, some researchers were throwing cold water on the idea then, but they were seen as contrarians. And, on boards like this one, you'd have no shortage of people going "but Waymo is going to have a taxi service next year!" Today, it's closer to being accepted wisdom.
Yes, and this has always been perplexing. I work at an AI startup that actually uses it to accomplish rather straightforward tasks, and it's really hard to perfect within the tight tolerances for error that we have to adhere to. When you scale the problem up to self-driving, it becomes evident quickly that it will take a looong time to get to Level 5; yet people in this business were still insisting it was around the corner.
I'm by no means an expert but a number of years ago I saw a presentation by MIT's John Leonard (he was involved in one of the early DARPA contests). One of the interesting things he showed was dash cam footage he had taken over the course of about a week commuting from Brookline where he lived to MIT. And he pointed out all the things that would be really difficult to do.
As a non-expert but a very longtime driver this made an awful lot of sense to me. But so much money and brains were saying success was right around the corner that I was half-convinced that I and other skeptics were missing something.
I think, in addition to the scammers and dumb money, it's that a lot of people who should have known better looked at the pace of advance over the previous 5+ years and figured, "How could we not iron out the remaining kinks in a few more?" Add to that the number of people who don't like cars much and just so desperately wanted this future where they never needed to own a car or drive again.
> Andrew Ng was pointing at the use of wrong/bad data, not the models or the approach per se...
Except that deep learning is equal parts data and model. And he wasn't saying that the data was bad; he was saying that the data collected turned out to be so context-dependent that it wasn't usable after even a small change in that context. That's not "bad data", that's "the real world".
The example with the stop-sign on the building [0] really hits the nail on the head.
For a human it is trivial to see that this sign is attached to a building (i.e. not part of the road-sign network). I can imagine a camera-only system would struggle with this, and I wonder how a camera-only system could solve this in the general case without a breakthrough in general understanding and interpretation of the world.
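As a purely illustrative sketch (not any vendor's actual pipeline), one could imagine combining a camera detection with depth or map context to reject signs that can't plausibly be roadside signs. All field names, thresholds, and the `SignDetection`/`is_plausible_traffic_sign` helpers below are invented assumptions:

```python
# Hypothetical contextual filter for sign detections. Every field and
# threshold here is an assumption made for illustration only.
from dataclasses import dataclass

@dataclass
class SignDetection:
    lateral_offset_m: float      # estimated distance from the road edge (e.g. lidar or HD map)
    mounting_height_m: float     # estimated height of the sign centre above ground
    attached_to_structure: bool  # e.g. overlaps a building footprint / facade cluster

def is_plausible_traffic_sign(det: SignDetection) -> bool:
    """Real roadside stop signs sit near the curb, at roughly head height,
    and are not embedded in a building facade."""
    if det.attached_to_structure:
        return False
    if det.lateral_offset_m > 5.0:                 # too far from the roadway
        return False
    if not 1.0 <= det.mounting_height_m <= 4.0:    # implausible mounting height
        return False
    return True

# A "stop sign" painted high up on a wall, well away from the road:
print(is_plausible_traffic_sign(SignDetection(8.0, 6.0, True)))  # False
```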
I meant for a human. If I'd seen that in the wild, either while moving or with both eyes open, that wall sign wouldn't have made me pause and think. But since it was a single non-moving photo, for a very brief moment it did.
I have been working as an AI practitioner since 1982, and have enjoyed the flow of much hype, many failures, and wonderful successes over the years.
We will get there, eventually, for fully automated cars. Off topic, but while I don't have a car (I gave mine to my granddaughter a few years ago, and now simply walk in the small town I live in, borrowing a car as needed), my wife got a new 2021 Honda Pilot and I find the driver-assist features hit a sweet spot: warning of other vehicles in blind spots, lane following, etc. Pretty decent technology.
I find lane following quite stressful after it got confused by temporary-vs-permanent lines and tried to steer toward a temporary barrier. Warnings are nice; actions, I'm not so sure.
Self-driving is another problem that seems like it has a constrained scope. After all, there's a clear rule set and even lines on the ground! But like so many fields, it's very hard to model the human actors, their feelings and intentions.
We understand quite well how to get a 3d representation of the scene and model basic physics. But we need a lot of further work to handle the messiness and complex expectations in real life. You basically need to understand much more of the world than we thought to handle edge cases in driving.
On the other hand, some jobs are so standardized that robots really can - and do - replace them. A cashier's job really is a lot simpler than a driver's.
P.S.
At the very bottom of the page:
"If you found an error, highlight it and press shift+enter to notify us"
This is the first time I've seen something like this; it's awesome!
Well, there are sometimes lines on the ground. Sometimes there aren't. Sometimes during construction you're directed along a temporary lane, ignoring the lines on the ground. Other times you want to carefully follow the lines on the ground, except when there's a vehicle stopped in the road, then you want to go outside the lines, but return as soon as possible. Sometimes there's snow or ice and the lines are intermittently visible, so you need to extrapolate where you think the lines might be, and maybe get fooled momentarily by a patch of snow that looks like a white line. Sometimes the snow is heavy and other drivers have decided that a multiple-lane highway is now going to be treated as a single lane, and for the sake of safety you should probably follow suit.
Self-driving feels like edge cases all the way down.
The rules aren't that clear either. For example, defensive driving classes emphasize that there are rules that specify which car has right of way (e.g., at an all-way stop sign), but just because you have the right of way doesn't mean that you should proceed. If another driver disregards your right of way, it's your legal obligation to stop and avoid a crash - you can't blame the crash on the other driver.
Also, just because the speed limit sign says 55mph, it doesn't mean that you can always drive at 55mph - there's always an implicit rule that you must consider current road conditions. For example, if you're doing 55 while the highway is covered with ice or there's zero visibility due to fog, a cop can still pull you over for reckless driving.
Then, there are clear rules that change suddenly due to geography. For example, in NY State, you can turn right at a red light after coming to a full stop, unless posted otherwise... except within the boundaries of NY City. And NYC has a default speed limit of 25mph, generally slower than in neighboring counties. Rules can also change according to time. Can a self-driving car understand a sign that says "School Zone - Speed limit 15 when light flashing"?
...and to solve problems with equipment, and to access items that can’t be left unsupervised, and to answer questions, and to observe interactions between customers and intervene if necessary, and to notify management of problems they cannot solve, and, and, and...
You see a pretty clear model today with self-checkout. And it's probably the same even if you make self-checkout better. You have people readily available when things don't "just work."
Self driving is, in my opinion, ironically, a trolley problem. If enabled in mainstream usage, we'd probably see total deaths drop, but they'd be shifted to a more random set of people dying for stupider reasons.
Would you rather have two drunk drivers and an overworked, sleepy trucker die, or one guy who could have driven safely but was instead driven into an overturned truck?
We can cross that bridge when, or if, we come to it. At the moment, AI products are nowhere near as safe as human drivers in no-compromises real-world driving situations (i.e. a foggy/snowy/icy night, driver asleep, mountain highway, etc.).
If we really cared about using AI to promote safe driving, we would turn it away from the road and towards the human driver. Take the above scenarios. While AI driving is a difficult problem, an AI that can detect a drunk/tired/inept driver is not. Any car could easily be equipped with internal cameras or other systems to tell if a driver is unfit. Any car could be equipped with a dead man's switch to safely deal with a driver who has fallen asleep. How about a car that calls the cops whenever it thinks its driver may be drunk? Heck, you don't even need AI to install a speed governor that would curtail any and all speeding[1]. The fact that the market repeatedly rejects such simple AI implementations tells me that all-up AI driving is a long, long way away.
[1] My dream is an automatic switch that turns on a police car's lights/sirens every time they break the speed limit. Why else would a marked cop car ever speed unless it was chasing someone?
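For what it's worth, the monitoring logic being proposed really is simple compared to full self-driving. Here is a minimal sketch of the dead man's switch and speed governor ideas above; the sensor readings, thresholds, and the `monitor` function are invented assumptions, and a real system would obviously need validated sensors and far more engineering:

```python
# Hedged sketch of the driver-monitoring ideas above (dead man's switch,
# speed governor). All readings and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class DriverState:
    eyes_closed_seconds: float  # from an interior camera (hypothetical)
    hands_on_wheel: bool        # from steering torque / capacitive sensors
    speed_kph: float
    speed_limit_kph: float

def monitor(state: DriverState) -> list[str]:
    actions = []
    # "Dead man's switch": sustained eye closure or no hands -> escalate
    if state.eyes_closed_seconds > 2.0 or not state.hands_on_wheel:
        actions.append("sound alarm")
        if state.eyes_closed_seconds > 5.0:
            actions.append("engage hazard lights and slow to a stop")
    # Simple speed governor: never exceed the posted limit
    if state.speed_kph > state.speed_limit_kph:
        actions.append("limit throttle to posted speed")
    return actions

print(monitor(DriverState(eyes_closed_seconds=6.0, hands_on_wheel=False,
                          speed_kph=130, speed_limit_kph=110)))
```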
> We can cross that bridge when, or if, we come to it. At the moment, AI products are nowhere near as safe as human drivers in no-compromises real-world driving situations (i.e. a foggy/snowy/icy night, driver asleep, mountain highway, etc.).
I'm not entirely convinced this is true, even though it's commonly stated. People are terrible at driving in fog and snow and ice (and... asleep?). It's intuitive why a self-driving car company would not want to release its cars into extra-dangerous situations while it's still improving on the easier stuff, but we don't exactly have stats to say the cars would necessarily do worse.
>> People are terrible at driving in fog and snow and ice
They are considerably better than the AI systems, which currently just stop and say "I see no road" or slavishly stick to markings/signs that mean little in winter conditions. The last time I rented an SUV (a 2020 Jeep Grand Cherokee), it wouldn't let me reverse into a parking spot because the rear camera/sensor was caked in snow/ice. In order to be dangerous at winter driving, you must first be able to actually move.
There's a line of can't/won't/shouldn't here, though. The AI systems probably could drive in these conditions; most of them use GPS to navigate. But they don't, because that's a high-risk activity. I'm not totally convinced that, even now, the average accident rate of a self-driving car wouldn't be lower in adverse conditions than that of a random driver, if the car were just told to do its best and let loose.
To be clear, human driving that is augmented by “AI” (lane following, collision detection, blind spot warnings, etc) is much safer than human driving alone.
To a point. There is an inflection point where the AI becomes a crutch that allows the human to stop paying attention. Tesla may be at or near this point. For instance: lane assist is great until it causes people to take their hands off the wheel and stop looking at where they are going. I'd rather see such systems implemented as enforcement mechanisms. Let the AI prevent a car from drifting out of its lane. Let it monitor the lane position and scream at the driver when they start to drift. Don't let the AI become a comfortable crutch that allows the driver to take their mind off the task.
I agree, driver monitoring should absolutely be a core part of self driving systems. This is something that George Hotz (Comma.ai) is getting right, and Tesla is getting wrong.
The problem is that Elon/Tesla has the hubris to think that their self driving is soon going to be so good they don’t need to worry about driver monitoring, but that’s obviously wrong at this point.
I'm not sure it's a trolley problem so much as an automated system that is statistically safer than manual operation but will "randomly" kill people on a regular basis. Which is a legislative/liability problem because outside of possibly rare drug side effects, we don't normally accept consumer-facing products, even if used and maintained properly, randomly killing people--and just shrug our shoulders because stuff happens.
I think the OP was referencing the trolley problem because autonomous driving AI is often regarded as a utilitarian problem. In other words, "intent" does not matter, only consequences. In that regard, any ML cost function is purely focused on consequences. In the context of minimizing deaths it wouldn't matter if the software deliberately chose to kill one person, if the cost (presumably total deaths) was minimized. Where I think it gets sticky is that our society does factor in intent (see the various degrees of homicide) and if we're thinking in terms of pure utilitarianism, I'm not sure how this plays into the liability of autonomous software.
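To make the "consequences only" point concrete, here is a toy sketch: a purely utilitarian cost ranks actions by expected deaths alone, so deliberately endangering one person "wins" if it lowers the total, while even a crude intent penalty can flip the choice. The numbers and the penalty term are invented for illustration, not taken from any real system:

```python
# Toy illustration of "consequences only" vs intent-aware scoring.
# All numbers are invented.
actions = {
    # action: (expected_deaths, deliberately_targets_someone)
    "stay_course": (0.9, False),
    "swerve_into_one": (0.4, True),
}

def utilitarian_cost(expected_deaths: float, deliberate: bool) -> float:
    return expected_deaths  # intent plays no role

def intent_aware_cost(expected_deaths: float, deliberate: bool) -> float:
    return expected_deaths + (0.6 if deliberate else 0.0)  # arbitrary penalty for deliberate harm

for name, (deaths, deliberate) in actions.items():
    print(name, utilitarian_cost(deaths, deliberate), intent_aware_cost(deaths, deliberate))

# Under the purely utilitarian cost, "swerve_into_one" is chosen; adding even a
# crude intent penalty flips the decision, which is exactly where the liability
# questions get murky.
```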
This is a trolley problem as much as seatbelts and airbags are.
The number of people who get into car accidents is staggering, and it’s certainly not just drunk and tired drivers.
Drive assist features like lane keeping already make driving much more safe. There’s no trolley problem here, just huge numbers of reduced deaths across the board.
>This is a trolley problem as much as seatbelts and airbags are.
I disagree. The trolley problem is concerned with ethical decisions. Seatbelts are passive safety devices and don't make decisions. Airbags make "decisions" based on a sensor input, but my hunch is there is very little ethical dilemma in whether or not to deploy an airbag when a sensor threshold is met.
Current driving assist is probably not much of a trolley problem but future self-driving software almost certainly will be tasked with making choices between "bad and worse" which opens up the ethical can of worms about how to define "bad".
Yes, those (very rare, mostly hypothetical) situations are trolley problems, but OP was making an entirely different argument, referring to self driving as a whole being a form of trolley problem.
I think the concerns of self driving “moral dilemma” problems are largely overblown. First, they are such rare situations: I’ve been driving for 20 years and never experienced one, I imagine few people have. Second, the solution is simple anyway: favor the passengers of the current car.
I almost have trouble imagining realistic scenarios that involve deliberately endangering the occupants of the vehicle in order to protect others. (And any manufacturer who designed such a system is probably not going to find many takers.) And, for that matter, there are very few scenarios where swerving and running into something at speed is a better decision than braking hard.
I'm the same, I can't really come up with any realistic vehicle accident scenario that would be a classic trolley problem. I may just be lacking in imagination though. I'd be interested if anyone could present a realistic example of such a scenario, and one that isn't wildly unlikely ever to happen.
Something that does occur to me is that since humans tend to just automatically react in extreme situations whereas an AI would presumably have precious milliseconds to consider different scenarios, maybe an AI could create a trolley problem where it wouldn't exist for a human.
For example, if someone steps out from behind a bus a human will most likely slam the brakes on instantly and will likely slide into the pedestrian at a slower but possibly still fatal pace; whereas an AI having a bit of time to think calmly about it might slam the car hard into the bus whilst also braking, which might slow the car down enough to prevent too much injury to the pedestrian whilst causing damage to the bus, writing off the car, and possibly injuring the occupant (though the car safety features should help).
>possibly injuring the occupant (though the car safety features should help)
This seems to take the opposite viewpoint of the OP which said the "obvious" answer is to do whatever is best for the passengers.
>one that isn't wildly unlikely ever to happen.
A lot of the comments here are along these lines and, to a certain extent, they miss the point. Risk is a combination of probability and severity, even before considering ethics. In cases where probability can be reliably calculated, there needs to be a threshold for what risks are accepted in order to make informed decisions. I'll try to illustrate with a couple of examples.
Say there is software that decides to take a specific action based on sensor input. Maybe the action is to accelerate and swerve to the left if an obstacle is detected oncoming from the right at an intersection. Let's make it somewhat more complicated by having a jaywalker stepping off the curb on the left. There are (at least) two prospective events:
1) The car does not perform evasive maneuvers and increases the risk of being hit by the other vehicle
2) The car does perform evasive maneuvers and increases the risk of hitting the pedestrian
Each has a probability and a severity. To make things simple, say the probability of each is 5%. However, they have different severities. In scenario 1) the severity to the pedestrian is negligible, while the severity to the driver(s) is moderate: with modern cars it's unlikely either driver will be killed, but there is an increased risk of sub-fatal injuries and likely much more damage to the vehicles. In scenario 2) the severity to the driver is fairly low (probably just repair work covered by insurance), but the severity to the pedestrian may be injury or death.
Remember, risk = probability x severity. So if we take the simplistic approach of only taking the perspective of the driver ("do whatever favors the passengers of the car"), we have effectively minimized the passengers' risk of moderate injury by increasing the pedestrian's risk of grievous injury. Adding ethics gets even more complicated: does it change if the pedestrian is a child? A mother pushing a stroller? A homeless person?
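A small numeric sketch of that trade-off, using the 5% probability from the example and severity scores I've invented on an arbitrary 0-10 scale:

```python
# risk = probability x severity, scored separately for the pedestrian and the
# occupants. The 5% probability comes from the example; severities are invented.
P = 0.05  # probability assumed for both adverse outcomes

# scenario: (severity_to_pedestrian, severity_to_occupants), arbitrary 0-10 scale
severities = {
    "1_no_evasive_maneuver": (0, 4),  # vehicle-vehicle impact: occupants at moderate risk
    "2_swerve_toward_curb":  (9, 1),  # pedestrian at grievous risk, occupants mostly fine
}

for scenario, (sev_ped, sev_occ) in severities.items():
    print(f"{scenario}: pedestrian risk={P * sev_ped:.2f}, occupant risk={P * sev_occ:.2f}")

# A policy that only minimizes occupant risk picks scenario 2, i.e. it minimizes
# the passengers' risk of moderate injury by increasing the pedestrian's risk of
# grievous injury, which is exactly the trade-off described above.
```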
Another example would be what if the car is driving by a school playground and notices a ball roll between parked cars and into the road? A human can intuit playground + ball = a higher probability a child may run into the street after the ball. This might be enough to cause a human driver to apply the brakes hard even if a child isn't immediately visible. Would software intuit the same? Even if it did, would the car do the same if it increased the risk of a collision from behind?
The safety-critical software world spends a lot of time on small-probability events. The more recent 737-Max issue has similar through-lines, though there are many confounding issues, as is typical with safety mishaps. MCAS was classified as "hazardous" severity and by company policy required two AOA sensors to reduce the probability of failure. This would effectively reduce the risk, since risk = probability x severity. However, because the severity was "hazardous" and not "catastrophic" (i.e., they didn't think it could cause a plane to crash), the extra sensor was an optional software feature. Had the severity been appropriately attributed, I think there is a greater likelihood that the second AOA sensor would have been a mandatory feature, because they (I assume) have a specific risk threshold they are aiming for. All this to say, it's not enough to say "these are small-probability events, so they don't have to be addressed"; rather, probability is just one part of an overall risk strategy.
The decision space in safety-critical software is extremely large precisely because you must mitigate low-probability events if the severity is large enough. I sometimes wonder whether the missed self-driving technology targets are attributable to the naivety of oversimplifying the problem.
Thanks for the detailed reply. I will say first of all that I am an autonomous sceptic. I think it could be useful for some situations but not in all situations.
I would say that your first example really exemplifies my point about realistic examples. You start off well but then your point about ethics goes a bit off the rails in my opinion. I struggle to think of how to explain this point but I think it all comes down to "intelligence".
We humans are, for the most part, absolutely terrible in situations like your first example. It's basically impossible to make any kind of rational decision in such a short time frame; we just act instinctively. There is no way we could make any kind of logical or ethical decision; we would mostly just slam on the brakes, and possibly swerve accidentally depending on what we did with our hands in the stress of the moment.
So you are already asking our autonomous technology to make decisions that a human couldn't make in the same situation. Now that is fine as computers can process information much faster than humans, but then the question becomes "how much faster?"
Let's face it: your example is asking a computer to process a huge amount of information and then make a complex decision based on that information in about one second. I'm not convinced that is even possible. Not only that, you then start talking about ethics. How on earth is your computer going to work out whether someone is a homeless person? In my personal opinion, you are asking too much of the computer in too short a time frame.
We humans make cost/benefit decisions regarding safety all the time. No system is going to eliminate all deaths from vehicles; there will always be situations we can't predict. If you start going down the rabbit hole of "let's try to evaluate all risks no matter how improbable", you will never get anywhere, because in an environment as chaotic as a busy city centre it is virtually impossible to predict everything that could possibly go wrong.
Unless we can create a general AI (which I'm not convinced is possible and even if it were, why would it want to cart us about) then we will never be able to create an autonomous car that can assess unexpected situations in a way that would satisfy human standards. I think the best we can do is to have them operate in more constrained environments where the risks can be reduced, and have humans drive in other more chaotic environments.
I'd love to write more but I have to get to work and I don't want to bore you :-)
Managing those edge cases effectively is extremely important in safety-critical software. Addressing them is often what separates high-quality critical software from low-quality. Would you want an aircraft software engineer or a nuclear power plant engineer to disregard low-probability events? It worries me when I get the impression that the SV mindset of glorifying "moving fast and breaking things" is infiltrating safety-critical applications, particularly those that impact the general public.
>Second, the solution is simple anyway: favor the passengers of the current car.
I don’t think this is a given. Would the ethical software, for instance, drive a car through a crowd in order to save a lone passenger? Would you be okay with a human driver being absolved of responsibility for the same choice? My intuition is most would not, because we recognize there is an obligation to others as part of the social contract. It seems a naive oversimplification to not extend the same obligations to software that makes the choices in our place.
How is the car saving the passenger by driving through the crowd? That seems like an entirely imaginary scenario. In almost any real world scenario the solution is going to be to apply the brakes and come to a stop.
It is an imaginary scenario. But so is the scenario where they can always apply the brakes and come to a stop.
What if there is a cement truck tailgating and they can't brake hard? What if there are jersey barriers constraining maneuverability? Either of those would put the passenger at more risk than running into pedestrians would.
Performing failure-mode-effects-analysis on software is largely an exercise in creating imaginary scenarios and then effectively mitigating the risks to within an acceptable level. I agree that in most real-world scenarios the tough problems can largely be mitigated by "slow down and re-assess". But that's not all safety-critical risk mitigation is about; you have to mitigate low-probability events as well, especially in a domain with governmental regulation. According to the NTSB report, the Uber accident that killed a pedestrian is a case where this trade-off contributed to the death: the software was misclassifying objects often enough that an artificial delay was put in to avoid (I assume) nuisance braking. So in the real world, they didn't want to always apply the "just brake if in trouble" strategy, because of trade-offs to the passengers, and it didn't work out well.
Well given that the genetic lottery works differently for different gene pools… I’d say that two drunk drivers or a sleepy trucker are already pretty random.
Most success is arbitrary, even genetic success, and we should have compassion replace self-righteousness. A lot of people who are failures or successes don’t “deserve” their fates.
Self-driving is obviously a vain pursuit of personal space for those who can afford it. Automated public transport, or even slightly improved public transport, is a much better goal for the US.
Exactly this. I'd go so far as to also include honest mistakes. I'd rather have 10x the risk of being killed by a human making a mistake than by a computer with a bug. The human making the mistake has skin in the game (they are physically involved in the accident). Accidents happen, but I know humans try to avoid them.
Being killed by a piece of obsolete AI-code done by the lowest bidder for a company that went bankrupt 10 years ago seems much worse.
So not only will AI drivers need to be as good as human drivers, they need to be orders of magnitude better before we'll accept them.
I think I'm the opposite. I'd take the 10x reduction in the probability of dying. I'd take it for myself (I think I'm a fairly good driver, but I know I'm not as safe as I should be). And I'd take it for the 90% of deaths it would save, even if they aren't me. Those other lives matter, too.
Autonomous driving is the perfect example of 80/20 (or more like 99/01)
I'm sure autonomous cars can drive fine on Mountain View's straight wide streets, with 360 days of sun a year, barely any rain and no snow.
Put them in Swiss snowy mountain roads, in the middle of narrow historical Italian towns, or on an unmarked multi lane roundabout [0] and it's a whole other set of problems.
Autonomous cars are a good toy project, but when you think about it, they're only going to make us even more addicted to cars, commute longer, and pollute more. We'll need to redesign major portions of our infrastructure so that smart (i.e. dumb) cars can read the roads properly. I can't even imagine the amount of time and money we've spent on this; realistically, all you need is some kind of dumb algorithm to take care of stop-and-go traffic and emergency braking. Everything else will be too fallible and will require someone to constantly make sure the car doesn't suddenly decide to commit suicide.
You don't even need to get to the genuinely tough scenarios that I almost certainly would have sweaty palms driving in.
There are just a lot of scenarios with utility or delivery trucks blocking lanes, even modest snow, construction work, unprotected lefts with various complications, pedestrians doing crazy things, roads not on maps, etc. But an autonomous vehicle needs to be able to handle pretty much all of those things even without a human present.
It's just a matter of whether it's 40/60, 50/50 or eventually 80/20. The point I think is still valid: some fixed cost gets you half the way from where you are, towards 100. You will never get to 100, and the place where you might stop and think "This will be more expensive than a private driver for the coming decade" is probably lower than we think (say 90/10 or even 80/20).
Making a car that can do 99% of my highway driving or 90% of my driving time will be easy, possibly even in my lifetime. But neither of those makes for truly autonomous taxis, and they won't drive me home from the pub when I sleep in my back seat, so I really don't see the point.
I actually think fully autonomous driving for some subset of highway driving would be a really big deal--both for reasons of convenience and safety. But you're right. It doesn't buy you a taxi service which is what a lot of people care about.
That's what I was just thinking. You could potentially have autonomous geofencing. You enter your destination into the satnav and then start driving; when you get to a geofenced highway that is considered "AI safe" the car will give you a signal. At that point, if you wish you can set the car to autonomous mode and your seat will automatically move back a bit and tilt back so you can relax; maybe even take a nap (not sure about that though). When you are about 10 minutes away from exiting the geofenced section the car will return your seat to the normal driving position and sound an alarm so you have time to get your brain back into gear and take over control again.
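As a rough sketch (not any manufacturer's actual logic), that handoff could be modeled as a handful of states driven by map and timing data; the `handoff_state` function and its inputs below are purely hypothetical restatements of the scheme just described:

```python
# Hypothetical geofenced handoff: manual -> autonomous inside an "AI safe"
# highway geofence, with a 10-minute warning before the exit.
def handoff_state(inside_geofence: bool, minutes_to_exit: float,
                  driver_opted_in: bool) -> str:
    if not inside_geofence:
        return "manual"            # driver in full control
    if minutes_to_exit <= 10.0:
        return "prepare_takeover"  # seat upright, alarm, countdown to manual
    if driver_opted_in:
        return "autonomous"        # recline allowed on the geofenced highway
    return "offer_autonomy"        # signal that handover is available

print(handoff_state(inside_geofence=True, minutes_to_exit=45.0, driver_opted_in=True))  # autonomous
print(handoff_state(inside_geofence=True, minutes_to_exit=8.0, driver_opted_in=True))   # prepare_takeover
```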
I'm disappointed about the reluctance to use lidar. It seems like a great safety net, allowing the car to say "I don't know what this large white blob on the camera is, but it's solid and coming up fast, so I'm going to do an emergency stop..."
I suspect this is cost-cutting more than anything else.
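To illustrate the "safety net" role lidar could play, here is a minimal sketch based on time-to-collision: even with no classification at all, a large, solid return that is closing fast can justify an emergency stop. The field names and thresholds are invented assumptions:

```python
# Hypothetical lidar safety net: trigger on any sufficiently large object whose
# time-to-collision is below a threshold, regardless of camera classification.
from dataclasses import dataclass

@dataclass
class LidarTrack:
    distance_m: float         # range to the clustered return
    closing_speed_mps: float  # positive = approaching us
    width_m: float            # apparent size of the cluster

def should_emergency_stop(track: LidarTrack,
                          min_width_m: float = 0.5,
                          min_ttc_s: float = 1.5) -> bool:
    if track.width_m < min_width_m or track.closing_speed_mps <= 0:
        return False
    time_to_collision = track.distance_m / track.closing_speed_mps
    return time_to_collision < min_ttc_s

# "Large white blob, solid, coming up fast": 20 m away, closing at 18 m/s
print(should_emergency_stop(LidarTrack(20.0, 18.0, 2.5)))  # True (TTC ~1.1 s)
```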
> "The deep learning hype is now deflating pretty strongly too, with even the biggest "enthusiasts" now admitting that things ain't as rosy as they were hoping."
This comes out as bad faith, so a reference would be needed... Andrew NG was pointing at the use of wrong/bad data, and not the models or approach per se...
So yeah, crapping on a trendy topic (deep learning) to market your venture that ironically itself uses that trendy topic (i quote "artificial intelligence"), is at best totally uncool...