Honda claims it will launch the first Level 3 autonomous production car in March (carbuzz.com)
91 points by evo_9 on Nov 12, 2020 | 109 comments


I think people are misconceiving L3 as "the car might yell 'grab the wheel' at any moment." At least for the L3 systems I've worked on, the model was: there will be exception cases we don't know how to navigate, but we have a plan to safely transfer control to the driver in them. Examples would include:

* A human construction worker or police officer is directing traffic: safely slow to a halt or near halt and ask the driver to follow directions.

* The route takes the driver from a supported road (large, well-mapped) to an unsupported road: tell the driver they have to take the exit, and if they don't, they'll miss the exit and stay on the larger, supported road.

* A broken traffic light: tell the driver and prepare to treat it as a red light unless they override.

These are not sudden dangerous handoffs, but they do mean that a driver can't be drunk or asleep when behind the wheel.
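
To make the shape of that concrete (a toy sketch only, not the logic of any real system; all names and timing budgets are made up), the handoff plan for these exception cases looks roughly like "warn early, wait for the driver, fall back to a safe default if they never respond":

    # Toy sketch only: illustrative names and timing budgets, not any real system.
    import time
    from enum import Enum, auto

    class Event(Enum):
        HUMAN_DIRECTED_TRAFFIC = auto()   # officer or worker directing traffic
        LEAVING_SUPPORTED_ROAD = auto()   # route exits the mapped highway
        BROKEN_TRAFFIC_LIGHT = auto()     # signal state cannot be determined

    # Seconds of advance warning budgeted for each exception case (made-up values).
    WARNING_BUDGET_S = {
        Event.HUMAN_DIRECTED_TRAFFIC: 10.0,
        Event.LEAVING_SUPPORTED_ROAD: 30.0,
        Event.BROKEN_TRAFFIC_LIGHT: 8.0,
    }

    def handle_exception(event, alert_driver, driver_took_over, fallback):
        """Ask the driver to take over; run a safe fallback if they never do."""
        alert_driver(event)                              # chime + dashboard message
        deadline = time.monotonic() + WARNING_BUDGET_S[event]
        while time.monotonic() < deadline:
            if driver_took_over():
                return "driver_in_control"
            time.sleep(0.1)
        fallback(event)   # e.g. slow to a halt, stay on the supported road, treat as red
        return "fallback_executed"

The point is that every exception has a default behaviour the car can execute on its own; the driver is asked, not required, to take over instantly.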


Thanks for the inside scoop! Not surprisingly, the line between Level 3 and Level 4 is fuzzier than it was in my mental model. I view your first example as a Level 4 capability -- AFAICT, stopping to let a human take over is acceptable for Level 4 vehicles. The other two are interesting.

Perhaps we would call the system you worked on "high level 3", and Tesla-type systems, where you have to be able to take over in milliseconds, "low level 3". Marketing departments would be tempted to call your system level 4.


If you have to be alert to take over at short notice, it's level 2. Tesla "Autopilot" falls into that category, as far as I know.

The categories are well defined (SAE J3016) and have legal implications. If the marketing department is tempted it'll have to go through the legal department.


I find the commercial use case a reasonable distinction: a level 3 car, you buy for personal use; a level 4 car, you rent like a taxi.


What also might go unsaid here is: the practical effect of Level 3's requirements vs. its capabilities is that it probably will not be able to engage often. If the car can't safely do it, it's not going to do it. So you'll invest all this money in a Level 3 car, only to find out that it only engages maybe once or twice a year, when the planets line up and it has all the info and environment it actually needs to work.


I'm not saying you're wrong, but I don't think any major manufacturer would release it if that's the case.

I can't imagine the customer response to a self-driving system they can only turn on once a year would be good.

Seems like an obvious PR/Marketing nightmare.


To be fair, one would also think that releasing a self-driving system that drives into the back of a semi at speed would be a PR/Marketing nightmare, but it hasn’t slowed Tesla down.

https://arstechnica.com/cars/2019/05/feds-autopilot-was-acti...


I don't actually agree with you here.

Drivers get into accidents ALL the time. Having a self-driving crash is entirely expected. It's also a relatively rare occurrence.

Having the system not work in the majority of cases will make literally every buyer upset.


This is an unlikely outcome.

Already, Toyota has released their third version of driving assistance, TSS 2.5, which has a reasonably decent ability to stay in lanes on highways. For anyone who does any highway driving, it's a competent system that a driver can use for a good portion of their highway driving. And it's likely less capable than Honda's system.

OpenPilot is another example - pretty good on most highways and large boulevards. Like the others, not for use on city streets.

At least for the American road system, you may be underestimating how much time people spend on large, well-defined highways that could use this system. Some other countries rely less on big highways and this could affect the portion of time available to use it.


These conversations tend to be unproductive because self-driving "levels" are only one dimension and don't consider terrain diversity.

A car that works at L3 only on a sunny day on a low-density highway is still useful to have.


Is this formally quantified, e.g., as an amount of time within which the human must be able to take control? If that amount of time is, say, 5 seconds or longer, and the system is empirically demonstrated to maintain safety so long as the human is always ready to take over in 5 seconds, that would be very compelling.


> there will be exception cases we don't know how to navigate, but we have a plan to safely transfer control to the driver in them

I don't know how anyone can claim this.

If someone steps out in front of the car with no notice and no stopping distance or room to drive around, how will you safely transfer control to the driver?


In that case, why hand over control at all? It's not like the driver is going to be able to stop the car more efficiently than the control system or anything.


> In that case, why hand over control at all?

Well that's my point. The car doesn't know how to navigate it. Neither will the driver.

There are some cases where you can neither navigate nor safely hand over control. So 'there will be exception cases we don't know how to navigate, but we have a plan to safely transfer control to the driver in them' cannot be a true statement.


> Well that's my point. The car doesn't know how to navigate it. Neither will the driver.

I certainly hope both know how to handle it. In your scenario there is no "good" option, but there definitely is a least-bad option. Likely it's to stand on the brakes and scrub off as much speed as you can before hitting the pedestrian. It might change the collision from a fatal one into "just" broken bones.


I don't think ploughing into someone can really count as successfully 'navigating' a situation.


And the car can't dodge bullets or land mines either. It's taken as a given that cars can't do the impossible.


> These are not sudden dangerous handoffs, but they do mean that a driver can't be drunk or asleep when behind the wheel.

Or that the driver doesn't know how to drive...


There's an ACM article which always comes to mind whenever I see discussions about self-driving cars. The title is "Automation should be like Iron Man, not Ultron". The idea is "augment the human, don't try and replace the human," since the combination of the human and computer will always be better than either one alone (at least with our current level of programming/AI/hardware capabilities).

To apply this to self-driving cars, we need more driver support. Visually identify cars and obstacles which could be an issue, to supplement the driver's identification. Help the driver keep to a lane. Help the driver keep a speed relative to traffic around them. Help drivers see in the dark, and past bright headlights. Help drivers see lane lines through snow-packed roads.

Help the driver see and hear things, but don't override their decisions.

Some of these technologies are available today in luxury cars - let's make them available to everyone on the road.


Except most accidents are caused by drivers not paying attention (distracted / fatigued / alcohol) or by speeding [0], not by being unable to identify the traffic around them. So giving them more information wouldn't help reduce accidents; reducing the requirement for driver attention would (i.e. self-driving cars).

[0] https://www.natlawreview.com/article/most-common-causes-coll...


Adding information - warnings - would help:

- Distracted drivers, bringing their attention back to the hazards on the road.

- Fatigued drivers, by alerting them to a hazard they missed.

- Drunk... yeah, probably won't help much here. Hopefully we can get to the point where you can't start such cars while impaired by alcohol.

- Speeding - again, improving knowledge of hazards, such as corners they can't navigate at the current speed, or vehicles which are at drastically slower/faster speeds.

Don't be quick to dismiss how technology can augment humans. Anything a car can learn to eventually handle on its own, it can notify a human about on a much faster time scale (as in it won't be the perpetual 3 years away, it could be now, since we don't have to perfect the reaction, just the detection).

And, perhaps most importantly for you, this doesn't preclude the development and deployment of fully automatic driving systems when they're actually at level 4/5. In the meantime, we can make a meaningful and broad drop in the vehicle accident statistics.

EDIT: Since this may not be clear - I expect this information and these warnings to be coming 4-5 seconds before an incident would occur, giving the driver classes you've pointed out sufficient time to react.
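
To illustrate why detection alone buys that time (a back-of-the-envelope sketch; the 4.5-second threshold and the constant-speed assumption are mine, not from any shipping system), a warning only needs a distance and a closing speed, with no planning or control stack at all:

    # Toy time-to-collision (TTC) warning. Assumes both vehicles hold their
    # current speeds; the threshold is an illustrative value, not a production one.

    def time_to_collision(gap_m: float, own_speed_mps: float, lead_speed_mps: float) -> float:
        """Seconds until contact at constant speeds; inf if not closing."""
        closing = own_speed_mps - lead_speed_mps
        if closing <= 0:
            return float("inf")
        return gap_m / closing

    def should_warn(gap_m: float, own_speed_mps: float, lead_speed_mps: float,
                    warn_at_s: float = 4.5) -> bool:
        return time_to_collision(gap_m, own_speed_mps, lead_speed_mps) <= warn_at_s

    # 60 m gap, ego at 30 m/s (~108 km/h), lead at 15 m/s -> TTC = 4 s -> warn.
    assert should_warn(60.0, 30.0, 15.0)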


Or putting all that money into making cars that can't speed and that can track if the driver isn't paying attention.

If the goal really is just to reduce how many people die in traffic there are far easier ways than self-driving cars. The goal of self-driving cars seems to be far more than just reducing traffic deaths, that's just the part that sounds good in public.


I'll just put this here: "The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment" – Warren Bennis


Marvel Studios writing it as a baddie is not a very compelling reason to avoid automation. As of today, the vast majority of autonomous devices and robots are factory devices that have minimal input from humans.


It might be worth reading beyond the title and into the article itself.

It's a good article with coherent arguments.


> It might be worth reading beyond the title and into the article itself.

I don't know why authors give their work flippant titles and then are confused when people assume it's a flippant piece of work.

'Don't judge a book by its cover' isn't a reasonable complaint when the author has deliberately given it a silly cover!


Or, it's a surprisingly relevant comparison (on the topic of the article) using pop culture references? Augmented humanity vs. the best AI they can create.

When it comes to titles for articles, this one is both memorable and far less click-baity than is normal these days.


Deliberately misinterpreting things isn't useful.

"Don't judge a book by its cover" doesn't apply to judging an article by its title. See how silly it is?


Honda has spent decades building consumer trust based on safety and quality. This has me suspecting that many consumers will trust them over Tesla or another newly established brand when it comes to autonomous driving. Personally, I'll take "works OK but is bulletproof" over "cutting edge but buggy" any day.


Bulletproof on Japanese roads, where immaculate paint is a given, or also on rural third-world roads?


> or also on rural third world country roads?

I drove 40,000 miles from Alaska to Argentina and 54,000 miles around Africa [1].

From experience, I can tell you the vast majority of vehicles sold in the developed world wouldn't last a couple of months on 'rural third world country roads'. The potholes, corrugations, mud, dust and everything else are intense.

For example, this is how I crossed the Democratic Republic of Congo - https://www.youtube.com/watch?v=OV8V3GdOcPU

[1] youtube.com/theroadchoseme


Just checked this out. Awesome channel!


That's a key distinction; "the road" is an ambiguity that these car companies love to deploy.


Personally I trust that before Honda rolls this tech out in the United States, it will be in fact bulletproof. Or at least that's my perception as a US consumer who has owned multiple Honda products.

Given what I read about Tesla, they'd most likely blame the roads, the driver or simply lie about the capabilities and push forward because "it's safer than humans!".


I guess, to re-hash the same conversations people have been having for years -- if the expectation is that drivers must be alert and prepared to intervene, that places them in the gray area of not knowing whether or when to trust the system. Even if the self-driving system is relatively good, is the human+self-driving system really better?

In some sense, the self-driving system gets to be trained with experiences of failure, probably under the expectation that no intervention will occur (i.e. the "policy" cannot include human intervention). But the human driver doesn't get to learn from experience when to not trust the self-driving system (because getting it wrong can be disastrous).

Perhaps a missing piece that human drivers _should_ participate in, but almost certainly won't, is training in a simulator. There's a methodological challenge here -- ideally the simulator training should focus on scenarios where the self-driving system is weakest (i.e. where it's plausible that the driver will need to intervene), but those are likely also the cases where we're worst at simulating the behavior of other cars/pedestrians/cyclists/loose objects.


> is the human+self-driving system really better?

Definitely. My last couple vehicles have had Lane Keep Assist and Adaptive Cruise Control. I believe those two equate to about "Level 2" of autonomous driving. Whatever the technical definition is doesn't matter though, the important point is that those two features alone make driving long distances so much less mentally taxing.

The Volvo system in particular is great... keeping your vehicle firmly in the middle of the lane like it's on rails. Frankly, it's better at that task than I am and only requires slight and occasional steering wheel input to stay engaged -- just enough to know you're paying attention.

Before these systems, I'd be mentally exhausted after driving 2.5 hours to my parents' house, especially if there was stop-and-go traffic and it took even longer. Since these systems, I arrive feeling mentally refreshed instead of drained. There is no way I'd buy a general-purpose vehicle without these systems ever again, and I look forward to any other ways that automated systems can augment my ability to drive more safely, more precisely, and with less effort.


ACC and Lane Keep was incredibly awesome when I was driving at night through northern Ontario. I was able to focus 80% of my attention scanning the ditches for moose and other large critters.


I crossed into BC from Idaho at night a few years back and shortly after noticed a LARGE animal off the side of the road. Pretty sure it was a moose, but it may have been an elk. Either way, I spent the next few hours paying extreme attention to the sides of the road. It made the drive much more stressful and having a little backup would have made a huge difference.


I relax more on longer drives with ACC.

I tend to drive faster, so slow drivers in the left lane drive me a bit batty.

With ACC on, that tendency is blunted heavily, and I just roll along, making occasional adjustments. But my temperament is much improved.


Agreed. I did a 1,500 mile (2,400 km) round-trip drive from Los Angeles to Utah a couple weeks ago using these same features in my Tesla Model 3 and it's a very different experience from previous trips in other cars.


LKA and ACC are two separate level 1 features: LKA is lateral only and ACC is longitudinal only.

A lane change assist which speeds up and steers to the other lane is level 2.


Great points.

If we had enough self-driving crash data (like a black box) where the fault was the autonomous system's and where human intervention could have prevented the crash, that data could be used in the simulator. So the player would be experiencing simulations of actual crashes.


On the other hand, every crash we could simulate would be one that the AI should be able to learn to avoid, so it's not very clear if there are scenarios where it makes sense to 'train the driver' instead of the AI directly.


I imagine a good chunk of crash scenarios are along the lines of "easy for human intervention, difficult for an AI to learn," such as the broken/misleading road aids (traffic lights) the OP comment gave. I imagine these would be laid out in a user manual for licensing, like a standard driver's license, once the AI's restrictions are more clearly understood.


>Even if the self-driving system is relatively good, is the human+self-driving system really better?

Say for example, the designers never trained the car's system to recognize someone on a skateboard as an obstacle.

Obviously a lidar-based system would spot the skater, but a camera+AI system depends on being trained to identify each type of obstacle as an obstacle and avoid it.


A person on a skateboard is still a person, would any AI trained to spot people not recognise that (at the same level as it spots people in general)?

Do you really need to train your AI to spot people on skateboards vs rollerskates vs rollerblades vs heelies vs sliding on ice vs traveling on a travelator vs standing on a moving vehicle vs ...?

OK, being able to recognise the differences and act accordingly might be useful but the principle of "person getting closer to vehicle" should hold sway for most situations?!?


> "person getting closer to vehicle" should hold sway for most situations?!?

I think the big thing is that not all of these 'persons' move in the same way. A person in the exact same location could both be a 'hazard' and 'not a hazard' based solely on their velocity, and their ability to change direction quickly (and are they chasing after something which isn't a hazard, but which is intercepting your driving path). They don't just have to identify the person, they have to be capable of forecasting that person's movement.

IOW, someone walking and someone on a skateboard are different risks, because the skateboarder could move into your path of travel before you pass them, whereas someone walking would not.
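
As a rough illustration of that (the numbers and the constant-velocity assumption are made up for the example): the same person 3 m from the lane edge is a very different hazard at walking speed versus skateboard speed.

    # Toy constant-velocity forecast: time until a person reaches the lane.
    # Illustrative only; real prediction models acceleration and intent too.

    def seconds_until_lane_entry(lateral_offset_m: float, lateral_speed_mps: float) -> float:
        """Time for the person to cover the lateral gap at constant speed."""
        if lateral_speed_mps <= 0:
            return float("inf")       # standing still or moving away
        return lateral_offset_m / lateral_speed_mps

    walker       = seconds_until_lane_entry(3.0, 1.4)   # walking ~1.4 m/s -> ~2.1 s
    skateboarder = seconds_until_lane_entry(3.0, 5.0)   # rolling ~5 m/s   -> 0.6 s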


Yes, but the skateboard is irrelevant; what matters is the predicted trajectory. I can see how it's useful to analyse the mode of movement to try to improve predictions, but the human moving towards you at X speed should be the principal analysis, it seems to me, naively, rather than worrying about whether they're on a skateboard or a snakeboard, etc.


The skateboard is relevant because it could theoretically make a neural network fail to identify a person that needs to be avoided.

>human moving towards you at X speed

Lidar systems will work this way. Camera+AI, not necessarily; it will still need a way to sense relative speed, otherwise you are banking on an AI to identify an obstacle in an image.


Uber's system famously couldn't decide if someone pushing a bicycle was a pedestrian or a cyclist. It flip-flopped for a while before resolving the dichotomy by running her over.


I'm not familiar with the instance you're referring to, but at face value I don't understand the problem. It doesn't matter if it's a pedestrian or a cyclist—in both cases, the car should avoid crashing into them.


It shouldn't matter. But the system was designed to track trajectories of recognised objects. So every time it changed its mind about what it was looking at it started recalculating the trajectory. By the time it settled down it was too late to avoid the fatal accident.

Don't assume that self-driving software is designed with even a minimal level of competence.
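
A stripped-down sketch of that failure mode (hypothetical code written only from the public reporting of the incident, not Uber's actual software): if the tracker discards its motion history whenever the classification flips, it never accumulates enough observations to extrapolate a crossing trajectory.

    # Hypothetical illustration of the reported bug, not Uber's actual code.
    class Track:
        def __init__(self):
            self.label = None
            self.positions = []            # history used to estimate velocity

        def update(self, label, position):
            if label != self.label:        # pedestrian <-> cyclist flip-flop
                self.positions = []        # BUG: trajectory estimate restarts
                self.label = label
            self.positions.append(position)

        def estimated_velocity(self):
            if len(self.positions) < 2:
                return None                # not enough history to extrapolate
            return self.positions[-1] - self.positions[-2]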

Footnote: there have been many articles about this incident posted to HN. Here's one: https://news.ycombinator.com/item?id=19333239


The AI failed to classify what it saw as an obstacle to be avoided because it hadn't been trained for that particular instance.


More precisely, it wasn't programmed to recognize confusing objects as a hazard.


> Even if the self-driving system is relatively good, is the human+self-driving system really better?

Probably yes, but they are not relatively good yet, and certainly haven't been in the past. Most of the 3D video processing algorithms that are good enough to be used for self-driving were created this year, so until they are productionized we don't have any data on it. Still, we're probably just 1 or 2 papers down the way (to quote the Two Minute Papers YouTube channel :) ).


"The company promises the tech will be on the road early next year. Autonomous driving technology has proven to be a lot more difficult to bring to market than any of the major automakers seemed to have planned for."

I honestly don't understand how this has proved to be "a lot more difficult to bring to market than any of the major automakers..." I'm not an expert, by any measure, but this seemed to be a nearly impossible thing to bring to market anytime soon. The technology seems impossibly hard, sure, but more importantly this is something that is new and looks super scary to most people. Every single time one of these things fails it's going to make headlines, and that's going to make it even scarier. I just can't believe Honda (or anyone else) got this right so fast. Even if they're like 95% of the way to having it perfect (or whatever the measure might be), that last 5% seems like it's going to be REALLY hard to get right without some serious trouble and some bad PR.


I too am skeptical (but happy to be proven wrong) about Honda's L3 claims. I think the "automaker optimism" is really just referring to Tesla loudly claiming they "will hit L5 this year" every year since 2015 while continuing to struggle. I don't see other automakers making such aggressive promises, thankfully.


I am hopeful, coming from Honda, that they have solved some of the critical issues. I was really impressed with their rider assist [1] for their new motorcycles and the ability to balance a bike. Perhaps that is a small feat compared to what level 3 will require, but I am confident Honda will pull it off. This opinion is also based on the fact that I have a couple of 37-year-old Honda motorcycles that to this day still run strong and hard. I am just optimistic they can do what they say.


Well, one issue is that too much of this development has been one-sided. What needs to be done is to force standardization of road markings and signaling and correct them where they are not compliant. As it stands, these systems have had to accept the fact that ideal conditions never exist.

The easiest opportunity in the US would have been to dress up the HOV and express lanes and use them. They have controlled access and are already specifically marked to distinguish them from other lanes. However, too many seem bent on doing it all at once, everywhere.

The first three tiers are not difficult. This claim by Honda doesn't provide any details other than that they got a certification. We do not know if anyone else even applied for it or exactly what it entails.


> The first three tiers are not difficult.

Why is that? AFAIK, at least according to European and Japanese [1] law, L3 is the first level where the larger liability of the OEM comes into play. That is because the driver can remove their hands from the wheel and will therefore need significantly more time to prevent an accident.

[1] https://hsfnotes.com/cav/2019/06/06/japan-allows-level-3-aut...


>The easiest opportunity in the US would have been to dress up the HOV and Express lanes and use them.

These are in very limited areas. An autonomous vehicle that can only drive in these lanes will be autonomous a small portion of the time.

>What needs to be done is force standardization of road markings and signaling and correct it where it is not compliant.

Maybe it's different where you are but road marking and maintenance, especially as you get into secondary roads, can be rather haphazard. This is not going to change markedly so that people can use autonomous vehicles.


How's this different than better maps in the car? Auto update them and you'll be in good shape.


I would guess you likely are more of an expert on computer technology than auto executives are. They probably saw demos of the 95% case, and have no real way to think about how much work that last 5% is, because no car companies other than Tesla treat computers/software as a core competency.


> This certification means that you can legally sit behind the wheel without actually looking at the road. Under certain conditions, Level 3 autonomy allows a car to drive itself as long as the human behind the wheel is able to take control at any time

Not looking at the road and taking over at a moment's notice seem to be contradictory. Also, who/what is auditing the driver's attention?


"At any time" is not the same as "at a moment's notice". If the system is robust enough such that the human is always given, say, 5 seconds warning before they need to intervene, they can safely read/write (but not sleep), which would be a huge benefit.

Elsewhere in this thread, someone gave examples where a human takeover is needed but there is significant warning time: (1) exiting a highway to an unimproved road and (2) coming upon construction work with human-directed traffic control.

Driver attention is monitored with cameras in essentially all Level-2-and-above autonomous cars, I believe. The machine vision is good enough to detect sleeping.
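
For what it's worth, the usual approach to that is something like PERCLOS: the fraction of recent frames in which the eyes are mostly closed. A minimal sketch (the window size and thresholds are illustrative, not from any particular production system):

    # PERCLOS-style drowsiness check over a sliding window of cabin-camera frames.
    # All thresholds here are illustrative assumptions.
    from collections import deque

    class DrowsinessMonitor:
        def __init__(self, window_frames: int = 900, perclos_threshold: float = 0.3):
            self.closed = deque(maxlen=window_frames)   # ~30 s at 30 fps
            self.perclos_threshold = perclos_threshold

        def update(self, eye_openness: float) -> bool:
            """eye_openness in [0, 1] per frame; True means flag the driver."""
            self.closed.append(eye_openness < 0.2)      # frame counts as "eyes closed"
            perclos = sum(self.closed) / len(self.closed)
            return perclos > self.perclos_threshold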


It could be that the car alerts a driver the moment the driver needs to pay attention to the road. So presumably it could be ok to use phone, but not sleeping.


Honestly, no one should be surprised by this. If we look at the history of automotive development and manufacturing over the last 40-50 years, every major change has been 'made real' by Japanese auto manufacturers. That doesn't mean they are first to market, but it typically means they are first to market with something that works for the mainstream. This includes manufacturing techniques, platform engineering, hybrids, navigation and in-car computing, etc.

They are just quiet about it. In part this seems to be cultural; in part it seems to be a function of where our major news comes from and the fact that it rarely reports on stories in other languages (and even more rarely reports on them well). There have been many discussions on HN of the goods and bads of Japanese work culture, but one possible outcome of that hierarchy and control is a longer-term strategic focus and internal commitments to quiet, purposeful innovation.


> every major change has been 'made real' by Japanese automanufacturers.

The TDI engine (for all its faults) was first made commercially available on a Fiat, and then was made (in)famous by VW (among others). Audi's four-wheel-drive system (Quattro) is imo still unmatched, and when it debuted in the World Rally Championship it blew away all competition.


Those aren't mainstream by any stretch... they haven't fundamentally changed the basics of the automotive industry.

You could make the claim (maybe) about diesels and Europe, but separating technical innovation from industry-level innovation here is important.

Edit: and I say that as a RABID WRC fan who is currently wearing my Petter Solberg hat.


The Japanese made some good advancements in manufacturing technology and all the ancillary systems you need to make a low-displacement 4-banger palatable to the mass market (OHC and distributorless ignition come to mind), but your comment is just fanboy-ism masquerading as history.

The Japanese didn't really bring electronic systems and fuel injection into the world faster than anyone else. They clung to their vacuum-line spaghetti control systems embarrassingly far into the 80s.

They also got blind-sided hard by the minivan and the midsize SUV.


SUVs are a uniquely American-originated thing. Let's be real, 3/4 of SUV owners don't need the benefits of an SUV.


3/4 of car owners could get by with a moped.

"Can" and "should do so despite being able to afford otherwise" are very different things.


I didn't say faster. I said they were better at maturity. They have typically been slower to switch... to publicly experiment, but when they do, it seems to stick.

See Honda and...

* variable cam timing

* hydrogen vehicles

* aging-in-place technology

As someone shopping for a family vehicle rather than a couples' vehicle... we are pretty much solely looking at Japanese minivans and small SUVs, for reasons that mostly have to do with repair costs and mature features.


Is this still true? It seems like other countries' manufacturers are sprinting ahead nowadays. The big Japanese manufacturers have been described as stagnating when it comes to tech.


Described by whom?


Auto magazines/blogs, enthusiasts, etc.


Can you provide a source?


I'll provide some sources about why this is wrong:

Innovation in automotive is as much in things consumers don't see as it is in the features we perceive and use:

https://en.wikipedia.org/wiki/Single-minute_exchange_of_die

https://dailykanban.com/2015/03/27/toyotas-tnga-tps-2-0/


Many recent Lexus reviews, for example. Halo models aside, there's a lot of mention that Lexus is technologically behind the German products.

Now, the plus side of the argument is that the Japanese are using more tried-and-tested stuff and are thus slower to adopt the latest whizzbang gizmos and gadgets.


Given how terrible their lane-holding technology is in a 4-year-old car, and how their collision detection alert constantly false-alarms on white cars coming the other direction on sunny days, I'm not going to be using this any time soon.


I concur. The safety features of my 2019 Accord are pretty worthless. The lane keep assist works in maybe 20% of cases and once even tried to steer me into a barrier, the collision detection, like you said, has many false positives, the rear parking sensor didn't detect a huge rock wall, which made me hit it while reversing because I had gotten used to relying on the beeping sound, the app that lets you track the car's location is so full of bugs it's not even funny, the Android Auto implementation for Maps likes to bug out... ah, I could go on and on; the problems with the electronics are endless. And now they are supposed to suddenly have autonomous cars, which is several orders of magnitude more difficult than all the stuff they couldn't implement properly so far? Sorry, but I am extremely sceptical. Not that I expect any other car manufacturer to get it right any time soon either.


I have a 2021 model year Honda with the Honda Sensing package and it's awful. It can't hold lane position on a straight and level freeway, which is compulsory. There's a specific place on the freeway near my home where the car slams on the brakes every time, if I leave the ACC engaged. In the city (of Berkeley) it freaks out constantly about oncoming cars in their own lanes on curves or parked cars to the right of narrow driving lanes. Sometimes it just slams on the brakes if I'm pulling up behind a car at a walking pace at a red signal.

If Honda delivers L3 driving six months from now, it would be a huge leap over what they are selling today.


Tesla full self-driving beta is Level 3, and it's currently driving on the roads in a very small percentage of customer Teslas. https://www.google.com/search?q=youtube+fsd+beta

That being said, Level 3 has limited utility. If a driver needs to be able to take over at any time, that means taking over in milliseconds, which means they need to be 100% focused on everything. Level 3 is a fancy cruise control. Level 4 & 5 are the ones that will fundamentally change driving.


From this article (https://www.forbes.com/sites/bradtempleton/2020/11/11/honda-...): "Most importantly, they allow you to take all attention off the road — you can do e-mail or read a book, and do not have to watch the road. When traffic opens up again, they alert you and you start driving. "

It looks like they have level 4 and are calling it level 3?


Sounds like it's "level 4" in slow bumper to bumper traffic only. I'd trust Tesla Autopilot similarly in that situation as well.


People already almost entirely disengage from driving in slow traffic; about 1 in 20 drivers in the traffic queues I walked past (last year, during rush hour) were looking at their phones whilst moving.

It should be a relatively easy task not to crash as often as people do in such situations? A couple of parking radars and a lane follower would probably be close to human levels of contact avoidance in crawling traffic?


My mother crashed in bumper-to-bumper slow traffic playing Pokémon Go a couple of years ago. I wonder how much a self-driving car would have helped avoid an accident.


Likely all that would be needed was automatic braking.


The problem is that many of the "self-driving cars" are basically automatic braking and lane-keeping assists. Add in a bit of marketing, and suddenly it is an AGI that will herald a new wave of automation.


Does Tesla allow you to take your hands and attention off the driving? Because this one does.


Interesting. Officially according to the NHTSA, there are not any Level 3 systems on public roads in the USA.


The keyword here is "...Under certain conditions,...".

Probably this means on the highway only, while driving at less than 50 km/h... e.g. during a traffic jam.

City road Level 3 would require plenty of testing that will not go unnoticed.


That's still pretty useful - stuck in a jam for an hour: you can read a book or get on with some work.


Very little information in the article other than the announcement.

I suspect that self-driving vehicles in Japan would have an advantage over the US. People drive well in Japan; it is no easy task to get one's driver's license. In Tokyo, drivers seem to be aware of all their surroundings. Pedestrians also obey the rules and don't walk wherever it suits them.

A lot of US drivers are terrible. Even lvl 1/2 is an improvement over most drivers on the roads who are looking at their phones while driving down the center line or against traffic.


I've cynically always assumed that when L3 autonomous vehicles are genuinely ready for mass production they will first be sold in a 3rd-world 'test dummy' location with lax regulation and a loose approach to human life. I find it hard to believe L3 AVs will be launched at scale in litigious western cultures - including Japan - until there has been a lot more 'real world' mileage and experience to iron out the kinks...


Tesla has already launched a beta of their Level 3 vehicles on American roads and is promising to have it rolled out completely by end of year.

The big problem with testing in 3rd-world countries is that the infrastructure is far different from that in the US/EU/Japan, so it's not a great proving ground. When you have cities with chaotic streets packed with people and slow-moving cross traffic that you just kind of need to push your way through, autonomous vehicles are going to struggle.



Level 3 tech is expensive. You can only sell it where people are rich enough to buy luxury cars.


https://en.wikipedia.org/wiki/Rimac_C_Two Rimac claims to have level 4 autonomous driving, and it is a production car. Rimac also claims a 1.85 s 0-60, compared to the Tesla Roadster's 1.9 s 0-60.


The Rimac is a little bit more expensive - to my eye at least, it's a very good-looking car (unlike any Tesla I've seen so far).


Like the name implies, it's only meant for traffic jams. Hardly a level 3 system.


Right, it sounds like an alternative to Tesla autopilot on the highway without the hand on the wheel requirement. I really doubt this is capable of making turns or stopping for traffic control.


If Honda is doing this, they must have been testing it extensively. If they got a safety certification in Japan, they must have demonstrated something. So, what's known about what they're doing?


Since it's Japanese, you can be assured it will be all hardcoded decision trees instead of an unpredictable deep machine learning black box.


> Honda has just claimed that it will launch the world's first Level 3 autonomous production car in March 2021

The title needs to be changed, as nobody's beaten anyone, yet.


>Japan's government has awarded a safety certification to Honda's autonomy tech called Traffic Jam Pilot.

I don't think that is an entirely fair representation. The claim you are referencing in that sentence appears to be about when they will launch. They clearly seem to be the first to actually achieve a certification, which is what the title seems to be referring to. We can discuss the potentially too-close relationship between the Japanese government and companies, but the certification is the certification and is a fairly objective measure.


Waymo has had the approval of authorities to drive cars with no driver at all for quite a while; if we're competing on "whose piece of paper says they can do the most", Honda did not win.


Waymo isn't a car manufacturer and isn't selling vehicles to people.


If the carmaker were "Tesla" and not "Honda", this would not have as many upvotes from the HN crowd.

Fact: this is not available yet, so nobody has beaten anyone.



