Hacker News

During the last MAX fiasco, I said to someone that if there is a button you have to push every five minutes for the plane not to explode, then failing to do so would be “pilot error”, instead of a gross design failure. It turns out this is not a joke…


There was a great paper[1] I read about the human components in complex technical systems which argues that one of the roles of the human is to take the blame when the entire system fails. This does real, valuable work for the companies involved and helps them avoid having to answer the most uncomfortable questions.

[1] Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction by Madeline Clare Elish


"Moral crumple zones" is an incredible description.


This goes for self driving cars just the same.


Yeah it leads to real problems for fully driverless cars, as there is no longer a human to blame.

Also note that blaming the human has been Tesla's strategy for avoiding responsibility for failures of its driver-assistance software.


Most car-pedestrian crashes in the USA never end up with the driver at fault. Not sure this will change when cars are autonomous.


Well, I am talking about all crashes, not just car-pedestrian crashes. For example, Teslas crash into other cars and then the company blames the human driver.


An example used in the paper is the human drivers of Uber self-driving cars - one of whom killed a pedestrian in Arizona.


The flip side to this is that every system involving software seems to inevitably devolve into a situation where the human is expected to no longer be responsible.

Oh, you floored it while the car was pointed at the wall? It has cameras, why didn't the car disable the go pedal?

This is happening more and more with cars, and it seems inevitable that it will happen in other spaces as well, as software is expected to protect us from ourselves.


There is a big difference between software being the safety net for humans, and humans being the safety net for software.


In the extreme, yes. But it looks like it is actually a continuum, with a shockingly blurry line forming the threshold.


This is literally a joke in the excellent board game “Space Alert” - someone has to wiggle a mouse every so often or everything on your Sitting Duck Class Explorer turns off.


This is a real thing in modern trains:

>The device sounds a warning after 25 seconds of inactivity by an engineer. If the engineer fails to respond to the warning tone within 15 seconds, the system applies the brakes and stops the train.

https://www.newstimes.com/local/article/Alerter-system-preve...
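The quoted behavior is essentially a two-stage watchdog timer. A minimal sketch of that logic (the 25 s and 15 s figures come from the quoted article; the function and state names are made up for illustration):

```python
WARN_AFTER = 25   # seconds of engineer inactivity before the warning tone
BRAKE_AFTER = 15  # seconds after the warning before the brakes are applied

def alerter_action(idle_seconds: float) -> str:
    """Map seconds since the engineer's last control input to the alerter's action."""
    if idle_seconds < WARN_AFTER:
        return "none"
    if idle_seconds < WARN_AFTER + BRAKE_AFTER:
        return "warning tone"
    return "apply brakes"  # penalty brake application: the fail-safe state is "stopped"
```

Any control input resets the timer. The key design point, relevant to the thread: when the human fails to respond, the system fails into a safe state (stopped), not a dangerous one.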


That's still prompt -> response. This is a discussion about mandatory responses to no prompt at all.

Although on reflection, that undersells the level of stupidity being proposed here. The pilots must not only act without any prompt, but actively monitor a condition changing state so that they can perform the unprompted action at the right moment.

So, in all seriousness... by what algorithm are the pilots performing this assessment that is not something a computer can perform? How on Earth is it not cheaper and faster to add that to the system than petition a government agency for an exemption? What are all the computers on a plane even for other than monitoring state changes and performing actions in response? Even high-assurance, safety-critical coding should be able to outpace a Federal bureaucracy on something like this comfortably.


Boeing is no longer capable of building safe planes.

If you are no longer capable of building safe planes, your next best option is to petition the government to accept unsafe planes.


That's only the next best option in the very short term. Boeing will suffer significant damage if there's another Max fiasco - more than they did from the first one. Probably much more.

If your company can't build safe planes, the real "next best option" is to fix your company.


It's usually not possible to fix a company that is broken, simply because of Gall's Law. ("A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.") Large companies are complex systems; they develop their own set of internal incentives, communications architectures, org politics, membership tests, etc. Over time, these incentives inevitably adapt themselves toward maintaining the organization rather than delivering the product or service that is the reason why the organization exists. At that point, everyone who actually wants to deliver the product leaves, leaving an organization consisting solely of people whose full-time job is maintaining their position in the organization.

Ask yourself: would you take a position at Boeing trying to "fix the company"?

The only way out of this is to poach the few remaining employees who still have technical knowledge, set up a new company that refuses to employ anyone with a vested interest in Boeing, and take their market. This is hard in aerospace because of the sheer complexity of the product and the baseline quality levels needed to deliver a safe experience.


I did, and it was one of the worst periods of my career. And I went in knowing it was going to be challenging, I wasn’t naive, it was just worse than I expected.


> Ask yourself: would you take a position at Boeing trying to "fix the company"?

As an executive with guaranteed $xM in pay over a few years?

Sure — if I fail, I’ll just use that position as a stepping stone to my next executive role.

Why wouldn’t I take a shot at something positive, when there’s little to no downside?


Are you actually fixing the company then, or just extracting value while maintaining the organization?

This is why we have the world that we do.


I would genuinely try to fix it, based on descriptions by my mentors who were Boeing engineers.

I believe the board and executives would genuinely want me to fix it.

But it may nevertheless be impossible due to organizational mechanics, entrenched bureaucracy, short-sighted shareholders, etc.

I was just pointing out that it’s ridiculous to pretend nobody would want the job because it’s likely to fail when there’s only upside, for both yourself and the company. A literal win-win.

Implying there’s something negative in my comment because it’s easy to be cheaply cynical (and the cheap cynicism in the comment I originally replied to) is what’s actually wrong with the world — and why things are so bad.

Who gives up on something that only has upsides without even trying?


Your line "Sure — if I fail, I’ll just use that position as a stepping stone to my next executive role" indicated a certain cynicism in your own post, so I was reflecting that in my response.

So here's the more detailed sincere response, based on experience working in a similar large, similarly dysfunctional technology company (and also knowing people who spent several years at Boeing specifically):

It is usually not possible for a chief executive to fix a company. The reason is simply sheer complexity. A company of 100,000 people has roughly five billion possible pairwise working relationships (100,000 choose 2). In practice it's fewer because not everyone communicates with everybody else, but even a small department of 100 people has more distinct relationships than anyone can possibly keep track of. No one executive is going to know every single employee, every team, every project. And without those personal relationships, they don't have enough trust to convince people to alter their behavior.
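For scale: the number of possible pairwise working relationships in a group of n people is n choose 2 = n(n-1)/2, which can be checked directly:

```python
from math import comb

# Pairwise working relationships: n choose 2 = n * (n - 1) / 2
print(comb(100_000, 2))  # 4999950000 - about five billion, company-wide
print(comb(100, 2))      # 4950 - already unmanageable for one 100-person department
```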

If the company is in trouble in the first place, that means that the way they do business is no longer adapted to the marketplace. So you need to get the company to make changes. But if you root-cause each individual problem, you find that the company is fractally fucked up. The employee is usually acting according to the incentives available to them; if they did things differently, they would fail to get the cooperation needed to accomplish their goals (at best) or lose their job (at worst). And that's the key part: in a big company, achieving any goal, regardless of how small, requires the cooperation of many different people. In a normally functioning company things mostly work because these habits of cooperation grew up in good times, working culture & processes adapted themselves to the activities that actually made the company money, and so when people just do their jobs good things basically result. But as the company grows and ages, it ossifies. Over time they want to do things like introduce a new jetliner, but find that the right combination of people with the right skillsets to do things like make engine nacelles that don't explode no longer exists.

This is why advice for turnaround CEOs is "get the wrong people off the bus and the right people on the bus". And they frequently hire outsiders, or folks from much earlier in the company's history. Their first task is to stabilize finances. Their next task is to identify the parts of the company that are still functional, then double down on them (often made harder because these folks were often laid off as part of stabilizing finances). Their next task is to sell off or lay off all the folks that are embedded in organizations that are no longer serving the company's purposes. Remember that there are > 100K employees, and you're building a product of exceptional engineering complexity, and that nobody knows everything the company is doing. It's pretty hard to have enough visibility into the company's product, engineering, supplier relationships, employee base, finances, etc. to do this correctly.


Fixing the company sounds good, but you have to remember that the people who would be fixing it are the people who got it to this point in the first place.

I think it's very likely that nobody currently at Boeing has the ability and willingness to make the kinds of changes they would need to make in order to become a functional company again, because Boeing has spent over two decades systematically purging senior engineers from management and leadership in order to become another crappy company full of empty suits with MBAs, who don't understand the product they're making, and don't care if they're literally killing people and the company is rotting out from under them as long as they can monetize the rot to make their quarterly numbers.


What damage can they actually suffer though? Boeing is a strategic asset of the US government, they would never allow any harm to come to it. Some heads would roll for sure, maybe even the government would step in and assume direct control of certain parts of the company, but it's not like it would go out of business, or like companies would cancel all of their orders and buy Airbuses instead - they could, but again, the US government would never allow that to happen, either through direct monetary action or promises and guarantees that whatever the worry is won't ever happen again.


> What damage can they actually suffer though? Boeing is a strategic asset of the US government, they would never allow any harm to come to it.

Boeing's staff and plant are strategic assets; its executives and shareholders aren't. The US government could totally let harm come to the latter group.


The US government is much more broken than Boeing.


> What damage can they actually suffer though?

Loss of market share. As in, customers actively looking at the type of aircraft when they book a ticket. Airlines becoming reluctant to order Boeing.

At this time, every $1 you invest in making it known what Boeing has been doing for the last 15 years results in $2 or $3 of lost market share for Boeing. Absolutely the time to buy ads to promote articles about Boeing.

I would actually trust Comac more than Boeing, as Comac has something to prove, whereas Boeing has been proven to crash planes and bribe the FAA.


Can someone make a list of unsafe Boeing planes?


Boeing’s processes, not planes, are unsafe.

In 2013 they received frames (the circular structures that make up the fuselage) which, instead of being machined, had been fabricated manually. The voids were all in the wrong place because the workers had read the blueprints mirror-imaged.

Did they ditch the frames? Of course not. They remade the cuts so that the frames fit upside down! And welded the I-beam sections that had been cut! Result: the circular frames, which are a critical structural component, have twice as many gaps, holes, and welded sections as the designed piece.

Bulkheads, circular frames, MCAS… It’s appalling engineering practices.

And the dinner-party relationship with the FAA, introduced along with McDonnell Douglas's practices in the merger, has to stop. They should not be friends in real life.


The 737 MAX and 787 have both had significant issues. Basically, it's anything introduced after the merger with McDonnell Douglas in 1997.


Not sure about that. I suspect that Boeing is considered a domestic strategic asset and is not allowed to die. No matter the incompetence.


I agree, but you underestimate the cost of software in "safety critical" applications.

I've heard stories of cases where development orgs were given the option of changing a line of code or redesigning the hardware. They of course redesigned the hardware.


Yes it's different, but similar to the comment I replied to in the sense that the machine just turns off if the user is idle for a time.


And in large ships, the Bridge Navigational Watch Alarm System or BNWAS.

Over night most normal operations cease, so it's quiet on the bridge. But an officer is on duty to keep watch, because the ship is still moving, often relatively fast, and it needs a human to obey pre-existing route plan decisions, observe changing conditions, stay alert to other vessels and so on. However, the bridge (on a modern large ship) is warm and dark and at night tired humans in warm dark rooms will sleep.

So BNWAS will (if operating correctly) periodically need to be "nudged" to show that the officer on watch is awake and somewhat paying attention. If they do nothing for a while the BNWAS will alert them and if they ignore that it will eventually alarm critical crew, often the Master ("Captain") of the ship or other senior officers who are asleep in their cabins.

Now of course nobody wants to leave a nice dream to discover that instead of your teenage girlfriend agreeing to go on that picnic you never got to, that sound is your boss, very angrily demanding to know what the fuck you're doing curled up by the radar console. So unfortunately, sometimes after a serious incident (e.g. a cargo ship has "decided" to wait right next to a small island a few miles from the usual route from 4am until midday, and then when it gets to a dock it seems very smashed up at the bottom, as though it was grounded and had to wait until higher tides lifted it clear...) we find the BNWAS has been crudely disabled (it's not as though ships' crews tend to be IT experts)...

But this is a completely different scenario. The BNWAS is not something which causes a disaster if you forget to react, it's an alarm to prevent such disasters which would otherwise be commonplace.

https://en.wikipedia.org/wiki/Bridge_navigational_watch_alar...
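The escalation described above can be sketched as a staged watchdog. A toy model (the stage timings and labels here are illustrative guesses, not the actual values, which are configured per the IMO's BNWAS performance standard):

```python
# Toy model of a BNWAS-style escalating watchdog (illustrative only;
# real dormant periods and stage timings are set per IMO standards).
STAGES = [
    (180, "visual indication on the bridge"),       # after the dormant period
    (195, "audible alarm on the bridge"),           # if the indication is ignored
    (255, "remote alarm: Master / backup officer"), # wake the senior officers
    (345, "remote alarm: further crew members"),    # last resort
]

def bnwas_state(seconds_since_last_reset: int) -> str:
    """Return the highest escalation stage reached, or 'dormant'."""
    state = "dormant"
    for threshold, action in STAGES:
        if seconds_since_last_reset >= threshold:
            state = action
    return state
```

Any "nudge" from the officer on watch resets the timer to zero. The design mirrors the alerter on trains: the system escalates toward waking more people rather than toward a dangerous default.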


Back in the day, sailors would stick a lit fag between their fingers when they were sailing home from a day of fishing.


Even in not-so-modern trains. At least here, a dead man button or pedal has been mandatory on trains that could have a single driver since 1942.


If the user fails, the train stops.

If the user fails, the plane crashes.

Two very different problems.


I was responding to a comment about a board game, I did not compare a train to a plane.


Except that's the complete opposite.


Depends on whether you consider trains rudimentary sentients or not. If the train fails to follow the procedure of alerting the gut bacteria/engineer every so many minutes, the train runs a higher risk of running into something because the engineer is asleep or incapacitated.


Ok, I don't know much about trains, but it seems like if they can build this kind of system then they could also build a system that only allows trains to go as fast as track "speed limits".


That's pretty much standard on European high speed trains, and many lines at slower speeds.


They have. There's only so much you can do. Crew have been known to throw breakers to disable systems they find "annoying".



This is to ensure that the driver is still there and hopefully watching. If they do not react, it means they are not fully able to operate the train. So the train stops.

This is very good.

The correct analogy would have been: if the driver did not press the button then the train accelerates until they do. Not good.


Or, even more accurately, the engine explodes and the entire train derails itself.


I made no analogy.


Trains have had a dead man's switch for well over a century now.


Dead man’s (safety) switch


Like “the button” in the TV series Lost


Technically the screen saver/lock screen comes on and someone has to log back in before anyone can take any actions.


Made me think of that one Lost episode with the button


> if there is a button you have to push every five minutes for the plane not to explode

Instantly reminded me of: https://en.wikipedia.org/wiki/Sifa


Dead man's switch, but when not pressed in time, ensures failure, instead of preventing it.



