
WalterBright wrote: "engineers are often not aware of basic principles of fail safe design."

I would suggest they should not be called "engineers" then. And in many countries, they're not. Part of the problem is that the tech community includes a lot of different people. Some are programmers, some are program managers, some went to engineering school, some are licensed engineers (in some other discipline). In the US, these are all commonly called engineers. Sadly, I think a lot of web programmers just don't know the true scope of the software industry and its practices.

If you want to design/build a bridge, you need a state license and insurance. The software industry isn't regulated like that. Anyone can design the software that controls a car. That's probably OK since web apps are non-critical systems. But I can't help but wonder if net security wouldn't be better if more programmers had better training in recognizing and improving the total impact of a system.

It is only the reputation of the company and potential damages in a lawsuit such as this one that put pressure on the car manufacturer and web-app startup to test their code in depth. Actually, I do wonder how much the US auto safety regulations are involved with firmware--or do they just test the macro behavior of the car?



> test their code in depth.

Failsafe design flaws are not uncovered by testing code.

Failsafe systems are designed not by "the code works therefore it is safe", but by "assume the code FAILS". Regardless of how much testing is done, you still ASSUME IT FAILS AND ACTS PERVERSELY. Then what?

(Note that acting perversely is hardly farfetched in these days of ubiquitous hacking.)
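To make the "assume it fails" framing concrete, here is a minimal sketch in Python (the names and thresholds are mine, not anything from the thread): the controller's output is never trusted, even after exhaustive testing; an independent monitor checks it against a sanity envelope and falls back to a known-safe state on any violation.

    SAFE_THROTTLE = 0.0  # assumed safe state for this sketch: cut throttle

    def monitor(commanded_throttle, pedal_position):
        # Independent check: assume the controller may be wrong, or even hostile.
        out_of_range = not (0.0 <= commanded_throttle <= 1.0)
        contradicts_driver = pedal_position == 0.0 and commanded_throttle > 0.1
        if out_of_range or contradicts_driver:
            return SAFE_THROTTLE  # don't diagnose, don't retry: fail to the safe state
        return commanded_throttle

The point is that the monitor does not care why the output is bad; it only knows a safe state to fall back to.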


I have a quick question for you that's a matter of personal curiosity and one I think you might be delighted to answer: What sort of failsafes are there in a fly-by-wire system? Is it a matter of redundancy or another mechanism that ensures pilot inputs yield expected outputs?

I've really been enjoying the posts you've shared about your time in aerospace. I think there are a lot of lessons there that the entire software industry should learn from...


All I know in detail is the 757 system, which uses triply-redundant hydraulic systems. Any computer control of the flight control systems (such as the autopilot) can be quickly locked out by the pilot who then reverts to manual control.

The computer control systems were dual, meaning two independent computer boards. The boards were designed independently, had different CPU architectures on board, were programmed in different languages, were developed by different teams, the algorithms used were different, and a third group would check that there was no inadvertent similarity.

An electronic comparator compared the results of the boards, and if they differed, automatically locked out both and alerted the pilot. And oh yeah, there were dual comparators, and either one could lock them out.
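Roughly, the decision logic looks like the sketch below (illustrative only: the real comparator is electronic hardware and the channels are dissimilar boards, languages, and teams; the function names and tolerance here are assumptions). On any disagreement, both channels are locked out and control reverts to the pilot; no attempt is made to guess which channel is right.

    TOLERANCE = 1e-3  # assumed agreement threshold for this sketch

    def alert_pilot():
        print("AUTOPILOT DISENGAGED - REVERT TO MANUAL CONTROL")

    def comparator(channel_a, channel_b):
        # Either of the dual comparators can run this check and lock out both channels.
        if abs(channel_a - channel_b) > TOLERANCE:
            alert_pilot()
            return None        # no attempt to pick a "winner": lock out and hand over
        return channel_a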

This was pretty much standard practice at the time.

Note the complete lack of "we can write software that won't fail!" nonsense. This attitude permeates everything in airframe design, which is why air travel is so incredibly safe despite its inherent danger.


This is such a cool comment. Thanks for writing it.


The shuttles had similar concepts - various flaps had multiple redundant hydraulic pumps controlling them, so that even if one went nuts and started running in reverse, the other pumps would overpower it, and the result would simply be slower response times.
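A toy model of that force-summing idea (my own sketch, not from the comment): the pumps all drive the same surface, so a single pump running in reverse just subtracts from the total, and the surface moves more slowly rather than in the wrong direction.

    def net_surface_rate(pump_outputs):
        # Sum of individual pump contributions (arbitrary units).
        return sum(pump_outputs)

    print(net_surface_rate([1.0, 1.0, 1.0]))    # all healthy: 3.0
    print(net_surface_rate([1.0, 1.0, -1.0]))   # one reversed: 1.0, slower but still the right direction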


Gosh, this is an incredible comment. I see in greater detail what is meant by your illustration of "dual path." I had no idea the systems-level design was so thoroughly isolated.

Thank you very much for taking the time to share and answer my question!


I'd be surprised if there was a single question on a state licensing exam on failsafe design.

The sample tests I looked at had none. The GRE exams I took had none. The engineering courses I took never mentioned it. I don't recall ever seeing an engineering textbook discussing it. I've never seen it brought up in engineering forums or discussions about engineering disasters.

And, I see little evidence of awareness of it outside of aerospace - Toyota, Fukushima, and Deepwater Horizon being standout examples of such lack. You can throw in New Orleans, where hospitals (and everyone else but one building) put their emergency generators in the basement. And a NYC phone company substation was entirely destroyed because a vital oil pump in the basement got flooded during Sandy.


I looked into licensing at one point when it seemed like my career would be heading in a different direction. As I understand it, my state defines "engineering" as anything that could affect public safety, and says that all engineers have to be licensed, but there are only exams in certain subject areas.

I think the guiding principle of engineering regulation would lead one to believe that software controls for a car should be covered by licensing, but that this has not occurred in practice due to the regulations not keeping up with technological change.


> Anyone can design the software that controls a car. That's probably OK since web apps are non-critical systems.

Wat?



