Allan McDonald refused to approve Challenger launch, exposed cover-up (2021) (npr.org)
562 points by EndXA 7 months ago | 391 comments



I wonder how often things like that happen.

The launch could have gone right, and no one would have known anything about the decision process besides a few insiders. I am sure that on a project as complex and as risky as the Space Shuttle, there is always an engineer who is not satisfied with some aspect, for some valid reason. But at some point, one needs to launch the thing, despite the complaints. How many projects luckily succeeded after a reckless decision?

In many accidents, we can point at an engineer who foreshadowed it, as is the case here. Usually followed by blaming those who proceeded anyway. But these decision makers are in a difficult position. Saying "no" is easy and safe, but at some point, one needs to say "yes" and take risks, otherwise nothing would be done. So, whose "no" to ignore? Not Allan's, apparently.


Often.

I used to run the nuclear power plant on a US Navy submarine. Back around 2006, we were sailing somewhere and Sonar reported that the propulsion plant was much, much louder than normal. A few days later we didn't need Sonar to report it, we could hear it ourselves. The whole rear half of the ship was vibrating. We pulled into our destination port, and the topside watch reported that oil pools were appearing in the water near the rear end of the ship. The ship's Engineering Officer and Engineering Department Master Chief shrugged it off and said there was no need for it to "affect ship's schedule".

I was in charge of the engineering library. I had a hunch and I went and read a manual that leadership had probably never heard of. The propeller that drives the ship is enormous. It's held in place with a giant nut, but in between the nut and the propeller is a hydraulic tire, a toroidal balloon filled with hydraulic fluid. Clearly it had ruptured. The manual said the ship was supposed to immediately sail to the nearest port and the ship was not allowed to go back out to sea until the tire was replaced. I showed it to the Engineer. Several officers called me in to explain it to them. And then, nothing.

Ship's Schedule was not affected, and we continued on the next several-week trip. Before we got to the next port, we had to limit the ship's top speed to avoid major damage to the entire propulsion plant. We weren't able to conduct the mission we had planned because the ship was too loud. And the multiple times I asked what the hell was going on, management literally just talked over me. When we got to the next port, we had to stay there while the propeller was removed and remachined. Management doesn't give a shit as long as it doesn't affect their next promotion.

Don't even get me started on the nuclear safety problems.


The correct answer in that case is to go to the Inspector General. That's what they're there for. Leaders sweeping shit under the rug that ends up crippling a fleet asset and preventing tasking from higher is precisely the kind of negligence and incompetence the IG is designed to root out.

And I say that as a retired officer.


Honest question: what are the plausible outcomes for an engineer who reports this kind of issue to the IG?

I'm guessing there's a real possibility of it ending his career, at least as a member of the military.


The IG is an independent entity which exists to investigate misconduct and fraud/waste/abuse. There are Inspectors General at all levels from local bases up to the Secretary of Defense, and they have confidential reporting hotlines. The only thing worse for a commander than having shenanigans be substantiated at an IG investigation is to have been found to tolerate retaliation against the reporters.

Generally about every month or two, a Navy commanding officer gets canned for "loss of confidence in his/her ability to command." They aren't bulletproof, quite the opposite. And leaving out cases of alcohol misuse and/or sexual misconduct, other common causes are things within the IG's purview.


Much more realistically:

Individual A reports a unique or rare problem. Everyone knows it is reported by person A.

Nothing is done.

Person A reports the problem "anonymously" to some third party, which raises a stink about the problem.

Now everyone knows that person A reported the problem to the third party.

This is why I (almost) never blow the whistle. It's an automatic career-ending move, and any protections are make-believe at best.


Then Person A needs to haul their butt to the Defense Service Office, call their Member of Congress, and tell the "anonymous" hotline that they've been retaliated against.

I'm not pretending this is some magic ticket to puppy-rainbow-fairy land where retaliation never occurs, but ultimately, how much do you care about your shipmates? I once had a CPO as one of my direct reports who was committing major misconduct and threatening my shop with retaliation if they reported it. I could have helped crush the bastard if someone had come forward to me, but no one ever did until I'd turned over the division to someone else, after which it blew up. Sure, he eventually got found out, but still. He was a great con artist and he pulled the wool over my eyes, but all I'd have needed was one person cluing me in to that snake.

Speaking from the senior officer level, we're not all some cabal trying to sweep shit under the rug. And the IGs, as much as they're feared, aren't out to nail people to the wall who haven't legitimately done bad things. I'm sorry you've had the experience you've had, but that doesn't mean that everyone above you was some big blue wall willing to protect folks who've done wrong.


Heck, you're in the ship too. I'll take all the retaliation if I get to keep breathing. If they wanna kick me out over saving my own skin, fine. Saves me from deserting.


The US Navy has over 300k active-duty personnel. I suppose it's easier to just go somewhere else where no-one knows who you are.


The person ignoring their subordinate’s reports to protect their own next promotion has entered the chat.


It sounds like a certain commercial aircraft manufacturer that starts with a B and ends with an oeing could really use an effective Inspector General system.


Probably. The biggest blind spot internal auditors have is things that didn't leave a paper trail.

It is too common that such investigations don't even start because there is just one connecting piece of evidence missing.

Leave a paper trail, people!


I seriously believe what I've heard about failing upward. Being competent seems to be an impediment, and the goons at the very top are ludicrously malformed people.


The incompetent group together, they have to in order to survive.

The competent don't group together, they don't need to. They can take care of themselves.

The former uses their power as a group against the individuals in the latter.

Basically the plot of Atlas Shrugged.


Atlas Shrugged? The book written by that demented woman who couldn't deal with her own feelings but told everyone how individualism was the answer to everything while living thanks to other people's support?

That book?


Yeah, the one where people attack the author rather than the idea because they aren't competent enough to do so.


Objectivism, like many philosophies or political beliefs, only works in an absolute vacuum.

Maybe the one person who survives the first trip to Mars can practice it.


I'm not an objectivist. My comment is the extent of the Ayn Rand beliefs I hold, for the most part.

When you work on ideas instead of personalities you get to do that.

Nobody here tried to disprove my comment. Just a few people started complaining about a dead woman whose book I mentioned in passing.

They got together and argued, incompetently. Demonstrating the effect I was attempting to illustrate.


I guess the true fate is the competent arguing amongst one another in an attempt to establish who is most competent, while the incompetent group together and bask in the real rewards. The goals of the incompetent are simple and tangible. The goals of the competent are abstract, as they seek acceptance from their fellow competent peers.


Objectivism: that fart-huffing philosophy that leads people to think everyone else is incompetent to judge it, when it's just a bunch of hateful trash that is to the right as Marxism is to the left.


That doesn't hold water.


How long retired? Things have gone in what can only be described as an... incomprehensible, unfathomable direction in the last decade or so. Parent post is not surprising in the least.

Politics is seeping where it doesn't belong.

I am very worried.


Tell us more... what has happened?


To a first approximation: https://www.youtube.com/watch?v=KZB7xEonjsc

Less funny in real life. Sometimes the jizzless thing falls off with impeccably bad timing. Right when things go boom. People get injured (no deaths yet). Limp home early. Allies let down. Shipping routes elongate by a sad multiple. And it even affects you directly as you pay extra for that Dragon silicon toy you ordered from China.


Just google the Red Hill failure.

The Navy's careerist, bureaucratic incompetence is staggering. No better than Putin's generals who looted the military budget and crippled his army so they couldn't even beat a military a fraction of their size.


Recently. For those who've served, it's not a surprise to see the constant drumbeat of commanding officers being relieved of command every month or so. COs are not bulletproof, and the last thing anyone in the seat wants is to end up crossways with the IG. And there are confidential ways Sailors can get in touch with them if needed.

Or with their Member of Congress, who can also go to Big Navy and ask "WTF is going on with my constituent?"


> Don't even get me started on the nuclear safety problems.

I want to be pro-nuclear energy, but I just don't think I can trust the majority of human institutions to handle nuclear plants.

What do you think about the idea of replacing all global power production with nuclear, given that it would require many hundreds of thousands of loosely-supervised people running nuclear plants?


There's also the issue of force majeure - war, terrorism, natural disasters, and so on. Increase the number of these and not only can you not really maintain the same level of diligence, but you also increase the odds of them ending up in an unfortunate location or event.

There's also the issue of the uranium. Breeder reactors can help increase efficiency, but they bump up all the complexities/risks greatly. Relatively affordable uranium is a limited resource. We have vast quantities of it in the ocean, but it's not really feasible to extract. It's at something like 3.3 parts per billion by mass. So you'd need to filter a billion kg of ocean water to get 3.3kg of uranium. Outside of cost/complexity, you also run into ecological issues at that scale.
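
To make the scale concrete, here's a quick back-of-the-envelope sketch in Python; it just restates the 3.3 ppb figure above rather than adding any new data:

    # uranium in seawater at roughly 3.3 parts per billion by mass
    ppb_by_mass = 3.3e-9              # kg of uranium per kg of seawater
    seawater_kg = 1e9                 # filter one billion kg of seawater...
    print(seawater_kg * ppb_by_mass)  # ...to recover about 3.3 kg of uranium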


Considering that 1 Chernobyl scale accident per year would kill fewer people than global coal power does, I personally would be for it.


It was a tremendous effort and sacrifice paid so that half of Europe wasn't poisoned by that 1 Chernobyl.


Given the scale of people killed by coal every year, I feel relatively confident that had that effort not been undertaken, it would still be true.

And of course that's ignoring the fact that I also feel relatively confident that a Chernobyl-scale accident every year is in no way likely, even if the entire world was 100% on nuclear.


I don't think the scale of coal is 200m+ people a year. That's taking artistic liberties or is too hyperbolic to entertain.

>I also feel relatively confident that a Chernobyl scale accident every year is in no way likely, even if the entire world was 100% on nuclear

I don't. Einstein's quote rings alarms in my head here. Imagine all the inane incompetencies you've seen with current energies in your house, or at a mechanic, or simply flickering lights at a restaurant. Now imagine that these same people manage small fusion/fission bombs powering such devices.

We need to value labor a lot more to trust that sort of maintenance. And the US alone isn't too good at that. Let alone most of Asia and EMEA.


> 200m+

Where are you getting this from?

In any case, if we look at the actual data, nuclear has been extremely safe compared to burning fossil fuels. Add up all the nuclear disasters that have ever happened and, adjusted by MWh generated, it's a few orders of magnitude safer than coal.

> Now imagine that these people now manage small fusion/fission bombs powering such devices.

Sure, they’ll have to be trained to the same standards as current nuclear engineers. Not trivial but obviously not exactly an unsolvable problem..

> Let alone most of Asia and EMEA.

Sorry but you’re just saying random things at this point..


You do know that, as good as it might have been, that TV show was still mostly fictional?


Does coal kill rich people? Nuclear meltdown does.


> Does coal kill rich people?

Certainly, they still breathe the same air, don’t they?

> Nuclear meltdown does.

I’m pretty sure that nuclear meltdowns are much, much easier to avoid. Even in Chernobyl almost all the casualties (short-term and long-term) were amongst people directly handling and trying to contain the disaster. If you’re rich you’re unlikely to be a fireman..


Same. It's blatantly obvious that humanity is not up to the task.


So far nuclear has been extremely safe compared to some other energy sources (especially coal).


There was no hunch there about a problem; it was very obvious there was a problem. Management willing to risk workers' lives for promotions should be fired immediately, unless they alone are the ones jumping into the fire. No life is worth someone's convenience.


If you're EB, why replace a hydraulic bushing when you can wait, and replace it but also have to repair a bunch of damage and make yourself a nice big extra chunk of change off Uncle Sam?

If you're ship's captain...why not help secure a nice 'consulting' 'job' at EB after retiring from the navy by helping EB make millions, and count on your officers to not say a peep to fleet command that the mess was preventable?


That sounds EXACTLY like something Fat Leonard might have done...

https://en.wikipedia.org/wiki/Fat_Leonard_scandal


My brother has loads of these stories related to fighter jets.

Stuff like pilots taking off with no working nav, "I'll follow the guy in front of me".


Is this a different phenomenon though? It seems that there's a difference between an informed risk assessment and not giving a fuck or letting the bureaucratic gears turn and not feeling responsible. Like there's a difference between Challenger and Chernobyl.

But, maybe someone can make a case that it's fundamentally the same thing?


I would make the case that it's fundamentally the same thing.

In both cases, there were people who cared primarily about the technical truth, and those people were overruled by people who cared primarily about their own lifestyle (social status, reputation, career, opportunities, loyalties, personal obligations, etc.). In Allan McDonald's book "Truth, Lies, and O-Rings" he outlines how Morton Thiokol was having a contract renewal held over their head while NASA Marshall tried to maneuver the Solid Rocket Booster production contract to a second source, which would have seriously affected MT's bottom line and profit margins. There's a strong implication that Morton Thiokol was not able to adhere to proper technical rationale and push back on their customer (NASA) because if they had, they would have given NASA too much ammunition to argue for a second source for the SRB contracts. (In short: "you guys delayed launches over issues in your hardware, so we're only going to buy 30 SRB flight sets from you over the next 5 years instead of 60 as we initially promised.")

I have worked as a NASA contractor on similar issues, although much less directly impacting the crews than the SRBs. You are not free to pursue the smartest, most technically accurate, quickest method for fixing problems; if you introduce delays that your NASA contacts and managers don't like, they will likely ding your contract and redirect some of your company's work to your direct competitors, who you're often working with on your projects.


What’s the alternative? Being able to shift to a competitor when a producer is letting you down is the entire point of private contracts; without that, you might as well remove the whole assemblage of profit and just nationalize the whole thing.


Strictly speaking, you're correct, so I don't disagree with your comment. However, assuming McDonald's recollections are correct and his explanation of the story is accurate, Morton Thiokol was doing an excellent job. The O-Ring seal issue was on track to be solved as they switched to a lighter-weight filament-wound case. According to McDonald, Morton Thiokol was receiving high marks on their contract evaluations, and Marshall was trying to move the contract to a company that had a lot of ex-Marshall employees.


I think it can be thought from this angle: if the customer is corrupt and the contractor ethical, the project can be unsafe. If the customer is ethical and the contractor corrupt, the project also can be unsafe.


That's EXACTLY the alternative.


Okay so it sounds like you're saying that they are fundamentally the same, but only because the Challenger wasn't in the "informed risk assessment" category after all.


Yeah, that's what I think. In both cases the technical decisions were made by people who were not technical experts and were completely ignoring the input of the technical experts because of social pressures. Based on McDonald's retelling, the decision to launch the Challenger was anything but an informed risk decision; none of the managers said "we acknowledge Morton Thiokol's concerns about O-Ring temperatures and are committing to launch anyway, with the following rationale: ..." They just didn't bring up the temperature issue at the flight director level and recommended a launch, backed by no data.

In Chernobyl, they scheduled a safety test to satisfy schedules imposed by central command. The plant engineers either weren't informed or couldn't push back because to go against management meant consequences for your career and family, administered by the Soviet authorities or the KGB.

Both scenarios had engineers who were not empowered to disclose or escalate issues to the highest level because of implied threats against them by non-technical authorities.


>Like there's a difference between Challenger and Chernobyl.

Not in the year, incidentally; both happened in 1986.


> Saying "no" is easy and safe, but at some point, one needs to say "yes" and take risks, otherwise nothing would be done.

Saying "no" is easy and safe in a world where there are absolutely no external pressures to get stuff done. Unfortunately, that world doesn't exist, and the decision makers in these kinds of situations face far more pressure to say "yes" than they do to say "no".

For example, see the article:

> The NASA official simply said that Thiokol had some concerns but approved the launch. He neglected to say that the approval came only after Thiokol executives, under intense pressure from NASA officials, overruled the engineers.


> Saying "no" is easy and safe

Not in my experience. Saying no to something major when others don’t see a problem can easily be career-ending.


Everyone seems to be reading this too simply. In fact, stupidly.

Conceptually, the easiest answer to the risk of asserting that you are certain is to simply not assert that you are certain.

They aren't saying it's easy to face your bosses with anything they don't want to hear.


Isn't the definition of "easy" or "hard" that includes the external human pressures the less simple/stupid one? What is the utility of a definition of "easy" that assumes that you work in complete isolation?


Context.


The context to this conversation is the launch of a space shuttle that's supposed to carry a teacher to space. It has both enormous stakes and enormous political pressure to not delay/cancel. I'm unsure why that context makes the spherical cow version of "easy" a sensible one.


The context of that word "easy" was not a vacuum, it was part of a sentence which was part of a conversation. There is more than enough of this context to know what in particular was easy.

You can only fail to get this by not reading the thing you are responding to, or deliberate obtuseness, or perhaps by being 12 years old.


> easily be career-ending.

Easily be career-ending? That's a bit dramatic, don't you think? Someone who continuously says no to things will surely not thrive and will probably eventually leave the organization, one way or the other; that's probably right.


Not even slightly dramatic. I have seen someone be utterly destroyed for trying to speak out on something deeply unethical a state was doing, and is probably still doing.

He was dragged by the head of state in the press and televised announcements, became untouchable overnight - lost his career, his wife died a few days later while at work at her government job in an “accident”. This isn’t in some tinpot dictatorship, rather a liberal western democracy.

So - no. Career-ending is an understatement. You piss the wrong people off, they will absolutely fuck you up.


I have long thought that there ought to be an independently funded International Association for the Protection of Whistleblowers. However, it would quickly become a primary target of national intelligence agencies, so I don't know how long it would last.


A "liberal democracy" where the head of state can have random citizens murdered? And I guess despite being an internet anon, you won't name that country because they will come after you and kill your family as well?

That's either a very tall tale or the state is anything but liberal.


> A "liberal democracy" where the head of state can have random citizens murdered?

Abdulrahman Anwar al-Awlaki (also spelled al-Aulaqi, Arabic: عبدالرحمن العولقي; August 26, 1995 – October 14, 2011) was a 16-year-old United States citizen who was killed by a U.S. drone strike in Yemen.

The U.S. drone strike that killed Abdulrahman Anwar al-Awlaki was conducted under a policy approved by U.S. President Barack Obama

Human rights groups questioned why Abdulrahman al-Awlaki was killed by the U.S. in a country with which the United States was not at war. Jameel Jaffer, deputy legal director of the American Civil Liberties Union, stated "If the government is going to be firing Predator missiles at American citizens, surely the American public has a right to know who's being targeted, and why."

https://en.m.wikipedia.org/wiki/Killing_of_Abdulrahman_al-Aw...


>Abdulrahman al-Awlaki's father, Anwar al-Awlaki, was a leader of al-Qaeda in the Arabian Peninsula

Missed highlighting that part. The boy also wasn't the target of the strike anyway. Was the wife from the other user's story living with an al-Qaeda leader as well?


> Abdulrahman al-Awlaki's father, Anwar al-Awlaki, was a leader of al-Qaeda in the Arabian Peninsula

You are a terrorist if you don't want a foreign power to install a government* over you and you fight to prevent that?

And then further, if your dad does that you should die?

*that has to be noted were literally pedophiles


I think the WH spokesperson's response just adds to the level of disturbing:

>When pressed by a reporter to defend the targeted killing policy that resulted in Abdulrahman al-Awlaki's death, former White House press secretary Robert Gibbs deflected blame to the victim's father, saying, "I would suggest that you should have a far more responsible father if they are truly concerned about the well-being of their children. I don't think becoming an al-Qaeda jihadist terrorist is the best way to go about doing your business".


In France, between 2013 and 2016, 40 people were killed (~one per month) by the French state, on direct order of President François Hollande (article in French):

https://www.lemonde.fr/police-justice/article/2017/01/04/fra...


Yeah, the point that Obama literally executed US citizens without trial is often lost on people on this site, and on much of the "liberal" intelligentsia. They'll just say he was a "terrorist", but folks, you can't say whether he was or not, without trial. And even if he was, his son, who was also killed in that strike, was not a "terrorist". This is an extremely slippery slope, and the fact that people don't acknowledge this just because it was Obama who ordered the murder (let's call a spade a spade) is a damning indictment of "neoliberal values".


I’ve spoken about it here somewhat and circumspectly before - but I prefer to keep the SNR low, as I don’t want repercussions for him. Me, good luck finding.

It’s the U.K. It happened under Cameron. It related to the judiciary. That’s as much as I’ll comfortably reveal.

I will also say that it was a factor in me deciding to sell my business, leave the country, and live in the woods, as what I learned from him and his experience fundamentally changed my perception of the system in which we live.


Considering the launch tempo that NASA had signed up for, and was then currently failing at? Yes, a single 'no-go' on the cert chain could easily result in someone being shunted into professional obscurity thereafter.


Ask Snowden.


Can someone explain why every govt official that was ever in the news talking about Snowden accuses him of being the worst sort of criminal? Specifically, what is the case? They are never forthcoming about details.

I personally am very glad to know the things he revealed.


For the same reason they’ve been torturing Assange for the past decade. They view us as little more than taxable cattle that should not ask any questions, let alone embarrass or challenge the ruling class.


Saying no isn't what ended his career.


> Saying no isn't what ended his career.

Within NatSec, saying No to embarrassing the government is implied. Ceaselessly.

Equally implied: The brutality of the consequences for not saying no.


> at some point, one needs to launch the thing, despite the complaints

There's a big difference between "complaints" because something is not optimal, and warnings that something is a critical risk. The Thiokol engineers' warnings about the O-rings were in the latter category.

And NASA knew that. The summer before the Challenger blew up, NASA had reclassified the O-rings as a Criticality 1 flight risk, where they had previously been Criticality 1R. The "1" meant that if the thing happens the shuttle would be lost--as it was. The "R" meant that there was a redundant component that would do the job if the first one failed--in this case there were two O-rings, primary and secondary. But in (IIRC) June 1985, NASA was told by Thiokol that the primary O-ring was not sealing so there was effectively no redundancy, and NASA acknowledged that by reclassifying the risk. But by the rules NASA itself had imposed, a Criticality 1 (rather than 1R) flight risk was supposed to mean the Shuttle was grounded until the issue was fixed. To avoid that, NASA waived the risk right after reclassifying it.

> at some point, one needs to say "yes" and take risks, otherwise nothing would be done

Taking calculated risks when the potential payoff justifies it is one thing. But taking foolish risks, when even your own decision making framework says you're not supposed to, is quite another. NASA's decision to launch the Challenger was the latter.


It happens extremely frequently because there is almost no downside for management to override the engineers' decision.

Even in the case of the Challenger, no single article says WHO was the executive that finally approved the launch. Nobody was jailed for gross negligence. Even Richard Feynman felt that the investigative commission was biased from the start.

So, since there is no "price to pay" for making these bad calls, they are continuously made.


> Even in the case of the Challenger, no single article says WHO was the executive that finally approved the launch.

The people who made the final decision were Jerald Mason (SVP), Robert Lund, Joe Kilminster and Calvin Wiggins (all VPs).

See page 94 of the Rogers commission report[1]: "a final management review was conducted by Mason, Lund, Kilminster, and Wiggins".

Page 108 has their full names as part of a timeline of events at NASA and Morton Thiokol.

1. https://sma.nasa.gov/SignificantIncidents/assets/rogers_comm...


Thank you.


> No body was jailed for gross negligence

Jailing people means you'll have a hard time finding people willing to make hard decisions, and when you do, you may find they're not the right people for the job.

Punishing people for making mistakes means very few will be willing to take responsibility.

It will also mean that people will desperately cover up mistakes rather than being open about it, meaning the mistakes do not get corrected. We see this in play where manufacturers won't fix problems because fixing a problem is an admission of liability for the consequences of those problems, and punishment.

Even the best, most conscientious people make mistakes. Jailing them is not going to be helpful, it will just make things worse.


> Punishing people for making mistakes means very few will be willing to take responsibility.

That’s what responsibility is: taking lumps for making mistakes.

If I make a mistake on the road and end up killing someone, I can absolutely be held liable for manslaughter.

I don’t know if jail time is the right answer, but there absolutely needs to be some accountability.


Have you ever made a mistake on the road that luckily did not result in anyone getting killed?

During WW2, a B-29 crash-landed in the Soviet Union. The B-29's technology was light-years ahead of Soviet engineering. Stalin demanded that an exact replica of the B-29 be built. And that's what the engineers did. They were so terrified of Stalin that they carefully duplicated the battle damage on the original.

Be careful what you wish for when advocating criminal punishment.


Tu-4 was indeed a very close copy of B-29, but no, they did not "carefully duplicate the battle damage" on the original. The one prominent example of copying unnecessary things that is usually showcased in this instance is a mistakenly drilled rivet hole in one of the wings that was carefully reproduced thereafter despite there not being any evident purpose for it.

That said, even then the Tu-4 wasn't a carbon copy. Because the US used imperial units for everything, the Soviets simply couldn't make it a carbon copy because they could not e.g. source plating and wire of the exact right size. So they replaced it with the nearest metric equivalents that were available, erring on the side of making things thicker, to ensure structural integrity - which also made it a little bit heavier than the original. Even bigger changes were made - for example, Tupolev insisted on using existing Soviet engines (!), weapons, and radios in lieu of copying the American ones. It should be noted that Stalin really did want a carbon copy originally, and Tupolev had to fight his way on each one of those decisions.


We should not blame people for honest mistakes. Challenger was not an honest mistake, it was political pressure overriding engineering. The joints were not supposed to leak at all, yet they were leaking every time and it was being swept under the rug. When someone suddenly demands to get it in writing when it was normally a verbal procedure, they *know* there's a problem. That's not a mistake.

Same as the insulation damage to the tiles kept being ignored until Columbia barely survived. And then they fixed the part they blamed for that incident, but the tiles kept coming back damaged.

And look at what else was going wrong that day--the boosters would most likely have been lost at sea if the launch had worked.


> Jailing people means you'll have a hard time finding people willing to make hard decisions,

Why do you think you want it? You don't want it.


From the very start they were obviously in cover-up mode.

They had every engineer involved with the booster saying launching in the cold was a bad idea, yet they started by trying to look at all the ways it could have gone wrong rather than even looking into what the engineers were screaming about.

We also have them claiming a calibration error with the pyrometer (the ancestor of the modern thermometer you point at something) even though that made other numbers not make sense.


The "who" was William R. Lucas.

There was a recent Netflix documentary where they interviewed him. He was the NASA manager that made the final call.

On video, he flatly stated that he would make the same decision again and had no regrets: https://www.syfy.com/syfy-wire/netflix-challenger-final-flig...

I had never seen anyone who is more obviously a psychopath than this guy.

You know that theory that people like that gravitate towards management positions? Yeah... it's this guy. Literally him. Happy to send people into the meat grinder for "progress", even though no actual scientific progress of any import was planned for the Challenger mission. It was mostly a publicity stunt!


Maybe he did it because he knew the shuttle was garbage (the absurd design was Air Force political BS) and he wanted NASA to stop using it.


My understanding of the Space Shuttle program is that there were a lot of times they knew they probably shouldn't fly, or try to land, and they lucked out and didn't lose the orbiter. It is shocking they only lost two ships out of the 135 Space Shuttle missions.

The safety posture of that whole program, for a US human space program, seemed bad. That they chose to use solid rocket motors shows that they were willing to compromise on human safety from the get-go. There are reasons there hasn't ever been even one other human-rated craft to use solid rocket motors.


> There are reasons there hasn't ever been even one other human-rated craft to use solid rocket motors.

That's about to no longer be true. Atlas V + Starliner has flown two people and has strap-on boosters; I think it only gets the rating once it returns from the test flight, though.

The shuttle didn't have a propulsive launch abort system, and could only abort during a percentage of its launch. The performance quoted for starliner's abort motor is "one mile up, and one mile out" based on what the presenter said during the last launch. You're plenty safe as long as you don't intersect the SRB's plume.


Except SLS?

Not that I think it's a good thing, but...


I forgot about the SLS until after I wrote that. SLS makes most of the same mistakes, plus plenty of new expensive ones, from the Space Shuttle program. SLS has yet to carry a human passenger though.

It's mind-boggling that SLS still exists at all. At least $1B-$2B in costs whether you launch or not. A launch cadence measured in years. $2B-$4B if you actually launch it. And it doesn't even lift more than Starship, which is launching almost quarterly already. This is before we even talk about reusability, or that a reusable Starship + Super Heavy launch would only use about $2M of propellant.


** SLS has entered the chat **


A lot of people are taking issue with the fact that you need to say yes for progress. I don’t know how one could always say no and expect to have anything done.

Every kind of meaningful success involves negotiating risk instead of seizing up in the presence of it.

The shuttle probably could have failed in 1,000 different ways and eventually, it would have. But they still went to space with it.

Some risk is acceptable. If I were to go to the moon, let’s say, I would accept a 50% risk of death. I would be happy to do it. Other people would accept a risk of investment and work hour loss. It’s not so black or white that you wouldn’t go if there’s any risk.


The key thing with Challenger is that the engineers working on the project estimated the risk to be extremely high and refused to budge, eventually being overruled by the executives of their company.

That's different than the engineers calculating the risk of failure at some previously-defined-as-acceptable level and giving the go-ahead.


> Some risk is acceptable. If I were to go to the moon, let’s say, I would accept a 50% risk of death. I would be happy to do it. Other people would accept a risk of investment and work hour loss. It’s not so black or white that you wouldn’t go if there’s any risk.

It's possible you're just suicidal, but I'm reading this more as false internet bravado. A 50% risk of death on a mission to space is totally unacceptable. It's not like anyone will die if you don't go now; you can afford to take the time to eliminate all known risks of this magnitude.


Not bravado at all, if I was given those odds today, I would put all my effort into it and go.

There are many people who are ideologically-driven and accept odds of death at 50% or higher — revolutionary fighters, political martyrs, religious martyrs, explorers and adventurers throughout history (including space), environmental activists, freedom fighters, healthcare workers in epidemics of serious disease...


> Not bravado at all, if I was given those odds today, I would put all my effort into it and go.

If that's actually true, you should see a therapist.

Given we have a track record of going to the moon with a much lower death rate than 50%, that's provably a higher risk than necessary. That's not risking your life for a cause, because there's no cause that benefits from you taking this disproportionate risk. It's the heroism equivalent of playing Russian Roulette a little more than 3 times and achieves about as much.
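
For the curious, the arithmetic behind "a little more than 3 times", as a rough sketch (assuming a standard six-chamber revolver):

    import math

    # Surviving n trigger pulls at 1/6 risk each has probability (5/6)^n,
    # so solve 1 - (5/6)^n = 0.5 for n:
    n = math.log(0.5) / math.log(5 / 6)
    print(n)  # ~3.8 pulls for a cumulative 50% chance of death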

> There are many people who are ideologically-driven and accept odds of death at 50% or higher — revolutionary fighters, political martyrs, religious martyrs, explorers and adventurers throughout history (including space), environmental activists, freedom fighters, healthcare workers in epidemics of serious disease...

And for every one of those there's 100 keyboard cowboys on the internet who have never been within a mile of danger and have no idea how they'll react to it.

I would say I'm more ideologically driven than most, and there are a handful of causes I'd like to think I'd die for. But I'm also self-aware enough to know that it's impossible to know how I'll react until I'm actually in those situations.

And I'll reiterate: you aren't risking your life for a cause, because there's no cause that benefits from you taking a 50% mortality risk on a trip to the moon.


I think you may be projecting, because you are acting a bit like a keyboard warrior — telling others to see therapists. Consider that other people have different views, that is all. To some, the cause (principle/life goal) of exploring where others have not gone is enough.


Let me be clear; there are 2 options:

1. Go where others have not gone, with a 50% risk of death.

2. Wait 5 days for temperatures to rise, and go where others have not gone, with a 0.5% risk of death.

Choosing 1 isn't "different views, that is all", it's pretty objectively the wrong choice. It's not dying for a cause, it's not brave, it's not idealistic. It's pointlessly suicidal. So yes, I'm saying if you think 1 is the right choice you should see a therapist.

Notably, NASA requires all astronauts to undergo psychological evaluation, even if they aren't claiming they'll take insane unnecessary risks. So it's not like I'm the only one who thinks talking to someone before you potentially kill yourself is a good idea.


Is there really nothing on Earth so important that you would risk your life doing it, but the Moon is unique in this regard?


> I would accept a 50% risk of death.

No offense, but this sounds like the sayings of someone who has never seen a 50% chance of death.

It’s a little different 3 to 4 months out. It’s way different the night before and the morning of. Stepping “in the arena” with odds like those, I’d say the vast, vast majority will back out and/or break down sobbing if forced.

There’s a small percent who will go forward but admit the fact that they were completely afraid- and rightly so.

Then you have that tiny percentage that are completely calm and you’d swear had a tiny smile creeping in…

I’ve never been an astronaut.

But I did spend three years in and out of Bosnia with a special operations task force.

Honestly? I have a 1% rule. Things that might have a 20-30% chance of death are clearly stupid and no one wants to do them. Things with a one-in-a-million chance probably aren’t gonna catch ya. But I figure that if something does, it’s gonna be an activity that I do often but that has a 1% chance of going horribly wrong and that I’m ignoring.


> sounds like the sayings of someone who has never seen a 50% chance of death

Well, this sounds like simple ad-hominem. I appreciate your insight, overall, though.

Many ideologically-driven people, like war field medics, explorers, adventurers, revolutionaries, and political martyrs take on very high risk endeavors.

I would also like to explore unknown parts of the Moon despite the risks, even if they were 50%. And I would wholeheartedly try to do it and put myself in the race, if not for a disqualifying condition.

There is also the matter of controllable and uncontrollable risks of death. The philosophy around dealing with them can be quite different. From my experience with battlefield medicine (albeit limited to a few years), I accepted the risks because the cause was worth it, the culture I was surrounded by was to accept these risks, and I could steer them by taking precautions and executing all we were taught. No one among the people I trained with thought they couldn't. And yes, many people ultimately dropped out for it, as did I.

Strapping oneself to a rocket is a very uncontrollable risk. The outcome, from an astronaut's perspective, is more random. I think that offers a certain kind of peace. We are all going to die at random times for random reasons, I think most people make peace with that, especially as they go into old age. That is a more comfortable type of risk for me.

Individuals have different views on mortality. Some are more afraid than others, some are afraid in one set of circumstances but not others. Some think that doing worthwhile things in their lives outweighs the risk of death every time. Your view is valid, but so is others'.


> Stepping “in the arena” with odds like those, I’d say the vast, vast majority will back out and/or break down sobbing if forced.

Something like 10 million people will accept those odds. Let's say 1 million are healthy enough to actually go to space and operate the machinery. Then let's say 99% will back out during the process. That's still 10,000 people to choose from, more than enough for NASA's needs.


Doing something that has a one percent chance of killing you 69 times over will kill you about 50% of the time.
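
Checking that arithmetic (a minimal sketch):

    # probability of dying at least once across 69 independent 1%-risk attempts
    p_die = 1 - 0.99 ** 69
    print(p_die)  # ~0.500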


> No offense but this sounds like the sayings of someone who has not ever seen a 50% of death.

The space program pilots saw it. And no, I would not have flown on those rockets. After all, NASA would "man rate" a new rocket design with only one successful launch.


Using the space shuttle program as a comparison, because it's easy to get the numbers. There were 13 total deaths (7 from Challenger, 6 from Columbia [0]) during the program. Over 135 missions, the Space Shuttle took 817 people into space. (From [1], the sum of the "Crew" column. The Space Shuttle carried 355 distinct people, but some were on multiple missions.)

So the risk of death could be estimated as 2/135 (fatal flights / total flights) or as 13/817 (total fatalities / total crew). These are around 1.5%, much lower than a 50% chance of death.

This is not to underplay their bravery. This is to state that the level of bravery to face a 1.5% chance of death is extremely high.

[0] https://en.wikipedia.org/wiki/List_of_spaceflight-related_ac... [1] https://en.wikipedia.org/wiki/List_of_Space_Shuttle_missions
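
Spelling out that arithmetic (a minimal sketch using only the figures above):

    fatal_flights, total_flights = 2, 135
    total_fatalities, total_crew = 13, 817
    print(fatal_flights / total_flights)    # ~0.0148, fatal flights per flight
    print(total_fatalities / total_crew)    # ~0.0159, fatalities per crew seat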


If I recall correctly, the Saturn V was man rated after one launch. There were multiple failures on the moon missions that easily could have killed the astronauts.

The blastoff from the moon had never been tried before.



> If I were to go to the moon, let’s say, I would accept a 50% risk of death.

But you weren't in the shuttle, so it is irrelevant.


> But at some point, one needs to launch the thing

Do they? Even if risks are not mitigated and, say, the risk of catastrophe can't be pushed below, e.g., 15%? This ain't some app-startup world where failure will lose a bit of money and time, and everybody moves on.

I get the political forces behind it; nobody at NASA was/is probably happy with those, and most politicians are basically clueless clowns (or worse) chasing popularity polls and often wielding massive decisive powers over matters they barely understand at a surface level.

But you can't cheat reality and facts, any more than you can in a casino.


Maybe it's a bad analogy given the complexity of a rocket launch, but I always think about European exploration of the North Atlantic. Huge risk and loss of life, but the winners built empires on those achievements.

So yes, I agree that at some point you need to launch the thing.


This sounds like you are saying colonialism was a success story?


For the ones doing the colonizing? Overwhelmingly yes. A good portion of the issues with colonizing is about how the colonizing nations end up extracting massive amounts of resources for their own benefit.


In context, it sounds like you think that the genocide of indigenous peoples was totally worth it for European nations and that callous lack of concern for human life and suffering is an example to be followed by modern space programs.

I'd like to cut you the benefit of the doubt and assume that's not what you meant; if that's the case, please clarify.


You are not reading the context correctly. The original point was that establishing colonies was very risky, to which whyever implied that colonialism was not a success story. But in fact it was extremely successful from a risk analysis point of view. Some nations chose to risk lives and it paid off quite well for them. The nuance of how the natives were treated is frankly irrelevant to this analysis, because we're asking "did the risk pay off", not "did they do anything wrong".


I am not participating in amoral risk/reward analysis, and you should not be either.

If the cost was genocide or predictable and avoidable astronaut deaths, the risk didn't pay off; there's no risk analysis. This isn't "nuance" and there is no ambiguity here, it's literally killing people for personal gain.


> In context, it sounds like you think that the genocide of indigenous peoples was totally worth it for European nations and that callous lack of concern for human life and suffering is an example to be followed by modern space programs.

Can you provide a quote of where I said this is "an example to be followed"? (This is a rhetorical question: I know you can't, because I said nothing remotely akin to that.)

> I'd like to cut you the benefit of the doubt and assume that's not what you meant; if that's the case, please clarify.

Sure, to clarify: I meant precisely what I said. I did not mean any of the completely different nonsense you decided to suggest I was actually saying.

If you see "colonization benefited the people doing the colonizing" and interpret it as "colonization is an example to be followed", that's entirely something wrong with your reading comprehension.

You're not "cutting me some slack" by putting words in my mouth and then saying "but maaybe you didn't mean that", and it's incredibly dishonest and shitty of you to pretend you are.


> Can you provide a quote of where I said this is "an example to be followed"?

People can read the context of what you said, there's no need to quote it.

In fact, I would advise you to read the context of what you said; if you don't understand why I interpreted your comment the way I did, maybe you should read the posts chain you responded to and that will help you understand.

> Sure, to clarify: I meant precisely what I said. I did not mean any of the completely different nonsense you decided to suggest I was actually saying.

Well, what you said, you said in a context. If you weren't following the conversation, you didn't have to respond, and you can't blame other people for trying to understand your comments as part of the conversation instead of in isolation.

Even if you said what you said oblivious to context, then I have to say, if you meant exactly what you said, then my response is that a risk/reward analysis which only considers economic factors and ignores human factors is reprehensible.

There is not a situation which exists in reality where we should be talking about economic success when human lives are at stake, without considering those human lives. If you want to claim "I wasn't talking about human life", then my response is simply: you should have been talking about human life, because the actions you're discussing killed people, and that is the most important factor in understanding those events. You don't get to say "They took a risk and it paid off!" when the "risk" was wiping out entire populations--that's not a footnote or a minor detail, that's the headline.

The story of the Challenger disaster isn't "they took a risk ignoring engineers and lost reputation with the NASA client"--it's "they risked astronaut's lives to win reputation with the NASA client and ended up killing people". The story of colonizing North America isn't "they took a risk on exploring unknown territories and found massive new sources of resources" it's "they sacrificed the lives of sailors and soldiers to explore unknown territories, and then wiped out the inhabitants and took their resources".


Isn't it fairly obvious from history that you and the Renaissance-era colonizers calculate morality differently? You speak of things that should not be, but nonetheless were. The success of colonialism to the colonizers is obvious. Natives of the New World were regarded as primitives, non-believers, less than human. We see the actions of the European powers as abhorrent now, but 500 years ago they simply did not see things the way we do, and they acted accordingly.


What exactly is your point in the context of this conversation?

I'm a modern person, I have modern morality? Guilty as charged, I guess.

We're supposed to cut them some slack because they were just behaving as people of their time? Nah, I don't think so: there are plenty of examples of people at that time who were highly critical of colonialism and the treatment of indigenous people. If they can follow their moral compass so could Columbus and Cortez. "Everyone else was doing it" is not an excuse adults get to use: people are responsible for their own actions. As for their beliefs: they were wrong.

There are other points you could be making but I really hope you aren't making any of the other ones I can think of.


Obviously I don't know what points you fear I may be making.

What examples were there of anti-colonialism in those times? What influence would they have had over the monarchies and the church of their day? What influence did they exert?

I would contend that the moral compass of Columbus and Cortez was fundamentally different than yours or mine. They were products of a world vastly different than ours. You and I have modern morality; they did not. Since we cannot change the actions of the past, we can only hold them up as examples of how people were, and how they differ from (or are similar to) what we are now.

My complaint is that, to my eyes, you are criticizing them as if we moderns have some power over their actions. How can we expect them to have behaved as we would? We cannot change them or what they did. I'm not sure that means "cutting them some slack." They did what they did; we can only observe the consequences and hope to do better.

I agree, their beliefs were wrong. Nonetheless, they believed what their culture taught them to believe. Yes, people of any era are responsible for their own actions, and if they act wrongly according to their culture, they should be punished for it. But if their culture sees no harm in what they are doing, they'll be rewarded. We certainly can't punish or reward them from 500 years in the future. We can only hope that what we believe, and how we act, is better.


> My complaint is that, to my eyes, you are criticizing them as if we moderns have some power over their actions.

We moderns have power over our own actions, and those actions are informed by the past.

In this thread we're talking about risk/reward analyses and, for some reason, you and other people here seem oddly insistent that we not discuss the ethical implications of the actions in question.

And all-too-often, that's what happens today: companies look at the risk/reward in financial terms and ignore any ethical concerns. I would characterize the corporate approach to ethics as "complete disregard". The business ethics classes I took in college were, frankly, reprehensible; most of the material was geared toward rebranding various corporate misdeeds as miscalculated risk/reward tradeoffs, similar to what is being done in this thread. This is a huge problem, and it's pervasive in this thread, in HN as a whole, and in corporate culture.

Your complaint is rather hypocritical: given we have no power over their actions, why defend them? Your complaint applies as much to your own position as it does to mine. What problem are you addressing?


> you and other people here seem oddly insistent that we not discuss the ethical implications of the actions in question.

Hmm, I don't think that's my actual intent; only that we discuss them as they apply to modern morality, not as if we can influence them to be different than what they are.

If I defend them (which I don't think I do), I do so to help explain their attitudes and actions, not to excuse them. We need to understand where they are coming from to see the differences between them and us.


Distancing ourselves from historical people is one of the worst possible mistakes we can make when studying history. We aren't different. The entire 10,000 years we've had anything resembling civilization is an evolutionary blip.

The reasons that Columbus tortured, killed, and enslaved indigenous people are the same reasons for Abu Ghraib: racism, lack of oversight, and greed. The exact details have changed, but the underlying causes are alive and thriving.

Thankfully, I think humans as a whole understand these things better and I think things are improving, but if we fail to keep that understanding alive and build upon it, regress is possible. Certainly the startup culture being fostered here (HN) which looks only at profit and de-emphasizes ethics enables this sort of forgetfulness. It's not that anyone intends to cause harm, it's that they can rationalize causing harm if it's profitable. And since money makes the same people powerful, this attitude is an extremely damaging force in society. That's why I am so insistent that we not treat ethics as a side-conversation.


I would somewhat agree with the first launch, first moon mission and so on, but the N-th in a row ain't building no new empires. It's business as usual.


Great point and I agree. Balancing the need to launch the thing is the need to improve over time, else the human cost begins to outweigh the benefit.


I think ultimately the problem is of accountability

If the risks are high and there are a lot of warning signs, there needs to be strong punishment for pushing ahead anyways and ignoring the risk

It is much too often that people in powerful positions are very cavalier with the lives or livelihoods of many people they are supposed to be responsible for, and we let them get away with being reckless far too often


> Maybe it's a bad analogy given the complexity of a rocket launch, but I always think about European exploration of the North Atlantic. Huge risk and loss of life, but the winners built empires on those achievements.

> So yes, I agree that at some point you need to launch the thing.

This comment sounds an awful lot like you think the genocide of indigenous peoples is justified by the fact that the winners built empires, but I'd like to assume you intended to say something better. If you did intend to say something better, please clarify.


This is dishonest. I am not engaging with your red herrings.


If the fact that entire nations were murdered is a "red herring" to you, you have no business talking about colonialism. That's not a distraction, it's the headline.


You are debating your own delusions, not me.


> But at some point, one needs to launch the thing, despite the complains.

Or: at some point, one decides to launch the thing.

You are reducing the complaints of an engineer to something inevitable and unimportant, as if it happened at every launch, and at every launch someone decided to go ahead, because it was what was needed.


What makes you say it "could have gone right"? From what came out about the o-rings' behavior at cold temperatures, it seems they were taking a pretty big risk. Your perspective seems to be that it's always a coin toss no matter what, and I don't think that is true. Were there engineers speaking up in this way at every successful launch too?


Actually, had it been windier that day it might have gone right.

There were 8 joints. Only one failed, and only in one place: the spot being supercooled by boiloff from the LOX tank. And the leak self-sealed when it happened (there's aluminum in the fuel; hot exhaust touching cold metal deposited some of it), but the seal wasn't robust enough and eventually shook itself apart.


I think what they were saying, especially given the phrasing "How many projects luckily succeeded after a reckless decision?", is that if things hadn't failed we would never have known. So how many other failures of procedure/ethics have we simply not seen, because the worst case failed to occur?


Good ol' survivorship bias...


Can't we apply the same logic to the current Starliner situation? There's no way it should have launched, but someone browbeat others into saying it was an acceptable risk to go ahead with the launch despite the known issues. Okay, so the launch was successful, but other issues that were known and suspect then caused problems after launch, to the point they are not positive it can return. So, should it have launched? Luckily, at least to this point, nobody has been hurt or killed, and the vehicle is somewhat still intact.


There are mitigations (of a sort) for the Starliner. It probably should not have launched, but now that it has, the flight crew is no longer in danger and can be brought down via Crew Dragon if necessary (as if Boeing needs any more embarrassment). If I were NASA, I'd take that option; though the actual danger to the astronauts coming down in the Starliner seems minimal, having SpaceX do the job just seems safer.

As it is, NASA is keeping the Starliner in orbit to learn as much as possible about what's going on with the helium leaks, which are in the service module, which won't be coming back to Earth for examination.


> at some point, one needs to say "yes" and take risks

Do they though? If the Challenger launch had been pushed back what major effects would there have been?

I do get your general point but in this specific example it seems the urgency to launch wasn’t particularly warranted.


> If the Challenger launch had been pushed back what major effects would there have been?

The point is it's not just the Challenger launch. It's every launch.


You need to establish which complaints can delay a launch. The parent comment is arguing that you need to set some kind of threshold on that. In practice, airplanes fly a little bit broken all the time. We have excellent data and theory and failsafes which allow that to be the case, but it's written in blood.


> If the Challenger launch had been pushed back what major effects would there have been?

An administrator would’ve missed a promotion.


I think it's not even a missed promotion but a perceived risk of one, which may or may not be accurate.


That is a very uncharitable thing to say unless you have proof.

What was the public sentiment of the Shuttle at the time? What was Congress's sentiment? Was there organizational fear in NASA that the program would be cancelled if launches were not timely?


> at some point, one needs to say "yes" and take risks

I'm wondering how the two astronauts on the ISS feel about that while Boeing decides if/when it is safe to return them to Earth.

https://www.cnn.com/2024/06/18/science/boeing-starliner-astr...


Presumably about the same as they did prior to their first launch. Space travel is not like commercial air travel. This is part of the deal.


Hard disagree. The idea that the machinery your life will depend on might be made with half-assed safety in mind is definitely not part of the deal.

Astronauts (and anyone intelligent who intentionally puts themselves in a life-threatening situation) have a more nuanced understanding of risk than can be represented by a single % risk of death number. "I'm going to space with the best technology humanity has to offer keeping me safe" is a very different risk proposition from "I'm going to space in a ship with known high-risk safety issues".


> the best technology humanity has to offer keeping me safe

Nobody can afford the best technology humanity has to offer. As one adds more 9's to the odds of success, the cost increases exponentially. There is no end to it.


True, but that's semantics at best. As the other post said, if something is better but humans can't afford it, then it's beyond what humanity has to offer. In the context of this conversation, there were mitigations which were very much within what could be afforded: wait for warmer temperatures, spend some money on testing instead of stock buybacks.


> but that's semantics at best

The problem is when people believe that other people should pay unbounded costs for their safety.


The incessant "won't someone think of the downtrodden rich and powerful" attitude is tiring.

There is not a systemic problem with people paying too much for safety in the US. In every case where a law doesn't apply, the funders are the ones with the final say in whether safety measures get funded, and as such all the incentives favor too little money spent on safety. The few cases where laws obligate employers to spend money on safety are laws written in blood, because employers prioritized profits over workers' lives.

In short, your concern is completely misplaced. I mean, can you point out a single example in history where a company went bankrupt because it spent too much money on keeping its workers safe? This isn't a problem that exists.


Lots of companies have gone bankrupt. In almost all of those cases, I don't know the reason.


Which is why I set the bar so low. One real world example. I'll be happy to provide, say, 50 examples of companies cutting safety costs resulting in people dying for every example of a company going bankrupt because they actually gave a shit about the safety of their workers.

If you don't know why companies are going bankrupt, then you don't know that they're going bankrupt due to safety spending. So that's basically admitting your opinion isn't based in any evidence, no?


Companies going bankrupt has nothing to do with my opinion. That's your thing. My opinion is that "the best humanity has to offer" is practically unachievable. I can show 50 examples of human output that are suboptimal. Can you show even one example that could not be improved? If not, assertions about the best humanity has to offer aren't based on evidence, are they?


Cool man, you win. I used an idiom and the literal meaning of it wasn't true. You caught me. Good job!

I cannot think of a more boring thing to debate. But I'm sure you'll be eager to tell me that in fact I can think of more boring things to debate, since it's so important to you that superlatives be backed up with hard evidence.


If nobody can afford it, then it's not on offer.


How about this. Humanity can only offer the best once. Because we will have spent the sum total of human output delivering the first one.


How about we make an effort to understand each others' intent instead of pedantically nitpicking each other's wording.


I'm in favor.

"The best humanity has to offer" seems like a slippery concept. If something goes wrong in retrospect, you can always find a reason that it wasn't the "best". How would you determine if a thing X is the best? How do you know the best is a very different thing from a "high risk" scenario?


That phrasing wasn't meant to be taken literally. It's an American expression.

"The best humanity has to offer" just means that people put in a good faith effort to obtain the best that they were capable of obtaining given the resources they had. It's a fuzzy concept because there aren't necessarily objective measures of good, but I think we can agree that, for example, Boeing isn't creating the best products humanity has to offer at the moment, because they have a recent history of obvious problems being ignored.

> How do you know the best is a very different thing from a "high risk" scenario?

Going to space is inherently a high risk scenario.

As for whether what you have is the best you can have: you hire subject experts and listen to them. In the case of Challenger, the subject experts said that the launch should be delayed for warmer temperatures--the best humanity had to offer in that case was delaying the launch for warmer temperatures.


> Hard disagree. The idea that the machinery your life will depend on might be made with half-assed safety in mind is definitely not part of the deal.

It's definitely built in. The Apollo LM's skin was 0.15 mm thick aluminum, meaning almost any tiny object could've killed them.

The Space Shuttle flew with SRBs that were solid-fueled and unstoppable once lit.

Columbia had 2 ejection seats, which were eventually taken out and not installed on any other shuttle.

Huge risk is inherently the deal with space travel, at least from its inception until now.


Without links to more information on these engineering decisions, I don't think I'm qualified to evaluate whether these are serious risks, and I don't believe you are either. I tend to listen to engineers.


Destin (of Smarter Every Day YouTube channel fame) has concerns about the next NASA mission to the moon (named Artemis): https://youtu.be/OoJsPvmFixU

Read the comments (especially from NASA engineers). It's pretty interesting that sometimes it takes courageous engineers to break the spell that poor managers can have on an organization.


I've always thought the same, that something like space travel is inherently incredibly dangerous. I mean, surely someone during the Apollo program spoke out about something; landing on the moon with an untested engine as the only way back, for instance.

Nixon even had an 'if they died' speech prepared, so someone had to put the odds of success at less than 100%.


I think the deal was there was already a pretty high threshold for risk. I don't know the percentage exactly, but the problem was the o-ring issue put it over the threshold, which should have triggered a no-go.

For example, you could say "we'll tolerate a 30% chance of loss of life on this launch" but then an engineer comes up and says "an issue we found puts the risk of loss of life at 65%". That crosses the limit and procedure means no launch. What should not happen is "well, we're going anyway" which is what happened with Challenger.
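
As a toy sketch of that procedure (all names and numbers are hypothetical, not from any NASA document), the essential property is that the threshold is fixed before the assessment comes in, so "we're going anyway" is not a possible output:

    /* Hypothetical go/no-go check: the limit is set in advance. */
    #include <stdio.h>

    int go_no_go(double assessed_risk, double risk_limit) {
        return assessed_risk <= risk_limit; /* 1 = go, 0 = no-go */
    }

    int main(void) {
        double risk_limit = 0.30;    /* tolerated chance of loss of life */
        double assessed_risk = 0.65; /* revised upward after a new finding */
        puts(go_no_go(assessed_risk, risk_limit) ? "GO" : "NO-GO");
        return 0;
    }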


Neil Armstrong figured that he only had a 50% chance of making it back from the moon alive.


What would be interesting to know is how many people tried to put the brakes on all the successful missions.


It’s a shame.

We don’t see software engineers behave ethically in the same way.

Software is filled with so much risk taking, and there's little if any public pushback where engineers are saying the software we've created is harmful.

Here’s a few examples:

- Dark patterns in retail

- Cybersecurity flaws in sensitive software (ie. Microsoft)

- Social media and mental health

- Social media and child exploitation / sex trafficking

- Social media and political murder (ie. Riots, assassinations)

This stuff is happening and it’s just shrugs all-around in the tech industry.

I have a ton of respect for those whistleblowers in AI who seem to be the small exception to this rule.


>Saying "no" is easy and safe, but at some point, one needs to say "yes" and take risks, otherwise nothing would be done.

True, but that is for cases where you take the risk yourself. If the Challenger crew had known the risk and decided "fuck it, it's worth it", that would have been different from a bureaucrat chasing a promotion.


Especially when that bureaucrat probably suffered no consequences for making the wrong call. Essentially letting other people take all of the risk while accepting none. No demotion, no firing, and even if they did get fired, they probably got some kind of comfy pension or whatever.

It's a joke


I doubt that in a bureaucracy as big and political as NASA saying "no" is ever easy or safe. In an alternate timeline (one where the Challenger launch succeeded) it would have been interesting to track McDonald's career after refusing to sign.


That's the thing I always wonder about these things.

It's fun and easy to provide visibility into whoever called out an issue early when it does go on to cause a big failure. It gives a nice smug feeling to whoever called it out internally, the reporters who report it, and the readers in the general public who read the resulting story.

The actual important thing that we hardly ever get much visibility into is - how many potential failures were called out by how many people how many times. How many of those things went on to cause a big, or even small, failure, and how many were nothingburgers in the end. Without that, it's hard to say whether leaders were appropriately downplaying "chicken little" warnings to satisfy a market or political need, and got caught by one actually being a big deal, or whether they really did recklessly ignore a called-out legitimate risk. It's easy to say you should take everything seriously and over-analyze everything, but at some point you have to make a move, or you lose. You don't get nearly as much second-guessing when you spend too much time analyzing phantom risks and end up losing to your competitors.


> The actual important thing that we hardly ever get much visibility into is - how many potential failures were called out by how many people how many times.

I'm not sure that's important at all. Every issue raised needs to be evaluated independently. If there is strong evidence that a critical part of a space shuttle is going to fail there should be zero discussion about how many times in the past other people thought other things might go wrong when in the end nothing did. What matters is the likelihood that this current thing will cause a disaster this time based on the current evidence, not on historical statistics

The point where you "have to make a move" should only come after you can be reasonably sure that you aren't needlessly sending people to their deaths.


Often. I've personally been that engineer, been ignored, and if not for simple dumb luck a death would have happened.

Phillips, Boeing, ...


Allan McDonald is a new name for me. Thanks for posting this. See also other engineers who objected to the launch, like Bob Ebeling [0], who suffered with overwhelming guilt nearly until his death in 2016, and Roger Boisjoly [1], who never worked again as an engineer after Challenger.

[0] https://archive.ph/kGMYG

[1] https://en.m.wikipedia.org/wiki/Roger_Boisjoly


Boisjoly was McDonald's peer at Thiokol. Ebeling (I think) was either his direct manager or his division director.

Boisjoly quit Thiokol after the booster incident. McDonald stayed, and was harassed terribly by management. He took Thiokol to court at least once (possibly twice) on wrongful discrimination / termination / whistleblower claims, and won.


I hadn't heard of McDonald either, but there's a recent book (https://www.amazon.com/Challenger-Story-Heroism-Disaster-Spa...) that covers his contribution well.

(TBH I'm reading this book right now - probably 2/3 the way through or so - and it's kind of weird to see something like this randomly pop up on HN today.)


I just listened to the audiobook on Spotify (free for premium members), and I'm wondering if that's why I'm seeing so much about the Challenger disaster lately. Well worth a listen; it spends a great deal of time on setup for these key individuals who tried so hard to avert this disaster.


Boeing's Starliner problems? This article was probably brought on by the (then) recent passing of Allan McDonald


This is an ever recurring theme in the human condition.

McDonald’s loyalty was not beholden to his bosses, or what society or the country wanted at that moment in time. He knew a certain truth, based on facts he was aware of, and stuck by them.

This is so refreshing in today's world, where almost everyone seems to be a slave to some kind of groupthink, at least in public.


We all celebrate a hero who stands for what they believe or know to be right. When they stand alone we admire their steadfastness while triumphant music plays in the background.

In real life we can't stand these people. They are always being difficult. They make mountains out of every molehill. They can never be reasonable even when everyone else on the team disagrees with them.

Please take a moment to reflect on how you treat inconvenient people in real life.


In corporate world, everything must be tame and beige. Conflict or differences of opinion are avoided to focus on the areas where everyone agrees. It’s exhausting sometimes to try and change methodologies. Introducing new technology can cause so much headache that many passive leaders just shun it in favor of keeping the peace.


If my org is any measure of the truth, passive leadership isn’t a thing - despite the prevalence of passive leaders.


There’s a good lecture about this, called “The Normalization of Deviance”:

https://m.youtube.com/watch?v=Ljzj9Msli5o&pp=ygUZbm9ybWFsaXp...


This is exactly why you don't want to let whatever dashboards/alerts/etc. you maintain on your systems have a "normal" amount of reds/fails/spurious texts.

At some point you become immune.

It's a lot harder to notice there are 4 red lights today instead of the usual 2-3 than to notice 1 when there are normally exactly 0.
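
A minimal sketch of that zero-baseline idea (names and thresholds are hypothetical, not from any particular monitoring tool):

    /* If the tolerated baseline is exactly 0, any red is a signal;
       a "usual 2-3" baseline makes a 4th red easy to miss. */
    #include <stdio.h>

    int main(void) {
        int red_lights = 4;     /* failing checks on today's dashboard */
        int tolerated_reds = 0; /* keep this pinned at zero */

        if (red_lights > tolerated_reds)
            printf("ALERT: %d failing checks need action\n", red_lights);
        return 0;
    }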


Yes. The causative issue is the way in which projects are managed. Employees have no ownership of the project. If employees had ownership over which changes they think are best, a good employee would act on bringing the alerts back to zero before they take on new features or a new project. There are some obstacles:

1. Employees not having a say in which issues to work on. This pretty much leads to the death of a project in the medium term due to near-total disregard of maintenance issues and alerts.

2. Big-team ownership of a project. When everyone is in charge, no one is. This is why I advocate for a team size of exactly two for each corporate project.

3. Employees being unreasonably pressured for time. Perhaps the right framing for employees to think about it is: "If it were their own business or product, how would they do it?" This framing, combined with the backlog, should automatically help avoid spending more time than is necessary on an issue.


Not making an ethical/moral judgement here, just a practical one: is there any reason to believe that giving employees ownership of the projects will be any better than having "management" own it, if all factors were truly considered?

If every decision an employee made on features/issues/quality/time was accompanied by how much their pay was affected, would the outcomes really be better?

The team could decide to fix all bugs before taking on a new feature, or decide that the 2-month allotment for a feature should really be three months to do it "right" without having to work nights/weekends. But would the team really decide to do that if their paycheck was reduced by 10%, or delayed for that extra month until those new features were delivered?

If all factors were included in the employee decision process, including the real world effect of revenue/profit on individual compensation from those decisions, it is not clear to me that employees would make any "better" decisions.

I would think that employees could be even more "short sighted" than senior management, as senior management likely has more at stake in terms of company reputation/equity/career than an employee who can change jobs more easily, and an employee might choose not to "get those alerts to zero" if it meant they would have more immediate cash in their pocket.

And how would disagreements between team members be worked out if some were willing to forgo compensation to "do it right", and others wanted to cut even more corners?

Truly having ownership means you also have financial risk.


> is there any reason to believe that giving employees ownership of the projects will be any better than having "management" own it

Non-technical management's skill level is almost always overrated. They're almost never qualified for it. Ultimately it still is management's decision, and always will be. If, however, management believes that employees are incapable of serving users, then it's management's fault for assigning mismatched employees.

> how much their pay was affected

Bringing pay into this discussion is a nonsensical distraction. If an employer misses two consecutive paychecks by even 1%, that's enough reason to stop showing up for work, and potentially to sue for severance+damages, and also claim unemployment wages. There is no room for any variation here.

> Truly having ownership

It should be obvious that ownership here refers to the ownership of the technical direction, not literal ownership in the way I own a backpack that I bring to work. If true financial ownership existed, the employee would be receiving substantial equity with a real tradable market value, with the risk of losing some of this equity if they were to lose their job.

> how would disagreements between team members be worked out

As noted, there would be just two employees per project, and this ought to minimize disagreements. If disagreements still exist, this is where management can assist with direction. There should always remain room for conducting diverse experiments without having to worry about which outcomes get discarded and which get used.

---

In summary, if the suggested approach is not working, it's probably because there is significant unavoidable technical debt or the employees are mismatched to the task.


> Not making an ethical/moral judgement here, just a practical one - is there any reason to believe that giving employees ownership of the projects will be any better than having "management" own it if all factors were truly considered ?

It's not either-or; the ownership is shared. As responsibility goes, the buck ultimately stops with management, but when the people in the trenches can make more of their own decisions, they'll take more pride in their work and invest accordingly in quality. Of course some managers become entirely superfluous when a team self-manages to this extent, and will fight tooth and nail to defend their fiefdom. Can't blame them, it's perfectly rational to try to keep one's job.

As for tying the quality to pay in such an immediate way, I guess it depends on who's measuring what and why. Something about metrics becoming meaningless when made into a target; I believe that's Goodhart's Law. I have big doubts as to whether it could work effectively in any large corpo shop; they're just not built for bottom-up organization.



Been all of an engineer, a manager, and a founder/CEO, and I enjoy analyzing organizational dysfunction.

The difference between an engineer and a manager's perspective usually comes down to their job description. An engineer is hired to get the engineering right; the reason the company pays them is for their ability to marry reality to organizational goals. The reason the company hires a manager is to set those organizational goals and ensure that everybody is marching toward them. This split is explicit for a reason: it ensures that when disagreements arise, they are explicitly negotiated. Most people are bad at making complex tradeoffs, and when they have to do so, their execution velocity suffers. Indeed, the job description for someone who is hired to make complex tradeoffs is called "executive", and they purposefully have to do no real work so that their decision-making functions only in terms of cost estimates that management bubbles up, not the personal pain that will result from those decisions.

Dysfunction arises from a few major sources:

1. There's a power imbalance between management and engineering. An engineer usually only has one project; if it fails, it often means their job, even if reality dictates that the project should fail. That gives them a strong incentive to send good news up the chain even if the project is going to fail. Good management gets around this by never penalizing bad news or good-faith project failure, but good management is actually really counterintuitive, because your natural reaction is to respond to negative news with negative emotions.

2. Information is lost with every explicit communication up the chain. The information an engineer provides to management is a summary of the actual state of reality; if they passed along everything, it'd require that management become an engineer. Likewise recursively along the management chain. It's not always possible to predict which information is critical to an executive's decision, and so sometimes this gets lost as the management chain plays telephone.

3. Executives and policy-makers, by definition, are the least reality-informed people in the system, but they have the final say on all the decisions. They naturally tend to overweight the things that they are informed on, like "Will we lose the contract?" or "Will we miss earnings this quarter?"

All that said, the fact that most companies have a corporate hierarchy and they largely outcompete employee-owned or founder-owned cooperatives in the marketplace tends to suggest that even with the pitfalls, this is a more efficient system. The velocity penalty from having to both make the complex decisions and execute on them outweighs all the information loss. I experienced this with my startup: the failure mode was that I'd emotionally second-guess my executive decisions, which meant that I executed slowly on them, which meant that I didn't get enough iterations or enough feedback from the market to find product/market fit. This is also why startups that do succeed tend to be ones where the idea is obvious (to the founder at least, but not necessarily to the general public). They don't need to spend much time on complex positioning decisions, and can spend that time executing, and then eventually grow the company within the niche they know well.


> All that said, the fact that most companies have a corporate hierarchy and they largely outcompete employee-owned or founder-owned cooperatives in the marketplace tends to suggest that even with the pitfalls, this is a more efficient system.

This conclusion seems nonsensical. The assumption that what's popular in the market is popular because it's effective has only limited basis in reality. Hierarchical structures appear because power is naturally consolidating and most people have an extreme unwillingness to release power, even when presented with evidence that doing so would improve their quality of life. It is true that employee-owned companies are less effective at extracting wealth from the economy, but in my experience working for both traditional and employee-owned companies, the reason is that employees care more deeply about the cause. They tend to be much more efficient at providing value to the customer and paying employees better. The only people who lose out are the executives themselves, which is why employee-owned companies only exist when run by leaders with a passion for creating value over collecting money. And that's just a rare breed.


You've touched on the reason why hierarchical corporations outcompete employee-owned-cooperatives:

> Hierarchical structures appear because power is naturally consolidating and most people have an extreme unwillingness to release power even when presented with evidence that it would improve their quality of life.

Yes, and that is a fact of human nature. Moreover, many people are happy to work in a power structure if it means that they get more money to have more power over their own life than they otherwise would. The employees are all consenting actors here too: they have the option of quitting and going to an employee-owned cooperative, but most do not, because they make a lot more money in the corporate giant. (If they did all go to the employee-owned cooperative, it would drive down wages even further, since there is a finite amount of dollars coming into their market but that would be split across more employees.)

Remember the yardstick here. Capitalism optimizes for quantity of dollars transacted. The only quality that counts is the baseline quality needed to make the transaction happen. It's probably true that people who care about the cause deliver better service - but most customers don't care enough about the service or the cause for this to translate into more dollars.

As an employee and customer, you're also free to set your own value system. And most people are happier in work that is mission- & values-aligned; my wife has certainly made that tradeoff, and at various times in my life, I have too. But there's a financial penalty for it, because lots of people want to work in places that are mission-aligned but there's only a limited amount of dollars flowing into that work, so competition for those positions drives down wages.


> most customers don't care enough about the service or the cause for this to translate into more dollars.

This is an important point as it reinforces the hierarchical structure. In an economy composed of these hierarchies, a customer is often themselves buying in service of another hierarchy and will not themselves be the end user. This reduces the demand for mission-focused work in the economy, instead reinforcing the predominance of profit-focused hierarchies.


There is a Chinese saying: you can conquer a kingdom on horseback, but you cannot rule it on horseback. What that means is, yes, entrepreneurial velocity and time to market predominate in startups. But if they don't implement governance and due process, they will eventually lose what market share they gained. Left uncontrolled, internal factions and self-serving behavior destroy all organisations from within.


This is a wonderful summary, very informative. Thank you. Is there a book or other source you'd recommend on the subject of organizational roles and/or dysfunction? Ideally one written with similar clarity.

One thing stood out to me:

You note that executives are the least reality-informed and are insulated from having their decisions affect personal pain. While somewhat obvious, it also seems counterintuitive in light of the usual pay structure of these hierarchies and the usual rationale for that structure. That is, they are nearly always the highest paid actors and usually have the most to gain from company success; the reasoning often being that the pay compensates for the stress of, criticality of, or experience required for their roles. Judgments aside and ignoring the role of power (which is not at all insignificant, as already mentioned by a sibling commenter), how would you account for this?


Most of these organizational theories I've developed myself from observing how actual corporate hierarchies function and trying to put myself (and sometimes actually doing it!) in each of the different roles and think about how I would act with those incentives. I did have a good grounding in Drucker and other business books early in my career, and two blog series that have influenced my thinking are a16z's "Ones and Twos" [1] and Ribbonfarm's "Gervais principle" [2].

For executive pay, the most crucial factor is the desire to align interests between shareholders and top executive management. The whole point of having someone else manage your company is so that you don't have to think about it; this only works when the CEO, on their own initiative, will take actions that benefit you. The natural inclination of most people (and certainly most people with enough EQ to lead others) is to be loyal to the people you work with; these are the folks you see day in and day out, and your power base besides. So boards need to pay enough to make the CEO loyal to their stock package rather than the people they work with, so that when it comes time to make tough decisions like layoffs or reorgs or exec departures, they prioritize the shareholders over the people they work with.

This is also why exec packages are weighted so heavily toward stock. Most CEOs don't actually make a huge salary; median cash compensation for a CEO is about $250K [3], less than a line manager at a FANG. Median total comp is $2M (and it goes up rapidly for bigger companies), so CEOs make ~90%+ of their comp in stock, again to align incentives with shareholders.

And it's why exec searches are so difficult, and why not just anyone can fill the role (which again serves to keep compensation high). The board is looking for someone whose natural personality, values, and worldview exemplifies what the company needs right now, so that they just naturally do what the board (and shareholders) want. After all, the whole point is that the board does not want to manage the CEO; that is why you have a CEO.

There are some secondary considerations as well, like:

1.) It's good for executives to be financially independent, because you don't want fear of being unable to put food on the table to cloud their judgment. Same reason that founder cash-outs exist. If the right move for a CEO is to eliminate their position and put themselves out of a job, they should do it - but they usually control information flow to the board, so it's not always clear that a board will be able to fire them if that's the case. This is not as important for a line worker since if the right move is to eliminate their position and put themselves out of a job, there's an executive somewhere to lay them off.

2.) There's often a risk-compensation premium in an exec's demands, because you get thrown out of a job oftentimes because of things entirely beyond your control, and it can take a long time to find an equivalent exec position (very few execs get hired, after all), and if you're in a big company your reputation might be shot after a few quarters of poor business performance. Same reason why execs are often offered garden leave to find their next position after being removed from their exec role (among others like preventing theft of trade secrets and avoiding public spats between parties). So if you're smart and aren't already financially independent, you'll negotiate a package to make yourself financially independent once your stocks vest.

3.) Execs very often get their demands met, because of the earlier point about exec searches being very difficult and boards looking for the unicorn who naturally does what the organization needs. Once you find a suitable candidate, you don't want to fail to get them because you didn't offer enough, so boards tend to err on the side of paying too much rather than too little.

Another thing to note is that execs may seem overpaid relative to labor, but they are not overpaid relative to owners. A top-notch hired CEO like Andy Grove got about 1-1.5% of Intel as his compensation; meanwhile, Bob Noyce and Gordon Moore got double-digit percentages, for doing a lot less work. Sundar Pichai gets $226M/year, but relative to Alphabet's market cap, this is only 0.01%. Meanwhile, Larry Page and Sergey Brin each own about 10%. PG&E's CEO makes about $17M/year, but this is only 0.03% of the company's market cap.

There's a whole other essay to write about why owners might prefer to pay a CEO more to cut worker's wages vs. just pay the workers more, but it can basically be summed up as "there's one CEO and tens of thousands of workers, so any money you pay the CEO is dwarfed by any delta in compensation changes to the average worker. Get the CEO to cut wages and he will have saved many multiples his comp package."

[1] https://a16z.com/ones-and-twos/

[2] https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-...

[3] https://chiefexecutive.net/wp-content/uploads/2014/08/CEO_Co...


Excellent. Thank you for the thoughtful response


What I see is a movement where line employees have a say on who is retained at the director and VP level.

The CEO reports to the board. But his immediate and second-tier reports are also judged by the employees. The thought is that this will give them pause before they embark on their next my-way-or-the-highway decision. The most egregious directors, who push out line employees in favor of their cronies, will be fired under this evaluation.


> If employees had ownership over which changes they think are best, a good employee would act on bringing the alerts back to zero before they take on new features or a new project.

You say this, but as someone who's run a large platform organization, that hasn't been my experience. Sure, some employees, maybe you, care about things like bringing alerts back to zero, but a large number are indifferent and a small number are outright dismissive.

This is informed not just by individual personality but also by culture.

Not too long ago I pointed out a bug in code I was reviewing, and instead of fixing it the author said, "Oh okay, I'll look out for bugs like that when I write code in the future", then proceeded to merge and deploy their unchanged code. And in that case I'm their manager, not a peer or someone from another team; they have all the incentive in the world to stop and fix the problem. It was purely a cultural thing where, in their mind, their code worked 'good enough', so why not deploy it and just take the feedback as something that could be done better next time.


With regard to alerts, I have written software that daytrades stocks, making a lot of trades over a lot of stocks. Let me assure you that not a single alert goes ignored, and if someone said it's okay to ignore said alerts, or to have persistent alerts that require no action, they would be losing money because in time, they will inevitably ignore a critical error. I stand by my claim that it's what sets apart good employees from those that don't care if the business lives or dies. I think a role of management is to ensure that employees understand the potential consequences to the business of the code being wrong.


Yes, there was a recent story about (yet another) Citi "fat finger" trade. The headlines mentioned things like "the trader ignored 700 error messages to put in the trade", but listening to a podcast about it, it's more that awful systems that are always half broken are what ultimately led to it.

The real punchline was this: the trader confused a field for entering share quantity with one for notional quantity, but due to some European markets being closed, the system had weird fallback logic that set the value of a share to $1, so the confirmation back to the trader showed... the correct number of dollars he expected.

So awful system designs lead to useless and numerous alerts, false confirmations, and ultimately huge errors.
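
A toy sketch of that failure mode (field names and numbers are hypothetical; this illustrates the described flaw, not Citi's actual system):

    /* The trader means "sell $444 worth" but types 444 into the
       shares field. With the market closed, the price falls back
       to $1, so notional = 444 * 1 and the confirmation echoes
       exactly the dollar figure the trader expected to see. */
    #include <stdio.h>

    int main(void) {
        double typed_quantity = 444.0; /* meant as dollars, read as shares */
        double market_price = 0.0;     /* unavailable: market closed */
        double price = (market_price > 0.0) ? market_price : 1.0; /* quiet fallback */

        printf("Confirm order notional: $%.2f\n", typed_quantity * price);
        return 0;
    }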


> If employees had ownership over which changes they think are best, a good employee would act on bringing the alerts back to zero before they take on new features or a new project

That requires that you have good employees, which can be as rare as good management.


And groupthink


The more pernicious form of this, in my experience, is ignored compiler/linter/test warnings. Many codebases have a tremendous number of these warnings, devs learn to ignore them, and this important signal of code quality is effectively lost.


It's almost always worth spending the time to either fix all warnings or, after determining one is a false positive, suppress it with a #pragma.

Once things are relatively clean, it's easy to see if new code/changes trip a warning. Often unexpected warnings are a sign of subtle bugs, or at least of reliance on undefined behavior. Sorting those out when they come up is a heck of a lot easier than tracing a bug report back to the same warning.
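
For instance (a minimal sketch assuming GCC/Clang; the specific warning shown is just an example), a vetted false positive can be silenced locally instead of training everyone to ignore the build output:

    /* Compile with: gcc -Wall -Wextra -Werror file.c
       A warning-clean build means any new warning stands out. */
    #include <string.h>

    /* Suppose review confirmed the truncation below is intended;
       after vetting, suppress the diagnostic for just this function
       rather than disabling the warning project-wide. */
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wstringop-truncation"
    void copy_id(char *dst, size_t dst_len, const char *src) {
        strncpy(dst, src, dst_len - 1); /* may truncate: that's the point */
        dst[dst_len - 1] = '\0';        /* always NUL-terminate */
    }
    #pragma GCC diagnostic pop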


I like to program with -Wall.

Doesn't win me fans, but I sleep well.


Could you please expand on what that is?



It's a C/C++ compiler flag (GCC/Clang) that turns on a broad set of common warnings (despite the name, not literally all of them).

Since I do Swift, these days, in Xcode, I use project settings, instead.

I also like to treat warnings as errors.

Forces me to be circumspect.



It's a CLI flag to the compiler that enables a wide set of warnings.


In both Challenger and Columbia disasters, people noticed there might be a problem, tried to escalate it to get it fixed and failed to stop the launch, leading to disasters.

Do we know how many times people noticed a problem, it launched anyway and everything was fine?


"What we should remember about Al McDonald [is] he would often stress his laws of the seven R's," Maier says. "It was always, always do the right thing for the right reason at the right time with the right people. [And] you will have no regrets for the rest of your life."


That is the key line from the whole piece.


Even following all that could have led to Challenger exploding (a stochastic process with non-zero probability of terminal failure), leaving everyone asking "What did we do wrong?" without any answer, and full of regrets for the rest of their lives.


"Truth, Lies, and O-Rings" is a fascinating (if sometimes tedious) book that should be at the top of any reading list for those interested in the Challenger disaster.

For me, one of the more interesting sidebar discussions is the one around the decision to test the boosters horizontally despite that not being an operational configuration. This resulted in flexing of the joints that was not at all similar to the flight configuration and hindered identification of the weaknesses of the original "field joint" design.


Interestingly, we're still testing SLS SRBs[1] horizontally.

https://www.youtube.com/watch?v=n-wqAbVqZyg

---

1. In case anyone doesn't know, they use the actual recovered Shuttle casings on SLS, but use an extra "middle" section to make it 5 sections in length instead of the Shuttle's 4 sections. In the future they'll move to "BOLE" boosters which won't use previously flown Shuttle parts.


I think the booster was redesigned after the accident. I guess/hope the opportunity was seized to make a design that would be less sensitive to orientation.


> I think the booster was redesigned after the accident

That is correct. I believe they added:

* An extra seal

* A "J-Leg" carved into the insulation[1] that acts as a sort of pre-seal

> I guess/hope the opportunity was seized to make a design that would be less sensitive to orientation.

I guess, we'll see how things shake out.

---

1. https://www.nasaspaceflight.com/2020/12/artemis-1-schedule-u...


Are you saying that they are tested horizontally or that they are ONLY tested horizontally? (Very different things.)


> Are you saying that they are tested horizontally or that they are ONLY tested horizontally?

My understanding is that they are only hot fired horizontally.

Presumably there are many tests done at the component level, although it's questionable whether it makes sense to call those tests horizontal or vertical at that point.


It's also worth noting how the o-ring story was made public. There is the famous testimony by Richard Feynman[0], but the secret was that astronaut/commissioner Sally Ride leaked the story to another commissioner, who then suggested it to Feynman over dinner[1].

Neither Ride nor Kutyna could risk exposing the information themselves, but no one could question or impeach Feynman.

[0] https://www.youtube.com/watch?v=raMmRKGkGD4

[1] https://lithub.com/how-legendary-physicist-richard-feynman-h...


That's interesting. I didn't know that bit of the story.

It reminds me a bit of Jeffrey Sachs, who chaired the Lancet COVID inquiry, saying he was told the furin cleavage site insertion experiments had already been done before a grant application was put in to do them. Also presumably based on some source who didn't want to be exposed.


It's a shame we don't have more engineers today who refuse to invent things, given that so many technological inventions are being used to further the destruction of our planet through consumerism.

Sadly, human society has a blind spot when it comes to inventions with short-term benefits but long-term detriments.

I would love to see more programmers refusing to work on AI.


> I would love to see more programmers refusing to work on AI.

Refusing to work on something is not newsworthy. I refuse to work on (or use) AI, ads and defence projects, and I'm far from being the only one.

Though let he who is free of sin throw the first stone: I would be standing on a high horse, having worked in the gambling sector myself, of which I am now ashamed. So I prefer to focus on the projects themselves rather than on the people and what they choose to do for a living.


> Refusing to work on something is not newsworthy.

One person, no. A hundred, who knows. Ten thousand programmers united together not to work on something? Now we're getting somewhere. A hundred thousand? Newsworthy.


I would bet there are a hundred thousand people refusing to work in war, AI, ads, gambling, crypto, etc. I certainly am. But all it means is that pay goes up and quality of engineering goes down a little in those sectors, but not much more.


The issue is quantifying this sentiment. How would you even identify programmers who are doing this? Yet another reason why software engineers really ought to organize their labor like a lot of other disciplines of engineering did decades ago. Collective action like this would be more easily mustered, advertised, and used to influence outcomes if labor were merely organized and informed of itself.


You can do public pledges, e.g.: https://neveragain.tech


I also refuse to work on the war machine, blockchain, or gambling.

Unfortunately it looks like that might also be refusing to eat right now. We'll see how much longer my principles can hold out. Being gaslit into an unjustified termination has me in a cynical kind of mood anyway. Doing a little damage might be cathartic.


I’ve been gaslit, I ended up walking away from my company. It was extremely painful.

> Doing a little damage might be cathartic.

Please avoid the regret. Do something kind instead. Take the high road. Take care of yourself.


Kindness doesn't have any dev openings.


Of course. But at least try to minimise the damage. Don’t do anything you’ll regret.


Regret right now would be letting the stress of unemployment rip my family apart. I've got maybe a handful of door-slamming "what the fuck did you do all day then?" rants that I can tolerate before I'm ready to sign on with Blockchain LLM O-Ring Validation as a Service LLC: We Always Return True!™ if it'll pay the bills and get my wife to stop freaking out.


And this is how all unjust systems sustain themselves. You WILL participate in the injustice, or be punished SEVERELY. Why do the people doing the punishing want to punish you? Because they WILL participate in punishing, or be punished SEVERELY.

People have wondered how so many people ever participated in any historical atrocity. This same mechanism is used for all of them.


Yep. Hail Moloch, I guess. He shows up, which is more than we can say for other deities.


It probably doesn't help right now, but you should know you are not the only one in your situation. Perhaps it might help to write down your actual principles. Then compare that list with the real reasons you refuse some employment opportunities.

I think you have already listed one big reason that isn't a high-minded principle. You want to make money. There may be others.

It's always wonderful when you can make a lot of money doing things you love to do. It stinks when you have to choose between what you are exceptionally good at doing and what your principles allow.

If only somebody could figure out how the talents of all the people in your situation could be used to restore housing affordability. Would you take a 70% paycut and move to Nebraska if it allowed you to keep all your other principles?

As you say, kindness isn't hiring. I'd love to see an HN discussion of all the good causes that need founders. It would be wonderful to have some well known efforts where the underemployed could devote some energy while they licked their wounds. It might even be useful to have "Goodworks Volunteer" fill that gap in employment history on your resume.

How do we get a monthly "What good causes need volunteers?" post on HN?


> It probably doesn't help right now, but you should know you are not the only one in your situation.

You're right, it doesn't. It feels more like an attempt to minimize. The rest was you spitballing some unrelated idea.


Avoiding the use of AI is just going to get you lapped.

There’s no benefit to your ideological goals in kneecapping yourself.

There’s nothing morally wrong with using or building AI, or gambling.


There's a lot baked into that thought, but I wanted to extract this part:

> There’s nothing morally wrong with ... building... gambling.

Say you're building a gambling system and building that system well. What does that mean? More people use it? Those people access it more? Access it faster? Gamble more? Gamble faster?

It creates and feeds addiction.


I agree with you. It's also worth noting that this isn't unique to anything discussed here. EVERYONE has their line in the sand on a huge array of issues, and that line falls differently for a lot of people.

Environment, religion, war, medicine; everything has a personal line associated with it.


Lots of things create and feed addictions, including baking cookies.

Let’s not confuse the issue. Just because you find something distasteful doesn’t mean it’s bad or morally problematic.


I've never seen a homeless person in Atlantic City put his fist through an oven window because the cookies didn't come out right.


I've seen plenty of simple-carb-addicted people die of obesity. It's a slow and painful death.

We let adults make their own choices.


1) I question how much choice an addict has.

2) If you were devising more efficient sugar delivery systems for those acquaintances as a means to take every last cent they had, knowing they'd be unable to resist, you're complicit in robbing and killing them.


The benefit is a clear conscience.


In what context? Code generation? Art exploration?


Wake me up when AI is able to compete with a software engineer with almost two decades in the field.

Hint: most of my consulting rate is not about writing fizzbuzz. Some clients pay me without even having to write a single line of code.


I am curious why you avoid ads - personally I view them as a tremendous good for the world, helping people improve their lives by introducing them to products or even just ideas they didn't know existed.


I tend to view ads as the perfect opposite of what you mentioned; it’s an enormous waste of money and resources on a global scale that provides no tangible benefit for anyone that isn’t easily and cheaply replaced by vastly superior options.

If people valued ad viewing (e.g. for product decisions), we’d have popular websites dedicated to ad viewing. What we have instead is an industry dedicated to the idea of forcefully displaying ads to users in the least convenient places possible, and we still all go to reddit to decide what to buy.


> If people valued ad viewing (e.g. for product decisions), we’d have popular websites dedicated to ad viewing.

There was a site dedicated to ad viewing once (adcritic.com maybe?) and it was great! People just viewed, voted, and commented on ads. Even though it was about the entertainment/artistic value of advertising and not about making product decisions.

Although the situation is likely to change somewhat in the near future, advertising has been one of the few ways that many artists have been able to make a comfortable living. Lying to and manipulating people in order to take more of their money or influence their opinions isn't exactly honorable work, but it has resulted in a lot of art that would not have happened otherwise.

Sadly the website was plagued by legal complaints from extremely shortsighted companies who should have been delighted to see their ads reach more people, and it eventually was forced to shut down after it got too expensive to run (streaming video in those days was rare, low quality, and costly), although I have to wonder how much of that came from poor choices (like paying for insanely expensive Super Bowl ads). The website was bought up and came back requiring a subscription, at which point I stopped paying any attention to it.


We do have such sites though, like Tom's Hardware or Consumer Reports or Wirecutter or what have you. Consumers pay money for these ads to reduce the conflict of interest, but companies still need to get their products chosen for these review pipelines.


Tom's Hardware and Consumer Reports aren't really about ads (or at least that's not what made them popular). They were about trying to determine the truth about products and see past the lies told about them by advertising.


Strictly speaking, isn't advertising any action that calls attention to a particular product over another? It doesn't have to be directly funded by a manufacturer or a distributor.

I'd consider word-of-mouth a type of advertising as well.


To me, advertising isn't just calling attention to something; it's doing so with the intent to sell something or to manipulate.

When it's totally organic, the person doing the promotion doesn't stand to gain anything. It's less about trying to get you to buy something and usually just people sharing what they enjoy/has worked for them, or what they think you'd enjoy/would work for you. It's the intent behind the promotion, and who is intended to benefit from it, that makes the difference between friendly/helpful promotion and adversarial/harmful promotion.

Word of mouth can be a form of advertising that is directly funded by a manufacturer or a distributor too, though. Social media influencers are one example, but companies will also pay people to pretend to casually/organically talk up their products/services to strangers at bars/nightclubs, conferences, events, etc., just to take advantage of the increased level of trust we put in word-of-mouth promotion, exactly because of the assumption that the intent is to be helpful rather than to sell.


To me, ads are primarily a way to extract more value from ad-viewers by stochastically manipulating their behavior.

There is a lot of evidence in favor of this view. Consider:

- Ads are typically NOT consumed enthusiastically or even sought out (which would be the case if they were strongly mutually beneficial). There are such cases, but they are a very small minority.

- If product introduction were the primary purpose, then repeatedly bombarding people with well-known brands would not make sense. But that is exactly what is being done (and paid for!) the most. Coca-Cola does not pay for you to learn that they produce soft drinks. They pay for ads to shift your spending/consumption habits.

- Ads are an inherently flawed and biased way to learn about products, because there is no incentive whatsoever to inform you of flaws, or even to represent price/quality tradeoffs honestly.


Back when I was a professor I would give a lecture on ethical design near the end of the intro course. In my experience, most people who think critically about ethics eventually arrive at their own personal ethics which are rarely uniform.

For example, many years ago I worked on military AI for my country. I eventually decided I couldn't square that with my ethics and left. But I consider advertising to be (often non-consensual) mind control designed to keep consumers in a state of perpetual desire and I'd sooner go back to building military AI than work for an advertising company, no matter how many brilliant engineers work there.


Products (and particularly ideas) can be explored in a pull pattern too. Pushing things—physical items, concepts of identity, or political ideology—in the fashion endemic to the ad industry is a pretty surefire way to end up with an extremely bland society, or one that segments increasingly depending on targeting profile.


I also believe advertisements are useful! However, by this definition, the ad industry is not engaged in advertisement.


>I am curious why you avoid ads - personally I view them as a tremendous good for the world, helping people improve their lives by introducing them to products or even just ideas they didn't know existed.

I would agree with you if ads were just that. Here's our product, here's what it does, here's what it costs. Unfortunately, ads sell the sizzle, not the steak. That has been the advertising mantra for probably 100 years.

https://www.youtube.com/watch?v=UW6HmQ1QVMw


Ads are most often manipulation, not information. They are pollution.


If all the programmers working on advertising and tracking and fingerprinting and dark pattern psychology were to move into the field of AI I think that would be a big win.

And that's not saying that AI is going to be great or even good or even overly positive, it's just streets ahead of the alternatives I mentioned.


Is it miles ahead? An engine that ingests a ridiculous amount of data to produce influence? Isn't that just advertising but more efficient and with even less accountability?


I feel like AI is going to be all those things on steroids.


I'll reply here since your comment was first.

AI has the potential to go in many directions, at least some of which could be societally 'good'.

Advertising is, has always been, and likely always will be, societally 'bad'.

This differentiation, if nothing else.

(Yes, my opinion on advertising is militantly one sided. I'm unlikely to be convinced otherwise, but happy for, and will read, contrary commentary).


I don't think it's advertising that's inherently evil. Like government, it's a good thing, even a needed thing. People need laws and courts, and buyers and sellers need to be able to connect.

It turns evil in the presence of corruption. Taking bribes in exchange for power. Government should never make rules for money, but for the good of the people. And advertising should never offer exposure for sale - exposure should only result from merit.

Build an advertising system with integrity - in which truthful and useful ads are not just a minimum requirement but an honest aspiration and the only way to the top of the heap. Build an advertising system focused, not on exploiting the viewer, but on serving them - connecting them with goods and services and ideas and people and experiences that are wanted and that promote their health and thriving.

I won't work on advertising as it's currently understood... I agree it's evil. But I'd work on that, and I think it would be a great good.


I used to think there were useful ads. But really, even a useful ad is an unsolicited derailing of your thoughtspace. You might need a hammer, but did you really have to think about it right then? I think back to how my parents and grandparents got their goods before the internet. If they needed something, they went to the store. If they were interested in new stuff coming out that might be useful, they'd get a product catalog from some store mailed to them. Is a product catalog an ad? Maybe, depending on how you argue the semantics, but it's much more like going to a restaurant, browsing the menu, and choosing what's best for yourself, vs being shown a picture of a Big Mac on a billboard every time you leave your home.


AI is the anti printing press. Done well, it removes the ability to read something written by someone far away, because it erodes any ability to trust that someone exists, or to find that person's ideas amongst the remixed non-ideas AI churns out.

Advertising is similar, of course, and the only thing that has kept the internet working as a communications medium in spite of advertising is that it was generally labeled, constrained, enclosed, spam-filtered, etc.

The AI of today is being applied to help advertising escape those shackles, and in doing so, harm the ability to communicate.


Only in a sense that computers are all those things on steroids. It's a low-level tech that can be used for many different things. Given the incentives in our socioeconomic system, it will be used for the things that you have listed, just as everything else.


Yeah, Google, Facebook and Microsoft putting a massive fraction of their resources on AI is what already happened, but isn't really encouraging.


Yeah, they are the dark pattern, tracking, advertising, privacy-violating kings. Of course they're going to keep doing all that, "but with AI (TM)".


If only it were that easy.

A lot of engineers in the US who are both right out of school and are on visas need to find and keep work within a couple months of graduation and can’t be picky with their job or risk getting deported.

We have a fair number of indentured programmers.


I will never forget the grumpy look on the face of an Imperial Tobacco representative at a job fair at my university years ago. No one was visiting their booth for anything except silly questions about the benefits package including cigarettes.


Sadly it's not enough for 99% of engineers to refuse to work on an unethical technology, or even 99.99%

Personally I don't work on advertising/tracking, anything highly polluting, weapons technology, high-interest loans, scams and scam-adjacent tech, and so on.

But there are enough engineers without such concerns to keep the snooping firms, the missile firms, and the payday loan firms in business.


One issue we have is that economic pressures underlie everything, including ethics. Ethics are often malleable depending on what someone needs to survive, and given different situations with resource constraints, people are ultimately more willing to bend ethics.

Now, there are often limits to that flexibility, and lines some simply will not cross, but survival and self-preservation tend to take precedence and push those limits. E.g., I can’t imagine ever resorting to cannibalism, but Flight 571, with the passengers stranded in the Andes, makes a good case for me bending that line. I’d be a lot more willing to work for some scam or in high-interest loans, for example, before resorting to cannibalism to feed myself, and I think most people would.

If we assured basic survival at a reasonable level, you might find far fewer engineers willing to work in any of these spaces. It boils down to what alternatives they have and just how firm they are on some ethical line in the sand. We’d pretty much improve the world all around, I’d say. Our economic system doesn’t want that, though; it wants to be able to apply this level of pressure on people, and so do the highly successful who leverage their wealth as power. As such, I don’t see how that will ever change; you’ll always have someone doing terrible things, depending on who is the most desperate.


There are even engineers with such concerns working in these firms. They might figure that the missile is getting built no matter if they work there or not, so they might as well take the job offer.


I no longer work as a software developer because I feel that technology is ruining normal human interactions by substituting them in incomplete ways and making everyone depressed.

I think we'd be better off making things for each other and being present and local rather than trying to hyperstimulate ourselves into oblivion.

I'm just some dude though. It's not making it to the headlines.


> I'm just some dude though. It's not making it to the headlines.

Doesn't have to be on headlines. Even just hearing that gives me a bit more energy to fight actively against the post-useful developments of modern society. Every little bit helps.


How do you get money nowadays?


The curse of technology is that it is neither good nor bad. Only in the way it is used does it become one or the other.

>I would love to see more programmers refusing to work on AI.

That is just ridiculous. Modern neural networks are obviously an extremely useful tool.


As others have said, a big part of the problem is the need to eat.

I have a family. I work for a company that does stuff for the government.

I'd _rather_ be building and working on my cycling training app all day every day, but that doesn't make me any money, and probably never will.

All the majority of us can hope for is to build something that helps people and society, and hope that does enough good to counteract the morally grey in this world.

Nothing is ever black and white.


The problem is that for every one that refuses, there's at least one that will. So standing on principles only works if the rest of the rungs of the ladder above you also have those same principles. If anywhere in the org above you does not, you will be overruled/replaced.


> I would love to see more programmers refusing to work on AI.

This is not effective.

Having a regulated profession that is held to some standards, like accountants, would actually work.

Without unions and without a professional body, individual action won't achieve anything.


So do you think that people should be required to become members of a "regulated profession" before writing a VBA spreadsheet macro, or contributing to an open-source project?


Are you required to become a chartered civil engineer to build a house for your dog?

But the software developer whose code handles the personal information of 10 million people should know that you don't store it in plain text, which developers and business leaders at Virgin Media did not know; if you clicked "forgot password", they would send you a letter with your password in the mail.


But... accountants do work for AI companies, right? That doesn't seem like a good example.


I wish a lot more programmers would refuse to work on surveillance and ad tech... But nearly every site has that stuff on it... It goes to show what the principles of the profession, or of people in general, really are...


"Yeah, but your scientists were so preoccupied with whether or not they could, that they didn't stop to think if they should."


Something about this situation is unclear to me.

How much of him being a hero is coincidence? Did he refuse to sign off on previous launches? Did NASA have reasons to believe that the launch could be successful? How much of a role does probability play here? I mean, if someone literally tells you something isn't safe, especially the person who made it, you can't tell him it will work. There is some kind of bias here.


Of course there's bias. If he had rubber-stamped it there would be no story to tell.

His decision would have been questioned after the fact, he would have deferred to information from levels below, and this would recurse until responsibility had dissipated beyond any personal attribution. The same pattern happens in every org, every day (in decisions of mostly lesser effect).

The key point—at least from my read—was the follow-up actions to highlight where information was intentionally ignored, prevent that dispersion of responsibility, and ensure it didn't happen again.


> the follow-up actions to highlight where information was intentionally ignored, prevent that dispersion of responsibility, and ensure it didn't happen again.

Unfortunately, while that specific problem did not happen again, the general cultural changes that were supposed to happen had been lost 15 years later. The loss of Columbia in 2003 was due to the same kind of poor decision making and problem solving process that was involved in the loss of Challenger.


The article is a bit weird: he refused to sign a form inside a private company, but the private company presented a signed form to NASA (signed by higher-ups).

So NASA probably didn’t look closely into the engineering, in particular when launch is tomorrow.


> NASA probably didn’t look closely into the engineering

Yes, they did. NASA had been told by Thiokol the previous summer about the O-ring issue and that it could cause the loss of the Shuttle--and ignored the recommendation to ground the Shuttle until the issue was fixed. The night before the launch there was a conference call where the Thiokol engineers recommended not launching. Detailed engineering information was presented on that call--and it was information that had already been presented to NASA previously. NASA knew the engineering information and recommendation. They chose to ignore it.


I got to hear him recount the story, and yeah the article is weird.

The form he talked about was one that, if not signed, would mean that the launch would not happen. I can't remember if it was an internal form or not, but it doesn't really matter in that context.

Since NASA needed that form signed, he was under intense pressure to actually sign it both by NASA and his company. Someone else from the company not on site signed it.


The Challenger disaster was a case study when I was in school: the important lesson is about human psychology, and why it's important to speak up when something is dangerous.

Basically, the "powers that be" wanted the launch and overruled the concerns of the engineers. They forced the launch against better judgement.

(Think of the, "Oh, that nerd is always complaining, I'm going to ignore them because they aren't important," attitude.)


> How much of him being a hero is a coincidence?

None. He knew the right thing to do and did it despite extreme pressure.

> Did he refuse to sign the previous launches?

I don't know about him personally, but Thiokol, at the behest of McDonald and other engineers, had sent a formal letter to NASA the previous summer warning about the O-ring issue and stating explicitly that an O-ring failure could lead to loss of vehicle and loss of life.

> Did NASA have reasons to believe that the launch could be successful?

Not valid ones, no. The launch took place because managers, at both NASA and Thiokol, ignored valid engineering recommendations. But more than that, NASA had already been ignoring, since the previous summer, valid engineering recommendations to ground the Shuttle until the O-ring issue was understood and fixed.


To be completely honest I think you are somewhat naive. I have seen organizations push through decisions, which were obviously bad, in fact nearly everyone on the lower levels agreed that the goal of the decision was unachievable. But of course that didn't stop the organization.

> I mean if someone literally tells you something isn't safe, especially the person who made it, you can't tell him it will work.

You literally can.


Given that the other risk he cited, of ice damaging the heat shield tiles, is exactly what led to the loss of Columbia, I'd say he has an excellent grasp of the risks.


Something can work and not be safe at the same time.


> He neglected to say that the approval came only after Thiokol executives, under intense pressure from NASA officials, overruled the engineers.

Sounds kinda familiar?


A story as old as time.


I wonder how the process even allows this. An approval from the executives of the company shouldn't be worth anything.


I got to eat lunch with Allan McDonald in college. I was an IEEE officer and we hosted him for a talk at Montana State, so I got to take him out for lunch before his talk.

Dude got a lunch beer without a second thought. (My man!)

He then gave a talk that afternoon about interrupting a closed session of the Challenger commission to gainsay a Thiokol VP. The VP in question testified to Congress that he wasn't aware of any launch risks. McDonald stood up, went to the aisle, and said something to the effect of "Mr. Yeager, that is not true - this man was informed of the risks multiple times before the launch. I was the one that told him." (He was addressing Chuck Yeager, btw. Yeah, that Chuck Yeager.)

No mean feat to have the stones to interrupt a congressional hearing stacked with America's aviation and space heavyweights.


> to gainsay a Thiokol VP

My understanding is that it was the NASA manager, Larry Mulloy, who had given the go for launch for the SRBs.


Isn't lying to Congress a crime? Was there documented proof of the notification, or was it just a he said / he said situation?


It's sad to see the decline of civilization, and how long ago basic principles stopped being understood and turned into a cargo cult. The point of having somebody sign something to approve it was exactly that they had the option not to sign it in case there was a problem. But even then, it was seen as a job to be done, that you either do or fail to do.


There is a good movie about the Challenger disaster and the follow up investigation from the pov of Feynman: https://en.wikipedia.org/wiki/The_Challenger_Disaster


Iconoclasts like Robert are vital to get us to a stage one civ. May he rest in peace. Appreciate the post.


McDonald was my hero as a young engineering student. The miracle was that he was exonerated.


What is missing here for me is who the anonymous "executives" were that overruled McDonald (and others) and tried to punish him. Did they suffer any consequences for actions that cost lives, and for the cover-up?


Rest in peace Allan.

As much as his actions were admirable, the most shocking thing about that story was how the politicians rallied to protect him after his demotion, forcing his company to keep and actually promote him. That's why I get both sad and angry when I hear the new mantra of "Government can't do anything, the markets have to regulate that problem."


I mean... his company was sitting on a lucrative government contract for an agency that was working hard to cover up a failure. It's fortunate that in this case the distribution of power (and the shocking nature of the failure) ensured that the right thing happened, but I see corporate and government management colluding to maintain their positions.

Distribution of power is definitely important though, whether public or private. People are concerned about government abuse because, by its nature, government power structures are more often centralised and without competitors by definition. There are monitors, but they are often part of the same system.


> the new mantra of "Government can't do anything, the markets have to regulate that problem."

That's been the conservative line for 35+ years. How is that new?


I think more like 70 years at this point. It's been SOP for conservatives to get elected to govern, make government worse at every turn while enriching themselves and their friends, then turn around to the public and say "look how badly this works, clearly we need to cut taxes since it isn't working", and rinse and repeat until every institution in the world is borderline non-functioning.


It was Jimmy Carter and not Ronald Reagan who scrapped the civil service competency exams. Government getting worse has been a two-party affair for quite some time. No one has any incentive to fix it, and the system is so vast, so complex, and so self-serving that no one even has the power to fix it (as things stand).


The Democrats in America are highly conservative. Not as conservative as the Republicans, but still very conservative. We don't have a left and a right here, we have a hard right and a center right.


Certain "hard right" parties like the PAP in Singapore and the LDP in Japan have placed a competent civil service at the forefront of their policies. Though in many ways, the US may appear more conservative than its "peers", in other ways, it appears more liberal.


> Allan McDonald leaves behind his wife, Linda, and four children — and a legacy of doing the right things at the right times with the right people.

It sounds like the most noteworthy part of his legacy is attempting to do the right thing, but with the wrong people.

I think this is meaningful to mention, because saying to do "the right things, at the right time, with the right people" is easy -- but it's harder to figure out what that really means, and how to achieve that state when you have incomplete control.


He had incomplete control but did the right thing (to refuse to let the risk slide) at the right time (before the launch). You don't need to have full control to do this.

> but harder is figuring out what that really means

I think it is quite clear except for the part about "right people"; if the people around you are not right, I would guess it is even more important to do the right thing. Obviously this comes at a (potentially great) cost, which is why it is easier said than done and why his actions are so admirable.


"The right people" is difficult. Working with NASA would seem one of the better bets.

For startup founders, you can try to hire "the right people". (And share the equity appropriately.)

For job-seekers, when you're interviewing with them, you can ask yourself whether they're "the right people". (And don't get distracted by a Leetcode hazing, in what's supposed to be collegial information-sharing and -gathering by both parties.)


Now that OSS projects like a certain popular dynamic language have been taken over by corporations, criticism like security or performance issues are forbidden as well and punished.

(One corporation though seems to withdraw from that language due to the attitude of the project and its representatives.)


Honestly, you're either telling too much or too little.

Could you tell us the precise language / corporation / project, if you're comfortable with that, of course?


I'm late to the party, but I work as a NASA contractor and have just recently been reading "Truth, Lies, and O-Rings" by Mr. McDonald.

Something that I find really frustrating is that it seems that there's an international "caste" of honest engineers who are ready, and have been ready for centuries if not millennia, to pull the metaphorical trigger on advancing human society to the next level. International rail systems, replacing all electrical generation with nuclear, creating safe and well-inspected commercial airplanes, etc.

Blocking that "caste" from uniting with each other and coordinating these projects are the Old Guard; the "local area warlords", although these days they may have different titles than they would have a thousand years ago. These people do not speak a language of technical accuracy, but rather their primary guiding principles are personal loyalty, as was common in old honor societies. They introduce graft, violence, corruption, dishonesty, and personal asset capture into these projects and keep them from coming to fruition. They would not sacrifice their lifestyles in order to introduce technical excellence into the system they're charged with managing, but instead think more about their workload, their salary, their personal obligations to their other (often dishonest) friends, and their career tracks.

It wouldn't even occur to me to worry more about a promotion than the technical merit of a machine or system I was engaged with. I would never lie about something I or a colleague of mine said or did. For those reasons I will never be particularly competitive with the people who do become VPs and executive managers.

How many different people around the world, and especially on HackerNews, are in my exact situation? With the right funding and leadership, we could all quit our stupid fucking jobs building adtech or joining customer databases together or generating glorified Excel spreadsheets, and instead be the International Railway Corps, or the International Nuclear Corps. And yet, since we can't generate the cashflow necessary to satisfy the Local Area Warlords that own all the tooling facilities and the markets and the land, it will never be.


> at some point, one needs to say "yes" and take risks

Sure, but they need to understand the risks, and be open about the choices they are making. Ideally at the time, but certainly covering it up after it goes wrong is not acceptable.


We're seeing it all happen again now at Boeing.

I just keep waiting for that magical invisible hand to swoop in and fix this cluster f_ck... What could possibly be holding it up?


> Morton Thiokol executives were not happy that McDonald spoke up, and they demoted him.

And then all of their government contracts should have been revoked.


Ok, cool, but what the hell happened? They had a guy in charge of signing off on the launch, he didn't sign off because of three problems he identified, and they still launched. WTF?


The engineers were overruled by the executives because NASA was pissed at the company for messing up their plans.


From the article: (During the hearing)

> The NASA official simply said that Thiokol had some concerns but approved the launch. He neglected to say that the approval came only after Thiokol executives, under intense pressure from NASA officials, overruled the engineers.


This sounds like an issue that's still around.


Which executives pressured the engineers, and was there any accountability?


Nowadays you have an "unlucky accident" if you're a whistleblower; lucky he wound up getting a promotion for it (after being demoted).


(2021)


> McDonald became a fierce advocate of ethical decision-making

My hero, but also Don Quixote. I'm a huge believer in Personal Integrity and Ethics, but I am painfully aware that this makes me a fairly hated minority (basically, people believe that I'm a stuck-up prig), especially in this crowd.

I was fortunate to find an employer that also believed in these values. They had many other faults, but deficient institutional Integrity was not one of them.


> I'm a huge believer in Personal Integrity and Ethics, but I am painfully aware that this makes me a fairly hated minority (basically, people believe that I'm a stuck-up prig),

This doesn’t match my experience at all. In my experience, the average person I’ve worked with also believes in personal integrity and is guided by a sense of ethics. One company I worked for started doing something clearly unethical, albeit legal, and the resulting backlash and exodus of engineers (including me) was a nice confirmation that most people I work with won’t tolerate unethical companies.

I have worked with people who take the idea of ethics to such an unreasonable extreme that they develop an ability to find fault with nearly everything. They come up with ways to rationalize their personal preferences as being the only ethical option, and they start finding ways to claim things they don’t like violate their personal integrity. One example that comes to mind is the security person who wanted our logins to expire so frequently that we had to log in multiple times per day. He insisted that anything less was below his personal standards for security and it would violate his personal integrity to allow it. Of course everybody loathed him, but not because they lacked personal integrity or ethics.

If you find yourself being a “hated minority” or people thinking you’re a “stuck-up prig” for having basic ethics, you’re keeping some strange company. I’d get out of there as soon as possible.


> keeping some strange company

Actually, that's this community. I do understand. Money is the only metric that matters, here, as it's really an entrepreneur forum. Everyone wants to be rich, and they aren't particularly tolerant of anything that might interfere with that.

But I'm not going anywhere. It's actually fun, here. I learn new stuff, all the time.


> Money is the only metric that matters, here

Says who? Did I agree to that when I subscribed?

> Everyone wants to be rich,

Everyone? Like me too? Tell me more about that.

In an earlier comment you said that people believe you are "a stuck-up prig". Are you sure it is due to your moral stance, and not because you are judgemental and abrasive about it?

Perhaps if you were less set in your mind about how you think everyone is, you wouldn't come across as "a stuck-up prig". Maybe we would even find common ground between us.


> Money is the only metric that matters, here, as it's really an entrepreneur forum. Everyone wants to be rich

This place is surprisingly mixed in that regard given its origin; a significant number of comments I see about Apple, about OpenAI, about Paul Graham, are essentially anti-capitalist.

The vibe I get seems predominately hacker-vibe rather than entrepreneur-vibe.

That said, I'm also well aware of the "orange site bad" meme, so this vibe I get may be biased by which links I find interesting enough to look at the discussions of.


Yeah, it was a snarky comment, and not my proudest moment, but it does apply to a significant number of folks. I tend to enjoy the contributions from folks that don't have that priority.

The demoralizing part is folks that are getting screwed by The Big Dogs and totally reflect the behavior, even though TBD think of them as "subhuman."


HN is not really a community.


I believe that it is. In my opinion and experience, any group of humans, interacting, on a regular basis, in a common venue, becomes a community.

I guess that it is a matter of definition.

I treat it as if it were a community, and that I am a member of that community, with the rights and Responsibilities thereof.

I know that lots of folks like to treat Internet (and, in some cases, IRL) communities as public toilets, but I'm not one of them. I feel that it is a privilege to hang out here, and don't want to piss in the punch bowl, so I'm rather careful about my interactions here.

I do find it a bit distressing to see folks behaving like trolls here. A lot of pretty heavy-duty folks participate on HN, but I guess the casual nature of the interactions encourages folks to lose touch with that.

I think that it is really cool, that I could post a comment, and have an OG respond. I suspect that won't happen, too often, if I'm screeching and flinging poo.


Just like in-person communities, you'll have general consensus on some ideas and fierce disagreement in others. You'll have people who are kind and those who are hateful.

You can identify that there may be a trend within a community without declaring that everyone in the community thinks the exact same way. And you could also be wrong about that trend because the majority is silent on the issue and you bump up against the vocal minority.

Perhaps you can elaborate on what a community is, and how HN differs from one.


The topical interests, general characteristics, experiences and opinions of HN members are too diverse to qualify as a community, IMO. There may be subsets that could qualify as a community, and if you only look at certain kinds/topics of submissions it might feel like one, but they are mixed within a larger heterogeneous crowd here.


I feel that a community can def be heterogenous AF. I participate in exactly that type of (IRL) community, and it is worldwide.

It does require some common focus, and common agreement that the community is important.

I do believe that we have those here. The "common focus" may not be immediately apparent, but I think everyone here shares a desire to be involved in technology, which can mean a few things, but I'll lay odds that we could find a definition that everyone could agree on.

It is possible. I guarantee it.


Thanks, that clarifies a lot.


I've left two companies over ethical concerns, but it's not as easy for most people as implied here. Losing income can be challenging, especially if the industry is in a downturn.


Generally when people talk about leaving a company, they mean to go to another company.

I don’t think most people expect you to quit on the spot and walk straight into unemployment.


Sometimes the alternative to unemployment is far less attractive (severe burnout, or a total time sink that prevents a meaningful job search).


Out of curiosity, did you leave those companies because the company's core business was unethical (or veered that direction over time), because leadership was generally unethical, or because of specific incidents that forced your hand?

At a previous job I saw unethical choices made by my boss, but the company as a whole wasn't doing anything wrong. One of my coworkers was asked to do something unethical and he refused, but he wasn't punished and wasn't forced to choose between his ethics and the job.


Every time I had to leave for ethical reasons it was a leadership thing, mostly relating to how they treated other employees.

For instance, I joined a company that advertised itself as being fairly ethical (they even had a "no selling to military" type policy). However, after joining it was apparent that this wasn't the case. They really pushed transparent salaries, but then paid me way more than anyone else. There was a lot of sexism as well: despite one of my colleagues being just as skilled as I am, this colleague was given all the crap work because leadership didn't think they were as capable as I was. There was a lot of other stuff as well, but that's the big summary. I left after nine months.

The other company was similar, but it wasn't nearly as obvious at first. Over time it became very apparent that the founders cared more about boosting their own perception in the industry than they did the actual startup, and they also allowed the women in the company to be treated poorly. This company doesn't exist anymore.

I should mention that these were all startups I worked at, and I was always fairly highly positioned in the company. This meant I generally reported directly to the founders themselves. If it was something like a middle management issue I'd have tried to escalate it up to resolve it before just leaving, but if that doesn't work I'm financially stable enough to just leave.


Thanks for taking the time to respond to me.

In startups like that, company culture and the founders' behavior are nearly one and the same.

That's sad you had to deal with that kind of stuff. Even in the bad jobs I've had, the bad bosses treated the employees equally poorly.


Well it's weird for me, because I was one of the people being treated better (I'm a guy). I just don't want to work with assholes, so when I see people being assholes to other people and leadership doesn't take it seriously then I leave.


> One example that comes to mind is the security person who wanted our logins to expire so frequently that we had to log in multiple times per day. He insisted that anything less was below his personal standards for security and it would violate his personal integrity to allow it. Of course everybody loathed him, but not because they lacked personal integrity or ethics.

Speaking as a "security person", I passionately despise people like this because they make my life so much more difficult by poisoning the well. There are times in security where you need to drop the hammer, but it's precisely because of these situations that you need to build up the overall good will with your team of working with them. When you tell your team "this needs to be done immediately, and it's blocking", you need to have built up enough trust that they realize you're not throwing yet another TPS report at them, this time it's actually serious, and they do it immediately, as opposed to fighting/escalating.

And yes, like the original poster, most of them think they're the main character in a suspense thriller where they're The Only Thing Saving Humanity From Itself, when really they're the stuck-up comic-relief side character in someone else's romcom, at best.


> And yes, like the original poster, most of them think they're the main character in a suspense thriller where they're The Only Thing Saving Humanity From Itself, when really they're the stuck-up comic-relief side character in someone else's romcom, at best.

That's an interesting read of what I posted.

Glad to have been of service!


> In my experience, the average person I’ve worked with also believes in personal integrity and is guided by a sense of ethics.

Individual aspirations are not enough; if your org doesn't shape itself in a way that prevents bad outcomes, bad outcomes will happen.


If the world had more stuck up prigs, billion dollar corporations wouldn’t be using customers to beta test their lethal robots on public streets.

Here’s to prigs!


And the million people being killed by human drivers every year? I guess they are a worthy sacrifice for ideological purity.


They're a sacrifice at the altar of biased decision making.

I think Tesla is somewhat reckless with self-driving, but we all need to agree humans aren't much better, yet they don't generate any controversy.


> we all need to agree humans aren't much better

At the current state of the art for self-driving, this simply is not true. Humans are much better, on average. That's why the vast majority of cars are still driven by humans.

The technology will keep improving, and at some point one would expect that it will be more reliable than humans. But it's significantly less reliable now.


Self-driving cars are a solution to a problem we already fixed a hundred years ago: we fixed transit with trains.

PS: I'm not claiming that every single transport need can be solved by trains, but they do dramatically reduce the cost in human life. Yes, they have to be part of a mix of other solutions, such as denser housing. Yes, you can have bad actors that don't maintain their rail and underpay/understaff their engineers which leads to derailments, etc. I say this because the utopia of not having to drive, not caring about sleepiness, ill health, or intoxication, not having to finance or repair a vehicle or buy insurance, not renting parking spots, all that is available today without having to invent new lidar sensors or machine vision. You can just live in London or Tokyo.


> Self-driving cars are a solution to a problem we already fixed a hundred years ago: we fixed transit with trains.

Not for everyone, we didn't. Self-driving cars have the potential to serve people who don't want to restrict themselves to going places trains can take them.

> You can just live in London or Tokyo.

Not everyone either can or wants to live in such places. If I prefer to live in a less dense area and have a car, the risk is mine to take. And if at some point a self-driving car can drive me more reliably than I can drive myself, I will gladly let it do so.


> Tokyo

I traveled there regularly, for over 20 years.

Their train system is the Eighth Wonder.

A lot of the reason, is cultural. Trains are a standard part of life. Most shows have significant scenes on commuter trains, as do ads. Probably wouldn’t apply to nations like the US.


> the million people being killed by human drivers every year?

If self-driving cars at their current level of reliability were as common as human drivers, they would be killing much more than a million people a year.

When I am satisfied that a self-driving car is more reliable than I am, I will have no problem letting it take me places instead of driving myself. But not until then.


That comment was about self-driving cars? Here I was thinking it was about Israeli arms manufacturers testing their intentionally-lethal robots on Palestine before selling them to the USA.

Anyway, subways are awesome.


I’m not saying they should, but that there’s a right way to do things and a wrong way to do things.

The right way asks for community buy in, follows safety procedures, is transparent and forthcoming about failures, is honest about capabilities and limitations.

The wrong way says “I can do what I want, I’m not asking permission, if you don’t like it sue me” The wrong way throws the safety playbook out the window and puts untrained operators in charge of untested deadly machines. The wrong way doesn’t ask for community input, obfuscates and dissembles when challenged, is capricious, vindictive, and ultimately (this is the most crucial part) not effective compared to the right way of doing things.

Given a choice between the safe thing to do and the thing that will please Musk, Tesla will always choose the latter.


The human driver is liable, the machine is not (or not in the same sense).


And we all know that liability makes accidents less fatal after the fact ;)


"I can tolerate a million people dying, but I draw the line at one person dying without a clear person to sue."


"I'm sorry ModernMech, but you're in violation of our CoC with your overly negative and toxic tone. We're going to go ahead, close your issue, and merge the PR to add Torment Nexus integration."

This is what happens in the real world when you're a stuck up prig, not the Hollywood movie ending you've constructed in your head.


> I was fortunate to find an employer that also believed in these values.

Same here, it's not paying well, but it feels refreshing to know that babies won't get thrown into mixers if you stop thinking for 10 minutes.


>I'm a huge believer in Personal Integrity and Ethics, but I am painfully aware that this makes me a fairly hated minority

This is like when you tell an interviewer your great flaw is being too much of a perfectionist.


…and… here we go…

I have no idea why the tech industry is such a moral cesspool.


It isn't, though; it's not really even one industry. Tech is used by every industry, and some of that is a cesspool, and some solutions/products are purely tech-based cesspools.


Easy money and generally low education


All industries that involve huge amounts of money are moral cesspools. Tech are saints compared to the “defense” industry, or healthcare.


If you get to see some of the details, defense (US) is expensive, but there is very little profit compared to other industries. There is an epic amount of inefficiency, which is where all that cost is eaten.


Or anything in manufacturing or food/beverage (see Nestle and water rights) production. I think most of tech has it pretty good. Tech has the potential for incredible amounts of bad, but this is limited to the handful that dominate social media (see Facebook and the civil war in Ethiopia) or, I don't know, the ones selling surveillance software to governments and law enforcement.


I thought ICT was terrible, so I decided I'd try the industrial side of things.

Ok, on the one hand, getting to play with cool robots, and e.g. using an actual forklift for debugging? Absolutely priceless, wouldn't trade it for the world.

But the ethical side of things? There's definitely ethics, don't get me wrong. Especially on the hardware side - necessary for safety after all. But the way software is sold and treated is ... different.


My response when I'm told that in an interview is to ask specifically how that trait has caused problems for them. Quickly separates someone who's actually put thought into it from someone who is just trying to skate by.


That sounds funny, but being a perfectionist IS actually a problem. You'll often waste time and effort making something perfect when "good enough" is all that's required.


I don't relish all of the issues which will eventually surface with SpaceX's Starship, which makes Space Shuttle development look like a paragon of high quality development practices. Starship is built in a metaphorical barn with a "fuck around and find out" attitude.


I don't think that's quite the case. SpaceX's method is more "release early, release often", and find (and solve!) issues early on. Traditional space companies on the other hand use a very rigid waterfall method.

SpaceX's method is not "fuck around and find out". It's design, find out, iterate. From what I can tell from the outside, it seems very reasonable.


If you're looking for a rocket company with a barn and a "fuck around and find out" attitude, Pythom is the one. Watch how they test rockets: https://vimeo.com/690376951

From another angle, showing how some of them had to run away from the toxic fumes: https://www.youtube.com/watch?v=EQ1j85VgALA


The early manned space programs at USAF/NASA were a lot more cavalier than the shuttle program.


That metaphorical barn is run by Kathy Lueders. Look her up and it might soften your thinking a bit.


I have no idea what to make of this, does anyone have further information? Faces match, some careers match, logo is insane:

https://rumble.com/v4wxpje-challenger-astronauts-alive-deman...


Well, according to Occam's Razor...


Why would NASA use their real names if they hired some random group of people to play astronauts that died in Challenger? Or, why would NASA not give false identities to their astronauts that faked dying in Challenger and instead gave them high profile jobs that would have required real resumes? And what is the point of blowing up a space shuttle? If NASA is faking space launches all the time, it seems easier just to declare each one a success than to manufacture a tragedy and congressional investigation. This guy is an absolute kook and that "documentary" is complete nonsense.


My guess is because when you make it so stupidly obvious it's unbelievable, people will respond exactly like you have, ask exactly your questions, and end up convinced it's not true. Ad hominem doesn't help (as much as I may agree!).

The fact remains that the people this guy found look extremely similar, but correctly aged, and have the same names. If it's not indicative of some bizarre conspiracy, it's still an extremely weird coincidence.

I'd have hoped someone could calculate some odds based on names and looks or something and make it make sense.



