Warning from OpenAI leaders helped trigger Sam Altman's ouster (washingtonpost.com)
120 points by c5karl on Dec 8, 2023 | hide | past | favorite | 87 comments



Seven Signs of Ethical Collapse

1. Pressure to maintain numbers

2. Fear and silence

3. Young ‘uns and a bigger-than-life CEO

4. Weak board of directors

5. Conflicts of interest overlooked or unaddressed

6. Innovation like no other company

7. Goodness in some areas atones for evil in others

from "The Seven Signs of Ethical Collapse: How to Spot Moral Meltdowns in Companies Before It's Too Late" (2005) by Marianne M. Jennings, professor of legal and ethical studies in business in the Department of Management at the W.P. Carey School of Business at Arizona State University


Letter signing looks like a masterful use of the bandwagon effect: quickly reframing the issue, going on the attack, and creating a sense of outrage and hurry. People start pressuring each other quickly because the scheme needs unity to work, and there is no time to think.

> Within hours, messages dismissed the board as illegitimate and decried Altman’s firing as a coup by OpenAI co-founder and chief scientist Ilya Sutskever, according to the people.

> people identified as current OpenAI employees also described facing intense peer pressure to sign the mass-resignation letter.

> “Half the company had signed between the hours of 2 and 3am,”


This is also consistent with some number of OpenAI operators wanting Altman removed, and then discovering that the company's long-term financial outlook, and their accrued compensation in particular, was highly contingent on him staying. It's pretty normal for people to believe that they could, if all else was equal, have a better or more healthy management culture. But people rarely act on that belief precisely because earnestly pursuing it can end up threatening everyone's future well-being.

It's also consistent with OpenAI's board comprehensively botching their response to those concerns, even if it had been clear that, over the medium term, Altman needed to be managed out. (There's a reason the term "managed out" exists.)

Instead, the OpenAI board created news cycle after news cycle of palace intrigue by seeming obliviousness to how their actions would be read.


Incentives are the force behind almost everything.

"Well, I think I’ve been in the top 5% of my age cohort all my life in understanding the power of incentives, and all my life I’ve underestimated it. And never a year passes but I get some surprise that pushes my limit a little farther." -- Charlie Munger


The board fired Altman and maintained discretion. It was Altman that instigated the circus after that.


And?


How do you distinguish between bandwagoning and genuine consensus? How unpopular does someone need to be before they can cry "bandwagoning" whenever they're unanimously criticized?


Speed. Consensus usually implies self-reflection. Know thyself and all that jazz. If there's no time to know thyself, it's hard to get true consensus.


I really dislike the current attitudes around ethics and AI. Tools should be 100% open about what they can and can't do. Building software like deepfake porn tools is, of course, unethical.

Vague invocations of "ethics" risks around AI, or neutering LLMs in the name of making them "safer," seem dubious to me. Obviously, there are a huge number of problems that depend on safe AI, where "safe" often means something completely different from what is currently being discussed. Does the AI reliably do what I expect it to do in the face of adversarial inputs? Does it handle risky settings, like deciding whether to refund customers? How can I ensure that the AI is making legal decisions and not secretly using redlining or other illegal criteria?


I am still taking in information and trying to figure out my stance on AI ethics, but this is more about the ethics of the business and its structure than about the ethics of the AI they make.


Both of these are intertwined, if not outright the same thing. A place that has no corporate understanding of ethics, has no staff to take care of it, or even fires that staff (like Twitter did [1]) cannot build any ethical system. No matter whether it's chatbots that shouldn't absorb abuse from 4chan [2] or reproduce long-debunked racially charged myths [3], or something as simple and innocent-seeming as a soap dispenser that was never even tested on Black people [4].

Many people dismiss this as "wokeness" and at best a waste of money, and yet time and time again there are incidents that show, in sometimes very ugly ways, just how badly bias and a lack of diversity and oversight at all stages (planning, development, and post-launch support) can end.

IMHO, ethics oversight and ethics-based threat modelling should be treated as an absolute requirement in AI, software, and (as the soap-dispenser example shows) hardware development, just like pentesting and QA. Because if you don't do it, 4chan will, and will tear your public image apart in the process.

[1] https://www.cnbc.com/2023/05/26/tech-companies-are-laying-of...

[2] https://www.theverge.com/2016/3/24/11297050/tay-microsoft-ch...

[3] https://fortune.com/well/2023/10/20/chatgpt-google-bard-ai-c...

[4] https://www.standard.co.uk/news/world/automatic-soap-dispens...


I agree that oversight is needed, but I'm not certain people are the right approach to this oversight. Everything described above represents an undesirable software output for the given business. Why can't we make the LLM smart enough to know its current operating context and build guardrails to catch the cases that slip through?


Guardrails worthy of our trust are best produced by an ecology of human minds with diverse perspectives and ideas, covering the whole possibility space with minimal blind spots.

The human machine is already tuned to do this navigating of the information landscape. The sense of curiosity itself lures us toward things that are just a little weird -- the goldilocksian zone between familiar and unfamiliar[1]. Our minds are designed to bridge gaps and seek paths into otherness, but only when the individual steps are small. We are built to seek, grow understanding and incorporate diverse thought :)

[1]: https://nautil.us/curiosity-depends-on-what-you-already-know...


Thing is, you need people to take care in selecting training data. GIGO - garbage in, garbage out - especially applies in AI.


> Some OpenAI employees have rejected the idea that there was any coercion to sign the letter. “Half the company had signed between the hours of 2 and 3am,” a member of OpenAI’s technical staff, who tweets under the pseudonym @roon, posted on X. “That’s not something that can be accomplished by peer pressure.”

> Altman’s departure jeopardized an investment deal that would allow them to sell their stock back to OpenAI, cashing out equity without waiting for the company to go public. The deal — led by Joshua Kushner’s Thrive Capital — values the company at almost $90 billion, according to a report in the Wall Street Journal, more than triple its $28 billion valuation in April, and it could have been threatened by tanking value triggered by the CEO’s departure.

Ok, so the company was offering a cash-out at a 3x valuation, and the long-term employees are the ones most likely to have the most power and influence within the company. Yeah, no undue coercion, everyone is just so motivated that they're signing things at 2am? Or more like they’re reading tea leaves and don’t want to be the odd man out because this is a convenient enemies list if your name isn’t on it, especially since Sam apparently has a history of manipulative and retributive behavior + a huge pay day for either you, your bosses, or people who have significant standing to impact your performance reviews / career trajectory.


> Or more like they’re reading tea leaves and don’t want to be the odd man out because this is a convenient enemies list if your name isn’t on it, especially since Sam apparently has a history of manipulative and retributive behavior + a huge pay day for either you, your bosses, or people who have significant standing to impact your performance reviews / career trajectory.

To me, the tea leaves at the time said “a once in a lifetime acquihire that will rescue everyone is about to happen OR Sam will come back and the damage will be undone”. There’s life-changing amounts of money on the table and signing a letter with the majority of employees is a no-brainer for the upside scenario.

And to-be-fair, a lot of people were up at 2am waiting for news on this drama. A lot of people with no skin in the game with nothing to lose were watching it intently. Why wouldn’t the employees of the actual company do the same?


> Joanne Jang, who works in products at OpenAI, tweeted that no influence had been at play. “The google doc broke so people texted each other at 2-2:30 am begging people with write access to type their name.”

Also, this line seems self-contradictory. There was no influence, but people were "begging" others to sign?


I think person A was calling person B (who had write access) to write person A's name, since person A was unable to do so because the google doc "broke."


Oh, of course. That's rather obvious on rereading. With my 5pm Friday brain, I somehow interpreted "broke" to mean that the doc had become available. Like a news story breaking, I guess?


B: Why haven't you signed?

A: I tried, but I couldn't edit the Google doc.

B: I could put your name in for you.

. . .

People don't just spontaneously wake up at 2:30 am to sign a letter.


I stayed up late to follow the drama and I don't even work there.


They were begging others to sign _for them_. It's not contradictory :)


No, the signers begged others to sign for them.


So we have an admission that folks were putting other people's names on the lists?


Note that the executives approaching the board was previously reported by Time: https://time.com/6342827/ceo-of-the-year-2023-sam-altman/ https://news.ycombinator.com/item?id=38550240

This adds some additional material, like blaming internal chaos on Altman inconsistencies, which seems consistent with what the Atlantic was hinting at: https://www.theatlantic.com/technology/archive/2023/11/sam-a... https://news.ycombinator.com/item?id=38341399

(It also seems consistent with the reporting on Altman's firing from YC, which suggested that it wasn't any single smoking gun or crossed redline, just a pattern of behavior which eventually led to his ouster and some internal YC reforms: https://www.washingtonpost.com/technology/2023/11/22/sam-alt... https://news.ycombinator.com/item?id=38378216 )


Even if everything mentioned is 100% true, the board handled the situation so poorly that they lost and Sam won. It was like an average episode of Succession being played out.


Nah, it was like a good episode of Succession!


Well I for one am shocked that these allegations of psychological abuse would have led to such a cunning and effective response from Sam after the firing.


I wonder if those same senior employees signed the letter. If the WaPo could verify that, that could clarify whether peer pressure extended beyond Ilya.


It wouldn't be surprising if they did sign the letter, and it wouldn't take peer pressure to make it happen. People can have concerns about a manager's style or a leadership culture and still not want to have years of lucrative equity comp erased overnight by the board's reaction to those concerns.


I'd still find the added context insightful. They went to the board alleging psychological abuse, so I assume they understood the board would at least consider removing Sam.


Sure, and it's entirely possible for them to have changed their mind once they saw how contingent the whole company was on his remaining.


Sam Altman was toxic, but also, 90% of OpenAI employees joined the counter-coup. Doesn't really add up.

That this article is written by Nitasha Tiku, a Gawker alum, casts even further doubt on its claims.


I don't really have a dog in this fight but in case of a cult of personality, "toxic" and "loyal followers" are not contradictory.


The word "toxic" has lost all meaning these days. It would be interesting to see some actual examples. I've worked with people, in the past, who would be considered "toxic" by today's standards. For example, they were disagreeable, pointing out when time was being wasted on tangents, demanding data to back up claims, and all those other nasty things that help keep a team truthful and on track. They were great to work for, if you could stomach criticism and directness, which seems to be getting rare.


I mean his own sister is claiming he did a lot of really vile shit to her when she was a child. The idea that this person might also be genuinely “toxic” is hardly a stretch.

https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...


she seems stable


Getting sexually assaulted as a small child by a family member would probably do that I imagine.


I'm pretty much sure that Tesla employees will revolt if its board tries to oust Elon. That doesn't necessarily mean that Elon is not a toxic leader.


IMO, ousting Elon would be the best thing Tesla could do for the future of the company.

The problem is that stock traders hate uncertainty, and ousting the CEO of any company brings massive uncertainty. So in the short term, an ousting of Elon would probably tank the stock.


I'm not a fan of some of the things Elon has done recently, like really not a fan, bigly, but I also know I don't have to love the whole person to really appreciate some things that person has done.

And Tesla wouldn't be Tesla without Elon, it wouldn't be nearly as innovative. There definitely wouldn't be a Tesla Semi that is going to change the trucking industry, there wouldn't be a Cybertruck that is going to change the farm truck industry. EVs wouldn't be nearly as far along as they are now.

I suppose ousting Elon would keep Tesla where it is now, but that is like ousting Jobs from Apple. Elon has many great ideas and he delivers.

I don't have to agree with his personal politics to really appreciate what Elon has done for humanity. He is one persistent person, and he gets things done. I mean, how many people have internet access now that didn't before due to Starlink. It wasn't just him obviously, it was a very large team of people. But he somehow got those people to believe in the mission. And THEY delivered. And I don't think we would be where we would be today with renewable energy and electric vehicles if it wasn't for his contributions.

Again, we don't have to like the whole person to appreciate their contributions. None of us are perfect, and some of us say highly offensive things in public that can feel very threatening and may actually be threatening to some.

I think Elon's public statements sometimes are taken even more seriously because some/many of us idolized Elon (myself included) for a while. And there was this big reveal of like "oh, I don't like that part of you", "what you said was insensitive and mean". And then this tendency to "throw the baby out with the bathwater" and overlook everything great that he has done.

It is a complicated thing for sure.


> There definitely wouldn't be a Tesla Semi that is going to change the trucking industry

Pure delusion.

The Tesla semi might be alright for intra-city deliveries, but it's a god-awful idea for long haul trucking. Tractors rarely sit idle long enough to charge. They'll drop off a trailer at one dock, then immediately pick up a trailer from another dock and get back on the road.

Also, talk to any trucker about the semi and it'll be clear they didn't actually talk to any truckers in their design. They've got a whole list of complaints.

> there wouldn't be a Cybertruck that is going to change the farm truck industry.

You're joking, right? Like, this gotta be a troll post.

> Elon has many great ideas and he delivers.

He delivers 3-5 years behind schedule. Roadster was announced in 2017 for 2020. It's 2023 and Elon and Tesla are completely silent about it. They also announced the ATV, and I doubt that'll ever actually happen.

Don't get me started on FSD, which has been 6 months away for nearly 10 years.


I doubt that. I think Tesla the business would be better off without him. That said, the stock is ludicrously overvalued, which must be largely due to his cult of personality. So if employees are concerned about their options being devalued in the short term, I could at least see the possibility. On the other side, I'm sure Tesla employees lean left, so I doubt most are fans of Elon. I'm not sure the financial motive alone would be enough to engender this sort of reaction.


Tiku edited for Valleywag 2013-14 then left. I wouldn't blindly discredit her reporting barring specific complaints.


Altman’s departure represented a real possibility that they might miss out on enormous amounts of wealth. People are willing to put up with, or even support, a lot of things when money is on the line.


Everyone is reading so much into this, when it sounds like a personal conflict between Helen and Sam that blew up and turned into a board coup by Helen, packed with vague justifications no one bought because they weren't given any reason to believe them.

Why would the employees buy the coup leaders' pitch when they barely explained why it happened in the first place?

The story from the next day's morning meeting was that everyone was unsatisfied with the reasons they were given and as confused as everyone else, and then the next story is the employees all signing a letter and Ilya doing a 180.

Why would the employees immediately trust the new leadership by a board member not involved in day to day operations? I doubt it's all just about making more $$ under Sam. It's about trust and stability.

Everyone was talking about how it was AI safety vs. reckless capitalism, but tangibly we still have little evidence of that. What we do have is Sam critiquing Helen's paper, then suddenly half the board moving against Sam, and an interim CEO whose go-slow views on AI development, 100% in line with Helen's paper, replacing him. Meanwhile, the conviction of the other board members, like Ilya, seemed pretty thin.


Sam tried to remove Helen from the board which would effectively give him control over the board with the two people who worked for him holding seats. He was the one that tried to upset the board-CEO balance.


That's a hell of an allegation against Sama from employees, if true.


Well that explains a non-trivial number of employees signing a letter threatening to quit unless he returned.


this article tells us nothing new


I will never understand the cult of personality that appears around people like Altman, Musk, or whoever the unicorn CEO of the month is. They are always, without exception, smooth-talking psychopaths. That might as well be the job description for the CEO of a startup with this kind of growth.


One of these things is not like the other. Altman led OpenAI to an industry leading position and produced record results in product adoption. Even if that is over-praising him, Microsoft's fast and enormous investment in OpenAI was evidently closely linked to Altman. Just springing a surprise on Microsoft was dunderheaded enough to make it obvious that, whatever the board was doing, it was the opposite of what should have been done.

In contrast, it is equally obvious Elon dragged $44B into the road and set it on fire.


How is it equally obvious Elon dragged $44B into the road and set it on fire? Are you sure you aren't speaking from a perspective of bias that already has a disdain towards Elon?


Good point. Based on the latest internal valuation, Elon only destroyed $25b in the first year.


Internal valuations can predict market prices of currently private companies? That’s amazing, someone should go make billions off that


Yes. Everything at X is going swimmingly well, and the statement from the company itself that it lost half its valuation, plus the owner publicly telling companies to stop advertising on it, is actually 19-dimensional chessboxing.

Geez, man. A contortionist would be envious of how you twist.

https://www.theverge.com/2023/10/30/23938969/x-twitter-valua...


Your assessment is clearly outputted from a model weighted towards disdain.


Sorry. Senpai will never notice you.


I have no idea what you’re trying to say.


Agreed, I have the opposite visceral reaction; immediate and intense skepticism.


The power of reason alone is generally insufficient to spur people into action. It's the impassioned, often irrational fervor that galvanizes movements and leads to significant change. Consider the impact of Erasmus, a paragon of rational thought, against that of Luther, a figure of intense passion and conviction. Initially, Erasmus garnered attention and respect, yet his logical approach failed to incite actionable change among the people. In stark contrast, Luther, with his fervent and somewhat radical beliefs, succeeded in mobilizing the masses.


> Altman’s departure jeopardized an investment deal that would allow them to sell their stock back to OpenAI, cashing out equity without waiting for the company to go public.

"Smooth talking psychopaths" can make you rich if you get in early enough and aren't left holding the bag.


Sure, but you can remain self aware about what's going on. So many people seem to join the cult of personality, not just recognize the aligned incentives.


Being a smooth-talking Steve Jobs is valuable in and of itself. If you can sell your company to the public with an aura, that's worth more than the CTO/CIO/CMO etc. combined.

Better to be a Thomas Edison than a Tesla.


My immediate thought on this comment was of Steve Jobs, but I’ve backed off a bit. The man seems to have been a bit of an asshole to some (the stories are numerous), but in some ways he seems different.

I think mostly because he didn’t seem devious, or self serving, just blunt and his mission was to the company or product more so than to himself.


He died relatively young and before the general tech backlash, which is why people look back on him fondly. Had he lived, he would've been taken down many notches. I mean, a rich asshole who verbally abuses and berates his staff, upgrades his Benz every six months so he doesn't need a license plate like the plebs, AND parks exclusively in the disabled spot? That was never gonna fly.


Steve’s overriding mission was to the customer. He used his own taste as a proxy for customer goals.


> Better to be a thomas edison than a tesla

A great example of why America is in social and political decline. The base nihilism of the con man figure is now the quintessential American aspiration.


Truly, a man that died penniless while spouting rather fanciful and discredited claims of superweapons (earthquake machines, intercontinental death rays, for two) is clearly the man to emulate.


Yes. Who would want to be the genius who discovered a revolutionary paradigm in energy transmission and created numerous world-changing inventions when you could be the rich asshole who ripped him off?

And also who claimed to have invented a telephone that talked to the dead[0], btw.

[0]https://www.atlasobscura.com/articles/dial-a-ghost-on-thomas...


Truly, the psychopath torturing animals with live current in public is the man to emulate!


Never happened, that's an urban legend


> Eventually, Edison’s team would kill 44 dogs, six calves, and two horses in their quest to discredit alternating current.

https://www.discovermagazine.com/technology/the-cruel-animal...


Okay, I might have to mea culpa here, as I thought it was a reference to the infamous Topsy the elephant, which Edison was not involved with.


In what sense does Altman belong in the same sentence as Musk? I’m completely puzzled.


Well as per above both are smooth talking psychopaths. However you're right that Altman is much poorer and should probably be listed in the second tier.


Why not add Putin, Trump, Mussolini, or any number of other leaders and historical figures?

This is something a sizable portion of the population does. You may not do it yourself, but understand that people are going to behave like this and prepare accordingly.


We could add them. I didn't because I was keeping my statement within the context of the article focusing on a tech leader but sure, all of them fit the bill.


Yep them too.


Musk is the farthest thing from a smooth-talking founder/CEO.


He was mean, even though everyone liked him.

Did the Washington Post really publish this?


I’m not surprised by the lack of quality in WaPo and Business Insider articles. Today, these names are nothing more than rebranded Gawker. I _am_ surprised by the HN crowd eating up the kind of recycled, second-hand palace intrigue these publications profit from. Guess the death throes of legacy media are still some time away.


> who tweets under the pseudonym @roon

A fundamental factual error, about a figure with whom anybody doing OpenAI kremlinology would be extremely familiar.

Inspiring reporting by the Washington Post as usual.


I'm so skeptical of the "he mischaracterized others' views on Helen Toner" narrative, devoid of details.

There are lots of decisions where consensus has to be forged by getting skeptical people onboard. If I had to guess, it probably went something like:

---

SAM: Hey, I think we should remove Helen Toner.

EMPLOYEE A: Hmmm, I'm not sure I agree.

SAM: Well, [reasons x, y, and z], and I'm talking to [employee b] to see if they agree.

EMPLOYEE A: Well, if [reasons x, y, and z] are true, and [employee b] wants to fire her, then I guess I could be convinced.

[later]

SAM: Hey [employee b], I think we should remove Helen Toner. I talked to [employee a] and they said they would be open to it.


There were six board members. Sam and Helen were two of them. The conversations were with the other four board members. So it would be something like this:

Sam to A: We should get rid of Helen. B, C, and D agree with me.

Rinse and repeat. Typical Mean Girls stuff.


That was the original allegation, yes, but the above article makes that allegation specifically for employees as well.



