I'm guessing the reasoning is something like this...
As a CEO I'd want your concerns brought to me so they could be addressed. But if they were addressed, that is one less paper that could be published by Ms. Toner. As a member of the OpenAI board, seeing the problem solved is more important to OpenAI than her publishing career.
"Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s pzrincipal beneficiary is humanity, not OpenAI investors."
I see. I don't know whether she did discuss any issues with Sam beforehand, but it really does not sound like she had any obligation to do so (this isn't your typical for-profit board, so her duty wasn't to OpenAI as a company but to what OpenAI is ultimately trying to do).
Frankly, that's irrelevant first-order thinking.
If Sam had let it go, what would have happened? Nothing. Criticism and comparisons already exist and will continue to exist. Having it come from a board member at least provides the counterargument that they're well aware of potential problems, and there is an opportunity to address gaps if they are confirmed.
If regulators find an argument in the paper reasonable and that has an impact, what's wrong with that? It just means the argument was valid and should be addressed.
They don't need to worry about the commercial side because more than enough money is being poured in.
Safety research is critical by its very nature. You can't expect the research to be constrained to speaking only in positive terms.
But her job is to do exactly that. Anybody in this space knows Anthropic was formed with the goal of AI safety. Her paper just backed that up. Is she supposed to lie?
It does not sound like what she did helps advance the development of AGI that is broadly beneficial. It simply helps slow down the most advanced current effort, and potentially lets a different effort take the lead.
> It simply helps slow down the most advanced current effort
If she believes that the most advanced current effort is heading in the wrong direction, then slowing it down is helpful. "Most advanced" isn't the same as "safest".
> and potentially let a different effort take the lead
Sure but her job isn't to worry about other efforts, it's to worry about whether OpenAI (the non-profit) is developing AGI that is safe (and not whether OpenAI LLC, the for-profit company, makes any money).
She's a board member who had approximately 1/4 of the power to fire Sam, and she did eventually exert it. Why would you assume her voice was not heard?
You should assume that most of this happened before the firings.
Then it was 1/6 of the vote.
But voting is a totally different thing from raising concerns and then actually getting them onto the agenda, which would then be voted on if they decide to do something about it.
In theory that's 1/6 * 1/6 of the power to get the decision through if you are alone in pushing it.
I still see no justification for assuming that the board member's voice was not heard before the publication. There's zero evidence for it, while the priors ought to favor the contrary, because she does wield a fairly material form of power. If more evidence does emerge, then we could revisit the premise.
Holden Karnofsky resigns from the Board, citing a potential conflict because his wife, Daniela Amodei, is helping start Anthropic, a major OpenAI competitor, with her brother Dario Amodei. (They all live(d) together.) The exact date of Holden’s resignation is unknown; there was no contemporaneous press release.
Between October and November 2021, Holden was quietly removed from the list of Board Directors on the OpenAI website, and Helen was added (Discussion Source [1]).