
Promoting Anthropic and putting down OpenAI doesn't make her better at her job. Her job isn't self promotion.


Unless she has equity in Anthropic (which would be a major conflict of interest), I don't see how this is self promotion...?


I'm guessing the reasoning is something like this...

As a CEO, I'd want your concerns brought to me so they could be addressed. But if they were addressed, that's one less paper Ms. Toner could publish. As a member of the OpenAI board, she should treat seeing the problem solved as more important than her publishing career.


https://openai.com/our-structure

"Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s pzrincipal beneficiary is humanity, not OpenAI investors."

I see. I don't know whether she did discuss any issues with Sam beforehand, but it really does not sound like she had any obligation to do so (this isn't your typical for-profit board, so her duty wasn't to OpenAI as a company but to what OpenAI is ultimately trying to do).


> but it really does not sound like she had any obligation to do so

The optics don't look good though if a board member is complaining publicly.


Frankly, that's irrelevant first-order thinking.

If Sam had let it go, what would have happened? Nothing. Criticism and comparisons already exist and will continue to exist. Having it come from a board member at least provides the counterargument that they're well aware of potential problems, and there's an opportunity to address the gaps if they're confirmed.

If regulators find the argument in the paper reasonable and that has an impact, what's wrong with that? It just means the argument was valid and should be addressed.

They don't need to worry about the commercial side because more than enough money is being poured in.

Safety research is critical by its very nature. You can't expect research to be constrained to speaking only in positive terms.

Both sides should have worried less and carried on.


But her job is to do exactly that. Anybody in this space knows Anthropic was formed with the goal of AI safety. Her paper just backed that up. Is she supposed to lie?


What she is supposed to do is bring the issues to the company so that they can be fixed.

That's the pro safety solution.


Is it a complaint, or a discussion of the need for caution?


It does not sound like what she did helps advance the development of AGI that is broadly beneficial. It simply helps slow down the most advanced current effort and potentially lets a different effort take the lead.


> It simply helps slow down the most advanced current effort

If she believes that the most advanced current effort is heading in the wrong direction, then slowing it down is helpful. "Most advanced" isn't the same as "safest".

> and potentially let a different effort take the lead

Sure but her job isn't to worry about other efforts, it's to worry about whether OpenAI (the non-profit) is developing AGI that is safe (and not whether OpenAI LLC, the for-profit company, makes any money).


On the other hand, if your voice is not usually heard, publishing a paper that gets acknowledged creates more pressure to solve the problem.


She's a board member, one who held approximately 1/4 of the power to fire Sam, and she did eventually exert it. Why would you assume her voice was not heard?


You should assume that most of this happened before the firings.

At that point it was 1/6 of the vote.

But voting is a totally different thing from raising concerns and actually getting them onto the agenda, which then gets voted on further if the board decides to do something about it.

In theory that's 1/6 * 1/6 power if you're alone in pushing for the decision to happen.


I still see no justification for assuming that the board member's voice was not heard before the publication. There's zero evidence for it, and the priors ought to favor the contrary, because she does wield a fairly material form of power. If more evidence emerges, then we can revisit the premise.


> she does wield a fairly material form of power.

How? Being a board member is not enough. There were likely already two votes against her in this case, while the rest were unknown.


These guys are already millionaires. Do you think people writing these kinds of papers really are that greedy?


Is Helen associated with Anthropic?


Apparently an indirect association. From [0]:

Fall 2021

Holden Karnofsky resigns from the Board, citing a potential conflict because his wife, Daniela Amodei, is helping start Anthropic, a major OpenAI competitor, with her brother Dario Amodei. (They all live(d) together.) The exact date of Holden’s resignation is unknown; there was no contemporaneous press release.

Between October and November 2021, Holden was quietly removed from the list of Board Directors on the OpenAI website, and Helen was added (Discussion Source [1]).

0. https://loeber.substack.com/p/a-timeline-of-the-openai-board

1. https://forum.effectivealtruism.org/posts/fmDFytmxwX9qBgcaX/...


Don't tell me there is another polycule in there somewhere.


> Her job isn’t self promotion

Isn’t she an academic? Getting people to pay attention to her is at least half her job.



