There are now 4 people left on the OpenAI non-profit board, after the ouster of both Sam and Greg today. 3 of the 4 remaining are virtual unknowns, and they control the fate of OpenAI, both the non-profit and the for-profit. Insane.
For anybody, like me, who was wondering who is actually on their board:
>OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.
Sam is gone, Greg is gone, this leaves: Ilya, Adam, Tasha, and Helen.
Tasha: https://www.usmagazine.com/celebrity-news/news/joseph-gordon... (sorry for the very low-quality link; it's the best thing I could find explaining who this person is. There isn't much info on her, or maybe Google results are getting polluted by this news?)
Helen Toner is well-known as well, specifically to those of us who work in AI safety. She is known for being one of the most important people working to halt and reverse any sort of "AI arms race" between the US & China. The recent successes in this regard at the UK AI Safety Summit and the Biden/Xi talks are due in large part to her advocacy.
She is well-connected with Pentagon leaders, who trust her input. She also is one of the hardest-working people among the West's analysts in her efforts to understand and connect with the Chinese side, as she uprooted her life to literally live in Beijing at one point in order to meet with people in the budding Chinese AI Safety community.
She's at roughly the same level of eminence as Dr. Eric Horvitz (Microsoft's Chief Scientific Officer), who has goals similar to hers, and who is an advisor to Biden. Comparing the two, Horvitz is better-connected but Toner is more prolific, and overall they have roughly equal impact.
She has an h-index of 8 :/ That's tiny, for those who are unaware, in pretty much every field. AI papers are racking up huge numbers of citations nowadays because the field is exploding - just goes to show no one doing actual AI research cares about her work.
Anyway, the idea that the Chinese military or leadership will actually sacrifice a potential advantage in the name of AI safety is absurd.
Agreed. She’s not famous for her publications. She’s famous (and intimidating) for being a “power broker” or whatever the term is for the person who is participating in off-the-record one-on-one meetings with military generals.
> Anyway, the idea that the Chinese military or leadership actually will sacrifice a potential advantage in the name of ai safety is absurd.
The point of her work (and mine and Dr. Horvitz’s as well) is to make it clear that putting AI in charge of nuclear weapons and other existential questions is self-defeating. There is no advantage to be gained. Any nation that does this is shooting themselves in the foot, or the heart.
> The point of her work (and mine and Dr. Horvitz’s as well) is to make it clear that putting AI in charge of nuclear weapons and other existential questions is self-defeating
Is part of the advocacy convincing nation-states that an AI arms-race is not similar to a nuclear arms-race amounting to a stalemate?
What's the best place for a non-expert to read about this?
> What's the best place for a non-expert to read about this?
Thank you for your interest! :) I'd recommend skimming some of the papers cited by this working group I'm in called DISARM:SIMC4; we've tried to collect the most relevant papers here in one place:
At a high level, the academic consensus is that combining AI with nuclear command & control does not increase deterrence, yet it does increase the risk of accidents and the chances that terrorists could "catalyze" a great-power conflict.
So, there is no upside to be had, and there is significant downside, both in terms of increasing accidents and in empowering fundamentalist terrorists (e.g. the Islamic State) who would be happy to seize a chance to wipe the US, China, and Russia all off the map and create a "clean slate" for a new Caliphate to rule the world's ashes.
There is no reason at all to connect AI to NC3 except that AI is "the shiny new thing". Not all new things are useful in a given application.
I would say we're likely to see some governance and board composition changes made soon.
Honestly, I would have expected more from Microsoft's attorneys, whether this was overlooked or allowed. Maybe OAI had superior leverage and MS was desperate to get into AI.
"McCauley currently sits on the advisory board of British-founded international Center for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism"
I wonder if she (Tasha? Tascha?) was Sam FTX's girlfriend before Caroline. She hired him at the Centre for Effective Altruism, or whatever it is called, after he left Jane Street.