This is not true. A different deal was offered to Anthropic, and they refused. Then the DoW turned around and went with OpenAI, even though OpenAI's terms weren't materially different from the terms of the government's prior agreement with Anthropic.
it seems like the oai deal does include the same red lines, plus some more, and the ability for oai to deploy safety systems that limit the model's use cases via technical means
this seems strictly better than what anthropic had. anthropic has ruined their relationship with the US govt, giving oai a good negotiating hand
the oai folks are good at making deals, just look at all the complex funding arrangements they have
"OAI wins by playing the government's game" is such a catastrophically bad take.
> anthropic has ruined their relationship with the US govt, giving oai a good negotiating hand
You want to try defending this ridiculous statement a bit more thoroughly?
For a start, the designation by the government of a company as a supply chain risk is not a negotiating tool. It may well be found to be arbitrary and capricious once the courts look at it. Businesses have rights too.
For another, why do you think OAI was able to make what looks like the same deal? Anthropic was willing to say yes to anything lawful up to their red lines, and it was still a no. Why turn around and give OAI exactly the same thing, unless it's not really what it looks like?
And Altman is always looking for the next buck.
All these supposedly impressive, complex funding arrangements have OAI on the hook to firms like Oracle for hundreds of billions of dollars. There's no indication at all of how this unprofitable business will become a trillion-dollar juggernaut.
you're right, supply chain risk is not a negotiating tool. it's spite after talks have ended. it indicates a ruined relationship
the oai deal is similar, but it includes technical safeguards. I think anthropic would have wanted the oai deal
the deal wasn't successful only because the govt was on the rebound from anthropic. the military prefers boundaries to be technical, not contractual
they can try using it, and trust that it will only operate within its designed limits, where the output is reliable
technical barriers help prevent both accidental and bad-faith misuse. a contract allows both kinds, enforced only by lawsuits after the fact, and filing in court to dispute the terms is not always an option
> supply chain risk is not a negotiating tool. it's spite after talks have ended.
No. It's unlawful abuse of power.
> the military prefers boundaries to be technical, not contractual
That's nice for the military. Meanwhile, Anthropic has the right to refuse the use of its IP without being subject to punishment by the government.
You seem to me to be irretrievably "deal-brained", and not at all concerned about the obvious abuse of power by the government here, or the constant display of bad faith by gov't officials.
Adding more to this: IIRC the US govt threatened to invoke laws that have never been used against an American company in the entire history of the US, over two conditions:
1. No global surveillance on citizens
2. No autonomous killing machines (essentially)
That was it. Anthropic was fine with everything else, but they couldn't (in good conscience?) agree to these two things, and just these two very reasonable demands caused the govt to spiral this badly.
anthropic has nothing but a contract to enforce appropriate usage of their models. there are no safety rails; they disabled their standard safety systems
openai can deploy safety systems of their own making
from the military perspective this is preferable because they just use the tool -- if it works, it works, and if it doesn't, they'll use another one. with the anthropic model the military needs a legal opinion before they can use the tool, or they might misuse it by accident
this is also preferable if you think the government is untrustworthy. an untrustworthy government may not obey the contract, but they will have a hard time subverting safety systems that openai builds or trains into the model
- When has any AI company shipped "safeguards" that aren't trivially bypassed by mid bloggers? Just one example would be fine.
- The conventional wisdom is that OAI's R&D (including safety) is significantly behind Anthropic's.
- OpenAI is constantly starved for funding. They don't make money. They have every incentive to say yes to a deal that entrenches them into govt systems, regardless of the externalities
anthropic previously agreed to deploy their models in this context with nothing but a contract to enforce their red lines -- they even disabled their safety systems!
per the announcement, openai can include safety systems of their own making, including ones to prevent their red lines from being crossed. that seems like a more robust solution, including in the face of an untrustworthy government
> this undervalues how financial engineering allows more ideas and companies to be funded
I think the comment is about the marginal utility of additional workers at Jane St over, perhaps, DE Shaw Research. The same caliber and education of person might be applied to understanding drug mechanisms, or to shaving milliseconds off trades.
Is the marginal benefit to the world greater if someone is advancing financial engineering? I don't think it's obvious that our increased complexity is, itself, yielding further increases in 'allowing more ideas and companies to be funded', except in the sense where already-wealthy people gain more discretionary income which they may decide to spend on their pet projects. Futures have existed for far longer than modern high-speed derivatives markets; are we helping farmers more when we allow futures to be traded more quickly?
But I disagree that the limit is funding—it's simply a lack of concerted interest. We accept that we should spend tax money on rewarding certain financial activities, and we create a system that disproportionately rewards people who facilitate these activities. But we might restructure things so people are incentivized to do research instead of financial engineering.
I think the fundamental idea is that things of value need to be extracted or manufactured at some point and we're not set up to reward people studying new extractive tools or new manufacturing processes when those people could instead work on finance products.
I think these are totally different things. HFT firms and hedge funds are not "allowing more ideas to be funded". Finance in general can indeed be good, but I think it's much harder to argue for the net benefit of firms like Jane Street or Citadel.
"SHOULD generally means: some people might require it."
No, it absolutely does not mean that. It means, by the explicit definition quoted right here (that text is the definition), that no one requires it. They can't require it and still be conforming to the spec or RFC. The entire point of that text is to define that and remove all ambiguity about it.
It's not required by anyone.
The reason it's there at all, and has "should" is that it's useful and helpful and good to include, all else being equal.
But by the very definition itself, no people require it. No people are allowed to require it.
Heat travels when there is a thermal gradient. What thermally superconducting material are you going to make your cube out of that the surface temperature is exactly the same as the core temperature? If you don't have one, then to keep the h100 at 70c, the radiators have to be colder. How much more radiator area do you need then?
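For anyone who wants to run the number: a minimal Stefan-Boltzmann sketch, assuming 1 kW of waste heat per GPU, emissivity 0.9, a one-sided radiator, and zero solar load (all my assumptions, and all generous to the satellite):

    # Back-of-the-envelope radiator sizing via the Stefan-Boltzmann law.
    # Assumptions (mine): 1 kW of waste heat, emissivity 0.9, one-sided
    # radiator, and no solar/Earth heating at all.

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)
    EMISSIVITY = 0.9  # assumed, typical for a good radiator coating
    HEAT_W = 1000.0   # waste heat to reject, W

    def radiator_area_m2(radiator_temp_k: float) -> float:
        """Area needed to radiate HEAT_W at the given radiator temperature."""
        return HEAT_W / (EMISSIVITY * SIGMA * radiator_temp_k ** 4)

    for temp_c in (70, 50, 30, 10):
        temp_k = temp_c + 273.15
        print(f"radiator at {temp_c:>3} C -> {radiator_area_m2(temp_k):5.2f} m^2")

Required area scales as 1/T^4, so every degree you pull the radiator below the chip's 70 C to maintain a usable gradient costs you more panel: roughly 1.4 m^2 per kW at 70 C, over 3 m^2 per kW at 10 C, before you even account for sunlight.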
Have you considered the effects of insolation? Sunlight heats things too.
How efficient is your power supply and how much waste heat is generated delivering 1kW to your h100?
How do you move data between the ground and your satellite? How much power does that take?
If it's in LEO, how many thermal cycles can your h100 survive? If it's not in LEO, go back to the previous question and add an order of magnitude.
I could go on, but honestly those details - while individually solvable - don't matter because there is no world where you would not be better off taking the exact same h100 and installing it somewhere on the ground instead
The typical GPU cloud machine will have 8 H100s in a box. I didn't check your math, but if a single machine needs a 32-square-meter radiator, 200 machines would need radiators comparable in size to the ISS.
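A quick sanity check on that, with numbers I'm assuming myself (radiator areal density, launch pricing; the ISS comparison figures are approximate):

    # Rough scale check for the numbers above (assumptions are mine):
    # 200 boxes x 32 m^2 of radiator each, radiator areal density ~3 kg/m^2
    # (optimistic for deployables), launch cost ~$3,000/kg to LEO
    # (roughly Falcon 9 class pricing).

    machines = 200
    radiator_m2_each = 32
    total_radiator_m2 = machines * radiator_m2_each   # 6,400 m^2
    # For comparison, the ISS solar arrays span roughly 2,500 m^2 and the
    # whole station is about the size of a football field.

    kg_per_m2 = 3.0    # assumed radiator areal density
    usd_per_kg = 3000  # assumed LEO launch price
    radiator_mass_kg = total_radiator_m2 * kg_per_m2  # ~19,200 kg
    launch_cost_usd = radiator_mass_kg * usd_per_kg   # ~$58M, radiators alone

    print(f"total radiator area: {total_radiator_m2:,} m^2")
    print(f"radiator mass:       {radiator_mass_kg:,.0f} kg")
    print(f"launch cost:         ${launch_cost_usd / 1e6:.0f}M "
          "(radiators only; no servers, power, structure, or avionics)")

And that ~$58M is just the radiators. The servers, power generation, structure, and avionics all launch on top of it.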
How much does it cost to launch just the mass of something that big?
Do you see how unrealistic this is?
Given that budget, I can bundle in an SMR nuclear reactor and still have change left.