
He went full-bore on commercialization, scale, and growth. He started to ignore the 'non-profit mission'. He forced out shoddy, underprovisioned product to be first to market. While talking about safety out one side of his mouth, he was pushing the typical profit-driven hypergrowth mindset out the other: "move fast and break things", "build a moat and become a monopoly asap".

Not to mention that he was aggressively fundraising for two companies that would either be OpenAI's customer or sell products to OpenAI.

If OpenAI wants commercial hypergrowth pushing out untested stuff as quickly as possible in typical SV style they should get Altman back. But that does seem to contradict their mission. Why are they even a nonprofit? They should just restructure into a full for-profit juggernaut and stop living in contradiction.



chatgpt was underprovisioned relative to demand, but demand was unprecedented, so it's not really fair to criticize much on that.

(It would have been a much bigger blunder to, say, build out 10x the capacity before launch, without knowing whether there would be a level of demand to support it.)

Also, chatgpt's capabilities are what drove the huge demand, so I'm not sure how you can argue it is "shoddy".


Shipping broken product is a typical strategy to gain first-mover advantage and try to build a moat. Even if it's mostly broken, if it's high value, people will sign up and try to use it.

Alternatively, you can restrict signups and do gradual rollout, smoothing out kinks in the product and increasing provisioning as you go.

In 2016/17 Coinbase was totally broken. Constantly going offline, fucking up orders, taking 10 minutes to load the UI, UI full of bugs, etc. They could have restricted signups but they didn't want to. They wanted as many signups as possible, and decided to live with a busted product and "fix the airplane while it's taking off".

This is all fine, you just need to know your identity. For a company that keeps talking about safety, being careful what they build, being careful what they put out in the wild and its potential externalities, acting recklessly Coinbase-style does not fit the rhetoric. It's the exact opposite of it.


In what way is ChatGPT broken? It goes down from time to time and has minor bugs. But other than that, the main problem is hallucination, which is a well-known limitation of all current LLM products.

This hardly seems equivalent to what you describe from Coinbase, where no doubt people were losing money due to the bad state of the app.

For most startups, one of the most pressing priorities at any time is trying to not go out of business. There is always going to be a difficult balance between waiting for your product to mature and trying to generate revenue and show progress to investors.

Unless I’m totally mistaken, I don’t think that OpenAI’s funding was unlimited or granted without pressure to deliver tangible progress. Though I’d be interested to hear if you know differently. From my perspective, OpenAI acts like a startup because it is one.


A distasteful take on an industry-transforming company. For one, I'm glad OpenAI released models at the pace they did, which not only woke up Google and Meta, but also breathed new life into tech, which had been subsumed by web3. If products like GitHub Copilot and ChatGPT are your definition of "shoddy", then I'd like nothing more than for Sam to accelerate!


I'm just saying that they should stop talking about "safety", while they are releasing AI tech as fast as possible.



