Its success stems from a refreshingly unconventional approach to innovation: Liang Wenfeng's philosophy of maintaining a flat organizational structure where researchers have unrestricted access to computing resources and can collaborate freely.
What's particularly striking is their deliberate choice to stay lean and problem-focused, avoiding the bureaucratic bloat that often plagues AI departments at larger companies. By hiring people driven primarily by curiosity and technical challenges rather than career advancement, they've created an environment where genuine innovation can flourish.
AI development doesn't necessarily require massive resources - it's more about fostering the right culture of open collaboration and maintaining focus on the core technical challenges.
The model you described probably works great (not just in AI) as long as it's not your primary and direct source of revenue with which you must pay back investors. Once it becomes your primary and direct source of revenue and you must generate some returns for investors or meet some revenue targets, then whatever you're doing somehow has to align with that revenue stream (often ruining the fun).
You're describing a lot of tech companies like Google that had all these different orgs that were money sinks not related to a direct source of revenue and funded by dominance in search and high margins. And these programs didn't necessarily yield great creative products. Quite the opposite.
Whereas if you have some objective measure that's driving your decisions, like revenue or customer engagement (proxy for usefulness), you can drive great results.
I think either method can work if you have the right culture.
Having the right culture is easier said than done.
The enshittification of nearly everything can largely be attributed to the difficulty of maintaining that culture of open-ended creation without direct accountability to revenue.
I have to disagree there - the enshittification of nearly everything is 100% attributable to the seductive nature of rent-seeking (more specifically, trying to gouge ever more recurring revenue from people you try to ensure have no other options). Even companies with innovative products and positive revenue streams have gone down that road.
> to make a profit and deliver profits to shareholders and investors
This is only part of the reason. It's really to signal that they can provide ever-increasing value: this year's 5% stock price increase doesn't excuse skipping next year's 5% increase. It should be obvious this can't go on forever, but investors today either don't notice or don't care. It's precisely why enshittification happens: costs (the things the business spends on to make its products/services more appealing to customers) go down and revenues (the prices of those products/services) go up.
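To put rough numbers on the "can't go on forever" point, here's a minimal sketch (illustrative figures only, not tied to any real company) of what a steady 5% annual increase compounds to:

    # Illustrative only: a hypothetical stock price compounding at 5%/year.
    price = 100.0  # arbitrary starting price
    for year in range(1, 51):
        price *= 1.05  # this year's 5% stacks on top of every prior year's
        if year in (10, 25, 50):
            print(f"year {year}: {price:.0f}")
    # year 10: 163, year 25: 339, year 50: 1147

A 5% compounder roughly doubles every 14 years and grows more than tenfold over 50, and that trajectory has to be fed somehow - usually by cutting costs and raising prices.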
Making a car and selling the car is profitable (presuming the manufacturer is able to attract buyers). Making a car and selling the car while holding back software features unless the purchaser of the vehicle pays a monthly subscription fee is profitable and rent-seeking. The former used to be what happened and increasingly the latter is what's happening. It's not because the car companies weren't profitable in the past, it's because they want to show investors that they can continue to grow their profits.
They are not profitable. The problem is that they have to find their way to profitability because investors and shareholders need to be paid back. And because they have to do that, you could say that it "compromises" on objectives that would more rapidly advance the field like openly sharing their reasoning architecture.
I think most of what's said here has value, but be wary of survivorship bias. There are also a ton of flat, lean, problem-focused, curiosity-driven startups that _don't_ succeed. Their success definitely has a lot to do with their talent and how they work, but a lot with luck, too.
I don't buy that. Allegedly Google lost to OpenAI because compute resources were allocated evenly and each team then shared with other teams, so it became a popularity contest instead of meritocratic allocation. And then Pichai tried to merge all the different AI teams, making it even worse. (This is from rumors by connected people on podcasts.)
There has to be some structure to put the best ones first. The key problem is how to judge that.
"teams" are a clue to there being an underlying hierarchy and division which may not exist at deepseek. If its a smaller self-organising team of people, there would be no such effect.
It is also common knowledge that Google's internal team and advancement politics are already pathological -- against a winner-takes-all background, cooperation does not work.
While I agree that Google's advancement politics are concerning, it's far-fetched to say there's a winner-takes-all aspect - there's still a lot of remuneration/power/recognition to go around for everyone, just unevenly distributed.
DeepSeek's results speak for themselves - they've built a competitive AI model for millions of dollars that matches the capabilities of systems costing billions.
While debates about resource allocation and organisational structure are interesting, what matters is their demonstrated ability to innovate efficiently.
For the curious, Valve uses a version of the "lattice organization". This style of management structure is attributed to Bill Gore, creator of Gore-Tex.
Either you give people a clear idea of what you will build and they will organize accordingly, or you organize first and they will guess what they are supposed to build according to the org structure.
I mean, I guess you could call it unconventional, since it was the status quo of basically every super-massive tech company way back in the early days of the tech sector, but it has since been utterly eclipsed by like 4 companies the size of nations that can't seem to ship a single app without the input of 6,000 people.