Different situation. Uber's entire strategy was to "disrupt" the transportation industry by undercutting everyone, with the promise that adoption (and monetization via data, advertising, etc.) would eventually be large enough that fees wouldn't have to rise so far that they pushed consumers and drivers back to the established taxi industry. Anthropic, on the other hand, is one competitor brand in a brand-new industry, and one with heavy reliance on capex and exploding employee salaries at that.
And when Anthropic raises the prices enough, people will jump ship.
That's why you don't pay a yearly license for anything at this point in time. Pay monthly and evaluate before each bill whether something better is already out.
Across most anglosphere countries and tech cities, wages and salaries far outstrip what the equivalent AI usage costs. AI is already objectively cheaper than human talent in rich countries. Is it as good? Yeah, I'd say it's better than most mid-level to junior engineers. Can it run entirely by itself? No, it still needs a human in the loop.
Nobody is investing half a trillion in a tech without expecting a 10x return.
And I'm fairly sure that soon those $20/month subscriptions will sell your data, shove ads everywhere, AND basically only give you that junior dev for 30 minutes per day or 2 days a month.
And the $200/month will probably be $500-1000 with more limitations.
Still cheap, but AI can't run an entire project and can't deliver on its own. So a human will be in the loop, as you said, and that's at least a partial cost on top.
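Back-of-the-envelope, even the pessimistic pricing barely dents that. All figures below are made-up round numbers, not real quotes:

    # Feared future subscription price vs. a developer salary.
    # Every figure here is an illustrative assumption.
    sub_monthly = 1_000      # pessimistic future "pro" tier, $/month
    salary = 120_000         # fully loaded junior/mid dev, $/year
    oversight = 0.25         # fraction of a human still in the loop

    ai_yearly = sub_monthly * 12 + oversight * salary
    print(f"AI + partial human: ${ai_yearly:,.0f}/yr vs human: ${salary:,}/yr")
    # -> AI + partial human: $42,000/yr vs human: $120,000/yr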
What's different is all the open-weight models like Kimi-k2 or Qwen-3 Coder that are as good as and, depending on the task, better than Anthropic's Sonnet model for 80% less via openrouter [1] and other similar services.
You can use these models through Claude Code; I do it every day.
Some developers are running smaller versions of these LLMs on their own hardware, paying no one.
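The mechanics are simple either way: OpenRouter (and local servers like Ollama) expose OpenAI-compatible endpoints, so a stock OpenAI client can talk to them. A minimal sketch -- the model ID, key, and prompt are placeholders, so check the provider's listings for current names:

    # Minimal sketch: calling an open-weight model through OpenRouter's
    # OpenAI-compatible API. Model ID and API key are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # or a local server, e.g. http://localhost:11434/v1 (Ollama)
        api_key="YOUR_OPENROUTER_KEY",
    )

    resp = client.chat.completions.create(
        model="moonshotai/kimi-k2",  # or a Qwen-3 Coder variant, etc.
        messages=[{"role": "user", "content": "Write a binary search in Python."}],
    )
    print(resp.choices[0].message.content)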
So I don’t think Anthropic and the other companies can dramatically increase their prices without losing the customers that helped them go from $0 to $4 billion in revenue in 3 years.
Users can easily move between different AI platforms with no lock-in, which makes it harder to raise prices and proceed to enshittify their platforms.
No, they don't. FAANG probably employs 400,000 programmers worldwide, and I think the US alone probably has about 3-4 million programmers. Worldwide there are probably 30 million.
And even for FAANG, an SDE for them in Spain makes 60-100k total comp, not 400k.
> most people agree that the output is trite and unpleasant to consume
This is likely selection bias: you only notice the obviously bad outputs. I have created plenty of outputs myself that are good or at least passable -- you are likely surrounded by these kinds of outputs without noticing.
Neither of you read the content of the OP, methinks. Many of the AI skeptics featured in the blog post don't use agentic frameworks at all. In fact, they are explicitly turned off by how the chat interface can't tackle large codebases, since you can't just throw vector embeddings into the chat window. Thus, they write off AI completely.
In order to do this you need to set up your IDE and connect it to an API, either a local one or one of the paid services. I am not aware of any paid service that offers an unlimited-token plan through the API.
It doesn't matter if they offer unlimited tokens or not. You're not using unlimited tokens. What matters is how many tokens you need to get good results, and whether you can get that many tokens at a good price.
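To put numbers on "a good price": cost is just tokens consumed times the per-million-token rate. A toy calculation with made-up figures (real rates vary by model and provider):

    # Toy cost arithmetic: tokens consumed x $/1M tokens.
    # All rates and usage figures are illustrative, not real quotes.
    input_tok, output_tok = 2_000_000, 300_000  # one heavy day of agentic coding

    rates = {                        # ($/1M input, $/1M output), hypothetical
        "frontier model":   (3.00, 15.00),
        "open-weight host": (0.60,  2.50),
    }
    for name, (r_in, r_out) in rates.items():
        cost = input_tok / 1e6 * r_in + output_tok / 1e6 * r_out
        print(f"{name}: ${cost:.2f}/day")
    # -> frontier model: $10.50/day, open-weight host: $1.95/day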
GitHub Copilot has a free tier these days. It's not 100% free no matter how much you use it, but it's generous enough that you can get a feel for whether it's worth paying for.
Under what circumstances would that cost be high? Is OpenAI going to rip off your app? Why would they waste a second on that when there are better models to be built?
By building a good reputation and contact list doing salaried work. Usually you do a good job there, make a bunch of stakeholders happy, and then you have a chance at spinning off on your own.
Best part is that they probably have data to show that all that patience costs the typical passenger mere seconds to a minute on 99% of rides.
This has always bothered me about aggressive or impatient human drivers: they are probably shaving like 30 seconds off of their daily commute while greatly increasing the odds of an incident.
I experienced this phenomenon on my electric scooter. I could always scoot faster than someone walking, but ultimately it made little difference because I just spent more time waiting for the crossing signal to turn green. So they would end up catching up to me.
Now, when there's a long stretch or when you have to go uphill, that's where the electric scooter begins to shine and makes the largest difference.
You are missing all the times when you are enough faster that you catch a green while the other person arrives on red, so they never catch up. It is easy to see and remember the times they do catch up.
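This is easy to sanity-check with a crude simulation. Every parameter below (speeds, block length, signal timing) is a made-up assumption:

    # Crude Monte Carlo: scooter vs. walker over signalized crossings.
    # All parameters are illustrative assumptions.
    import random

    BLOCKS, BLOCK_M = 6, 100   # crossings and meters per block
    CYCLE, GREEN = 60, 25      # signal cycle / green window, seconds

    def trip(speed_mps, t0):
        """Elapsed trip time, waiting out any red at each crossing."""
        t = t0
        for _ in range(BLOCKS):
            t += BLOCK_M / speed_mps
            if t % CYCLE >= GREEN:       # arrived on red
                t += CYCLE - t % CYCLE   # wait for the next green
        return t - t0

    leads = []
    for _ in range(10_000):
        t0 = random.uniform(0, CYCLE)    # random signal phase at departure
        # walker ~5 km/h (1.4 m/s) vs scooter ~18 km/h (5.0 m/s)
        leads.append(trip(1.4, t0) - trip(5.0, t0))
    print(f"scooter lead over {BLOCKS} blocks: "
          f"avg {sum(leads)/len(leads):.0f}s, min {min(leads):.0f}s")

With these (made-up) numbers the scooter's end-to-end lead never goes negative; the catch-ups you remember happen at individual lights, not over the whole trip.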
Interesting - I'm definitely substantially faster on a scooter than walking. Part of it is knowing the best routes, but I think even if there are crossing signals, if you're going further than a few blocks there's just no comparison to walking.
This is also why streets inside cities in the Netherlands are converting to be single-lane, except at intersections - the ability to overtake doesn't make traffic flow faster.
Or just implement vastly more automated ticketing systems. They are standard in many countries. They could be implemented with limited-purview, privacy-preserving architectures where that aligns with expectations and values.
But people speeding, driving aggressively, driving anti-socially (by trying to speed past lines and cut in at the front), running lights and stops... this could be squashed forever, saving lives and ultimately making life more pleasant for everyone.
But they won't be implemented with a privacy-preserving architecture. They'll be outsourced to a third party with unknown privacy and security practices, and eventually be treated as a revenue generator, leading cities to implement rule changes that enhance revenue at the cost of privacy and safety.
It's so frustrating. These things are trivially solved. There's basically a 50/50 shot, every time the light cycles, that someone will illegally take a right on red on the street outside my house. All you need is a single cop sitting there and watching. Or just one camera! Argh.
Signaling humans for bad behavior tends to backfire. It programs us to recreate that situation in anger; we aren't smart enough to naturally learn lessons that way.
Well sure, but drones won't shit; that's why we need the organic piece. I guess they could drop rotten fruit in lieu of shit, but then we need a supply chain to restock the rotten fruit in the drones.
Curious what you are expecting when you say "bottom falls out". Are you expecting significant failures of large-scale systems? Or more a point where people recognize some flaw that you see in LLMs?