philomath_mn's comments

Why is that?


Because at some point, Anthropic needs to stop hemorrhaging money and actually make some.


Uber, founded in 2009, was finally profitable in 2023. That's a bit longer than 6 months.


Different situation. Uber's entire strategy was to "disrupt" the transportation industry by undercutting everyone, with the promise that eventually adoption (and monetization via data, advertising, etc.) would be large enough for fees not to rise so much as to push consumers and drivers back into the established taxi industry. Anthropic, on the other hand, is a competitor brand in a brand-new industry, one with heavy reliance on capex and, for that matter, exploding employee salaries.


Totally different, except for the fact that it shows that "money" is able to wait 14 years, which is 28x longer than 6 months.


And when Anthropic raises the prices enough, people will jump ship.

That's why you don't pay for a yearly license for anything at this point in time. Pay monthly and evaluate before each bill whether there's something better out already.


And that jumping ship didn't happen IRL: Cursor's ARR jumped 50% after their pricing change.


AI bills are already trivial compared to human time. I pay for Claude Max; all I need to do is save an hour a month and I will be breaking even.


$200/h × 8 × 5 × 4 × 12 = $384,000 per year.

You're like in the top 0.05% of earners in the software field.

Of course, if you save 10 hours per month, the math starts making more sense for others.
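
To make the break-even explicit, here's a quick back-of-the-envelope in Python (the $200/month figure and the hourly rates are the numbers being tossed around in this thread, not official pricing):

    # Back-of-the-envelope break-even for a $200/month plan.
    subscription_per_month = 200  # USD, figure from this thread

    for hourly_rate in (200, 100, 50, 25):
        hours = subscription_per_month / hourly_rate
        print(f"At ${hourly_rate}/h: break even after {hours:.0f} saved hour(s)/month")

    # At $200/h: break even after 1 saved hour(s)/month
    # At $100/h: break even after 2 saved hour(s)/month
    # At $50/h: break even after 4 saved hour(s)/month
    # At $25/h: break even after 8 saved hour(s)/month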

And this is assuming LLM prices are stable, which I very much doubt they are, since everyone is price dumping to get market share.


Across most anglosphere countries and tech cities, wages and salaries far outstrip the cost of AI. AI is already objectively cheaper than human talent in rich countries. Is it as good? Yeah, I'd say it's better than most mid-level to junior engineers. Can it run entirely by itself? No, it still needs a human in the loop (HITL).


Again, those prices aren't stable.

Nobody is investing half a trillion in a tech without expecting a 10x return.

And fairly sure soon those $20/month subscriptions will sell your data, shove ads everywhere AND basically only allow you to get that junior dev for 30 minutes per day or 2 days a month.

And the $200/month will probably be $500-1000 with more limitations.

Still cheap, but AI can't run an entire project or deliver on its own. So the human will be in the loop, as you said, adding at least a partial cost on top.


What’s different is all the open-weight models like Kimi-k2 or Qwen-3 Coder that are as good as and, depending on the task, better than Anthropic’s Sonnet model for 80% less via openrouter [1] and other similar services.

You can use these models through Claude Code; I do it every day.

Some developers are running smaller versions of these LLMs on their own hardware, paying no one.

So I don’t think Anthropic and the other companies can dramatically increase their prices without losing the customers that helped them go from $0 to $4 billion in revenue in 3 years.

Users can easily move between different AI platforms with no lock-in, which makes it harder to increase prices and proceed to enshittify their platforms.

[1]: https://openrouter.ai/
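
For the curious, here's a minimal sketch of what calling one of these open-weight models through openrouter's OpenAI-compatible endpoint looks like (the model ID is illustrative, check their catalog for current names and per-token prices; wiring this into Claude Code itself usually goes through a base-URL override or a proxy rather than this direct call):

    # Minimal sketch: querying an open-weight model via openrouter's
    # OpenAI-compatible API. Model ID is illustrative; see [1] for the
    # current catalog and pricing.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="YOUR_OPENROUTER_KEY",  # placeholder
    )

    resp = client.chat.completions.create(
        model="moonshotai/kimi-k2",  # or a Qwen3 Coder variant
        messages=[{"role": "user", "content": "Refactor this function..."}],
    )
    print(resp.choices[0].message.content)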


The wages aren't stable either. There's going to be gradual convergence.


Oh, by the way, this entire discussion revolves around LLMs being an exponential tech. Real life only works with sigmoids.


Not gonna happen. AI models are fast becoming a commodity.


The percentages in the field are skewed; FAANG employs a vast number of engineers.


No, they don't. FAANG probably employs 400,000 programmers worldwide, and I think the US alone probably has about 3-4 million programmers. Worldwide there are probably 30 million.

And even at FAANG, an SDE in Spain makes 60-100k total comp, not 400k.


On the other hand, it could also mean you are overpaid.


> most people agree that the output is trite and unpleasant to consume

This is likely a selection bias: you only notice the obviously bad outputs. I have created plenty of outputs myself that are good/passable -- you are likely surrounded by these types of outputs without noticing.

Not a panacea, but can be useful.


Anywhere I can follow your takes on LLM-assisted coding?


ChatGPT is $20 / month?


And Gemini is free. https://aistudio.google.com/ gets you free access to their best models.

OpenAI and Anthropic both have free plans as well.


Neither of you read the content of the OP, methinks. Many of the AI skeptics featured in the blog do not leverage agentic frameworks. In fact, they are explicitly turned off by how the chat interface cannot tackle large codebases, as you can't just throw vector embeddings into the chat window. Thus, they write off AI completely. To do this properly you need to set up your IDE and connect it to an API, either local or one of the paid services. I am not aware of any paid service that has an unlimited token plan through the API.


It doesn't matter if they offer unlimited tokens or not. You're not using unlimited tokens. What matters is how many tokens you need to get good results, and whether you can get that many tokens at a good price.
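
To put numbers on that, a rough per-task cost model (the per-million-token prices below are placeholders, substitute whatever your provider currently charges):

    # Rough per-task cost: tokens consumed x price per token.
    # Prices below are placeholders, not any provider's actual rates.
    PRICE_IN = 3.00    # USD per million input tokens
    PRICE_OUT = 15.00  # USD per million output tokens

    def task_cost(input_tokens: int, output_tokens: int) -> float:
        return input_tokens / 1e6 * PRICE_IN + output_tokens / 1e6 * PRICE_OUT

    # A chunky agentic coding task: ~200k tokens in, ~20k out.
    print(f"${task_cost(200_000, 20_000):.2f}")  # -> $0.90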


GitHub Copilot has a free tier these days. It's not 100% free no matter how much you use it, but it's generous enough that you can get a feel for whether it's worth paying for.


Under what circumstances would that cost be high? Is OpenAI going to rip off your app? Why would they waste a second on that when there are better models to be built?


The exact same logic applies to any deductible expense, and yet people think they can buy a business vehicle "for free" because it is deductible.

IDK why this is so hard to understand.


People get confused between deductions and credits, mostly?


By building a good reputation and contact list doing salaried work. Usually you do a good job there, make a bunch of stakeholders happy, and then you have a chance at spinning off on your own.


Best part is that they probably have data to show that all that patience costs the typical passenger mere seconds to a minute on 99% of rides.

This has always bothered me about aggressive or impatient human drivers: they are probably shaving like 30 seconds off their daily commute while greatly increasing the odds of an incident.


Driving is a cooperative game, which we all win if everyone arrives at their destination safely.


I experienced this phenomenon on my electric scooter. I could always scoot faster than someone walking, but ultimately it made little difference because I just spent more time waiting for the crossing signal to turn green. So they'd end up catching up to me.

Now, when there's a long stretch, or when you have to go uphill, that's where the electric scooter begins to shine and makes the largest difference.


You are missing all the times when you are faster by enough to catch a green while the other person arrives on red, so they never catch up. It is easy to see/remember the times they do catch up.


Interesting - I'm definitely substantially faster on a scooter than walking. Part of it is knowing the best routes, but I think even if there are crossing signals, if you're going further than a few blocks there's just no comparison to walking.


This is also why streets inside cities in the Netherlands are being converted to single-lane, except at intersections: the ability to overtake doesn't make traffic flow faster.


I'd love to train a bunch of intelligent cybernetic, half-robot half-organic birds to shit on aggressive cars everywhere.


Please don't do this here.


Or just implement vastly more automated ticketing systems. They are standard in many countries. They could be implemented with limited-purview, privacy-preserving architectures where that aligns with expectations and values.

But people speeding, driving aggressively, driving anti-socially (by trying to speed past lines and cut in at the front), running lights and stops... this could be squashed forever, saving lives and ultimately making life more pleasant for everyone.
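
To sketch what "limited-purview" could mean in practice (purely illustrative, not a description of any deployed system): the camera keeps evidence only for detected violations and persists nothing about compliant drivers.

    # Illustrative sketch of a limited-purview design: only violation
    # evidence is ever persisted; compliant observations are discarded.
    from dataclasses import dataclass

    @dataclass
    class Observation:
        plate: str
        speed_kmh: float
        frame: bytes  # camera frame

    SPEED_LIMIT_KMH = 50.0

    def process(obs: Observation, evidence_store: list) -> None:
        if obs.speed_kmh > SPEED_LIMIT_KMH:
            evidence_store.append((obs.plate, obs.speed_kmh, obs.frame))
        # Otherwise the observation simply goes out of scope: no plate
        # log, no movement history, nothing to subpoena later.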


But they won't be implemented with a privacy preserving architecture. They'll be outsourced to a third party with unknown privacy and security, and eventually be treated as a revenue generator, leading cities to implement rule changes that enhance revenue at the cost of privacy and safety.


It's so frustrating. These things are trivially solved. There's basically a 50/50 shot, every time the light cycles, that someone will illegally take a right on red on the street outside my house. All you need is a single cop sitting there and watching. Or just one camera! Argh.


Signaling humans for bad behavior tends to backfire. It programs us to recreate that situation in anger. We aren't smart enough to naturally learn lessons that way.


Good thing drones are getting smarter.


Well sure, but drones won't shit; that's why we need the organic piece. I guess they could drop rotten fruit in lieu of shit, but then we need a supply chain to restock the rotten fruit in the drones.


Curious what you are expecting when you say "bottom falls out". Are you expecting significant failures of large-scale systems? Or more a point where people recognize some flaw that you see in LLMs?

