I’m starting to think you’re right but only because software engineers don’t seem to actually value or care about open source anymore. Apparently we have collectively forgotten how bad it can be to let your tools own you instead of the other way around.
Maybe another symptom of Silicon Valley hustle culture — nobody cares about the long term consequences if you can make a quick buck.
There's nothing stopping you from using OpenCode with any other provider, including Anthropic: you just can't get the subsidised pricing while doing so. This is irritating, yes - it certainly disincentivises me from trying out OpenCode - but it's also, like, not unexpected?
In any case, the long-term solution for true openness is to be able to run open-weight models locally or through third-party inference providers.
Yes, but why are they subsidizing the pricing and requiring you to use their closed-source client to benefit from it? It’s the same reason the witch in the story of Hansel and Gretel was giving out free candy.
Is this a serious question? Why would they subsidize people when there is no benefit to them? Subsidization means they are LOSING money when people use it. If the customers using 3rd-party clients are unwilling to pay a price that is profitable for them, then losing those customers is a very positive thing for Anthropic, not a negative one.
The reason to subsidize is the exact reason you are worried about. Lock in, network effects, economies of scale, etc.
It very obviously is, you'd have to be the most naive of the most naive to think there isn't a path for them to jack prices later. Maybe that's not nefarious depending on your definition, but the point is you will definitely be paying more in the future.
I mean, this is the playbook of every tech company for the past 30 years. You sell something at a huge loss to gain market share and force your competitors to exit, and then you begin value extraction from your, now captive, customer base. You lower quality, raise prices, and cut support, and you do it slowly enough that nobody is hit with enough friction at one time to walk.
If you expect anything else, I don't know what to tell you. This is very much the standard. In fact it's SO much the standard that companies don't even have a choice. If you choose not to do this, then the people who are doing this will just undercut you and run you out.
The key piece in this is that, once the value extraction begins, it can't just strive for profitability. No, it also has to make up for the past 10 or 15 years of losses on top of that. So it's not like the product will just get expensive enough to sustain itself like you'd expect with a typical product. It'll get much more expensive than that.
> software engineers don’t seem to actually value or care about open source anymore.
Hate to break it to you, but the vast majority never did. See any thread about Linux on HN. Maybe the Open Source wave was before my time, but ever since I came into the industry around 2015, "caring about open source" has been the minority view. It's Windows/Mac/Photoshop/etc. all the way up and down.
> Apparently we have collectively forgotten how bad it can be to let your tools own you instead of the other way around.
We've collectively forgotten because a large enough number of professional developers have never experienced anything other than a thriving open source ecosystem.
As with everything else (finance and politics come to mind in particular), humans will have to learn the same lessons the hard way over and over. Unfortunately, I think we're at the beginning of that lesson and hope the experience doesn't negatively impact me too much.
It might make sense from Anthropic’s perspective, but as a user of these tools I think it would be a huge mistake to build your workflow around Claude Code when they are pushing vendor lock-in this aggressively.
Making this mistake could end up being the AI equivalent of choosing Oracle over Postgres
As a user of Claude Code via the API (the expensive way), Anthropic's "huge mistake" is capping monthly spend (billed in advance and pay-as-you-go, some $500 - $1,500 at a time, by credit card) at just $5,000 a month.
It's a supposedly professional tool with a value proposition that requires being in your workflow. Are you going to keep using a power drill on your construction site that bricks itself the last week or two of every month?
The error message says to contact support. They then point you to an enterprise plan for 150 seats when you have only a couple dozen devs. Note that 5000 / 25 = 200 ... coincidence? Yeah, you're forbidden from giving them more than a Max-like $200/dev/month for the usage-based API that's "so expensive".
They are literally saying "please don't give us any more money this month, thanks".
I imagine it's a combination of stop-loss and market share. If larger shops use up the compute, you can't capture as many customers by headcount.
// There was a figure floating around for o3, an astonishing model punching far above the weights (ahem) of models that came after, suggesting that its thinkiest mode cost on the order of $3,500 to run a single deep research task. Perhaps OpenAI can afford that, while Anthropic can't.
That leads to the obvious question: is the API next on the chopping block? Or would they just increase API pricing to the point where A) they're making a profit off it and B) nobody would use the API just for a different client?
I'm pretty sure everyone is pricing their APIs to break even, maybe at a profit if people use prompt caching properly (like GPT-5 can if you mark the prompts properly).
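For reference, this is roughly what explicit prompt caching looks like on Anthropic's side; a minimal sketch with their Python SDK, where the model name, system prompt, and question are just placeholder examples:

    # Minimal sketch of explicit prompt caching with the Anthropic Python SDK.
    # The model name and prompt contents below are placeholder examples.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    BIG_STABLE_PREFIX = "...project conventions, style guide, API reference..."  # placeholder

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # example model name
        max_tokens=1024,
        system=[
            {
                "type": "text",
                "text": BIG_STABLE_PREFIX,
                # Mark the large, unchanging prefix as cacheable so repeat
                # requests reuse it at the cheaper cached-input rate.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        messages=[{"role": "user", "content": "Summarise the repo layout."}],
    )

    # usage reports cache_creation_input_tokens / cache_read_input_tokens,
    # which show whether the cached prefix was actually hit.
    print(response.usage)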
Sounds plausible that they're not really making any money on it. Arbitrary and inflexible pricing policies aren't unusual, but it sounds easy enough for a new, rapidly growing company to let the account managers decide which companies they might have a chance of upselling 150-seat enterprise licenses to, and just bill overage for everyone else...
Their target is the enterprise anyway, so they are apparently willing to enrage their non-CC user base over vendor lock-in.
But this is not the equivalent of Oracle over Postgres: those are independent technology stacks that each implement their own relational database. Here we're talking about OpenCode, which depends on Claude models to work "as a better Claude" (according to the enraged users on the web). Of course, one can still use OC with a bazillion other models, but Anthropic is saying that if you want the Claude Code experience, you gotta use the CC agent, period.
Now put yourself in the Anthropic support person's shoes, and suppose you have to answer an issue from a Claude Max user who is mad that OC is throwing errors when calling a tool during a vibe session, probably because the multi-million-dollar Sonnet model is telling OC to do something it can't, since it's not the Claude agent. Claude models are fine-tuned for their own agent! If the support person replies "OC is an unsupported agent for Claude Code Max", you get an enraged customer anyway, so you might as well cut the whole thing off at the root.
If you’ve only got a CLAUDE.md and sub-agent definitions in markdown, it's pretty easy to do at the moment, although more of their feature set is moving in a direction that doesn’t have 1:1 equivalents in other tools.
The client is closed source for a reason and they issued DMCA takedowns against people who published sourcemaps for a reason.
The people defending Anthropic because “muh terms of service” are completely missing the point. These are bad terms. You should not accept these terms and bet the future of your business on proprietary tooling like this. It might be a good deal right now, but they only want to lock you in so that they can screw you later.
By only supporting their own cloud service for remote execution & slowly adding more and more proprietary integration points that are incompatible with other tools.
But the switching cost to a different CLI coding tool is close to zero… I truly don’t understand the argument that using Claude Code means betting your business on that particular tool. I use Claude Code daily, but if tomorrow they massively raised prices, made the tool worse, or whatever, I’d just switch to a competitor and keep working like nothing happened.
To be clear, I’ve seen this sentiment across various comments not just yours, but I just don’t agree with it.
They wouldn’t require you to use their closed-source client if they weren’t planning on using it to extract value from you later. It’s still early & a lot more capabilities are coming to these tools in the months ahead. Claude Code or an equivalent will be a full IDE replacement, and a lot of the integration and automation mechanisms are going to be proprietary. Want to offload some of that to the cloud? Claude Code Web is your only option. Someone else drops a better model, or a model that’s situationally better at certain types of tasks? You can’t use it unless you move everything off of that stack.
As an example, this is the exact type of thing Anthropic doesn’t want you to be able to build with Claude & it’s why they want you on their proprietary tooling:
Altman really is a generational bullshit artist. Exaggerating the value of his talent while pretending he hasn't already lost a lot of his most valuable people (he has).
It makes sense he focuses on Meta in this interview -- his other competitors actually have taken some of his top talent and are producing better models than GPT now.
There is technology that can greatly benefit solopreneurs, but it isn’t AI. It’s just an evolution of the same tech that has been a huge benefit to early-stage projects for a while now. Knowing what you’re doing with highly productive tools like Rails or Laravel (or whatever your preference is) will have a far greater impact than some LLM.
Yeah I really don’t get why people keep hyping AI like this. It really doesn’t make things go that much faster. At best you’re able to generate prototypes more quickly + get better autocomplete. Nothing particularly revolutionary there.
Anyone claiming a generalized 100x, 10x, or even 2x productivity gain is either delusional or trying to sell you something. Possibly both.
The companies saying they are reducing the size of their workforce because of gains they’re getting from AI are probably just telling investors what they want to hear while cutting costs for the same reason they always have.
I felt this way until Claude Code. It works much, much better in large codebases than anything else I've tried. It implements smaller features, including ones with FE + API changes and tests for each, pretty well. I'm going to try cloning our main repo multiple times to get it working on multiple branches at once.
> Anyone claiming a generalized 100x, 10x, or even 2x productivity gain is either delusional or trying to sell you something
I don't understand how anyone who spends a couple hours or more per day coding new functionality couldn't at least double their productivity with LLMs, unless their organisation prohibits LLM usage. Even just limiting the LLM to writing unit tests would still save that much time.
The thing is, tab complete using LLMs is really great. But I still read it, then press tab, then press enter, then type a few chars, then wait.
Sometimes I get 1 good line from that. Sometimes I get 30. Usually I get 10 bad lines and have to type a bit more to coax out 8 good ones.
It just looks faster but typing was NEVER the bottleneck for coding.
Where it really flies, though, is building tooling around a well-known API. FML if I ever have to write AWS CDK or AWS API calls without an LLM again. You're looking at ages of reading through really bad docs to get it going.
For that, which is a 1% task of convenience for most jobs, I can use most LLM output verbatim. But like I said, that's less than 1% of the job, and it only comes up once the core software is done.
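To give a flavour of what I mean by using the output verbatim: a minimal sketch of the kind of call involved, using boto3, where the region, AMI ID, and tag values are placeholders rather than anything real:

    # Minimal sketch of the kind of AWS boilerplate I mean, using boto3.
    # The region, AMI ID, and tag values are placeholders, not real resources.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a single instance and tag it so it can be found again later.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[
            {
                "ResourceType": "instance",
                "Tags": [{"Key": "Name", "Value": "example-instance"}],
            }
        ],
    )

    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Started {instance_id}")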
Did you notice how much better things are today (e.g. Claude Sonnet 3.7) than they were a year ago? Do you really expect things won't improve further in the next year? Even R1, a public-weights model, can add huge value when left to code in a loop.
Hm, I’m a CDK pro (4 years of full-time experience). I’ve used all the LLMs except the latest Claude model. All were bad in my estimation and just got in the way. I don’t use them for CDK code anymore.
Yeah! That's exactly the thing. It's passable for novices and bad for experts. But I don't need expert-level CDK; I need an instance to start up. Hate it or love it, that's all I need.
The bottleneck is not putting code on the hard drive, or turning my thoughts into code — the productivity bottleneck is thinking and frankly no LLM is thinking better than an average developer.
I don’t know what news you’ve all been reading, but I don’t see anything about Trump cancelling the CHIPS Act. The two main things I’ve seen are trying to get TSMC to take over Intel’s manufacturing, and wanting to remove things like union labor requirements from the CHIPS Act.
The delays have nothing to do with it being in Ohio, and the CHIPS Act didn’t dictate where these would be built. Intel picked the site, just like TSMC and others picked theirs, with cost of land, energy, labor, etc. all taken into account. The “flyover” states are the more cost-effective place to do these things.
With Ohio specifically, it’s being built just outside a city, too. Yes, we have those here. It’s actually not just one big state full of rural hicks demanding handouts from the government.
Elixir is a little less flexible since it doesn’t have the JVM interop, but for domains where it’s a good fit I think it’s even better at most of this stuff (and easier to teach to people unfamiliar with FP or Lisps).
I think it's less of a distinctly populist thing and more of a "convert's zeal". I have seen this type of language used with a lot of different technologies over the years when there's a lot of hype and momentum around them.
Ultimately there's just a lot of tradeoffs to make in this space and I'm glad that we have more options now. For a while it seemed like you were either doing SPAs or everyone thought you were doing it "wrong", but things like HTMX, LiveView, Livewire, and Hotwire make it much easier to build good backend-driven web apps.
Inertia.js 2.0 is a really compelling middle ground as well & IMO it's the best thing to come out of Laravel. You get most of the benefits of a JS "metaframework", but with adapters for many different backend and frontend frameworks.
Inertia is a godsend. We are using it at work and it’s such an amazing combination of the best of both worlds without all the craziness going on with the more popular meta frameworks.