Things change. Allies just used to be threatened in private. Even today, the UK, Canada, and others are supporting the US and Israel in taking down the regime.
I’m not suggesting things haven’t or can’t change, but I am suggesting we haven’t seen any pivotal turning points, at least not yet.
We have, they were just a long time ago, and people are only just now noticing because Obama and Biden were relatively restrained and Trump I was simply incompetent.
But all the things that allow Trump to do what he's doing happened a long time ago.
I get where you're coming from. Every US administration (at least in the last 100 years or so) has been corrupt, flouted the constitution, and started illegal wars.
It does kind of drive me nuts that people don't remember Bush very well, and give Obama and Biden passes on their own crimes.
That said, I do honestly believe that Trump has taken the level of corruption and abject cruelty to a new level. But this was inevitable; both parties have spent 50 years building this reality. I won't be surprised when the next Democrat also deports millions and starts illegal wars.
I don't disagree with you, in general. My point here only was that I don't think the specific language used is correct. For me a turning point would be something like the Japanese declaring war on the United States and attacking Pearl Harbor, or Napoleon being defeated by the Duke of Wellington, or the French Revolution. Bombing Iran (we've done stuff like that before), arresting Maduro (as we did Noriega), federal vs state standoffs - yep, done that before. Largely this is the routine mess of democracy, and it's heightened and more exposed because it's the United States of America and also because our republic has 340 million people from all over the world - there are going to be some differences of opinion.
Of course "this time" can be different for these things but I'm not sure I've seen anything I'd construe as a turning point or significant change or anything quite like that.
I think this ship has sailed pretty hard by now. Pretty much any app you can possibly use, from iTerm to Slack, is sending data to third-party LLMs (sometimes explicitly, most times as small features here and there).
It’s just irrelevant for most users. These companies are getting more adoption than they can handle, no matter how clunky their desktop apps are. They’re optimizing for experimentation. Not performance.
While this may be true for casual users, for dev-native products like Codex the desktop experience actually matters a lot. When you are living in the tool for hours, latency, keyboard handling, file system access, and OS-level integration stop being "nice to have" and start affecting real productivity. Web or Electron apps are fine for experimentation, but they hit a ceiling fast for serious workflows, especially if the ICP (ideal customer profile) is mostly technical users.
And they're pretty much the only example of an embedded browser architecture actually performing tolerably and integrating well with the native environment.
Fair, I think I'm certainly in the minority. Now more than ever, with an increasing number of non-technical people exploring vibe coding, 'good enough' really is good enough for most users.
It's not irrelevant for developers, nor for users. TikTok has shown that users deeply care about the experience, and they'll flock en masse to something that has a good one.
More adoption? I don't think so... It feels to me that these models and tools are getting more verbose and consuming more tokens to compensate for a decrease in usage. I know my usage of these tools has fallen off a cliff as it became glaringly obvious they're useful in very limited scopes.
I think most people start off overusing these tools, then they find the few small things that genuinely improve their workflows which tend to be isolated and small tasks.
Moltbot et al., to me, seem like a psyop by these companies to get token consumption back to levels that justify the investments they need. The clock is ticking; they need more money.
I'd put my money on token prices doubling to tripling over the next 12-24 months.
What do weights have to do with how much it costs to run inference? Inference is heavily subsidized, the economics of it don't make any sense.
Anthropic and OpenAI could open source their models and it wouldn't make it any cheaper to run them. You still need $500k in GPUs and a boatload of electricity to serve like 3 concurrent sessions at a decent tok/ps.
There are no open source models, Chinese or otherwise, that can be run profitably while giving you productivity gains comparable to a foundation model. No matter what, running LLMs is expensive, the capex required per tok/ps is only increasing, and the models are only getting more compute intensive.
The hardware market literally has to crash for this to make any sense from a profitability standpoint, and I don't see that happening, therefore prices have to go up. You can't just lose billions year after year forever. None of this makes sense to me. This is simple math, but everyone is literally delusional atm.
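The "simple math" being gestured at can be sketched as a back-of-envelope calculation. Every number below is a hypothetical assumption chosen to mirror the figures thrown around in this thread ($500k in GPUs, ~3 concurrent sessions), not a measured cost:

```python
# Back-of-envelope inference unit economics.
# All figures are hypothetical assumptions for illustration only.

capex = 500_000.0           # GPU server cost in USD (assumed, per the thread)
amortization_years = 3      # typical hardware depreciation window (assumed)
power_kw = 10.0             # sustained power draw of the node (assumed)
electricity_per_kwh = 0.10  # USD per kWh, assumed industrial rate
throughput_tok_s = 3 * 50   # 3 concurrent sessions at ~50 tok/s each (assumed)

seconds = amortization_years * 365 * 24 * 3600
tokens_lifetime = throughput_tok_s * seconds  # tokens served over the hardware's life

power_cost = power_kw * (seconds / 3600) * electricity_per_kwh
cost_per_million = (capex + power_cost) / tokens_lifetime * 1_000_000

print(f"lifetime tokens: {tokens_lifetime / 1e9:.1f}B")
print(f"raw serving cost per 1M tokens: ${cost_per_million:.2f}")
```

Under these deliberately optimistic assumptions (100% utilization around the clock, no labor, networking, or margin) the raw serving cost lands around $37 per 1M tokens, well above typical list prices, which is the shape of this commenter's argument. Real batching efficiency and utilization could move the result a lot in either direction, which is exactly what the replies below dispute.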
It's a fantasy to believe that every single one of these 8 providers is serving at incredibly subsidized dumping prices 50% below cost, and that once that runs out you'll suddenly pay double per 1M tokens of this model. It's incredibly competitive with Sonnet 4.5 for coding at 20% of the token price.
I encourage you to become more familiar with the market and stop overextrapolating purely based on rumored OpenAI numbers.
I'm not making any guesses, I happen to know for a fact what it costs. Please go try to sell inference and compete on price. You actually have no clue what you're talking about. I knew when I sent that response I was going to get "but Kimi!"
The numbers you stated sound off ($500k capex + electricity per 3 concurrent requests?). Especially now that the frontier has moved to ultra sparse MoE architectures. I’ve also read a couple of commodity inference providers claiming that their unit economics are profitable.
Okay, so you are claiming "every single one of those 8 providers, along with all others who don't serve openrouter but are at similar price points, are subsidizing by more than 50%".
That's an incredibly bold claim that would need quite a bit of evidence, and just waving "$500k in GPUs" around isn't it. Especially when individuals are reporting more than enough tps at native int4 with <$80k setups, without any of the scaling benefits that commercial inference providers have.
Imagine thinking that $80k setups to run Kimi and serve a single user session is evidence that inference providers are running at cost, or even close to it. Or that this fact is some sort of proof that token pricing will come down. All you one-shotted llm dependents said the same thing about Deepseek.
I know you need to cope because your competency is 1:1 correlated with the quality and quantity of tokens you can afford, so have fun with your think-for-me SaaS while you can afford it. You have no clue how much engineering goes into providing inference at scale. I wasn't even including the cost of labor.
You are literally telling me that an open source model costs $80k to run a single session "at decent tok/ps" (whatever that means) as proof of something. How come people aren't dropping Anthropic for Kimi if it costs 10x less? You aren't a serious person worth engaging with.
It really is insane how far it's gone. All of the subsidization and free usage is deeply anticompetitive, and it is only a profitable decision if they can recoup all the losses. It's either a bubble and everything will crash, or within a few years, once the supplier market settles, they will start engaging in cartel-like behavior and ratchet up prices to turn a profit.
I suspect making the models more verbose is also a source of inflation. You’d expect an advanced model to nail down the problem succinctly, rather than spawning a swarm of agents that brute force something resembling an answer. Biggest scam ever.
80% of the world population back then is less than 50% of the current number of people working in farming, so the assertion isn't wrong, even if fewer people are working in farming proportionally (as it should be, as more complex, desirable, and higher-paid options exist).
We're at a point in the LLM curve where there are two huge, polarized groups of developers:
- the ones who don't see any value in AI for coding and dismiss it as a fad at every chance they get
- the ones who are in love with the new tools and adopt as many as they can into their workflows
I know the arguments of the second bunch well. But very curious about what the "AI is a fad" bunch thinks will happen. Are we going to suddenly realize all these productivity gains people are claiming are all lies and go back to coding by typing characters on emacs and memorizing CS books? Will StackOverflow suddenly return as the most popular source of copy-paste code slop?
> Are we going to suddenly realize all these productivity gains people are claiming are all lies
I'll grant you that many have become adamant that LLMs suddenly, out of the blue, became useful just last week, which is much too soon to have any concrete data on. But coding agents in some shape have been around for quite a while, and the data we do have offers no suggestion of productivity gains yet.
And I'm not sure many are even claiming that they are more productive, just that the LLMs have allowed them to carry out a task faster. Here's the thing: in my experience at least, coding was never the bottleneck. The bottleneck has always been the business people squabbling over what the customers and the business need. They haven't yet figured out how to get past their egos.
The most promise for productivity seems to come from lone startup founders who aren't constrained by the squabbling found in a larger organization and can now get more done thanks to individual tasks getting faster. However, the economic conditions are not favourable to that environment right now. Consumers are feeling tapped out, marketing has become way harder, and, even when everything else is in place, nobody is going to consider your "SaaS" when they believe the foundational LLMs will be able to do the same thing tomorrow.
> Are we going to suddenly realize all these productivity gains people are claiming are all lies and go back to coding by typing characters on emacs and memorizing CS books?
If you have not learned CS, how do you expect to separate the LLM wheat from the chaff?
> Will StackOverflow suddenly return as the most popular source of copy-paste code slop?
Coding sites manually populated by humans are dead.
Confusing any law with "moral principles" is a pretty naive view of the world.
Many countries base some of their laws on well-accepted moral rules to make them easier to apply (it's easier to enforce something the majority of the people want enforced), but the vast majority of laws were always made (and maintained) to benefit the ruling class.
Yeah, I see where you are going with this, but I think he was trying to make a point about people being convinced by decree. Decrees tend to get people to think that whatever is decreed must be moral.
Also, I disagree with your framing of the purpose of law. I don't think it's just about making laws easier to apply because people see things in moralistic ways. Pure law, which came from the existence of common law (which relates to what's common to people), existed within the framework of what's moral. There are certain things which all humans know, at some level, are morally right or wrong, regardless of what modernity teaches us. Common laws were built up around that framework. There is administrative law, which is different, and I think that's what you are talking about.
IMHO, there is something to be learned about morality from the effort to convince people that IP is moral, when it is, in fact, just a way to administrate people into thinking that IP is valid.
I don't think this is about being confused out of naivety. In some parts of the western world the marketing department has invested heavily in establishing moral equivalence between IP violation and theft.