For who? Nvidia sell GPUs, OpenAI and co sell proprietary models and API access, and the startups resell GPT and Claude with custom prompts. Each one is hoping that the layer above has a breakthrough that makes their current spend viable.
If they do, then you don’t want to be left behind, because _everything_ changes. It probably won’t, but it might.
This bubble will be burst by the Trump tariffs and the end of the ZIRP era. When inflation and a recession hit together, hope-and-dream business models and valuations no longer work.
Which one? Nvidia are doing pretty ok selling GPUs, and OpenAI and Anthropic are doing ok selling their models. They're not _viable_ business models, but they could be.
NVDA will crash when the AI bubble implodes, and none of those generative AI companies are actually making money, nor will they. They have already hit diminishing returns in LLM improvements after staggering investments, and it is clear they are nowhere near general intelligence.
All of this can be true, and has nothing to do with them having a business model.
> NVDA will crash when the AI bubble implodes,
> making money, nor will they
> They have already hit diminishing returns in LLM improvements after staggering investments
> and it is clear they are nowhere near general intelligence.
These are all assumptions and opinions, and have nothing to do with whether or not they have a business model. You mightn't like their business model, but they do have one.
I consider it a business model if they have plans to make money at some point that aren't based on hopium (no sign of that at OpenAI) and they aren't engaged in fraud like bundling and selling to their own subsidiaries (NVDA).
These are of course just opinions, I’m not sure we can know facts about such companies except in retrospect.
You’re on a startup forum complaining that vc backed startups don’t have a business model when the business model is the same as it has been for almost 15 years - be a unicorn in your space.
Then any silly idea can be a business model. Suppose I collect dust from my attic and hope to sell it as an add-on at my neighbor's lemonade stand, with a hefty profit for the neighbor, who gets paid $10 by me to add a handful of dust to each glass and sell it to the customers for $1. The neighbor accepts. It's a business model, at least until I run out of existing funds or the last customer leaves in disgust. At what point exactly does that silly idea stop being an unsustainable business model and become just a silly idea? I guess at least as early as when I see that the funds are running out, and I need to borrow larger and larger lumps of money each time to keep spinning the wheel...
Indeed it can. The difference between a business model and a viable business model is one word - viable.
If you had asked me 18 years ago whether "giving away a video game and selling cosmetics" was a viable business model, I would have laughed at you. If you had asked me in 2019, I would probably have given you money. If you asked me in 2025, I'd probably laugh at you again.
> and I need to borrow larger and larger lumps of money each time to keep spinning the wheel...
Or you figure out a way to sell it to your neighbour for $0.50 and he can sell it on for $1.
The play is clear at every level - Nvidia sell GPUs, OpenAI sell models, and the SaaS companies sell prompts + UIs. Whether or not any of them are viable remains to be seen. Personally, I wouldn't take the bet.
My experience as someone who uses LLMs and a coding-assist plugin (sometimes), but is somewhat bearish on AI, is that GPT/Claude and friends have gotten worse in the last 12 months or so, and local LLMs have gone from useless to borderline functional, but still not really usable day to day.
Personally, I think the models are “good enough” that we need to start seeing the improvements in tooling and applications that come with them now. I think MCP is a good step in the right direction, but I’m sceptical on the whole thing (and have been since the beginning, despite being a user of the tech).
The whole MCP hype really shows how much of AI is bullshit. These LLMs have consumed more API documentation than possible for a single human and still need software engineers to write glue layers so they can use the APIs.
The problem is that up until _very_ recently, it's been possible to get LLMs to generate interesting and exciting results (as a result of all the API documentation and codebases they've inhaled), but it's been very hard to make that usable. I think we need to be able to control the output format of the LLMs in a better way before we can work on what's in the output. I don't know if MCP is the actual solution to that, but it's certainly an attempt at it...
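To make the "glue layer" point concrete, here's a rough sketch of the kind of hand-written tool wrapper that MCP-style integrations formalize. The endpoint, tool name, and schema below are all invented for illustration - it's not a real API - but the shape of the work is the same: a human narrows a sprawling REST surface down to one predictable request/response the model can call reliably.

```typescript
// Hypothetical example only: the tracker URL, tool name, and schema are made up.

// The machine-readable description the model sees instead of raw API docs.
export const getIssueTool = {
  name: "get_issue",
  description: "Fetch a single issue from the (hypothetical) tracker API",
  inputSchema: {
    type: "object",
    properties: {
      id: { type: "string", description: "Issue ID, e.g. ABC-123" },
    },
    required: ["id"],
  },
} as const;

// The handler that a structured tool call from the model gets routed to.
export async function getIssue(args: { id: string }): Promise<{ ok: boolean; body: unknown }> {
  const res = await fetch(
    `https://tracker.example.com/api/issues/${encodeURIComponent(args.id)}`
  );
  return res.ok
    ? { ok: true, body: await res.json() }
    : { ok: false, body: await res.text() };
}
```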
That's reasonable, along with your comment below too, but when you have the CEO of Anthropic saying "AI will write all code for software engineers within a year" last month, I would say that is pretty hard to believe given how it performs without user intervention (MCP etc...). It feels like bullshit, just like the self-driving car stuff did ~10 years ago.
I completely agree with you there. I think we're a generation away from these tools being usable with light supervision in the way I _want_ to use them, and I think the gap between now and that is about 10x smaller than the gap between that and autonomous agents.
Because it's lossy compression. I also consumed a lot of books and even more movies, and I don't have a good memory of it all. But I retain some core facts and intuition from them.
AI is far better at regurgitating facts than me, even if it's lossy compression, but if someone gives me an API doc I can figure out how to use it without them writing a wrapper library around the parts that I need to use to solve whatever problem I'm working on.
> but if someone gives me an API doc I can figure out how to use it without them writing a wrapper library around the parts that I need to use to solve whatever problem I'm working on.
I think this is where AI is falling short hugely. AI _should_ be able to integrate with IDEs and tooling (e.g. LSP, Treesitter, Editorconfig) to make sure that it's contextually doing the right thing.
On every large codebase I’ve worked on, updating a low-level function has required more work updating the tests than updating the application using it.
Unit tests have a place, but IME are overused as a crutch to avoid writing useful bigger tests which require knowing what your app does rather than just your function.
> Integration tests are slow/expensive to run compared to unit tests and reduce your iteration speed.
My unit tests might take under a second to run but they’re not the ones (IME) that fail when you’re writing code. Meanwhile, integration tests _will_ find regressions in your imperfect code when you change it. I currently use C# with containers and there’s a startup time, but my integration tests still run quickly enough that I can run them tens or hundreds of times a day very comfortably.
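For a rough idea of what I mean - not my C# setup, but a minimal TypeScript sketch using the Node `testcontainers` and `pg` packages against a throwaway Postgres (the table, queries, and jest/vitest-style globals are just illustrative assumptions):

```typescript
import { GenericContainer, StartedTestContainer } from "testcontainers";
import { Client } from "pg";

// Spin up a real Postgres once per test file; the tests hit a real database,
// so schema and SQL mistakes fail here instead of in production.
let container: StartedTestContainer;
let db: Client;

beforeAll(async () => {
  container = await new GenericContainer("postgres:16")
    .withEnvironment({ POSTGRES_PASSWORD: "test" })
    .withExposedPorts(5432)
    .start();

  db = new Client({
    host: container.getHost(),
    port: container.getMappedPort(5432),
    user: "postgres",
    password: "test",
    database: "postgres",
  });
  await db.connect();
  await db.query("CREATE TABLE users (id serial PRIMARY KEY, name text NOT NULL)");
}, 60_000);

afterAll(async () => {
  await db.end();
  await container.stop();
});

test("inserts and reads back a user", async () => {
  await db.query("INSERT INTO users (name) VALUES ($1)", ["Ada"]);
  const { rows } = await db.query("SELECT name FROM users");
  expect(rows).toEqual([{ name: "Ada" }]);
});
```

There's a container startup cost in `beforeAll`, but after that each test takes milliseconds, which is why running them tens or hundreds of times a day is comfortable.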
> If you’re doing a lot of mocking then your design is not good.
This is the “you’re holding it wrong” argument. I’ve never seen a heavily unit tested codebase that didn’t either suffer from massive mocking or get decomposed into blocks so illogically small that the mental map of the project was like drawing in sand - completely untenable.
> And only public interfaces should have testing.
This is the smoking gun in your comment - I actually disagree and think you should infer this. Services (or whatever you call them) should be tested, and low-level functionality should be tested, but the stuff in the middle is where you spend 80% of your time and get 10% of the benefit.
I do a decent amount of UX work and probably fall into category 1 here. The problem isn’t “we don’t want to spend money on support”, the problem is “people really do need to be babysat for a lot of things, and no matter what you do, they will not read the documentation”.
That's fair. People really are like that. This is suboptimal, and I empathize with both frustrated devs and PHBs worried about escalating support costs. The reasons behind why users are like this are complex, but "users are stupid" isn't one of them.
I think "users are not paying attention" is a friendlier way to describe it.
A while back, I was supporting an e-sports event. We had professionals competing for an awful lot of money, who were deeply familiar with the game. We had taken mobile phones, etc. from them so there were no distractions.
They were briefed beforehand that all they had to do was wait until they were given the green light, and click the big OK button on their screen to enter the game. We added a giant modal with the OK that explained "press this button when you are told to". This was a last-minute workaround for the fact that we could only tell how many people were in the queue for something, but not which of our expected players were not in the queue. Our telemetry tells us one person is missing, so we have to go walking around to find them. Found the guy, sitting in front of a giant modal saying "Click this when you are told to", and his response was "I didn't know I was supposed to click it".
Now add mobile phones, children, doorbells, cooking, neighbours, and this becomes widespread.
It's a decent approximation, if you remember it's an approximation for "the user is tired/stressed/not paying attention/doesn't actually want to deal with your app". I remember a talk which suggested "The user is drunk" as a better approximation, because it's more obviously not literally true.
The really hard part of that is that if you can’t build an excellent interface, you will build a worse one than if you had used the native interface. So you need to be prepared to sweat every last detail forever.
What would you define as “before”? I’ve had a ThinkPad on and off and I’d describe the quality as consistent.
People talking about old ThinkPads being good quality are often talking about the pre-Lenovo IBM days, which is far more likely to be nostalgia at this point.
I have a T420. A few years ago I switched to a slightly used T480; the keyboard was a huge downgrade and the whole series can get really stupid USB-C issues. After half a year or so it didn't dock anymore, and I got an X1 - basically the same laptop. Glad I found it without touch and the 'bright screen', because the screen is barely good enough; the keyboard is the same, and the USB-C has already started to get finicky.
Meanwhile my T420 still runs like it did on day one (it was already 5 years old when I got it, and travelled 1+ years with me in a backpack), the screen works in direct sunlight, and it's not even the best of its series - the hardware is still perfect. A fat SSD + 32 GB of RAM and you can barely tell how old it is.
Yup, my T480 got upgraded to a Framework 13 after the T480's Thunderbolt port broke (known firmware issue that basically fried the chip). I loaned my T480 to someone about a year ago, and haven't bothered asking for it back.
Meanwhile, my T410 works great as a workbench computer.
I also have a T420, though not using it regularly nowadays. It would be really nice to get proper USB-C there – using one cable to plug in monitor and Ethernet and charge is really nice.
I’ve wanted to get a T480 for a while now (mainly to do a T25 frankenpad [1] – seems like a nice project), but if it really has those issues with the USB-C ports, I think I’ll pass :-(
I've used Thinkpads consistently for 25-30 years, and still do. I can't really draw a line between "before" and "after" but if I take a long enough period I can definitely see differences in the experience getting watered down or generally worse, from less flexibility to lower reliability.
I still have and regularly use a fully functional X200, somewhere in the box I have a fully functional T42 and an R31 whose only defect is a small screen blemish caused by me closing the lid with something on the keyboard.
But my multiple X1 Gen1 and Gen2 all have various failures (screen, battery, webcam, or keyboard), my T450 has big battery issues, my T470s have screen/GPU and battery issues. T490 is fine for now, X1 Gen11 has crappy battery and is overheating from the get go. These are different generations, different lots and still affected by the same constant issues.
I definitely know that people have complained that modern ThinkPads are not as good as before, and they have been doing that for ages, just as Socrates back in the day already was complaining about modern kids and their behaviours ;-)
In this case I was referring to post-T480 ThinkPads which have soldered memory, and no longer have hot-swappable batteries or on-board Ethernet.
I don't mind not having an external battery now that these laptops can charge off USB-C. So many ways to get some kind of USB-C power source to connect to and get a bit more charge, and then that spare energy source is usable with pretty much all the rest of my electronics. Whereas before it was a big, proprietary battery that only worked with one device and needed to be connected to the laptop to charge some time later.
They're still pretty easy to find replacements for when they go bad.
My first ThinkPad had terrible battery life. It was an X1 Extreme or something like that, pretty high end but the battery was useless. Even brand new it wouldn't last an hour off the leash. It also couldn't use USB-C charging from the monitors at the office; it had to be plugged in.
Also the Fn key is where the Ctrl key should be, which is endlessly annoying as a user of different laptop brands.
IBM invented the Fn key so if anyone has their Fn key where the Ctrl key should be, it is the copycats.
> The Fn key first debuted on the monochrome display ThinkPad 300 in October of 1992. Yes there was a ThinkPad with a monochrome display. The Fn key circa 1992 was placed exactly as it is today. Interestingly enough, Apple uses the same positions for their Fn and Ctrl keys as ThinkPad. Every other notebook personal computer manufacturer that I know of has the Fn and Ctrl key positions swapped. Some would say backwards.
Not sure what GP means but I gather the x230 era (2012?) has a cult following. I picked one up a few years ago when a laptop died and I didn't have the cash for something new: it is still my daily driver and I'm not replacing it til it dies.
By contrast, I know someone who got a T480 second hand and it lasted six months. My guess is the 2012 era was when the change happened
It's been a gradual shift, with a few obvious changes along the way.
Among a few: The keyboard switch from the old 7-row (whose pinnacle was at the x220/T420 era with double-height esc and del) to the new 6-row (with ever-decreasing key travel later on) to the current x9 (which is basically just a yoga keyboard with no trackpoint, no key grouping, and the loss of pgup/pgdn). Things like the modular battery options vanished. The case got flimsier over time with e.g. the magnesium rollcage first vanishing from the display, then from the base. (And no - from enterprise experience - the carbon fiber composite isn't generally "as good or better", esp. for failure modes like a concentrated force on the display. Or...grabbing the laptop by the display and using it to fan your BBQ, which doesn't faze my old X41 :) ).
> The keyboard switch from the old 7-row (whose pinnacle was at the x220/T420 era with double-height esc and del)
I think xx30-series has such a good reputation because you could use a T420 keyboard (with a tiny modification to better fit the chassis and not short out the backlight pin).
We had an expensive IBM ThinkPad model (too long ago to remember what model it was) and the keyboard and several other parts were worn down in three years of mostly in-home use. So ¯\_(ツ)_/¯.
At least a lot of modern ThinkPads are still modular. Recently got a 5th gen T14 AMD. Memory, NVMe SSD, WWAN modem, battery, and a bunch of other components are really easy to replace. I think I prefer the keyboard over my MBP, it feels less harsh.
I’m a fairly steadfast holder of the “I like apples walled garden, it’s my choice to be there” argument, and I think as a dissenting opinion on this forum I get a lot of flak for it. But that’s not a moderation problem, it’s the fact that my opinion is different and I have 10x the number of people disagreeing with me than agreeing with me.
React and the React ecosystem fail at many of the criteria you’ve listed. You might argue “that’s not React’s fault”, but when I look at a website that takes 15+ seconds to load its content on a gigabit connection, I’m never surprised when it’s React. Lots of sites have massive issues with rendering performance, scalability and maintainability even with React.
What React does do is give you a clean separation of concerns across team boundaries and allow for reusable components. But the cost you pay for that is a boatload of overhead, complexity, maintainability concerns, and React-specific edge cases.
A 15+ second load on a gigabit connection can't possibly have anything to do with the React library itself, as React is only kilobytes big and has no impact on the host.
It’s not just about the size of the payload, it’s about the stuff happening in the background and the constant diffing of the various DOMs that React does. I can tell I’m on a React site by how laggy it is. Some devs manage to do better, but I can still tell.
Another wonderful feature of React is that it will fully render the page on my iPad and then quickly replace it with an error message. Absolute brilliance.
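For what it's worth, here's a minimal sketch (component names invented) of the kind of churn I mean: without memoization, every keystroke in the parent re-renders and re-diffs the whole list subtree even though its props haven't changed.

```tsx
import { memo, useState } from "react";

// Illustrative only. Without `memo`, typing in the input re-renders every row
// on each keystroke; with it, React skips the subtree because `items` is the
// same reference. (No filtering on purpose: a freshly filtered array would be
// a new reference every render and defeat the memoization anyway.)
const ExpensiveList = memo(function ExpensiveList({ items }: { items: string[] }) {
  // Imagine thousands of rows, each doing non-trivial work to render.
  return (
    <ul>
      {items.map((item) => (
        <li key={item}>{item}</li>
      ))}
    </ul>
  );
});

export function SearchPage({ items }: { items: string[] }) {
  const [query, setQuery] = useState("");
  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ExpensiveList items={items} />
    </>
  );
}
```

Multiply that pattern (minus the `memo`) across a big page and you get the lag.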
Depends on the project. For websites I like to use a statically generated Next.js site served by nginx[1], and for SPAs, nginx serving a Vite-built static bundle with a backend that’s in whatever language the backend team chose.
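If it's useful, the static-export half of that looks roughly like this - assuming a recent Next.js where static export is configured via `output: "export"` (older versions used the `next export` command) - with nginx then pointed at the generated `out/` directory:

```typescript
// next.config.ts (Next.js 15+ accepts a TypeScript config; older versions use next.config.js)
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  output: "export",    // `next build` writes a fully static site into `out/`
  trailingSlash: true, // so nginx can serve /about/ from out/about/index.html
};

export default nextConfig;
```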
You didn't ask me, but I have what seems like an esoteric use for React: I only use React for the components, because vanilla/native web components are a major pain in the ass to work with.
You don’t need to wrestle with React’s state management monster unless you’re into that sort of thing.
> You might argue “that’s not React’s fault”, but when I look at a website that takes 15+ seconds to load its content on a gigabit connection, I’m never surprised when it’s React.
> A 15+ second load on a gigabit connection can't possibly have anything to do with the React library itself, as React is only kilobytes big and has no impact on the host.
Perfectly proving my point.
It's not React-the-framework's fault, yet those sites are always React sites.
This is the "everyone who drinks water dies, therefore drinking water is deadly" argument. I'd also guess lesser-known frameworks have a higher proportion of better developers - i.e. people taking the time to research and try new technologies are probably more competent. You're also forgetting about the trillions of WordPress (and similar) websites that exist.
It’s much more of a “everyone who chooses this flavor of soda dies”. Maybe it’s the soda, maybe it’s the people it attracts.
> I'd also guess lesser-known frameworks have a higher proportion of better developers - i.e. people taking the time to research and try new technologies are probably more competent.
And this is the “you’re holding it wrong” argument.
Facebook and Airbnb are the poster children for React (one of them wrote React). They are both perfect examples of absolutely disastrous monsters of websites, plagued with issues related to all of the problems React supposedly solved.