AI is stifling new tech adoption? (vale.rocks)
508 points by kiyanwang 8 months ago | 426 comments


No, AI isn't.

Any new tech, or version upgrade, or whatever, takes time for people to become familiar with it. You might as well say "Stack Overflow is stifling new tech adoption" because brand-new stuff doesn't have many Q's and A's yet. But that would be a silly thing to say.

I'm not going to adopt a brand-new database, regardless of LLM training data cutoff, simply because not enough people have had enough experience with it yet.

And LLM's have a commercial incentive to retrain every so often anyways. It's not like we're going to confront a situation where an LLM doesn't know anything about tech that came out 5 or 10 years ago.

Early adopters will be early adopters. And early adopters aren't the kind of people relying on an LLM to tell them what to try out.


Strong disagree here. I've been trying to learn Zig, and I'm thwarted enough by chatgpt giving me outdated information on Zig's unstable API that if I didn't have a strong incentive to learn it for its cross compiler, I'd likely turn my efforts towards another language. This effect can greatly alter the adoption curve of a new tech, which can leave it dying on the vine.

You're not wrong though, in that Stack Overflow has the exact same problem. The main difference is that with Stack Overflow, there was a bonus in becoming the first expert on the platform, so while it does stifle new tech adoption, it at least encourages new tech content creation, which in turn encourages new tech. Though, I don't know if it's a net positive or negative in aggregate.

I think this problem will likely lessen as training becomes cheaper and faster. But right now, there are really strong incentives to avoid any tech that has had breaking changes in the last 3 years.


Someone recently put it that it’s the first 10% of adoption that sets the future for a product.

There are people online calling it the “tech adoption cycle” but this is a concept I encountered in a literal Business 101 class. 2.5% of the population are Innovators. 12.5% are early adopters. Then there’s 70% in the middle where most of your cash comes in, and by the time the laggards hit you’re optimizing for cost per unit because you’ve had to drop the price so much due to competition from copycats and from the next new thing.

So by the time 60% of your early adopters are onboard it’s already been decided if you’re on a rocket ship or this is just a burp.

Early adopters have a high tolerance for inconveniences but it’s not infinite. If they bleed enough they will find something else, and then you are DOA.


Crossing the Chasm, by Geoffrey Moore:

https://en.m.wikipedia.org/wiki/Crossing_the_Chasm


Products with volatile demand don't necessarily follow.


Having spent the past week deep in cursor, it's amazing for building out a basic web app.. but getting it to a point of productivity takes a while..

Command line install for latest svelte.. nope npx install is now deprecated, have to do it another way.. ok, let's go old school and read the docs.

Great, it's up and running, but nope, Svelte has just hit V5 and the LLM isn't aware of the changes.. ok, do I drop back to 4 on a new code write, or spend the time to get a decent .cursorrules in place to make sure it's using the right version. Ok, that's in, but look tailwind is too new too.. ok, fine let's get that into .cursorrules..

Oh look, daisy has a v5 coming in 15 days, luckily I got in there just in time..

I thought svelte, tailwind and daisy were the one true way nowadays!

I now have a rule in my cursorrules saying that any error I spot in the code (related to wrong versions) results in both the fix and a new rule for the cursorrules so it doesn't happen again. That works well.
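
For anyone wanting to try the same thing, here's a rough sketch of what those rules can look like (the wording, versions, and library choices below are just illustrative, not my actual file):

    # .cursorrules (illustrative sketch only)
    - This project uses Svelte 5 with runes. Never emit Svelte 4 syntax:
      use $props() and $state() instead of `export let` for component props.
    - Tailwind is pinned to v3.x. Do not use v4-only syntax or config format.
    - When I point out a wrong-version mistake, reply with both the fix and a
      new rule for this file so the same mistake doesn't happen again.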


BTW, you can extend Cursor with new knowledge, and the Svelte community has a config file you can throw at Cursor that'll add Svelte 5 support.


The DeepSeek advancements will allow more frequent retraining that incorporates updates. It likewise used to take months for the Google index to update: you could tell because they listed how many sites were indexed on the homepage.


The unfortunate truth is that the one true way these days is React, and if you're doing anything else you're in for a world of pain.


The actual truth is that you don’t need any of these frameworks at all. Vanilla JS & HTML has come a long way, and LLMs like Claude can build Vanilla JS with no trouble.


So AI does stifle adoption of new tech then?


Both things can be true. I have also been frustrated trying to get LLMs to work with SwiftUI, given that it’s slightly newer, changing frequently, and has lower adoption relative to something like React.

Honestly this is a short term problem. Eventually the models will have larger context windows and will be able to easily browse brand new documentation or example code.


No such thing as one true way with React and the pain is just as bad if not sometimes worse.

Do you choose create-react-app / webpack / Vite / Next.js?

Do you choose built in react state/contexts, redux, recoil, zustand, jotai state libraries?

Do you do data tables yourself or use the behemoths TanStack Table, react-table-library, MUI x-data-grid?

What about HTTP requests? Fetch API? axios? RTK Query?

What about forms and validation? KISS simple, do it yourself? react-hook-form + zod, tanstack form?

Now how do you integrate them all into your app, managing each edge case for each dependency, each with its own opinionated way of doing things that doesn't quite work neatly with your other dependencies but can be brute-forced into working with some effort and hacks?

I find things like basic vanilla JS, or even things like Svelte or Elm, far less painful than the React projects I've worked on. Svelte/Elm did require reading the docs rather than relying on an LLM.


Looking at the react devs in my company, they already live in a world of pain and confusion. So many simple, silly things turn complicated because of it.


But I thought it was a joke, or do people really choose their tech stack based on LLM support?


Have you considered learning by reading documentation and tutorials, instead of asking the lie generator? I mean, how do you think people learned things before 2023?


Yeah.. it's one thing to say "it would be nice if I could query a LLM about this topic" and another to say "I refuse to study this topic unless I can query a LLM."

To be frank, the latter does not sound like the mindset of an effective engineer -- in my experience effective engineers are curious, voracious in pursuit of knowledge.


What about a library that doesn’t have any documentation or tutorials, like the Soundfont lib for JUCE, where the LLM was essentially all I had to get a working product?


Let's dig into this for a second.

Which "Soundfont lib for JUCE" are you talking about? How did you find it in the first place? Why did you decide to use it if there was no documentation? How did you know it was appropriate or capable for your use case? How did you know it was mature, stable, or safe? Did you read the headers for the library? Did they have comments? Did you read the technical specification for the Soundfont format to understand how it modeled sample libraries?

I work in this stuff all the time, and I'm so very puzzled by what you're even suggesting here.


I Googled and got:

https://www.mucoder.net/blog/2016/03/24/sfzero.html

There was boilerplate that wasn’t explained and only existed in a demo for an audio plugin.

I prompted the LLM with the header files and sections of the demo app and after a few back and forth learning sessions I was getting piano samples playing back:

https://github.com/williamcotton/midiplayer

It works, it doesn’t have any memory issues (after I fixed an issue in the module), and is performant.

This is also my first C++ project!


Thanks for answering!

Assuming your original question was not rhetorical and was asked in good faith: other people who've used SFZero before the advent of LLM's made sure they understood juce::Synthesizer (despite its many warts, JUCE's public interfaces are well documented and are a strong point), since SFZero just defines a specific implementation of it, and they either referenced the provided example or read the headers, which are pretty straightforward.

It's great that the LLM was able to get you to where you wanted to be, and congratulations on getting your project to that milestone. Especially for not having worked in C++ before! But to the point of the person you were responding to, the LLM was not fulfilling a generally unmet need. Everything a professional or practiced amateur needs was there already.


> > learning by reading documentation and tutorials

> But to the point of the person you were responding to, the LLM was not fulfilling a generally unmet need. Everything a professional or practiced amateur needs was there already.

But isn't this stretching the definitions of documentation and tutorials a little bit here? A demo app and a bunch of undocumented header files became documentation and tutorials with the assistance of an LLM.


Perhaps.

As an experienced audio developer, it's a distinction without a difference. We get used to reading code and deciphering how to use or improve it (wait until you start working with others' DSP code!), and some features and conventions of C/C++, like the segregation of declarations from definitions, make it especially self-documenting for those that are fluent in it.

Having integrated SFZero myself (albeit to a dead end; it's very middling-to-poor), I was surprised to discover that there was not more explicit documentation when you announced it as the library you were referring to, specifically because I remember the work of integrating it was relatively trivial (maybe half a day or so).

For a complete outsider to audio development, JUCE, C++, of course the LLM helped you get done what you needed to get done, far faster than you could have hoped otherwise. And that's great if you're just toying around. On the other hand, you absolutely would have sharpened your skills with all three of those things more had you the luxury to muddle through and force yourself to make sense of it all yourself.

Assuming skills development is your goal (otherwise, you should be using different tools here! C++ and JUCE are not where to start if your personal goal is music making, synthesis, or MIDI), there's a balancing act between making sure you don't get discouraged and committing to the hard work of learning things in-depth. The LLM gave you a way to avoid feeling stuck and discouraged, and perhaps represents the difference between you sticking with the project or just throwing your hands up. But keep in mind the tradeoff that it implies for skill development and consider only turning to it when the frustration starts to get too high to bear.


The way I use LLMs is as a learning tool. Sure, it can get me unstuck when I add some error messages to the context, but I always take the time to have it explain things, e.g. the reference counted memory model of some of the classes in JUCE. I prompt for a personalized tutorial with the specific problem I'm working on as the subject matter.

I'm also a C programmer and Max/MSP user so this syntactical stuff is mainly just different ways of managing the stack and the heap while turning MIDI into sounds!


Where did the LLM get information for the Soundfont lib if there is no documentation, tutorials, answered questions or source code online?


Same place every LLM gets "information" -- plucking random bits from the latent space that are most likely to follow the input context.


  > What about a library that doesn’t have any documentation or tutorials
This always sucks, but since it is something you have to deal with, the answer is "read the code." Hopefully the code has some test cases somewhere, and that is the best place to start. But if not, you unfortunately have to profile and trace the code yourself. It takes a while, but you usually come out with a pretty good understanding (after A LOT of confusion) and can usually find some good things to fix.

How do you profile and trace it? A debugger is actually one of your best friends here. But this is also what things like cscope and tags (ctags) are for. I suggest becoming familiar with grep, find, sed, and awk as these will also greatly help. I strongly suggest also grabbing a pen and paper. You'll want to draw diagrams. It sucks, but it gets easier over time.

I'll also say that having to do this shitty task drills into you the importance of writing documentation, comments, and writing concise functions in your code. Only a psychopath could go through this trouble and think "fuck it, the next guy can do it too." Unfortunately there's lots of psychopaths lol. Remember, usually "the next guy" is you. You when you forget everything 6 months and 3 projects later. So document and comment while you trace. If your boss tells you not to worry, he's a psychopath too. Just wasting company money making it harder to onboard new employees. And in the worst/best case? Your documentation/commenting just makes the LLM better and more accurate. So there's really no good reason to not doc/comment other than excuses about moving fast. But remember, even though there's never time to do it right, there's always time to do it twice (when you fix what's wrong). And all this work reduces the tech debt and time everyone spends debugging. You have to invest in your work.


Yeah dude, I’ve got a wife, two young children and a full-time job in legal services. These are the side projects that LLMs give me the time and cognitive energies to work on.

Congratulations on having so much free time!

I do a lot of data analysis so I am pretty familiar with awk et al. But you know, thanks for the condescending lecture, I guess?


I didn't realize it was a rhetorical question. It was not intended to be condescending. There's a lot of people who do not have that skill


The lie generator can very efficiently pull the relevant parts of the documentation and tutorials, and it's not that hard to programmatically ask it for a quote and compare it against the actual tutorial.
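
A minimal sketch of that check, assuming an OpenAI-style chat API (the model name, prompt wording, and exact-match comparison below are all just illustrative):

    # Sketch: ask the model for a verbatim quote backing its answer,
    # then check that the quote actually appears in the real docs.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def answer_with_citation(question: str, docs_text: str) -> tuple[str, bool]:
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[
                {"role": "system", "content": (
                    "Answer using only the provided docs. End your reply with a "
                    "line 'QUOTE: <verbatim sentence copied from the docs>'.")},
                {"role": "user", "content": f"Docs:\n{docs_text}\n\nQuestion: {question}"},
            ],
        )
        answer = resp.choices[0].message.content
        quote = answer.rsplit("QUOTE:", 1)[-1].strip() if "QUOTE:" in answer else ""
        verified = bool(quote) and quote in docs_text  # naive exact-match check
        return answer, verified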


Have you considered just walking backwards uphill in the snow to your local library and checking out an (out of date) dead tree book, like - you know like us "real" programmers used to do it?

The gatekeeping around using llms for coding is so goofy (and expected)...


He's giving advice on how to overcome what has been described as a limitation, literally not gatekeeping but opening the gate for the OP.

Instead of imbuing into it some grievance narrative, why shouldn't people take the advice, if LLM's do suffer from an anti-recency bias, to find another way to learn new information that LLM's don't have in their models?

If some source of information you were looking for online wasn't available but it did exist at your local library in a book, why wouldn't you go check out the book?


They don't suffer from recency bias when you dump the docs right into their context.


Have you tried Cursor instead of a chatbot? I don’t trust it to do much coding for me but it has this feature where you give it the URL of the documentation, and it indexes it for you. Then you can reference that documentation in the chat with the @ symbol and Cursor applies RAG to the indexed content to include relevant documentation. I usually index autogenerated API docs and any website docs the library has like guides and tutorials.

The experience is night and day and it’s the only reason I pay for Cursor on top of OpenAI/Anthropic. The answers it gives are much more accurate when including documentation and it helps a lot with exploring APIs. Probably helps a lot with the code generation too, but I don’t use Composer as much.
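
For anyone curious what the retrieval step amounts to, here's a bare-bones sketch of RAG over a docs dump (this assumes an OpenAI-style embeddings API and is a generic illustration, not how Cursor actually implements it):

    # Bare-bones retrieval-augmented generation over current docs:
    # embed the doc chunks once, then pull the closest chunks into the prompt.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(texts: list[str]) -> np.ndarray:
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([item.embedding for item in resp.data])

    def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
        doc_vecs = embed(chunks)          # one vector per documentation chunk
        q_vec = embed([question])[0]      # vector for the user's question
        sims = doc_vecs @ q_vec / (
            np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
        return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

    # The retrieved chunks get prepended to the actual question, so the model
    # answers from the current docs rather than its (possibly stale) training data.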


Yeah, RAG is a great way to handle outdated docs


That's because you're using ChatGPT instead of Claude.


or you could try learning Zig in the old-school way, the way we used to learn before LLMs

who am I kidding? LLMs have changed the game forever. making new computer programming languages is no longer professionally useful, like hand-knitting


I think it will become standard to provide up-to-date references specifically tailored for LLMs.

For example: https://svelte.dev/docs/llms


Have ChatGPT always use search when you ask questions. I find referencing a version number and requiring search improves things dramatically. Improves, but does not fix.

v0 for vercel/nextjs

Try Phind or other tools that are more oriented toward looking stuff up.


Honestly, the shittiness of Google and terrible LLM answers has just made me start reading docs more. In the past few years I think I've become a substantially stronger programmer because of this. So in a way, thanks Google for becoming shit and thanks GPT for being frustrating as hell.

The problem is that sometimes doing things the hard way is the better way. I think learning programming, math, or any tough subject requires struggle. People like these tools because they remove a lot of the struggle, but that seems more like a temporary advantage than a long-term one. My big concern is how we are becoming more and more addicted to instant gratification, at so many levels. Businesses are valuing quarterly profits above yearly. We've taken "ship early, ship often" to the extreme, as well as "move fast and break things." These are fine adages, but like all things they have limits. But the world gets more complex and as a result, solving problems becomes harder. You can't get away with first order approximations now and we're just building empires of technical debt. Too big to fail, too entrenched to compete against. But I still think, at some point, we have to say "fuck it, I'm doing things the right way, not the fast way."


Imagine a world without ChatGPT or Stack Overflow. Is it easier to build your Zig project in that world?


Possibly yes. But this is hard to know and I certainly don't know. What I can say is that it would certainly be easier without SEO, and that both things you listed tend to exacerbate SEO Hell. SO less so, but it'd be naive to think it is immune. Though not quite to the degree of Reddit. But LLMs tend to reflect their training, so...


The simple answer is no.

However, it's naive to deny that people are driven to technology with large amounts of samples and documentation. When trained on it, LLMs can essentially create infinite, customized versions of these.

LLMs artificially inflate the amount of documentation and samples for older technologies, and many lack the knowledge of newer ones. This creates a feedback loop until the newer technology is included, but the damage has already been done.

That's just a perspective, I don't know that I fully agree with the premise, and certainly don't know a solution. The best solution I can think of is more LLMs that are essentially constantly RAGing from the internet, rather than mostly using pre-trained info. Obviously this part is no longer just the model, but I imagine models could be made to make the RAG process more efficient, or better.


No, but the relative ease of picking up Zig is lower compared to another language that has a lot of training data (maybe Golang could be used for similar projects) in the LLM world. So users might be more reluctant to pick up Zig at all now.

That could slow down adoption of new languages, frameworks, libraries etc.


I feel myself taking AI’s base knowledge of a tech stack into account when I work. Otherwise it feels like I am swimming upstream. I can’t be the only one, and this article resonated for me.


Sure, but what I'm saying is that's not where the knowledge bottleneck is.

The knowledge bottleneck is in the human population, which is then reflected in Stack Overflow and blogs, which is then reflected in LLM's.

LLM's aren't doing anything special to stifle new tech adoption. New tech is harder to adopt because it's new tech, because people in general are less familiar with it.

(And there's a little bit of a training delay in LLM's updating, but that's a minor aspect here compared to the years it takes for a new Python package to become popular, for example.)


It’s generally true that the people who come up with the first interesting idioms in a new space get an outsized influence on the community. It takes a while for people to borrow or steal their ideas and run with them.

On the plus side while the AI can’t write in that language there’s money to be made doing the work organically.


Yes, but I think it's also true that LLMs are acting as a force multiplier to the underlying effect you describe.

Something like (my apologies to maths for the abuse) LLMs are adding one to the exponent of the adoption feedback cycle.


"Early adopters will be early adopters. And early adopters aren't the kind of people relying on an LLM to tell them what to try out."

Why is this? 'Early adopter' is a pretty arbitrary category of people.


One thing that is interesting is that this was anticipated by the OpenAI Codex paper (which led to GitHub Copilot) all the way back in 2021:

> Users might be more inclined to accept the Codex answer under the assumption that the package it suggests is the one with which Codex will be more helpful. As a result, certain players might become more entrenched in the package market and Codex might not be aware of new packages developed after the training data was originally gathered. Further, for already existing packages, the model may make suggestions for deprecated methods. This could increase open-source developers’ incentive to maintain backward compatibility, which could pose challenges given that open-source projects are often under-resourced (Eghbal, 2020; Trinkenreich et al., 2021).

https://arxiv.org/pdf/2107.03374 (Appendix H.4)


ChatGPT and Gemini default to create-react-app, which has been considered poor practice for 2 years at least.


What's considered better practice? NextJS?!


Literally yes? https://react.dev/learn/creating-a-react-app lists several options because different people will click with different solutions. Or find one of the many other ones yourself of course, they're not hard to find, but cutting your teeth on an official recommendation before moving on to greener pastures is always a good idea.

The first one is always to learn from and then throw away.


Sure, or something like create-vite. Dealer's choice.


vite


It's also officially dead now (finally, lol).


I think if you specify a technology in your prompt, any LLM should use that technology in its response. If you don't specify a technology, and that is an important consideration in the answer, it should clarify and ask about technology choices, and if you don't know, it can make a recommendation.

LLMs should not have hard-wired preferences through providers' prompt structure.

And while LLMs are stochastic parrots, and are likely to infer React if a lot of the training corpus mentions React, work should be done to actively prevent biases like this. If we can't get this right with JS frameworks, how are we going to solve it for more nuanced structural biases around ethnicity, gender, religion or political perspective?

What I'm most concerned about here is that Anthropic is taking investment from tech firms who vendor dev tooling - it would not take much for them to "prefer" one of those proprietary toolchains. We might not have much of a problem with React today, but what if your choice of LLM started to determine if you could or couldn't get recommendations on AWS vs Azure vs GCP vs bare metal/roll your own? Or if it suggested only commercial tools instead of F/LOSS?

And to take that to its logical conclusion, if that's happening, how do I know that the history assignment a kid is asking for help with isn't sneaking in an extreme viewpoint - and I don't care if it's extreme left or right, just warped by a political philosophy to be disconnected from truth - that the kid just accepts as truth?


The more likely and far more mundane outcome isn’t that LLM providers actively tip the scales, but rather that they just entrench existing winners.

As others have pointed out, it's a flywheel: Popular library gains traction → LLMs are trained to produce the “most likely response”, which naturally aligns with what’s already popular → people stop seeking alternative solutions and instead double down on the existing mainstream choices. (Hypothetically) It's not that OpenAI decides to push AWS, it's just that at the time it was trained AWS was the only real option so it just regurgitates a common view from a point in time.

To extend your analogy, the more realistic scenario isn’t that kids slip in extreme view points and take them as ground truth in their history assignments, it’s that they don’t take a stance on anything at all: Their essays become like CSPAN transcripts, perfectly regurgitating what happened without taking any position or applying any critical thinking one way or the other.

Imagine kids writing about civil rights, but all their reference material was stuck in time at 1953: That's what's more likely to happen.


> The more likely and far more mundane outcome isn’t that LLM providers actively tip the scales, but rather that they just entrench existing winners

100%. This isn't that different from the previous status quo (googling how to build a web app will give me guides from digital ocean, vercel, etc about deploying the currently popular technology on their platforms)

As in all things, though, the new technology reinforces this feedback loop faster.

Fwiw, I haven't had any trouble using Claude in cursor to write svelte5 code - there are tools (cursorrules, the svelte 1-pager documentation for LLMs) that you can use to make sure it uses the tech you want. It just requires intention from the prompter and good documentation from the tooling


This is actually a very interesting insight: not only do you have to worry about sponsored results, but people could game the system by spamming their library/language in places that will be included in the training set of models. This will also present a significant challenge for security, because I can have a malicious library/package spam it in paths that will be picked up in the training set and have that package be referenced by the LLM.


> I think if you specify a technology in your prompt, any LLM should use that technology in its response. If you don't specify a technology, and that is an important consideration in the answer, it should clarify and ask about technology choices, and if you don't know, it can make a recommendation.

I'm sure we'd all love that but this pipe dream is simply incompatible with the way LLMs work.

orchestration/deployment/agent networks may be able to do that, but that's basically impossible for the LLM itself.


Can confirm, I recently gave up on learning anything new re: data visualization and have just been using matplotlib instead. Training data for it has been piling up since 2008. The AI's are so good at it that you hardly ever have to look at the code, just ask for changes to the graph and iterate.

Honestly it's been kind of fun, but I do feel like the door is closing on certain categories of new thing. Local maxima are getting stickier, because even a marginal competence is enough to keep you there--since the AI will amplify that competence in well-trained domains by so much.

Emacs lisp is another one. I'd kind of like to build a map of these.


They fail at even trivial requirements: my boss says something and I have no idea if MPL supports it. Visualization and visual media with complex interconnected bits are actually some of the harder things to do from a text/programming basis... we literally fail to write tests that validate visual behavior all the time without tools like Selenium.


I actually asked this a while back, but got little response: https://news.ycombinator.com/item?id=40263033

> Ask HN: Will LLMs hurt adoption of new frameworks and technology?

> If I ask some LLM/GPT a react question I get good responses. If I ask it about a framework released after the training data was obtained, it will either not know or hallucinate. Or if it's a lesser known framework the quality will be worse than for a known framework. Same with other things like hardware manuals not being trained on yet etc.

> As more and more devs rely on AI tools in their work flows, will emerging tech have a bigger hurdle than before to be adopted? Will we regress to the mean?


I like what the FastHTML folks did by preparing what amounts to instructions for LLMs. From their Github [1]:

> This example is in a format based on recommendations from Anthropic for use with Claude Projects. This works so well that we’ve actually found that Claude can provide even better information than our own documentation!

This is the file: https://docs.fastht.ml/llms-ctx.txt

[1] https://github.com/AnswerDotAI/fasthtml


I do not know how FastHTML works with AI, but for the recent Svelte 5, which has a similar llms.txt file, it's clear to me that actual usage patterns are needed more than explanatory llms.txt content with instructions. With Svelte 5, it's consistent at certain things the docs spell out for version 5 that would not be in the training data (runes), but not at their placement, reactivity, or the usage changes implied by the system (e.g. changes of information flow outside of props/state).

It seems similar to a cheat sheet which has formulas, vs a cheat sheet with worked through sample problems that align perfectly with the test.

The latter exists for historic modules - the former is the best you can do for recent libraries/versions.

I am not sure where React stands as I know they have changed their reactivity model and introduced patterns which reduce boilerplate code. Can anyone comment on the latest react version used with AI?


This example is based on this proposed standard (akin to robots.txt or security.txt): [The /llms.txt file – llms-txt](https://llmstxt.org/)


It seems self-evident it will, and it's largely self-reinforcing.

Less documentation/examples of new tech -> New model doesn't have enough info on new tech to be useful -> Less uptake of new technology -> Less documentation/examples to build a corpus....

I do wonder if this problem could get solved by basically providing documentation explicitly written for LLMs to consume and produce more detailed "synthetic" documentation/examples from. No idea if that's possible or even wise, but probably a problem space worth exploring. Or if these LLMs develop some sort of standardized way to rapidly apply new bodies of work that avoids costly retraining - like kernel modules, but for LLMs.


Since the current chatbots have the ability to tap into Google Search, it's not unlikely they could gather their own up to date documentation on-the-fly. This would create a slew of new attack vectors where malicious actors will try to add backdoors into the documentation, which the LLM would reproduce.

A seasoned software engineer will easily pick it up, but the large amount of folks that are just copy pasting chatbot output to make their own apps will certainly miss it.


With people using LLMs instead of googling, there isn't an incentive for people to write tutorial blog posts.


> Less documentation/examples of new tech -> New model doesn't have enough info on new tech to be useful -> Less uptake of new technology -> Less documentation/examples to build a corpus....

Q: Could it be that those who aren't relying on ChatGPT (or similar) might have a significant competitive advantage?


Just a head start. The LLM retraining cycle could probably become shorter and shorter over time.


Supporting anecdata: I was interested to see that chatgpt doesn’t know how to use one of my (small, not too popular) open source python packages — despite having blog posts and documentation on them, all from more than five years back.


+1 ChatGPT 4o still does not understand that React's ChakraUI is now on version 3 - which is incompatible with its ChakraUI version 2 training data :')


That hasn’t been my experience at all. Could you share some examples?

> As of February 14, 2025, the latest version of Chakra UI for React is 3.8.0, released on February 9, 2025. This release introduced new hooks such as useElementRect, useForceUpdate, useLiveRef, usePrevious, and useSafeLayoutEffect, as well as a new FocusTrap component. Additionally, the Breadcrumb component received fixes for RTL support, and the Group component was updated to handle invalid children appropriately.

https://chatgpt.com/share/67aff366-db20-8011-8549-861b1fc31c...


No, new tech is just 100% extractive, wealth-generating garbage, and people are sick and tired of it. Come up with something new that isn't designed to vacuum up your data and your paycheck, and then maybe people will be more enthusiastic about it.


Who's it extracting from?


You didn't read the article


AI may be exacerbating this issue, but it's always existed.

New tech has an inherent disadvantage vs legacy tech, because there's more built-up knowledge. If you choose React, you have better online resources (official docs, tutorials, answers to common pitfalls), more trust (it won't ship bugs or be abandoned), great third-party helper libraries, built-in IDE integration, and a large pool of employees with experience. If you choose some niche frontend framework, you have none of those.

Also, popular frameworks usually have better code, because they have years of bug-fixes from being tested on many production servers, and the API has been tailored from real-world experience.

In fact, I think the impact of AI generating better outputs for React is far less than that of the above. AI still works on novel programming languages and libraries, just at worse quality, whereas IDE integrations, helper libraries, online resources, etc. are useless (unless the novel language/library bridges to the popular one). And many people today still write code with zero AI, but nobody writes code without the internet.


I've been looking all through here for someone to finally make this most obvious point.

Even for those of us who use mostly stack overflow/google, it's much cheaper to wait on someone else to run into your problem and document the solution than to be first into the fire. We've relied on this strategy for a couple of decades now.

I don't think the OP has demonstrated that adoption rates for new tech have changed in any way since AI.

> Also, popular frameworks usually have better code, because they have years of bug-fixes from being tested on many production servers, and the API has been tailored from real-world experience.

Overall I am very resistant to the idea that popular==good. I'd say popular==more popular. Also I think there's often a point where feeping creaturism results in tools that are overcomplicated, prone to security bugs and no longer easy to use.


>Consider a developer working with a cutting-edge JavaScript framework released just months ago. When they turn to AI coding assistants for help, they find these tools unable to provide meaningful guidance because their training data predates the framework’s release. [... This] incentivises them to use something [older].

That sounds great to me, actually. A world where e.g. Django and React are considered as obvious choices for backend and frontend as git is for version control sounds like a world where high quality web apps become much cheaper to build.


> A world where e.g. Django and React are considered as obvious choices for backend and frontend as git is for version control sounds like a world where high quality web apps become much cheaper to build.

Imagine yourself saying this twenty years ago. Would you still want to be writing your back-end in JavaBeans, your front end in VisualBASIC, and storing your data in Subversion?


VisualBASIC made much nicer (and more responsive) UIs than Electron, so this isn't the slam dunk you think it is.


We dunk on Electron too, don't worry.


I miss VB too.


What if it happened just before React, and you therefore got stuck with Angular? Should we now be stuck with React forever just because it's okay-ish, never allowing a future, better framework to emerge?


>What if it happened just before React, and you therefore got stuck with angular?

Still a good thing. :) The massive bump in developer market liquidity is far more valuable in my eyes than any inherent DevEx advantages. You'd still have much cheaper high quality web apps, although, if React truly has a technical advantage over Angular (doubtful), maybe not as much cheaper, but still much cheaper than pre-LLM.

If you truly want to figure out where I think the equation sign flips, it's probably like, pre-Smalltalk somewhere.


Or we ended up writing jqueryscript forever!

    $(function() {
      // yay
    })


I’m all for boring technologies but can we please at least use compiled languages with types


Nobody's stopping you?


>With Claude 3.5 Sonnet, which is generally my AI offering of choice given its superior coding ability, my “What personal preferences should Claude consider in responses?” profile setting includes the line “When writing code, use vanilla HTML/CSS/JS unless otherwise noted by me”. Despite this, Claude will frequently opt to generate new code with React, and in some occurrences even rewrite my existing code into React against my intent and without my consultation.

I noticed this too. Anyone found out how to make Claude work better?


Claude is particularly bad about this, almost makes it unusable for my frontend use cases. I specify the exact tech stack in my prompt and it responds with a solution using whichever packages are available in its environment (Tailwind, shadcn/ui, etc.).

My request to model providers: the strength of your offering is its generality. Please let it be general-purpose and resist adding features (alignment, system prompts, custom stuff like Claude artifacts) that limit this.


Since the system prompt tied to the artifacts feature seems to be the reason for it having a preference for react, seems like the solution would be to use the API instead. Plenty of front ends available nowadays that let you use your own API key. I've been using typingmind since I paid for it over a year ago but I'd be interested to know if some good open source alternatives have popped up more recently.


The main blocker to using the API with an alternative frontend is the cost.

Daily API usage can easily go above the $20/month subscription cost since output tokens are expensive and each new message reuses the whole message chain as input tokens. Especially true if you often upload images or documents.


Third party providers work through the APIs and thus do not have these issues. I use Kagi but there are many options out there for a price similar to the normal subscription, and plus you get a wider choice of models.


Like several other commenters in this thread, I also wrote[0] something recently on a related topic: Google's AI Overviews and ChatGPT harm the discovery of long tail information - from a product builder's perspective. Basically, users are having a tougher time finding accurate info about your product (even if the correct answer to their query is in Google's own search results). And I also found the basic tier of ChatGPT hallucinated my app's purpose in a way that was borderline slanderous. AI can make it tougher (at scale) for creators trying to break through.

[0] - https://keydiscussions.com/2025/02/05/when-google-ai-overvie...


AI is going to solidify "conventional wisdom" and "common sense" as whatever the AI says. That's why there's such a fight over what assumptions and biases get baked into that.


> And I also found the basic tier of ChatGPT hallucinated my app's purpose in a way that was borderline slanderous.

I'm curious about this, what exactly did ChatGPT write and how was it borderline slanderous? Sounds like a big danger.


So ChatGPT seemingly guessed its purpose just from its name. Its name is CurrentKey Stats, and it inaccurately described it as an app that kept stats on the current keyboard keys you were pressing, so essentially a key logger, which is again completely wrong. I was actually somewhat hesitant to make this comment out of the fear that the next AI models will train on it and reinforce the false idea that that's what my app is. Sad times


FWIW, one thing they're pretty competent at is sentiment analysis, so if they read your comment, even in isolation, what they'll probably learn is that reading that kind of thing into a name is really bad in general.

It's not like the bad old days where sentiment analysis was a bag of words model, add up all the "positive" words and subtract from that total the number of "negative" words — back then, they would mis-identify e.g. "Product was not as described, it did not come with batteries, and the surface wasn't even painted" as "this review favours this product" because they couldn't handle "not" or "wasn't".
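
To make the "bad old days" concrete, here's roughly what that bag-of-words approach boiled down to (toy word lists and a made-up review, purely for illustration):

    # Toy bag-of-words sentiment scorer, roughly how it used to be done:
    # count positive words, subtract negative words, ignore everything else.
    POSITIVE = {"good", "great", "love", "excellent"}
    NEGATIVE = {"bad", "terrible", "hate", "broken"}

    def naive_sentiment(review: str) -> int:
        words = review.lower().split()
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    # Negation flips nothing, so this complaint still scores as positive (+2):
    print(naive_sentiment("the battery is not good and the screen does not work great"))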


You may be happy to learn that I just asked ChatGPT (the free tier that you can access without signing in) "What does the CurrentKey Stats app do?" and got this back:

----

CurrentKey Stats is a macOS application designed to enhance your productivity by offering detailed insights into your app usage and providing advanced management of your Mac's virtual desktops, known as "Spaces."

Key Features:

Time Tracking: Unlike macOS's Screen Time, which tracks total app usage, CurrentKey Stats monitors the time each app spends in the foreground, offering a more accurate representation of active usage.

Room Management: The app allows you to assign custom names and unique menu bar icons to each Space, facilitating easy identification and navigation. You can switch between these "Rooms" using the menu bar icon or customizable hotkeys.

Automation Support: For advanced users, CurrentKey Stats supports AppleScript, enabling you to automate actions when entering or leaving specific Rooms. This feature can trigger scripts to perform tasks like launching applications, adjusting system settings, or controlling smart home devices.

Privacy-Focused: The application prioritizes user privacy by keeping all data local on your device, ensuring that your usage statistics are not shared externally.

By combining detailed app usage statistics with enhanced Space management and automation capabilities, CurrentKey Stats aims to help you use your Mac more efficiently and deliberately.

----

But more generally, I think people often look at LLMs and assume that the current flaws will be around forever and this will be horrible. But all technologies have big, obvious flaws when they're first introduced and these get fixed because there's a strong incentive to have a less bad product.


The model and features available to it matters.

o1 says "I’m not aware of a widely recognized or universally known product called “CurrentKey Stats,” so it may not be a mainstream or highly publicized app. There are a few possibilities, though:" and then some guesses.

o3-mini is similar: "I couldn’t find any widely recognized information about an app specifically called CurrentKey Stats. It’s possible that:"

When I turn on its shiny new search capability, it correctly identifies and summarizes the app.

o3-mini-high, search turned off, asks for clarification.

Which means the default free 4o is quietly doing retrieval-augmented generation behind the scenes. I thought o3-mini would search if it didn't know the topic, but I might be misremembering.


Fascinating. I would have thought the fancier models would have definitely known about it. The app is six years old, has ~120 ratings globally with ~4.5 stars, with several articles written about it etc.. It should be pretty widely available in the training data


That's good. When I wrote the article, I was able to get a similar response when I used ChatGPT "search", but not with the basic tier default, using the prompt "i own a Mac, would currentkey stats be good for me?". Were you using the default chatgpt prompt here or chatgpt "search" or a better model?


I went to https://chatgpt.com, typed "What does the CurrentKey Stats app do?" into the box, and pressed Enter.


Cool - thanks for giving it a shot, and I'm glad the basic tier is giving an accurate response with that query


> Basically, users are having a tougher time finding accurate info about your product (even if the correct answer to their query is in Google's own search results).

That's a gigantic "even if".

In my experience, I'm able to find stuff much easier with LLM's that Google search couldn't surface.

If I'm looking for a product that does exactly X, Y but doesn't Z, keyword search can be pretty terrible. LLM's actually understand what I'm looking for, and have a much higher probability of pointing me to it.


Yeah, I have also found LLMs useful, and sometimes with broad search queries. Which makes it quite the paradox when you own a product on the other side of bad LLM results


There is a counter example for this: MCP, a standard pushed by Anthropic, provides a long txt/MD optimized for Claude to be able to understand the protocol, which is very useful to bootstrap new plugins/servers that can be used as tools for LLMs. I found that fascinating and it works really well, and I was able to one-shot improve my CLInE extension (a coding agent similar to cursor.sh) to work with existing APIs/data.

It's so easy to bootstrap that even though the standard is a couple of months old, already has a few hundred (albeit probably low quality) implementations to adapt to different services.

- txt/markdown for LLMs: https://modelcontextprotocol.io/llms-full.txt

- server implementations: https://github.com/modelcontextprotocol/servers#-community-s...


Now we're going to see sites specifically optimized to promote something to AIs. It's the new search engine optimization.


Prompt Engine Optimization


Guess I figured out my niche as a SWE: have a later knowledge cutoff date than LLMs


Niche: know anything about anything.


Larger context windows are helping solve this, though.

I use AlpineJS which is not as well known as React etc, but I just added a bunch of examples and instructions to the new Cursor project rules, and it's now close to perfect.

Gemini models have up to 2M context windows, meaning you can probably fit your whole codebase and a ton of examples in a single request.

Furthermore, the agentic way Cursor now behaves, automatically building up context before taking action, seems to be another way around this problem.


I also suspect reasoning models will start contributing genuine innovations to public repositories in the near future.


> OpenAI’s latest models have cutoffs of late 2023.

The first paragraph is factually incorrect; the cutoff is June 2024 for 4o.

Awww, no more new JavaScript frameworks and waiting only for established technologies to cut through the noise. I don't see that as a bad thing. Technologies need to mature, and maintaining API backward compatibility is another advantage.


> Awww, no more new JavaScript frameworks and waiting only for established technologies to cut through the noise. I don't see that as a bad thing. Technologies need to mature, and maintaining API backward compatibility is another advantage.

I think this kind of discussion is immature and downplays the point of the article.

A good example of this that I just encountered: Rust. I asked Claude/ChatGPT for Rust stuff recently, and it still gives a lot of old/deprecated methods for many things. This has been the case for Godot 3 vs 4 as well.


Same here. In my TypeScript code, Cursor/Claude seem quite fluent. In my Rust code, I often turn Cursor Tab off because it’s just suggesting nonsense.


It does not really matter: even though the models get updated, the new data was produced with the help of the older models, so it is feeding on itself.

Just imagine how hard it would be to push a new programming language. No AI models would be able to generate code in that new language, or they would be extremely limited. This would make adoption much more difficult in a world where all developers use AI tooling extensively.

I believe this trend could create new opportunities also: as everyone uses AI tools to generate statistically average quality code, only those not using AI tools will be able to create true innovation.


In some sense I am hopeful that AI will be able to just write everything directly in binary. Everything written ideally, with no abstraction, fluff or bumpers for human brains. Computers don't talk in any high level programming language, they talk in binary. If anything we should probably be focusing LLMs on getting good at that.

I can only imagine that the amount of energy wasted on CPU cycles from layers of bloated programming languages makes stuff like bitcoin mining look like a rounding error.


Not sure that’s always a good thing - see the occasionally erratic behavior of Tesla’s autopilot. It directly speaks the language of the systems it connects with, and also occasionally steers into dividers, killing people - and nobody knows why. We need to be able to verify correctness of what the models generate.


So you can't even debug or check the LLM's output for correctness? Good luck with that.

If anything, the best use for LLMs is to produce code in very strong languages like Lean, where correctness of the code can be established. Just trusting machine code output would be irresponsible.


Author here. May I request a source for that?

Platform docs state:

> The knowledge cutoff for GPT-4o models is October, 2023.

https://platform.openai.com/docs/models#gpt-4o


> Updates to GPT-4o in ChatGPT (January 29, 2025)

> By extending its training data cutoff from November 2023 to June 2024 […]

https://help.openai.com/en/articles/9624314-model-release-no...


Thank you. I'll make a correction next I can.

I do wonder why this information is lacking from the platform docs though. They specifically mention a model that is the "GPT-4o used in ChatGPT".


It seems like a bug to me. If the content in the doc does not match the actual behavior, either the doc is outdated or the software has a bug. Which is a bug either way.


What if you are using boring technology, but surprise, it or some of its libraries got updated? React is on version 19. Show some imagination.


I hear you, and I’ve run into similar issues where AI falls short—take aerospace documentation systems or tools like MCP (tiling manager) as prime examples. That said, framing this as a stifling of technology feels like a leap to me. The hypothesis strikes me as anecdotal at best. For instance, I don’t see coworkers abandoning libraries just because LLMs can’t assist—it’s not that hard to dive into the library code yourself, or even attach docs in Cursor if clarity’s a priority. If anything, the friction here feels minor, not existential. To claim AI is holding progress back? That argument needs more robust evidence, in my opinion.


The problem is acute with APIs that move fast and deprecate (Shopify and some of the Google ones)


I can't wait to see how these AI models maintain backward compatibility


I noticed this as I experimented with alternatives for React and all of them I tried were terrible on OpenAI/ChatGPT. Either it doesn't know them, or it makes weird mistakes, or uses very outdated (no longer working) versions of the code.

It is also annoying that most modern JS things have 4 versions to do the same thing: With TS, With TS + Decorators, With plain JS, with JSX, etc. so code generation picks one that isn't compatible with the "mode" you use.


I've noticed ChatGPT/GH Copilot is also particularly bad at PowerShell (I do sysadminy things), especially anything to do with the MS Graph API.

It just makes up Cmdlets a lot of the time. If you prod it enough though it will eventually get it right, which strikes me as odd, it's like the training data was just full of really bad code.

By contrast, anything I've asked it to do in Python has been more or less spot on.

I fear that in the future the choice of tech stack is going to be less on the merits of the stack itself and more centered around "Which language and framework does ChatGPT (or other AI) produce the best output for"


If AI stifles the relentless churn in frontend frameworks then perhaps it's a good thing.


It isn't only frontend frameworks.

I currently AI-coma / tab-complete C++17 with decent results for stuff ridiculously far away from frontend, but I do wonder who is providing the training data for C++23 and onwards as there isn't wide adoption yet.


I can chime in with a similar anecdote. I use co-pilot extensively in "fancy tab completion" mode. I don't use the conversational features - just the auto-complete to help my coding along.

I specifically found it very useful when dealing with a bunch of boilerplate C++ shim code that used some advanced template magic to declare families of signatures for typed thunks that wrapped and augmented some underlying function call with record/replay log.

It was arcane, bespoke stuff. Highly unlikely to be imitated in training data. The underlying code was written by a colleague (the CTO) but it was hard to grok just because of all the noise the template magic added to the syntax, and carefully keeping track of what was getting substituted where.

The changes I was making were mostly to add new shims to this collection of shims, and co-pilot happily used the pattern of shims in previous lines of code to extrapolate what my new shims should look like and offer me tab-completions that were sensible.

That included some bad guesses (like inferred names for other templated forms that referred to different argument types), but overall it was structurally close enough to use as a reference point and fix up as needed.

It was really great. Took what would have been about a day's worth of work carefully figuring out from first principles how the template system was structured.. and made it into about half an hour of "get it to generate the next shim, verify that the shim does the right thing".


> It was really great. Took what would have been about a day's worth of work carefully figuring out from first principles how the template system was structured.. and made it into about half an hour of "get it to generate the next shim, verify that the shim does the right thing".

That also seems to highlight the disadvantage too - if you'd taken a day, you would have come away with a deeper understanding of the template system.


Fair point. In that particular circumstance I had no desire to learn the details of the system - the need of the day was to get in there, get the shims in, and get out to the other code that mattered.

If I hadn't worked for more than a decade with C++ and already been reasonably fluent in template semantics, there's a good chance I might have introduced bugs as well.

I think the issue is that these feel like incredibly safe tools because all they do is generate text, but at the same time it can lead to bad hygiene.

I personally never use AI to write prose (I'll keep my voice and my quirks thanks). And for code, I utilize the system specifically as a fancy pattern extension system. Even in well-abstracted code, about 90% of the structure of it can be inferred from the other parts. I let the AI complete that for me with tab-completions and verify that each line is what I would have otherwise typed before moving on.

I'm not comfortable engaging at a deeper level with these tools yet.


I agree. Most frontend rewrites are totally unnecessary, caused by resume-driven development, desire for the latest fancy thing and most JS frameworks having a short lifespan. If LLM's reduce that behavior by steering devs towards the most popular solutions, then it's only a good thing.


EXACTLY.

I suspect there has been a decade-long three-card monte on the front end, in that change is good because change keeps front-end salaries up.

Personally, the sooner LLMs make all front end developers unemployed the better.


The answer to this seems obvious: continuous training of live models. No more "cutoff dates": have a process to continually ingest new information and update weights in existing models, to push out a new version every week.

Note that I said "obvious", not "easy", because it certainly isn't. In fact it's basically an unsolved problem, and probably a fiendishly difficult one. It may involve more consensus-based approaches like mixture of experts where you cycle out older experts, things like that -- there are dozens of large problems to tackle with it. But if you want to solve this, that's where you should be looking.


While you might be able to continuously update the model, are you able to continuously update the moderation of it? As the article says, it takes time to tune it and filter it; if you allow any content in without some filtering of outputs you might end up with another Tay. You'd have to think the liability would slow down the ability to simply update on the fly.

Also, if the proportion of training data available is larger for more established frameworks, then the ability of the model to answer usefully are necessarily dictated by the volume of content which is biased towards older frameworks.

It might be possible with live updating to get something about NewLibX but it probably would be a less useful answer compared to asking about 10YearOldLibY


Moderation is the real reason it will be difficult to have online learning models in production. I think the technical side of how to do it will not be the biggest issue. The biggest one will be liability for the output.


Talking to non-technical, but otherwise well-informed people, there is a broad assumption that AIs already "learn" as they're used for inference. IME people are surprised to find training and inference to be entirely separate processes. Human intelligence doesn't have such a stark distinction between learning and applied learning.


Yeah I think every lab would love to do this and the field has been thinking about it forever. (Search lifelong learning or continual learning on Google Scholar). I don't think a technological solution is likely enough that we should pursue it instead of social solutions.


I think this is true because I caught myself thinking: "it is useless for me to create a library or abstraction for the developers on my project; much better to write everything verbosely using the most popular libraries on the web". Until yesterday, having an abstraction (or a better library/framework) could be very convenient to save time writing a lot of code. Today, if the code is mostly generated, there is no need to create an abstraction. AI understands 1000 lines of code in python pandas much better than 10 lines of code using my library (which rationalises the use of pandas).

The result will not only be a disincentive to use new technologies, but a disincentive to build products with an efficient architecture in terms of lines of code, and in particular a disincentive to abstraction.

Maybe some product will become a hell with millions of lines of code that no one knows how to evolve and manage.


This is completely wrong and assumes that an LLM is just much better at its job than it is - an LLM doesn't do better with a chaotic code base, nobody does - a deeply nonsensical system that sort of works is by far the hardest to reason about if you want to fix or change anything, especially for a thing that has subhuman intelligence.


LLMs work best matching patterns. If 1k LOC matches patterns and the 10 LOC doesn't, it's a problem.

The one thing the OP is missing, which combines the best of both worlds, is to always put the source and/or docs for his abstractions into the context window of the LLM.
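
For what it's worth, a minimal sketch of what that could look like (assuming a Node.js script; the file paths and callLLM are placeholders, not any real client API):

    // Build a prompt that carries the abstraction's source and docs along with
    // the task, so the model works with the in-house library instead of raw pandas.
    const fs = require('fs');

    function buildPrompt(task) {
      const librarySource = fs.readFileSync('./lib/my_abstraction.py', 'utf8');
      const libraryDocs = fs.readFileSync('./docs/my_abstraction.md', 'utf8');
      return [
        'Use the in-house library below instead of calling pandas directly.',
        '--- LIBRARY SOURCE ---',
        librarySource,
        '--- LIBRARY DOCS ---',
        libraryDocs,
        '--- TASK ---',
        task,
      ].join('\n');
    }

    // callLLM(buildPrompt('Aggregate sales by region and month'));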


If your abstractions match common design patterns then you've solved your problem. It's ridiculous to assume that an LLM will understand 1k LOC of standard library code better than 10 lines of a custom abstraction which uses a common design pattern.

It's more prone to hallucinating things if your custom abstraction is not super standard but at least you'd be able to check its mistakes (you're checking the code generated by your LLMs right?). If it makes a mistake with the 1k LOC then you're probably not going to find that error.


LLMs are not human, they see the whole context window at once. On the contrary it’s ridiculous to assume otherwise.

I’ll reiterate what I said before: put the whole source of the new library in the context window and tell the LLM to use it. It will, at least if it’s Claude.


Attention works better on smaller contexts since there are fewer confounding tokens, so even if the LLM can see the entire context, it's better to keep the amount of confounding context lower. And at some point the source code will exceed the size of the context window; even the newer ones with millions of tokens of context can't hold the entirety of many large codebases.


Of course, but OP's 1k LOC is nowhere near any contemporary limit. Not using the tool for what it's designed for because it isn't designed for a harder problem is… unwise.


I have experienced quite a few mistakes by Claude as the documentation grows larger (and not necessarily too large by most standards). E.g. some time ago, I fed the whole JS documentation for some sensors into the context window and asked it to generate code. The documentation mentioned specifically that the environment does not fully support ES6, and explicitly that it does not support const. Claude did not bother and used const. And many times I have seen Claude make syntax mistakes in a (much less common than JS or Python) language, mistakes that would make sense in some other language but not in that one. I have inserted instructions into system prompts not to make those specific mistakes, told it to make sure the syntax is valid for language X, but Claude keeps making the same mistakes once in a while. Negative prompts are hard, especially when they probably go against a huge chunk of the training set.


> Maybe some product will become a hell with millions of lines of code that no one knows how to evolve and manage.

That is exactly what will happen, so why would you do that?


I think I might be forced to do this by the metrics I'm measured by at work: "things have to work right away and have to scale quickly to other low-skilled people".


On the other hand you should ask yourself why do you care? If you assume no human will ever read the code except in very extraordinary circumstances, why wouldn’t you do that?


Wow. And this posture doesn't account for junior developers, i.e. a good abstraction is needed to avoid overwhelming the human "context window."

But it is a shame--and possibly an existential risk--that we then begin to write code that can only be understood via LLM.


Only in one sense. As code is now cheaper, abstractions meant to decrease code quantity have decreased in value. But abstractions meant to organize logic to make it easier to comprehend retain their value.


I like this take.

Previously there was a tension between easy-to-write (helper functions to group together oft-repeated lines of code, etc) vs easy to read (where often modest repetition is fine and is clearer). I felt this tension a lot in tests where the future reader is very happy with explicit lines of code setting things up, whereas the test author is bored and writes layers of helper functions to speed their work up.

But for LLMs, it seems readability of code pretty much equals its writability?

To make code more authorable by LLM, we approximately just need to make it more readable in the traditional sense (code comments, actual abstractions not just code-saving helper functions, etc).
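
To make the tension concrete, a toy sketch (setupAdminWithPost and deletePost are hypothetical names; a Jest-style runner is assumed):

    // Author-optimised: quick to write, but the reader has to chase the helper
    // to learn what is actually being set up.
    test('admin can delete a post', () => {
      const { admin, post } = setupAdminWithPost();
      expect(deletePost(admin, post)).toBe(true);
    });

    // Reader-optimised: repetitive, but every assumption is visible in place,
    // and the flat structure is exactly what an LLM tab-completion extends well.
    test('admin can delete a post (explicit setup)', () => {
      const admin = { id: 1, role: 'admin' };
      const post = { id: 42, authorId: 7, deleted: false };
      expect(deletePost(admin, post)).toBe(true);
    });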


I hope so, but it adds an extra difficulty. "Easy to understand" is not an absolute metric: a project with many lines of code can be easy to understand for a team with a certain experience and difficult for another team with a different experience (not less, but different). Now I will have to think about "easy to understand" for AI as well.


Why would AI stifle tech adoption when ~nobody uses it? I think HN is in a bit of a bubble here. People on here seem to often think that everyone is using AI at work, it is really common and widely appreciated, etc.

It really, really isn't. Most people in the software industry do not use it. Its use in other industries and in the professions is even lower. AI coding tools are bad enough at widely used things like Python and JS. They are DOGSHIT at generating C or C++. They are basically terrible at doing anything other than regurgitating things from Medium blogspam tutorials.

The result is not people moving to only using technology that AI is "good" at (relatively, given it is terrible at coding anything at all). It is that the overwhelming majority don't use it at all. The thing is, nobody really talks about this because it isn't interesting _not_ to use something. You can't write many high-engagement blog posts to content-market your company by saying you still just use vim and ctags and documentation to write code, just like you did 10 years ago. That isn't noteworthy and nobody will read it or upvote it. HN is always biased by this towards the new, the noteworthy, changes to practices, etc. Just like browsing HN would lead you to believe people are rewriting their websites in new JS frameworks every 6 months. No, but posts about doing that obviously generate more engagement than 6-monthly "Update: Our website is still written in Ruby on Rails" posts would.


Is this such a bad result? Do we need office CRUD apps to use bleeding edge technologies?


It's also a problem when adopting new functionality in existing frameworks (eg. upgrading an app to new Android release), dropping the use of deprecated functionality, taking advantage of more readable syntax in programming languages, etc.


> taking advantage of more readable syntax in programming languages

"AI" is the programming language here. The readability of any lower level language(s) that may exist as a compiler target of the "AI" is about as important as how readable assembly language is to someone writing software in Rust.


If they don't, then will the bleeding-edge technologies ever be used by more serious apps? Will they even be ready if we don't test them with easier use cases?


Probably not, but I guess the worry would be, if nobody adopts and uses them, bleeding edge technologies don't become the new normal. Unless you think we've reached perfection, it's almost guaranteed that future developers will look at React/Python/etc as we look at developing in Assembly or COBOL.


This makes me fear less for web development jobs being lost to AI, to be honest. Look, we can create new frameworks faster than they can train new models. If we all agree to churn as much as possible the AIs will never be able to keep up.


Until the AI groks the principle of deviation and starts generating frameworks faster than we do.


This ought to be called the qwerty effect, for how the qwerty keyboard layout can't be usurped at this point. It was at the right place at the right time, even though arguably its main design choices are no longer relevant, and there are arguably better layouts like dvorak.

Python and React may similarly be enshrined for the future, for being at the right place at the right time.

English as a language might be another example.


QWERTY is a poor example. The keyboard layout is not the bottleneck for anyone who does not participate in olympic typing competitions. DVORAK is just as arbitrary as QWERTY to everyone else including professionals, and there's value in backwards compatibility e.g. old keyboards don't become e-waste.


It sounds like you're thinking purely about the speed of a skilled typist. Alternative keyboard layouts offer a tangible ergonomic benefit even at lower WPM counts, and can have a lower hunt-and-peck time for novices by clustering frequently-used letters together. (This last effect is particularly pronounced on small touch screens, where the seek time is non-trivial and the buttons are much too close together for any sort of real touch-typing.)


I expect dvorak would actually be pretty bad for a phone screen keyboard. All your most common letters are right next to each other, making autocorrect/swipe type's job harder because adjacent letter pairs are much more likely to be interchangeable. (Especially since vowels are on the left side and consonants on the right.)


I remember reaching a really high speed on the keyboard of the original iPod Touch. It actually did feel like touch typing - I didn't really have to look at the on-screen keyboard. I can't pin down exactly what's been missing from newer keyboard apps. Something about the ergonomics and UI came together just right.


The original iPod Touch having a 3.5" screen probably had a lot to do with it; with thumb typing, a smaller keyboard can be better - finer movements per keystroke. A modern iPhone 14 is 6.1".


The keyboard layout becomes a bottleneck after you develop carpal tunnel syndrome and bad posture due to internal shoulder rotation


It would be interesting if Apple and Google and Samsung agreed to make dvorak (or a mutually created better mobile/swipe variant) the default mobile keyboard for all NEW accounts going forward. Unlike hardware, software keyboards could be swapped in a generation or two.


> arguably better layouts like dvorak

I don't think there is any truth to this beyond its ability to start such arguments. I am sure there is a paper showing a negligible difference between layouts.

I used to use Dvorak but then I stopped when I was around 17? Qwerty for life.


Pathologists as a specialty have been grousing about this for several years, at least since 2021 when the College of American Pathologists established the AI Committee. As a trivial example: any trained model deployed will necessarily be behind any new classification of tumors. This makes it harder to push the science and clinical diagnosis of cancer forward.

The entire music community has been complaining about how old music gets more recommendations on streaming platforms, necessarily making it harder for new music to break out.

It's absolutely fascinating watching software developers come to grips with what they have wrought.


I know nothing about pathology, but in terms of software, I think slower adoption to new tech is what we need, especially when the "new tech" is just a 5% faster javascript framework.

By the way, for content creation, the only platform that really favors new creators is TikTok. Whether it leads to higher content quality is left to one's judgement.


> I know nothing about pathology, but in terms of software, I think slower adoption to new tech is what we need, especially when the "new tech" is just a 5% faster javascript framework.

I hope that's not the definition people are using when discussing adoption of "new tech".

When it comes to the topic of AI and "new tech adoption", I think about something like the Rust programming language.

I apologize if it chafes the people reading this comment that I'm something of a Rust evangelist and I'm working from a point of view that Rust's very existence is a (large) net-positive when it comes to programming and how we think about programming language design.

My fear with AI tools in their current state is that it will slow down innovation in programming languages. Rust gained popularity because it brought things to the table that made writing safe, performant, and correct (thinking about the strong, expressive, static type checking) software much easier than it had been with the old incumbents (in certain domains).

But, if Rust were released today or in the near future, would it take off? If we could, hypothetically, get to a point where an AI tool could spit out C or C++ code and push it through some memory sanitizers, Valgrind, etc, and just iterate with itself until it was very likely to be free of memory safety bugs, why would we need a new language to fix those things? I guess we wouldn't. And it wouldn't really matter if the code that gets generated is totally inscrutable. But it saddens me to think that we might be nearing the end of human-readable programming language research and design.


It will be harder for new languages and frameworks. AI exacerbates the bootstrapping problem.

An interesting example is perl which is essentially static at this point (perl 6 was renamed and never got traction).

I know from experience running pipelines that those old perl scripts almost always work, whereas if I come across an old python script (2.x) I will have to go in and make some adjustments. Maybe a library has changed too…

People like new shiny things though. Maybe new languages will try to train the AI and release their own models, but that's a huge lift.


Might be easier than you think. If DeepSeek can train a model cheaply, so can you. Probably more cheaply as the technology and models get better.

People used to be worried that AI performance was going to degenerate if models are trained on AI slop, but it's been found that synthetic data is the bee's knees for coding, reasoning and such, so it may well be that a new language comes with a large amount of synthetic examples which will not just be good for AI training but also for documentation, testing and all that.


I'm also going to argue that Rust is a less AI-friendly language than, say, Go.

GC languages have many benefits that come from 'you don't have to think about memory allocation'. For instance you can just smack an arbitrary library into a Java program with maven and not think at all about whether the library or the application is responsible for freeing an object. The global problem of memory allocation is handled by a globally scoped garbage collector.

LLMs are great at superficial/local translation processes (e.g. medium-quality translation of 中文 to English doesn't require constraint solving any more than remembering which of the many indexing schemes is the right one for 'how do I look up this row/column/whatever in pandas') But fighting with the borrow checker (getting global invariants right) is entirely outside the realm of LLM competence.


All you're talking about there in the end would be another compilation step.

I'm highly bearish on the concept of anything like that ever being possible (and near perfectly reliable) with llms, but if it were then it'd make sense as just another processing phase in compilation.


That's not wrong. There is a lot of hype-driven development in the programming world. People are always jumping on the latest web frameworks and such. A little bit more stability is not a bad thing.

That being said, I think that people underestimate how fast LLM technology can evolve. At the moment, lots of training data is needed for LLMs to learn something. This may not always be the case. In 2 to 5 years, it may be possible to train an LLM to be helpful with a new programming language with much less data than is needed today. No reason to assume that the current situation is what things will be like forever. It's not like this technology isn't evolving incredibly fast.


After watching the entire world’s reaction to AI, at this point my conclusion is that hype driven development is human nature, and we just need to come to terms with that (but you will have to drag me kicking and screaming).


Maybe if you count artificially inflating the hype through massive ad campaigns, marketing campaigns, and shoe-horning AI into every product, then yeah, the world has a reaction to AI. It's mostly been meh; things like Apple Intelligence and Office Copilot have largely fallen flat.

If the hype was real none of these AI initiatives would be struggling to make money, but they are.

I don't really see it different than the artificial web3 hype, the only difference being that LLMs are used for extreme happy path scenarios.


> I don't really see it different than the artificial web3 hype

It's also little different than the .com bubble...

I think this teaches us that a thing can be hyped into the stratosphere, suck very much, crash and burn, and then go on to eat the world...


The problem is that Apple intelligence is currently kinda useless. They rushed it into production in a misguided effort to "stay relevant". It may take a few years but we should eventually get useful personal assistant type AIs.

I would say LLMs are very useful for specific scenarios. They're also getting better. Just takes time to iron out the kinks.


a 5% _slower_ javascript framework


Surely 50% slower, compounding each year?

Jokes aside, I find it curious what does and doesn't gain traction in tech. The slowness of IPv6 adoption was already an embarrassment when I learned about it in university… 21 years ago, and therefore before the people currently learning about it in university had been conceived.

What actually took hold? A growing pantheon of software architecture styles and patterns, and enough layers of abstraction to make jokes about Java class names from 2011 (and earlier) seem tame in comparison: https://news.ycombinator.com/item?id=3215736

The way all of us seem to approach code, the certainty of what the best way to write it looks like, the degree to which a lone developer can still build fantastic products and keep up with an entire team… we're less like engineers, more like poets arguing over a preferred form and structure of the words, of which metaphors and simile work best — and all the while, the audience is asking us to rhyme "orange" or "purple"


The slowness to adopt IPv6 is because it's not a great design.

Going from 32-bits to 128-bits is complete overengineering. We will never need 128-bits of network address space as long as we are confined to this solar system, and the resulting addresses are extremely cumbersome to use. (Can you tell someone your IPv6 address over the phone? Can you see it on one monitor and type it into a different computer? Can you remember it for the 10 seconds it takes to walk over to the other terminal?)

48-bit addresses would have been sufficient, and at worst they could have gone with 64-bit addresses. That is already too cumbersome (9-12 base36 digits), but maybe with area-code-like segmentation it could be made manageable. 128 bits is just not workable.
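
The digit estimate roughly checks out; a quick back-of-the-envelope in JS:

    // Number of base-36 digits needed for n bits: n * log(2) / log(36)
    const digitsBase36 = (bits) => bits * Math.log(2) / Math.log(36);
    console.log(digitsBase36(48).toFixed(1)); // 9.3
    console.log(digitsBase36(64).toFixed(1)); // 12.4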


> extremely cumbersome to use. (Can you tell someone your IPv6 address over the phone? Can you see it on one monitor and type it into a different computer? Can you remember it for the 10 seconds it takes to walk over to the other terminal?)

That's your idea of "extremely cumbersome"?

128 bit is exactly as hard as four groups of 32 bit.


Yes, it makes it untenable. There is no way I can remember 4 IP addresses at once.


Maybe I'm overlooking something but just looking at the past decade or so, a lot of new technologies and practices have been adopted. I assume most people would call these changes progress. So with this in mind, if in 10 years, we're by and large using the same technologies with AI injected in it, I feel that we would be missing something, as this article points out.

It's kind of sad to think that there may never be new technologies like Rust that break out and gain a critical traction. I'm hoping I'm wrong.


I guess it makes sense to differentiate technological areas where we want progress at "any possible pace" vs "wait and see pace". I don't know if pathologists or other medical professionals feel the same about their field.

On a related note, are there any techniques for facilitating tech adoption and bringing all users up to speed?


Tiktok does not favour new creators, its users do. And only because it's a new generation of consumers for the most part, who want to consume content from their chosen platform. The same thing will happen with Gen Alpha.


Netflix and YouTube push new content/favor new creators or directors.

They don’t push the classics like music platforms do, and I don’t think it’s just streamers’ tastes.


>5% faster javascript framework

Now you are bullshitting


You are correct, he meant 1% faster


More like 30% slower, 10% easier for trivial stuff and 70% more complicated for nontrivial stuff


1% faster and 100% more complicated


Web development isn't your thing either i see ;)


> The entire music community has been complaining about how old music gets more recommendations on streaming platforms, necessarily making it harder for new music to break out.

Compared to what though? Compared to LimeWire/Kazaa back in the day, or compared to buying records in a store?

Personally, I find it easier than ever to find brand new music, mostly because Spotify still surfaces new things with ease for me (and always has, since I started using it in 2008), and platforms like Bandcamp make it trivial to find new artists that basically started uploading music yesterday.


> Compared to what though? Compared to LimeWire/Kazaa back in the day, or compared to buying records in a store?

Compared to curation by other humans. Be it music labels, magazines, radio DJs, or a person sharing their playlist or giving you a mixtape.

In this model, tastes never overlap perfectly, so you're exposed to unfamiliar music fairly regularly, often in some emotional context that makes you more likely to accept something new.

Algorithms don't really do that. They could, but no one is designing them that way. If I listen predominantly to female vocalists on Spotify for a week, I'm only getting female vocalists from now on.


I don't get that. Its recommender has been great for me. And there are lots of playlists if I want to try something completely different.


Or compared to the days of radio, having labels decide what's on the mainstream and the indie college stations doing the unpaid work (and giving listeners the gift) of discovering "lost" hits.


Compared to Myspace. The difference for anyone who lived through it is night and day.


I doubt it is easier to retrain a large, dispersed group of humans on a new classification of tumors than it is to retrain a model on the same.


Not if they're trained to work through the problem each time they encounter it and stay up to date with their clinical training. The day the new classification drops, many have already heard about it.

You also assume that all the models in use will in fact be retrained.

Generally, this position flies in the face of lived experience. AI is in fact stifling adoption of new things across many industries.


My position is informed by my own experience; I am not a physician, but have worked closely with a large number of them in a healthcare-oriented career. I've repeatedly noted long-term resistance of many physicians to updating their priors based on robust new evidence.

There are definitely many physicians who do take in the latest developments judiciously. But I find the long tail of default resistance to be very, very long.


It's not bad actually, considering how many times the new shiny thing has turned out to be quite dangerous a few years later. In a field as high-stakes as healthcare you want the whole spectrum, from early adopters to die-hard skeptics. Especially given the reproducibility problem in research, the influence of big pharma and big insurance on healthcare, etc.


I was just explaining to a UK colleague about how the American health care system makes getting treatment (and getting it paid for) into a DIY project. And so as a medical shopper, if I’m getting a very standard established treatment I might go for the older experienced doctor, but if it’s a new thing I’d opt for someone more recently graduated.

I’m sure the same thing applies worldwide.


There's no way that doctors in the US continue their training.

I suffer from a chronic illness. I saw 5 different specialists over the past 2 years, and each one gave me different treatment. A couple even relied on information from the 70s and 80s. One even put me in the ER because he changed my treatment after I explicitly told him not to.

Another example: Back in my 20s, I injured my back. I took my MRI results to six different doctors - and I'm serious here, every one gave me a different diagnosis. In the end, I fixed it myself by doing my own research and treatment (2 years of physical therapy). One doctor said I had pain because spine fragments were lodged in my spinal cord (not true). Two of the doctors were even pushing me into invasive surgeries, and I'm so glad I told them no.

I don't understand the praise for doctors. If I had to generalize, I'd guess the majority give up learning after achieving wealth and status. It seems like an art and not a science. I will emphatically welcome AI treatment.


Continuous improvement is the norm in cancer treatment, not in other areas where diseases are ill-defined and have a huge psychogenic component: back pain is the index case for that, so is TMJ dysfunction. In either case you might go from disabled to 'I have a bad day once a month' with a 20% change in habits and 80% change in attitude.

My dad, who worked in construction, got disabling back pain just in time for the Volcker-Friedman recession. [1] His doc wanted to give him surgery, which had the risk of being even more disabling; he said he'd go to a chiropractor instead, and his doc said "if you don't do that then don't see me". I remember him taking me along for his chiropractor visits and getting a waterbed (bad idea). He was on workman's comp at the time but got better around the time the economy improved and work was available again. Not to say he was consciously malingering, but work-associated pain has a lot to do with how you feel about your work.

[1] https://en.wikipedia.org/wiki/Early_1980s_recession


If you don't respect the field in general, why do you think an AI amalgamation of that field to be better?


I respect the research and researchers - but medical researchers are far removed from medical practitioners.


> You also assume that all the models in use will in fact be retrained.

And that deploying the retrained models won’t require a costly and time consuming recertification process. This is medicine after all.


Where I live, specialists can't even speak English, so I doubt very much that they are up to date on anything. And I live in a first-world country.


At first glance this may look like a bad comment, but there is sound reasoning behind it. I see it with friends who are top-notch doctors and surgeons in the French part of Switzerland.

Many articles and conferences are in English and often don't get translated (well), and one friend who is a urology surgeon specifically mentioned this as an issue in his (former) department at Switzerland's biggest hospital (HUG in Geneva). They simply lag a bit behind the bleeding edge.

Can't comment on other languages/cultures, but French-speaking folks and English often don't pair well together.


I'm also in a country where English isn't the first language and for the doctors that do wish to stay up to date on what's going on, there are ways for them to do it, and translation technology is pretty top-notch already.

Aside from time constraints and perhaps no incentive to stay up to date, we do have to remember that some of these new discoveries always take time to find their way into becoming SOTA treatments everywhere, due to costs, regulations needing updates, special training or equipment being required, as well as sometimes only being marginally better than existing treatment options.


It's not, in my experience doctors are woefully behind the curve on the cutting edge, and even a bit hostile towards it.


I think it depends on what you mean by “easier”. Dispersing knowledge through people is more intuitive, and tends to happen organically.


Well, the difference is that people eventually die or retire, so they are constantly being replaced.


I think a one-year gap in adoption of new tech is not that bad. Isn't it better to always go for the mature tech first? The real change will come from the fact that because of AI, compute will be so cheap in the coming years: https://medium.com/thoughts-on-machine-learning/a-future-of-...


This is assuming that new technology will grow the same as in pre-LLM times, and merely be picked up a year late. But use of LLMs is likely to cause new developments to grow and spread slower, because of the reduced visibility. It may take much longer for a new development to gain currency to the extent that it becomes sufficiently visible in the training data. This also slows competition between evolving technologies.

In addition, as the article describes, the LLM services have biases built in to them even among existing technologies. It amplifies existing preferences, leading to less diversity and competition between technologies. Tech leads will have to weigh between the qualities of a technology on its own merits against how well it is supported by an LLM.


> The entire music community has been complaining about how old music gets more recommendations on streaming platforms, necessarily making it harder for new music to break out.

Why does music continually entrench the older stuff (will we ever stop playing classic rock bands) whereas video streaming platforms like Netflix and YouTube try to hide/get rid of the old stuff?


The main issue with AI, and ironically the reason why ChatGPT is the best one, is whom it works for.

AI doesn't work for the user. It couldn't care less if the user is happy or not. AI is designed first and foremost to make more money for the company. Its metrics are increased engagement and time on site, more sales, sales with better margins. Consequently, the user often has no choice or control over what the AI recommends for them. The AI is recommending what makes more sense for the company, so the user input is unnecessary.

Think of AI not as your assistant, but as a salesman.

One interesting consequence of this situation I found was that Youtube published a video "explaining" to creators why their videos don't have reach in the algorithm, where they essentially said a bunch of nothing. They throw some data at the AI, and the AI figures it out. Most importantly, they disclosed that one of the key metrics driving their algorithm is "happiness" or "satisfaction" partially gathered through surveys, which (although they didn't explicitly say this) isn't a metric that they provide creators with, thus it's possible for Youtube to optimize for this metric, but not for creators to optimize for it. That's because the AI works for Youtube. It doesn't work for creators, just as it doesn't work for users.

People are complex creatures, so any attempt at guessing what someone wants at a specific time without any input from them seems just flawed at a conceptual level. If Youtube wanted to help users, they would just fix their search, or incorporate AI in the search box. That's a place where LLMs could work, I think.

When you look at things this way, the reason why Netflix/Youtube get rid of old stuff has nothing to do with users, but with some business strategy that they have that differs from the music industry.


I wonder if shows like The Office, Parks and Rec, Seinfeld, etc end up becoming the "classic rock" of streaming.


> The entire music community has been complaining about how old music gets more recommendations on streaming platforms, necessarily making it harder for new music to break out.

I can understand other issues, but this has nothing to do with that. Models don't have to be re-trained to recommend new music. That's not how recommendation systems work.


> new music

I keep thinking I'm going crazy until Rick Beato explains that yes, I am just an RNN Meat Popsicle and the world is interpolated:

https://www.youtube.com/watch?v=j_9Larw-hJM


This is the fault of the regulators. There's no reason new discoveries can't be put in a queue to train a new AI; when there are enough to make it worth the run, you do the run, then give the doctors the old model and the new model, and they run both and compare the results.


The healthcare diagnosis one may be wrong. For existing known diagnoses (or at least the sliver of diagnoses in this one study), AI can beat doctors - and doctors don't like listening when it challenges them, so it will disrupt them badly as people learn they can provide data from tests directly to AI agents. Sure, this doesn't replace new diagnoses, but the vaaaast majority of failures to diagnose are for existing, well-classified diagnoses.

https://www.advisory.com/daily-briefing/2024/12/03/ai-diagno...

Edit: yeah, people don't like this.


I'm familiar with the linked study, which presents legitimately challenging analytic problems. There's a difference between challenging analytic problems and new analytic problems.

A new platform poses new analytic problems. A new edition of the WHO's classification of skin tumors (1), for example, presents new analytic problems.

(1) https://tumourclassification.iarc.who.int/chapters/64


Right, but the vast majority of patient issues today are missing existing diagnoses, not new ones.


I think OP was referring to the case where new illnesses that are not part of the training set are never going to be diagnosed by AI.


It's only a problem if hospitals replace doctors with AI. If they employ AI as well then outcomes will improve. Using AI to find the ones AI can identify means doctors have more time to focus on the ones that AI can't find.

Of course, that's not what's going to happen. :/


> Using AI to find the ones AI can identify means doctors have more time to focus on the ones that AI can't find.

That's not how that would work in the real world. In a lot of places a doctor has to put their signature or stamp on a medical document, making them liable for what is on that paper. Just because the AI can do it, that doesn't mean the doctor won't have to double check it, which negates the time saved.

I would wager AI-assisted would be more helpful to reduce things doctors might miss instead of partially or completely replacing them.


Interesting. Do you see any versions of the future where use of AI could actually make the physician take more time?


Let's assume you program it so that if it believes with 95% certainty that a patient has a certain condition, it will present it to the doctor. Even if the doctor doesn't agree with it, the whole process between doctor, patient, hospital and insurer might be automated to the point where it's simpler to put the patient through the motions of getting additional checks than for the doctor to fight the wrong diagnosis, so the doctor will have to spend more time following up to confirm that this condition is not really present.

I don't have a crystal ball, so this is a made-up scenario.


Never is a long time.

Sure, LLMs might not do this anytime soon, but once models understand enough biology, they're going to identify patterns we don't and propose new diagnoses. There's no reason why they wouldn't.


Unfortunately, that's not how LLMs work.


It has been interesting to see the excuses from doctors for why we need error-prone humans instead of higher-quality robots.

>Empathy (lol... from doctors?)

>New undetectable cases (lol... AI doesn't have to wait 1 year for an optional continuing education class. I had doctors a few years ago recommending a dangerous, expensive surgery over a safer, cheaper laser procedure)

>corruptible (lmaooo)

We humans are empathetic to the thought our 'friendly' doctor might be unemployed. However, we shouldn't let that cause negative health outcomes because we were being 'nice'.


So… we put all of our trust (wait, at that point it might be called faith) into this machine…

If it ever turns on us, begins to malfunction in unforeseen ways, or goes away completely—then what?

Shortsighted, all of it.


>Pathologists as a specialty has been grousing about this for several years, at least since 2021 when the College of American Pathologists established the AI Committee.

This sounds like Moral Coating for what is otherwise protection of the Status Quo.

High paid doctors do not want to be replaced by AI. They will use every excuse to keep their high paying job.


I'm on the fence with this. I've been using Copilot with vscode constantly and it has greatly increased my productivity. Most important, it helps me maintain momentum without getting stuck. Ten years ago I would face a problem with no solution, write a detailed question on Stack Exchange, and most likely solve it in a day or two with a lot of tinkering. Today I ask Claude. Even if it doesn't give me a good answer, I can usually get the information I need to solve the problem.

I've been thinking a lot about T.S. Eliot lately. He wrote an essay, "Tradition and the Individual Talent," which I think is pertinent to this issue. [0] (I should reread it.)

[0] https://www.poetryfoundation.org/articles/69400/tradition-an...


As the saying goes:

    while (React.isPopular) {
      React.isPopular = true
    }
It's actually quite sad because there are objectively better models for both performance and memory, including Preact, Svelte, Vue, and of course vanilla.


But equally impressive that the Javascript community has actually managed to continue to use a single framework for more than five minutes without jumping to the next.


It had to have hooks though. And vanilla-OO had to go in favor of trapping state in closures, which is cooler because it has functions and not methods.


You can still use vanilla-OO React.

Do you complain when other frameworks add new features without breaking backwards compatibility?


    > You can still use vanilla-OO React
What we want is signals-based React because it would singularly fix the issue that the compiler is solving for and remove the need to even have `useMemo` and `useCallback`, improve performance, and remove a huge footgun.

Because it has such a huge marketshare, a fix like this would have tremendous positive effects in web dev. We are all "forced" to use React, so it would do us all a great service if the team would just step back and say "Oops, we made a design mistake here!". Instead, they spent almost 3 years building a compiler that sprinkles in memoization primitives because enough devs cannot get them right.
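
For anyone who hasn't felt this pain, a minimal sketch of the kind of manual memoization the compiler now automates (useState/useMemo/useCallback are standard React hooks; the component names are made up). In a signals-based design, neither wrapper would be needed:

    import { useState, useMemo, useCallback } from 'react';

    // SearchableList and ItemList are illustrative; assume ItemList is a
    // React.memo-wrapped child component.
    function SearchableList({ items }) {
      const [query, setQuery] = useState('');

      // Without useMemo, this filter re-runs on every render of the component.
      const visible = useMemo(
        () => items.filter((item) => item.name.includes(query)),
        [items, query]
      );

      // Without useCallback, a new function identity is created on each render,
      // defeating React.memo on the child and forcing it to re-render.
      const handleSelect = useCallback((id) => console.log('selected', id), []);

      return (
        <>
          <input value={query} onChange={(e) => setQuery(e.target.value)} />
          <ItemList items={visible} onSelect={handleSelect} />
        </>
      );
    }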


I certainly did complain! And I'm sure I'll do it again if the new features aimed to supplant the old ones are worse.

If you search anything about React now, 90% of the docs are hook-based. Beginners learning React in 2025 will be guided to use a default pattern which has a worse runtime footprint and adds a whole suite of new tool-specific coding guidelines ("rules of hooks"). After years with it, I struggle to see what it has added in terms of building front-end SPAs, yet the pattern is now the default for everyone using React.


React is almost 12 years old and has dominated frontend development for almost a decade. I'd bet most JS backend projects still use Express as the webserver, and it's even older than React.

Can we please retire this meme? It's stale and adds nothing to the conversation.


Yes thanks to React we can finally retire this meme. It was very true before react became dominant though.


Does that really matter to most companies/developers? I’d much rather have a good enough solution with a large ecosystem built around it. It also takes a lot of investment for companies to change their tech stack


    > Does that really matter to most companies/developers?
If you're asking about performance and memory, then yes, it does.

This is especially true in e-commerce where many studies have shown that overall page performance has a correlation to conversion. Add to that the fact that a lot of e-commerce has moved to mobile web, there's a strong case for picking the best performing technologies versus developer preference -- especially if AI is generating it.

But even outside of e-comm, consider government websites where you may have low income users that are using cheaper, older, lower capability devices to access information and services using lower speed networks to boot.

I do my day-to-day work on an M3 Max with 64GB RAM and fiber to the home; it's easy for developers to forget that many times, their end users can be on older devices, on low performing networks, and other factors that affect performance and usability of web applications.

    > ...with a large ecosystem built around it
When you can generate whatever you want nearly instantly, what does "ecosystem" mean? Your possibilities are endless and infinite. Your mindset is in a world where it's necessary to depend on the work of others because it's too expensive for you to write your own component library. Yes, that was true 1 year ago; why would you waste time and energy to create your own calendar component? But if an LLM can generate any bespoke component that you need in < 3 seconds, do you still need a third party component library?

In fact, you may be better off for not creating the added external dependency.


> I do my day-to-day work on an M3 Max with 64GB RAM and fiber to the home

And you still pay a few cents extra in power used because of all those inefficient and memory hungry "applications". You just don't notice.


So, if you're a large e-commerce company that's trying to squeeze out the last percentage points of conversion and are researching alternative JavaScript libraries, is it plausible that the only research a development team would do is to ask ChatGPT?


Unfortunately not the case because of GPT's bias towards React (the point of the article).


If you use next.js with React server components you can get enough performance out of React for e-commerce.

Also, the React compiler is improving client-side performance as well by automatically memoizing everything to reduce re-renders.


The auto-memoization is:

1) Trading memory pressure for performance

2) An admission of a broken model: it's taken them 2+ (almost 3?) years to build, in recognition that developers can't get it right

The reason other frameworks don't need this is because they use signals connected to callbacks instead of the reactive callback pointing to the component (as React does). Thus Preact, Solid, Svelte, Vue, etc. do not have this issue and you rarely (if ever?) have to manually memoize.

The React team foot-gunned themselves and every React dev out there.

I have some examples that walk through this concept using JSFiddle so you can understand exactly why this design choice from React created this problem in the first place: https://chrlschn.dev/blog/2025/01/the-inverted-reactivity-mo...


> If you're asking about performance and memory, then yes, it does.

Most places just don't care. I've worked 15 years as a contractor and only in one place has the business cared about optimisation. As long as it wasn't unbearable, it was "good enough".

> This is especially true in e-commerce where many studies have shown that overall page performance has a correlation to conversion. Add to that the fact that a lot of e-commerce has moved to mobile web, there's a strong case for picking the best performing technologies versus developer preference -- especially if AI is generating it.

This may have been true back in 2014. 5G networks are pretty fast and the mobile web is pretty bloated. Performance is typically way down the list of concerns, even at places that should care. I can write blazingly fast custom JS frameworks; the number of times anyone has cared is exactly once.

> I do my day-to-day work on an M3 Max with 64GB RAM and fiber to the home; it's easy for developers to forget that many times, their end users can be on older devices, on low performing networks, and other factors that affect performance and usability of web applications.

I have a 2010 Dell E6410 with 8GB of ram and an i7 640M (Dual Core, 4 thread). Almost every modern phone is faster now.

I am not arguing we should make things bloated. I am just saying there isn't an incentive to optimise for low-end devices, because today's low end is better than a reasonably powerful business laptop of 10-15 years ago.

> why would you waste time and energy to create your own calendar component? But if an LLM can generate any bespoke component that you need in < 3 seconds, do you still need a third party component library?

The code from the LLM probably hasn't been battle-tested. The open source React component library with 1000s of stars on GitHub definitely has been. If you run into a problem with the LLM code you are probably going to be by yourself fixing it. I will take the component library over the LLM code every day of the week.


Have you ever worked for a place that cared about meeting CWV? Poor JS performance will definitely hurt rankings.


Yes. It was one contract only and I was the lead frontend dev.

I wrote a very lightweight JS framework (it was just a few classes really) so we could have plugins. A plugin was just an object that implemented an interface. I also wrote a poor man's React for two or three pages that needed to build a lot of DOM dynamically.

At launch the site was getting basically 100 on the Lighthouse tests, with the gzipped CSS and JS coming in at ~80KB. Of course that only lasted a week, because people will put up a huge image that hasn't been optimised for the web.

The site was fast because I wrote it like a website from mid-2000s. Everything was basic OOP and almost all the code was procedural.
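
The "plugin is just an object that implements an interface" part might look roughly like this (illustrative names only, not the actual site's code):

    const analyticsPlugin = {
      name: 'analytics',
      init(page) { /* attach listeners, send a page-view beacon, etc. */ },
      destroy() { /* detach listeners */ },
    };

    const plugins = [analyticsPlugin];

    function boot(page) {
      for (const plugin of plugins) plugin.init(page);
    }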


Fwiw, I work at a saas company and we do have some performance issues. It's about 50/50 split between not using react optimally and slow backend.

If we were using svelte we would still have performance issues, but they would probably be centered more on "is the data in a sane shape such that this page can query it?"


React has become the Java of late 90’s to mid 2000’s.

Loads of libraries, documentation, and developers which creates a flywheel that will grow those aspects over the next X years.

Until something comes up that is an order of magnitude better in performance/maintainability, and even then it'll take years to dethrone the status quo.

There are good questions in these comments, essentially asking: does the level of training data in these models now contribute to the inertia we see from libraries, documentation, and developer support?

I believe so, but then again I think we’ll soon have more niche models for specific areas of development (like openart has with a variety of image gen models)


> React has become the Java of late 90’s to mid 2000’s.

Not comparable.

Java may not have had big releases during this time but there were patches and support. Java 8 had numerous versions (>400). You can get paid support from Sun/Oracle. In terms of frameworks Spring was constantly upgraded and supported.

React has none of these. Older React versions rarely get upgraded and just look at the amount of minor/patch releases React gets these days. It's almost as though Meta no longer cares. Earlier (<16) React was constantly updated. Nowadays it's just to peddle Vercel.


At least Java is good. React is absolutely terrible.


Most of the alternatives in the GP's comment have more than an order of magnitude better performance.

Maintainability doesn't even enter the question. React's way is to rewrite. All of the alternatives in the GP's comment are possible to maintain.


I think react hits a really good sweet spot in the trade off space. Sure it’s not the best thing that can exist, but it really does solve a lot of problems in a way that isn’t overly restrictive.

My personal opinion is that a lot of the hate directed at react is due to experiences with code bases that aren’t good react code.


React is better than all of those because of the existence of React Native, React Three Fiber, Remotion, etc…

It has the best ecosystem of libraries and it’s not even close.

If you write your web app in Vue and decide you want mobile apps later you won’t be able to share much code there.


Doesn't Preact use the same model but prioritise bundle size over performance?


It's not the same model.

Many, many (if not most) devs probably do not realize that React has an "inverted" model of reactivity, and that it is in fact the root cause of its performance woes.

To the extent that the React team spent 2+ (almost 3?) years working on a compiler to address the issue by adding in the correct memoization primitives in a compile phase (trading increased memory consumption for more runtime performance...).

I wrote about it here with code examples that work in JSFiddle: https://chrlschn.dev/blog/2025/01/the-inverted-reactivity-mo...

The short of it is that by pointing the reactive callback to the component function, it means that state within the component function has to be managed carefully. This doesn't happen in Vanilla, Preact, Solid, Svelte, and Vue because they point the reactive callback (via "signals") to a handler function that captures the component state in a closure. This is also why they are all faster and consume less memory than React.

Because React points the reactive callback to the component function, it effectively starts from a clean slate each re-render so the purpose of React Hooks is to move state out and inject them back (thus they are called "hooks") when it re-renders. In Preact, this is not the case since it uses signals: https://preactjs.com/guide/v10/signals/

A short video of the same examples if you prefer: https://youtu.be/7OhyP8H7KW0
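
To make the contrast concrete, a minimal counter in each model (the React half uses the standard useState hook; the Preact half uses the published @preact/signals API):

    // react-counter.jsx -- React's model: calling setCount re-runs the whole
    // component function, so anything expensive inside needs manual memoization.
    import { useState } from 'react';

    export function Counter() {
      const [count, setCount] = useState(0);
      return <button onClick={() => setCount(count + 1)}>{count}</button>;
    }

    // preact-counter.jsx -- the signals model: the component function runs once;
    // writing count.value notifies only the subscribers that read it, so there
    // is nothing to memoize by hand.
    import { signal } from '@preact/signals';

    const count = signal(0);

    export function Counter() {
      return <button onClick={() => count.value++}>{count}</button>;
    }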


The blogpost in the first link doesn't mention Preact at all.

Preact is mostly API-compatible with React, and it having a different underlying model is an extraordinary claim that requires extraordinary evidence.

I've read the docs on Preact signals, and they look like React refs, but put outside of components.

edit: the last paragraph about refs


    > ...a different underlying model is an extraordinary claim that requires extraordinary evidence
You are looking at the syntax and not the reactivity model (how it determines what functions to call when a change happens).

The post doesn't need to mention Preact because every other framework is signals-based except for React. Vue is simply the stand-in for "signals-based reactivity". Vue has different syntax from Preact (though it, too, can also use JSX), but it has the same reactivity principle.

https://preactjs.com/guide/v10/signals/

    > What makes Signals unique is that state changes automatically update components and UI in the most efficient way possible. Automatic state binding and dependency tracking allows Signals to provide excellent ergonomics and productivity while eliminating the most common state management footguns.
It uses the same underlying reactivity model as Vue, Svelte, Solid, and Qwik.

Vue docs: https://vuejs.org/guide/extras/reactivity-in-depth.html#conn...

Solid docs: https://www.solidjs.com/docs/latest/api#createsignal

Svelte: https://svelte.dev/blog/runes#Signal-boost

Qwik docs: https://qwik.dev/docs/components/state/#usesignal

In the blog post, Vue is the stand-in for signals-based reactivity. All signals-based reactivity models work the same way at a high level (their difference being primarily in their low-level DOM diff and update).

My prediction is that even React will eventually end up signals based because of TC39 Signals: https://github.com/tc39/proposal-signals


Preact is not signals-based, it uses the React model with state and props as the basic model, but also provides Signals as a parallel model.


I guess that's like arguing a Prius Prime isn't electric because even though it has a battery and can drive all electric, it also has a gas engine. :shrug:

But very clearly, Preact has the option of the exact same reactivity primitive and design as Vue, Solid, Qwik, and Svelte

https://preactjs.com/guide/v10/signals/

    > In Preact, when a signal is passed down through a tree as props or context, we're only passing around references to the signal. The signal can be updated without re-rendering any components, since components see the signal and not its value. This lets us skip all of the expensive rendering work and jump immediately to any components in the tree that actually access the signal's .value property.


I'll accept an analogy with a hybrid car. In this analogy, Preact would be a gas car with an additional electric engine bolted on as an afterthought.


But you know, the Prius Prime is a PHEV as in "plug in hybrid electric vehicle"


Oh, I see what you mean now! This sounds like a very good analogy.


It's my go-to, along with NodeJS on the backend and Electron as a wrapper, or React Native.

NGINX for my server, though recently I ran into an out-of-connections problem on an Azure VM that was new to me.


and of course vanilla

That depends on who is writing it and what the app is. Most frontend code is written by people who don't have as much time to focus on performance and optimization as core framework developers, so once their apps reach a critical mass of "actually big enough to benefit from a framework", the app is worse than it would have been if it had been written with a framework in the first place.

The problem for all of us, and where frameworks often make the web suck, is that very few apps are actually that big. Frontend developers love to put React in a page that has one form input and a button, which is dumb.


Ostensibly, an unbiased, well-trained model would solve this because it would/could write fast, performant vanilla. As a product owner, you probably care more about the performance of the product over the specific technical implementation details.

I take that to be the point of the article: the bias towards React, plus the training data being stale, means that generated code will always have a bit of staleness, and as we provide less context for the AI (e.g. Stack Overflow), the bias towards staleness will amplify given the large body of stale information it has ingested and amalgamated.


Sadly, as a person who used to write AVX in C for real-time imaging systems: don't care, it shipped.

I love dingling around with Cursor/Claude/qwen to get a 300 line prototype going in about 3-5 minutes with a framework I don't know. It's an amazing time to be small, I would hate to be working at a megacorp where you have to wait two months to get approval to use only GitHub copilot (terrible), in a time of so many interesting tools and more powerful models every month.

For new people, you still have to put the work in and learn if you want to transcend. That's always been true in this industry, and I say that as a 20-year vet: C, Perl, Java, Rails, Python, R, all the bash bits. Every part matters; just keep at it.

I feel like a lot of this is the JS frontend community running headlong into its first sea change in the industry.


It's always been a thing with modes of encapsulating knowledge. The printing press caused the freezing of language, sometimes in a weird place*

Where great documentation was make or break for an open-source project over the last 10 years, I think creating new projects with AI in mind will be required in the future. Maybe that means creating a large number of examples, maybe it means providing fine-tunes, maybe it means publishing an MCP server.

Maybe sad because it's another barrier to overcome, but the fact that AI coding is so powerful so quickly probably means it's worth the tradeoff, at least for now.

*https://www.dictionary.com/e/printing-press-frozen-spelling/


That article is so good that it deserves its own top-level Hacker News post.

Regarding documentation: isn't the whole point of these LLMs to hoover up information in whatever form it's currently in and try to produce intelligible results? For SEO, things are going to get interesting, but software projects don't necessarily have such perverse incentives.


> However, a leaked system prompt for Claude’s artifacts feature shows that both React and Tailwind are specifically mentioned.

Damn.


I think as new data gets vacuumed up faster, this will be less of an issue. About a year ago here on HN I complained about how LLMs were useless for Svelte as they did not have it in their training data, and that they should update on a regular basis with fresh data. At the time my comment was considered ridiculous. One year later, that’s where we are, of course; the average cutoff of “LLM usefulness” with a new subject has dropped from multiple years to months and I see no reason that the trend will not continue.


I don’t like that this conclusion seems to be that if humans adopt every new technology before AI can train on it that their jobs will be more secure. That is its own kind of hell.


> I don’t like that this conclusion seems to be that if humans adopt every new technology before AI can train on it that their jobs will be more secure. That is its own kind of hell.

It's the hell we'll be forced into. The powers that be care not one whit for our well being or comfort. We have to duck and weave (or get smashed), while they "creatively" destroy.


We have the power to destroy, too.


Not nearly as much. And when we do it, we tend to get punished.


>Consider a developer working with a cutting-edge JavaScript framework released just months ago. When they turn to AI coding assistants for help, they find these tools unable to provide meaningful guidance because their training data predates the framework’s release. This forces developers to rely solely on potentially limited official documentation and early adopter experiences, which, for better or worse, tends to be an ‘old’ way of doing things and incentivises them to use something else.

I can't help but feel that a major problem these days is the lack of forums on the Internet, especially for programming. Forums foster and welcome new members, unlike Stack Overflow. They're searchable, unlike Discord. Topics develop as people reply, unlike Reddit. You're talking to real people, unlike ChatGPT. You can post questions in them, unlike GitHub Issues.

When I had an issue with a C++ library, I could often find a forum thread made by someone with a similar problem. Perhaps because there are so many Javascript libraries, creating a separate forum for each one of them didn't make sense, and this is the end result.

I also feel that for documentation, LLMs are just not the answer. It's obvious that we need better tools. Or rather, that we need tools. I feel like before LLMs there simply weren't any universal tools for searching documentation and snippets other than Googling them, but Googling them never felt like the best method, so we jumped from one subpar method to another.

No matter what tool we come up with, it will never have the flexibility and power of just asking another human about it.


I’m working on a side project that actually probably could use AI later on and I’m doing everything I can not to “put a bird on it” which is the phase we are at with AI.

I might be willing to use a SAT solver or linear algebra on it if I ever get to that point but there’s a lot else to do first. The problem space involves humans, so optimizing that can very quickly turn into “works in theory but not in practice”. It’d be the sort of thing where you use it but don’t brag about it.


It's pretty interesting and mildly shocking that everyone is just making the same 'who needs a new JS library' joke.

What about closed source tooling? How do you expect an AI to ever help you with something it doesn't have a license to know about? Not everything in the world can be anonymously scraped into the yearly revision.

If AI is going to stay we'll have to solve the problem of knowledge segmentation. If we solve that, keeping it up to date shouldn't be too bad.


>What about closed source tooling? How do you expect an AI to ever help you with something it doesn't have a license to know about? Not everything in the world can be anonymously scraped into the yearly revision.

This is not a novel problem. Proprietary toolchains already suffer from decreased resources on public forums like Stack Overflow; AI did not create this knowledge segmentation, it is scraping this public information after all.

>It's pretty interesting and mildly shocking that everyone is just making the same 'who needs a new JS library' joke.

Surely the proprietary toolchain is itself the 'new JS library'?

Most developers I know don't enjoy working with esoteric commercial solutions that suffer from poor documentation, when there exists an open-source solution that is widely understood and freely documented.

I do not see why AI code generation further incentivizing the use of the open-source solution is a problem.


> This is not a novel problem. AI did not create this knowledge segmentation, it is scraping this public information after all.

I think you misunderstand the situation. You as a person can be privy to private knowledge. Relying on AI enough that you can't use that private knowledge is the novel situation.

> I do not see why AI code generation further incentivizing the use of the open-source solution is a problem.

Maybe you don't get it and have never experienced it, but there's a massive amount of development done against unreleased APIs or hardware. Game engines, firmware, etc. I doubt Apple is going to publish new SDKs for their new widgets long before any devs use them.


I am curious, have you used a code generation tool that is linked to your IDE?

If I ask it, "why is the structure I am feeding into the `do_cool_stuff()` function not working", it won't say "sorry, I can't figure that out because I don't know the library that implements `do_cool_stuff()`. It will:

1. read my code where I construct the data to feed into the function,

2. see that the function is imported,

3. then read the file containing the linked `do_cool_stuff()` function to understand exactly what data structure it expects.

Crucially, the linked function can be anything from my own personal super secret UI framework for building widgets, to the `lib` folder containing my proprietary driver code for my unreleased hardware. This is fundamentally the LLM figuring out how to do "development [] against unreleased APIs or hardware".

I don't really know what "private knowledge" you are referring to, given you talk about Apple developing widgets against their new unreleased framework. The code is not really "private"; certainly to you or I it might be, but Apple owns the code on both sides of the interface, no? What precludes their internal generative AI tooling from doing the equivalent of Ctrl-Clicking through to the linked `do_cool_stuff()`?

If anything, this is perhaps a benefit of genAI; for security reasons, I am sure your average programmer at Apple is expected to work solely against the internal documentation regarding how `do_cool_stuff()` works, and has no access to the actual implementation of it. Yet, an in-house inference server could certainly have access to both sides without the risk of code leakage you have directly sharing the other side of an interface with the consuming developers.

To me, in this context, "closed source" implies something the LLM does not have access to the implementation details for, e.g. source unavailable for the user invoking the LLM; in this case, I think my original analysis holds re: genAI further encouraging the development on top of open source technologies.


I don't think this is unique to AI. There are categories of knowledge that are infested with bad practices (webdev, enterprise software), and even a direct web search will lead you to those results. AI definitely regurgitates many of these bad practices, I've seen it, but it's not obvious to everyone.

I think it's unrealistic to expect a general-purpose LLM to be a practical expert in a new field where there are potentially 0 human practical experts.


Sonnet + Tailwind is something of a force multiplier though -- backend engineers now have a fast/reliable way of making frontend changes that are understandable, without relying on someone else -- you can even give 4o a whiteboard drawing of a layout and get the Tailwind back in seconds.

On the wider points, I do think it is reducing the time coders spend thinking about the strategic situation, because they're too busy advancing smaller tactical areas, which AI is great at assisting with -- and I agree there is a recency issue looming: once these models have heavy weightings baked in, how does new knowledge get to the front quickly, and where is that new knowledge now that people don't use Stack Overflow?

Maybe Grok becomes important purely because it has access to developers and researchers talking in realtime even if they are not posting code there

I worry that the speed at which this is happening results in younger developers not spending weeks or months thinking about something -- so they get some kind of code ADHD and never develop the skills to take on the big-picture stuff later, which could stay out of AI's reach for quite a while.


> backend engineers now have a fast/reliable way of making frontend changes that are understandable and without relying on someone else

backend engineers in this context could learn JS.


developers using ai continue to find new and novel ways to make themselves worse


Along similar lines, I found Google auto complete to constrict my search space. I would only search the terms that auto complete.


I think anecdotally this is true; I've definitely seen worse but older technologies be chosen on the basis of LLMs knowing more about them.

That said, I also think it's a bad choice, and here's some good news on that front: you can make good choices which will put you and your project/company ahead of many projects/companies making bad choices!

I don't think the issue is that specific to LLMs- people have been choosing React and similar technologies "because it's easy to find developers" for ages.

It's definitely a shame to see people make poor design decisions for new reasons, but I think poor design decisions for dumb reasons are gonna outlive LLMs by some way.


The central issue is high cost of training the models, it seems:

> "Once it has finally released, it usually remains stagnant in terms of having its knowledge updated. This creates an AI knowledge gap. A period between the present and AI’s training cutoff... The cutoff means that models are strictly limited in knowledge up to a certain point. For instance, Anthropic’s latest models have a cutoff of April 2024, and OpenAI’s latest models have cutoffs of late 2023."

Hasn't DeepSeek's novel training methodology changed all that? If the energy and financial cost for training a model really has drastically dropped, then frequent retraining including new data should become the norm.


> Hasn't DeepSeek's novel training methodology changed all that? If the energy and financial cost for training a model really has drastically dropped, then frequent retraining including new data should become the norm.

Even if training gets way cheaper or even if it stays as expensive but more money gets thrown at it, you'll still run into the issue of having no/less data to train on?


True. One effective test for AGI might be the ability to first create a new language, then also write performant code in that language.


The underlying tech choice only matters at the moment because as software developers we are used to that choice being important. We see it as important because we currently are the ones that have to use it.

As more and more software is generated and the prompt becomes how we define software rather than code i.e. we shift up an abstraction level, how it is implemented will become less and less interesting to people. In the same way that product owners now do not care about technology, they just want a working solution that meets their requirements. Similarly I don't care how the assembly language produced by a compiler looks most of the time.


LLM based AI tools are the new No/Low Code.


It is, but it's an order of magnitude better than the last set of no/low code tools for anyone who has the basics of programming already.


This is the same problem as google/search engines: A new technology has less web presence, and thus ranks lower in the mechanisms for information distribution and retrieval until people put in the work to market it.


This will be solved eventually on the AI model side. It isn't some law of nature that it takes a million tokens for an AI to learn something; just the fact that we can prompt these models should convince you of that.


Maybe, but why would they bother? If 80% of the demand is met by generating really good Python and generating really good X is a lot more work but only 2% of the demand it’s likely there isn’t going to be a reason to solve that problem well.


That's assuming it's a novel problem to deal with, see e.g. C++, JavaScript or every standard in networking ever. The barrier between better tech and worse tech that accidentally made it into production and became legacy cruft has always been high, without any AI.


I was talking about this the other day - to some extent it feels like React (and Tailwind) has won, because LLMs understand it so deeply due to the amount of content out there. Even if they do train on other technologies that come after, there maybe won’t be the volume of data for it to gain such a deep understanding.

Also it doesn’t hurt that React has quite a stable/backwards compatible API, so outdated snippets probably still work… and in Tailwind’s case, I suspect the direct colocation of styles with the markup makes it a bit easier for AI to reason about.


I, too, wrote a shittier version of this a little while back: https://www.paritybits.me/stack-ossification/

Another observation since then: good documentation for newer tech stacks will not save the LLM's capabilities with that tech. I think the reason, in short, is that there's no shortcut for experience. Docs are book learning for tech stacks - millions (billions) of lines of source code among the training data are something else entirely.


In contrast, I suggest AI could accelerate new tech adoption.

> if people are reluctant to adopt a new technology because of a lack of AI support, there will be fewer people [emphasis added] likely to produce material regarding said technology, which leads to an overall inverse feedback effect. Lack of AI support prevents a technology from gaining the required critical adoption mass, which in turn prevents a technology from entering use and having material made for it,

At present. But what if this is a transient? It depends on the new technology's dev team being unable to generate synthetic material. What happens when they can create for themselves a fine tune that translates between versions of their tech, and between "the old thing everyone else is using" and their new tech? One that encapsulates their "idiomatic best practice" of the moment? "Please generate our rev n+1 doc set Hal"? "Take the new Joe's ten thousand FAQ questions about topic X list and generate answers"? "Update our entries in [1]"? "Translate the Introduction to Data Analysis using Python open-source textbook to our tech"?

The quote illustrates a long-standing problem AI can help with - just reread it swapping "AI support" to "documentation". Once upon a time, releasing a new language was an ftp-able tar file with a non-portable compiler and a crappy text-or-PS file and a LISTSERV mailinglist. Now people want web sites, and spiffy docs, and Stack Overflow FAQs, and a community repo with lots and lots of batteries, and discuss, and a language server, and yes, now LLM support. But the effort delta between spiffy docs and big repo vs LLM support? Between SO and LLM latency? That depends on how much the dev team's own LLM can help with writing it all. If you want dystopian, think lots of weekend "I made my own X!" efforts easily training transliteration from an established X, and running a create-all-the-community-infrastructure-for-your-new-X hook. Which auto posts a Show HN.
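
A rough sketch of the "ten thousand FAQ questions" case, assuming the team grounds answers in its own docs via any chat-completion API (file names and the prompt are made up):

    import OpenAI from "openai";
    import { readFileSync, appendFileSync } from "node:fs";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment
    const docs = readFileSync("docs/full-reference.md", "utf8");  // hypothetical path
    const questions = readFileSync("faq-questions.txt", "utf8").split("\n").filter(Boolean);

    for (const question of questions) {
      const res = await client.chat.completions.create({
        model: "gpt-4o",
        messages: [
          { role: "system", content: `Answer using only this documentation:\n\n${docs}` },
          { role: "user", content: question },
        ],
      });
      // Accumulate a synthetic Q&A corpus to publish as FAQ pages or feed into a fine-tune.
      const answer = res.choices[0].message.content;
      appendFileSync("synthetic-faq.jsonl", JSON.stringify({ question, answer }) + "\n");
    }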

AI could at long last get us out of the glacial pace of stagnant progress which has characterized our field for decades. Love the ongoing learning of JS churn? Just wait for HaskellNext! ;P

[1] https://learnxinyminutes.com/ https://rigaux.org/language-study/syntax-across-languages.ht... https://rosettacode.org/wiki/Category:Programming_Languages ...


Working in Zed, I'm full of joy when I see how well Claude can help me code. But when I ask Claude how to use Zed, it's worse than useless, because its training data is old compared to Zed, and it freely hallucinates answers. So for that I switch over to Perplexity calling OpenAI and get far better answers. I don't know if it's more recent training or RAG, but OpenAI knows about recent Zed GitHub issues where Claude doesn't.

As long as the AI is pulling in the most recent changes, it wouldn't seem to be stifling.


I tried using ChatGPT 4o to write a simple website that used the ChatGPT API. It always generated code for their deprecated API. I'd paste the error about using old calls, and it would recognize its error and... generate old calls again.

It couldn't ever use its own API.


Generative AI is fundamentally a tool that enables acceleration. Everything mentioned in this is already true without gen AI. Docs of new versions aren't as easy to find until they aren't as new. This is even true for things in the zeitgeist. Anyone around for the Python 2 to 3 or React class-to-hooks transitions knows how annoying that can be.

Yes new programmers will land on Python and React for most things. But they already do. And Gen AI will do what it does best and accelerate. It remains to be seen what’ll come of that trend acceleration.


Related: https://www.mjlivesey.co.uk/2025/02/01/llm-prog-lang.html

It’s doesn’t matter if a minority of passion techies will still be up for new tech, if the average developer just wanting to get the job done and relying on LLMs finds it harder, it will be a significant barrier.


The lack of new training data also makes it bad at projects that are still maturing because it will suggest outdated code - or worse it will mix/match old and new syntax and generate something completely broken.

I worry that the lack of new examples for it to train on will self-reinforce running old syntax that has bad patterns.

If the "AI" could actually store its mistakes and corrections from interactive sessions long-term I think it would greatly alleviate this problem, but that opens up another whole set of problems.


I don't think this is a bad thing. Pretty much all of the author's examples of "new and potentially superior technologies" are really just different flavors of developer UX for doing the same things you could do with the "old" libraries/technologies.

In a world where AI is writing the code, who cares what libraries it is using? I don't really have to touch the code that much, I just need it to work. That's the future we're headed for, at lightning speed.


The problem is if that code hasn't already been written in some form or another, then the LLM is much less effective at giving recommendations.

I've been playing around with embedded systems, specifically LoRa libraries on ESP32s. Code from LLMs is next to useless for a lot of what I'm trying to do since it is relatively niche.


> In a world where AI is writing the code, who cares what libraries it is using? I don't really have to touch the code that much, I just need it to work. That's the future we're headed for, at lightning speed.

This attitude works for write-and-forget workflows where the only thing that matters is whether it returns the answer you want (AKA "hacking it").

Once you add in other concerns: security, performance, maintainability, it can fall apart.


> Once you add in other concerns: security, performance, maintainability, it can fall apart.

Does anyone care about that? E.g. CRWD is at an all-time high, after all. There is zero need to change anything, according to the market.


> Does anyone care about that?

Yes


That’s not much to counter my example.


> who cares what libraries it is using?

Presumably the people who have to read, debug and maintain the resulting garbage.

Then again, we have so much garbage code before LLMs, that it was clearly never that important.


This is precisely why I have said that every new framework/library should have a markdown or text (or whatever is the best format for LLMs) endpoint that has all the docs and examples in one single page, so you can easily copy it over to a model's context. You want to make it as easy as possible for LLMs to be aware of how your software works. The fancy nested navigation guide walkthrough thing is cool for users, but it is not optimized for this flow.
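
A build step as small as this would cover it; a minimal sketch, assuming a docs/ folder of markdown and the glob package (output path is arbitrary):

    import { readFile, writeFile } from "node:fs/promises";
    import { glob } from "glob";

    // Flatten every docs page into one LLM-friendly file, e.g. served at /llms-full.txt
    const pages = (await glob("docs/**/*.md")).sort();
    const chunks = await Promise.all(
      pages.map(async (page) => `\n\n---\n# ${page}\n\n${await readFile(page, "utf8")}`),
    );
    await writeFile("public/llms-full.txt", chunks.join(""));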


This is something I'd like to have for pretty much any framework/library anyway.


This is literally just SEO 2.0


I have noticed this. I think it also applies to the popularity of the projects in general and the number of training examples it has seen.

I was testing Github copilot's new "Agent" feature last weekend and rapidly built a working app with Vue.js + Vite + InstantSearch + Typesense + Tailwind CSS + DaisyUI

Today I tried to build another app with Rust and Dioxus and it could barely get the dev environment to load, kept getting stuck on circular errors.


I don't know how you solve the "training data and tooling prompts bias LLM responses towards old frameworks" part of this, but once a new (post-cutoff) framework has been surfaced, LLMs seem quite capable of adapting using in-context learning.

New framework developers need to make sure their documentation is adequate for a model to use it when the docs are injected into the context.
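
Something like the following is usually enough for in-context learning, assuming the framework publishes a single-page docs dump (the file name and prompt here are made up):

    import OpenAI from "openai";
    import { readFileSync } from "node:fs";

    const client = new OpenAI();
    const frameworkDocs = readFileSync("vendor/newframework-llms.txt", "utf8"); // hypothetical file

    const res = await client.chat.completions.create({
      model: "gpt-4o",
      messages: [
        {
          role: "system",
          content: "This framework was released after your training cutoff. " +
                   "Rely on the docs below, not on memory:\n\n" + frameworkDocs,
        },
        { role: "user", content: "Write a component that lists users from /api/users." },
      ],
    });
    console.log(res.choices[0].message.content);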


People used to live in villages and places that were not connected by roads. Now since we have roads, any place that is not connected by a road will be seen a rough place. The difficulty is caused by usage of roads and vehicles, because it was not perceived or felt back in those days. So technology and assistance create new perceived problems.


I tried a new agent library with a model a few weeks ago. Just pasted the relevant api docs in and it worked fine.

However, while I’m proud of the outcomes, I’m not proud of the code. I’m not releasing anything open source until I feel it’s mine, which is another step. I’d be a bit embarrassed bringing another dev on.

“I’m Richard and I’m using AI to code” Support Group: “Hi Richard”


So... it slows down adoption by providing easier alternatives for beginners? I guess you could look at it that way too.

Eventually it will go either of the two ways, though:

- models will have enough generalization ability to be trained on new stuff that has passed the basic usefulness test in the hands of enthusiasts and shows promise

- models will become smart enough to be useful even for obscure things


Seems like a short-term problem. We're going to get to the point (maybe we're already there?) where we'll be able to point an AI at a codebase and say "refactor that codebase to use the latest language features" and it'll be done instantly. Sure, there might be a lag of a few months or a year, but who cares?


> Claude’s artifacts feature

The article mentions that Claude's artifacts feature is opinionated about using React and will even refuse to code for Svelte Runes. It's hard to get it to use plain JavaScript because React is in the system prompt for artifacts. Poor prompt engineering in Claude.


Looks like I might be in the minority, but I disagree with this prediction. Better models will also be better at abstracting, and we have seen several examples (e.g. the paper LIMO: Less is More for Reasoning) showing that with a small amount of training data, models can outperform larger models.


Has there been any progress or effort on solving the underlying problem?

I'm not entirely sure why AI knowledge must be close to a year old, and clearly this is a problem developers are aware of.

Is there a technical reason they can't be, for instance, a month behind rather than close to a year?


If people are skipping one shelf of tech and jumping to the next shelf up, with only AI trying to cover everything, and are let down, maybe there is an opportunity to point out that there are more realistic options in the interim that span both shelves.


The Arrows of Time by Greg Egan (Orthogonal, Book 3) deals with something analogous to this: Our characters must break themselves out of a cycle which is impeding innovation. If you like your scifi hard, the Orthogonal series is a lot of fun.


Neovim core developer TJ DeVries expressed similar concerns in a video earlier this year: https://youtu.be/pmtuMJDjh5A?si=PfpIDcnjuLI1BB0L


Always get the response with and without a web search. The web search may yield a newer solution.

Also, each package should ideally provide an LLM ingestible document. Upload this for the LLM, and have it answer questions specific to the new package.


I was thinking the other day about how coding assistants could hinder the adoption of new languages.


This is like saying in the 90s that Google Search would stifle tech adoption ...

I don't buy it. AI can teach me in 5 minutes how to write a kernel module, even if I've never seen one. AI brings more tech to our fingertips, not less.


Did you read the article? It makes valid points that your comment doesn't address. It isn't a brainless "AI is making us stupider" post.


Yes, but do search engines not have a bias towards existing technologies?


Yes but they are much more up-to-date. And they don't have the react issue he mentioned.


I think it may hold for mediocre programmers who brainlessly do what AI tells them. Good programmers will always look for better ways, and for them AI is a great way to explore time-tested ideas in addition to a fast tab-completer. It will certainly not drive them away from new ideas (or at least the arguments in this piece didn't convince me).


I've wondered this myself. There was a post about gumroad a few months ago where the CEO explained the decision to migrate to typescript and react. The decision was in part because of how well AI generated those, iirc.


I’ve been out of web dev for a while, but maybe the problem is there’s a new framework every 6 months and instead of delivering value to the end user, developers are rewriting their app in whatever the new framework is.


This is obviously a pretty common belief, but I do think web dev gets held to a weird standard when framed as "the one with too much tech churn".

Just because someone solves a problem with a new library or framework does not mean they solved a problem for all of web development, and I think the current concentration of applications made with boring things sort of reflects that. [0]

> developers are rewriting their app in whatever the new framework is.

That is obviously an over-exaggeration. Most devs, most teams, most companies are not open to rewriting any application ONLY because it's a new framework. If they are, they probably have other priorities that point at X framework/library when rewrites do happen, because a rewrite is big enough already without having to also create the pattern/library code.

I will absolutely agree that we ignore the user more than we should. That should change. But I think people being excited about something on HN or some other metric of "a new framework every 6 months" isn't as causative as the usual hive mind would imply.

[0] https://www.statista.com/statistics/1124699/worldwide-develo...


It's a lot better now, but there was absolutely a period when it felt like there was a new framework/library weekly. This was 5-10 years ago. 'Cambrian explosion' was the term going around.

A lot of it was in react space, with router libraries or the newest data store being the thing you needed to be using. Definitely turned me off react, personally at least. The angular.js/angular2 migration was also a big pain around this time as well.

There was a lot of social pressure from influencers on youtube and various social media that you NEEDED to switch now. This was the new hotness, it was blazing fast, anything else was obsolete tech debt. There was one instance of 'or you should be fired' that sticks with me.

I think we're just used to the hyperbole and are all a lot more jaded now.

Compare this to the backend, where django, rails, and the others haven't really changed. I haven't felt the need or pressure to rewrite my views/controllers/whatever at all.


> There was a lot of social pressure from influencers on youtube and various social media that you NEEDED to switch now

I wish I had something more coherent to say about this, but: I think this is true, and it frustrates and saddens me that anybody took seriously what influencers on youtube had to say. It seems so obvious that they are there to get views, but even beside that, that they don't know anything about my app or the problems my users have.


I don’t think that’s really the case anymore. The vast majority are on React, Vue, or Svelte (in order of my perception of their popularity). On the CSS side it seems like Tailwind and PostCSS has taken over. The heavier framework category is covered by Next.js. Other than Next, most of that popularity started solidifying in 2020. There are a bunch of newer frameworks like Astro and HTMX and so on, but it doesn’t seem like their adoption is eating into much of the bigger player’s “market.”

There’s still the problem of many library authors playing fast and loose with semver breaking changes and absolutely gigantic dependency trees that exacerbate the issue. That’s where most of my churn comes from, not new frameworks.


I think their combined market share is finally shrinking for leading edge projects. https://news.ycombinator.com/item?id=43008190 https://news.ycombinator.com/item?id=42388665 Also, AI makes a lot of terrible mainstream stuff because the natural bias is towards the mainstream. That's where I count its tendency to default to React if I ask for frontend code without further context.


Yeah, but React is still swapping major recommendations/styles every 18 months. How do you create a React application these days? Not create-react-app, apparently?


NextJS for a SSR app, or Vite for a create-react-app style SPA.

Vite has been great for the whole web-dev ecosystem, since it's super easy to configure, and very pluggable, so most frameworks are built around it now. That means you can write, e.g. a new CSS preprocessor, and if you make a Vite plugin, it works with (almost) every framework without extra effort
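
To make the pluggability point concrete, a Vite plugin is just an object with a name and a few hooks; a minimal sketch of a made-up CSS preprocessor (the .mycss extension and the "compiler" are invented):

    // vite.config.ts
    import { defineConfig, type Plugin } from "vite";

    // Stand-in "preprocessor": replaces a fake $accent variable with a real colour.
    const compileToPlainCss = (src: string) => src.replace(/\$accent/g, "#ff3e00");

    function myCssPreprocessor(): Plugin {
      return {
        name: "my-css-preprocessor",
        transform(code, id) {
          if (!id.endsWith(".mycss")) return; // leave every other file alone
          // Expose the compiled CSS as a plain string module the app can inject.
          return { code: `export default ${JSON.stringify(compileToPlainCss(code))};`, map: null };
        },
      };
    }

    export default defineConfig({ plugins: [myCssPreprocessor()] });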


But this isn't even true, and NextJS is well into egregious complexity. Remix was an alternative option in the space that is now deprecated in all but name in favor of React Router v7, which (for those just tuning back in) means React Router is now a framework.

If you wrote your app in NextJS 2 years ago, you would already have to rewrite chunks of it to get it to compile today. These tools are NOT solidified, they are releasing breaking changes at least once a year.


IMO create-react-app was a crutch for the last generation of build systems like webpack which were egregiously complex. Nowadays you just use Vite and start with whatever Github template has all the tech you need. Even starting a project from scratch is really simple now since the config files to get started are tiny and simple. There’s always the problem of outdated tutorials on the internet, but as a frontend dev spinning up a new project has never been simpler.

The pace of development is definitely challenging. There’s so many libraries involved in any project that many fall behind in updating their code and application devs get caught in the crossfire when trying to upgrade (and woe be to you if someone added react = “*” somewhere in their peer dependencies).


Never been simpler, you say? I think that heavily depends on your project. If you are not building an app but a website, I think it has been simpler for at least a decade, because then you don't need to worry about stuff like a bundler or a build system (for a few scripts!!!), just some plain CSS.

But aside from that topic, I have a question about Vite. Does it in any way bind you to using their service for hosting, or can one just as easily host such an app on one's own bare-metal server?


What service? It's a bundler that outputs HTML, JS, and CSS. You can upload that or run a Node webserver wherever you want.


I think I mixed it up with vercel. Thanks, you answered my question.


This would be a symptom of bad management, not bad developers.

And to add to what others have said, this stereotype never really held up in my experience either. Any serious web dev shop is going to have the framework they use and stick with it for both long- and short-term clients. And there are many mature options here.

I don't doubt this happens, a lot, but again, I think it's more about bad management than anything - and bad management will always make bad tech decisions, no matter the topic.


That's only part of the problem. Complexity and expectations have exploded in the last decade, while investment is getting tighter and tighter (for the average corporation anyway, not necessarily the big boys).

The constant framework churn is one attempt at solving the complexity problem. Unfortunately I don't think it's working.


Nice stereotype. Does it hold water?

I’ve been using the same backend framework professionally for over 10 years, and the same frontend framework for over 6 years. Clearly your thoughts on the matter are not reflective of reality.


Yes. If you have a working system built on a 10-year-old web framework, it's considered obsolete and in need of being upgraded/rewritten. Why? Imagine houses needing to be rebuilt because their foundations are 10 years old.


Not talking about a single system here. The framework has gone through five major versions in that time and my projects usually last a couple months to a couple years.


Can you name a single web framework with wide adoption that was released in the last 6 months? I expect you to delete your comment if you can’t.


Anecdotally, working on an old Vue 2 app I found Claude would almost always return "refactors" as React + Tailwind the first time, and need nudging back into using Vue 2.


I have a similar experience when I tell ChatGPT I need Ruby 2.x code. It always gives me a Ruby 3 version even after I tell it I need code that works with version 2. I need to scream and curse at it before it fixes it so it works.


While it is true that there is a gap between what most LLMs "know" and the current time, that gap is getting smaller, not larger, with time. It is also possible to teach a model past the knowledge cutoff with tools, and an LLM might be encouraged to be aware of the gap and reach out for the latest information when it might have changed (pi is still pi, but the country with the most people might have changed).
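
A hedged sketch of that "reach out for the latest information" idea via function calling; the web_search tool is something the host application would implement itself:

    import OpenAI from "openai";

    const client = new OpenAI();
    const res = await client.chat.completions.create({
      model: "gpt-4o",
      tools: [{
        type: "function",
        function: {
          name: "web_search", // hypothetical tool; our code performs the actual search
          description: "Search the web for information newer than the model's training cutoff",
          parameters: {
            type: "object",
            properties: { query: { type: "string" } },
            required: ["query"],
          },
        },
      }],
      messages: [{ role: "user", content: "Which country has the largest population right now?" }],
    });

    // If the model suspects its knowledge is stale, it asks us to run the tool instead of guessing.
    const call = res.choices[0].message.tool_calls?.[0];
    if (call) console.log("Model requested:", call.function.name, call.function.arguments);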


Not entirely sure it's a hard fact but this is definitely an example of bias in an AI system.

More reason to decouple and think for ourselves.


Are we really at the point where we are concerned how abstraction levels are not being abandoned as quickly as they were yesterday?


Seems plausible, especially in combination with the AI-coma that occurs when you tab-complete your way through problems at full speed.


Upgrading to Tailwind v4 was horribly frustrating as every AI insisted on using v3 even though it technically knew the v4 api.


I don't know how this is surprising.

LLM-provided solutions will reinforce existing network effects.

Things that are popular will have more related content...


This should not be relevant with cursor being able to include docs in every query. For those who don’t use this I feel for ya.


I think LLMs will be great for languages like Elm, personally. Especially with agents that can operate in an eval loop.


Huh how long until advertisers pay to get their product preferred by AI? If it isn't already happening...


If you can build an app that an AI cannot, then you have some sort of n-month head start on the competition.


If the only new feature is AI, it's not worth the upgrade. Outside the lil tech bubble, people hate it.


…Isn’t this the website that constantly encourages people to “choose boring technology” for their web tech startups?

Aren’t a reasonable portion of the readers here people who bemoan the constant learning curve hellscape of frontend development?

And now we’re going to be upset that tools that help us work faster, which are trained on data freely available on the internet and thus affected by the volume of training material, decide to (gasp) choose solutions with a greater body of examples?

Just can’t satisfy all the people all the time, I guess! SMH.


There is an enormous fear of AI in the mainstream media, but the thing that excites me the most about this is health care. AI will find the cure to Alzheimer's and countless other diseases; there's no doubt about it. This simple fact is enough to make it acceptable.


> Once it has finally released, it usually remains stagnant in terms of having its knowledge updated....meaning that models will not be able to service users requesting assistance with new technologies, thus disincentivising their use.

I find such an argument weak. We can say the same thing about a book, like "Once The Art of Computer Programming is finally published, it usually remains stagnant in terms of having its knowledge updated, thus disincentivizing people to learn new algorithms".


Truly and honestly, 99% of developers haven't even heard of ChatGPT or Copilot, let alone the general public. It's a self-imposed problem for the orgs that choose to use such tools. More to the point, recency bias is so much stronger that I'd rather have a system that points people to the current correct solution than to a slightly better solution that is somehow harder to understand despite its claimed simplicity by fanatics.


Shove the docs in as context. Gemini has a 2M-token context length.


i do wonder if this could be mitigated by sufficiently popular newer libraries submitting training data of their library or whatever in action


Yup, python pretty much wins due to training data


We could call this the hamster-wheel theory.


Does anyone know what font is used here?



Thanks!


Herein lies the key for IP protection. Never use cloud hosted coding tools as the world will soon be able to copy your homework at zero cost.


I for one love it that we can copy each other's homework. The open source mindset is what made me fall in love with this industry, and I love the fact that sharing code got easier. If you really want to continue reinventing wheels, go ahead.


Yes, open is overall a great thing. But what if I want my work not to be open source, and the LLMs make it so without my consent? As we've seen, this and related matters of fair use are working their way through the courts.


Yeah, maybe. But I think the thing I like is that it takes me a much shorter amount of time to create solutions for my users and myself. Then I can worry about "tech adoption" once I've achieved a relevant solution for my users.

If performance is an issue then sure let’s look at options. But I don’t think it’s appropriate to expect that sort of level of insight into an optimised solution from llms - but maybe that’s just because I’ve used them a lot.

They’re just a function of their training data at the end of the day. If you want to use new technology you might have to generate your own training data as it were.


I don't agree, because the people using these tools for their work were never doing innovative tech in the first place.


Perhaps reasoning will help?


Yes! Reasoning is the answer! It will solve all of our problems! General AI is just around the corner!!!!!!!!!!!!!!!!!!!


Eh, a cooldown period between the fanfare of a new thing and some battle testing before it gets added to the next AI's training set is a good thing.

The delay is like 8 months for now; that's fine.

I think this is also great for some interview candidate assessments, you have new frameworks that AI can't answer questions about yet, and you can quiz a candidate on how well they are able to figure out how to use the new thing


I have definitely noticed that ChatGPT is atrocious at writing Polars code (which was written recently and has a changing API) while being good at Pandas. I figure this will mostly resolve when the standard reasoning models incorporate web search through API documentation + trial and error code compilation into their chain of thought.


Not a problem. I'm sure that being able to work well with new information is the next goal most researchers are working towards, so the entire post feels like a boomer complaining "computers are bad because they're big and bulky" thirty years ago, not being able to imagine the smartphone revolution.


This is truly terrible.

What happened to a new JS front end library every week?

If this keeps up, we won't get to completely throw away all of our old code and retool every two years (the way we've been operating for the last 20 years)

How will we ever spend 85% of our time spinning up on new js front end libraries?

And don't even get me started on the back end.

If AI had been around in 2010, we'd probably still have some people writing apps in Rails.

OMG what a disaster that would be.

It's a good thing we just completely threw away all of the work that went into all of those gems. If people had continued using them, we wouldn't have had the chance to completely rewrite all of them in node and python from scratch.


Is this what insecure, mono-language developers really think and talk like? Wild.


Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

https://news.ycombinator.com/newsguidelines.html


My bad dang


This website is mostly full of very junior developers that just click on articles they think "sound smart" or are a hot take they agree with.

Don't get me wrong, it's also one of the few places where you find experts from all sorts of industry mingling. But the quality of commentary on HN has plummeted in the last 10 years.


"Please don't sneer, including at the rest of the community." It's reliably a marker of bad comments and worse threads.

https://news.ycombinator.com/newsguidelines.html


As if web/css wasn't a new gui library no one asked for. Peak programming was VB6 & Delphi 6 (maybe 7). Everything after that was just treading water in increasingly degenerate ways.


Eh. Web is the best write-once-run-everywhere we've achieved so far, especially with the proliferation of WASM. I'd be lying if I said it was perfect, but it's better than Java.


>Web is the best write-once-run-everywhere we've achieved so far,

Web for a decade or more now has been "Rewrite a hundred times, run only in chrome"


The implication here that AI itself does not come with its own churn and needless wheel spinning feels a little out of touch with our current reality.


PHP everywhere?


Some real revisionist history, as Rails cribbed most of those gems from Python. Now Python has just rebranded for the web and is doing everything Rails does and more.


New JS frameworks every week stopped around the time React became popular.


>What happened to a new JS front end library every week?

Yeah I don't think this ever happened.


know what this will be about without reading

Python 3.12-style type annotations are a good example imo; no one uses the type statement because of dataset inertia.


Usually, I remember that type annotations exist when I'm debugging after things aren't working. If you look at the Python code that I've written, type annotations are a sure sign that "there was a problem here". It's like a scar on repaired code.


Type annotations by themselves are little more than a comment.


They're more than that as they allow for easy consistency checking of types across calls. This makes all the difference.


So what.

…if society continues to delegate more of its work to AI, then we are going to fall back into the grip of the fact that some people are better at things than others and some are worse, and this is what lies beneath the bridge of relying or not relying on AI to leverage your capacity to think and act on what you feel.

I think that People who will be willing to put in effort for their crafts without AI will be the ones who will be willing to try out new things and seek opportunities for ingenuity in the future. I think that the problem people have with this idea is that it runs counter to notions related to—ahem

diversity, equity and inclusion…

On one hand and on it’s little finger is the legitimate concern that if companies who develop LLMs are not transparent with the technologies they make available to users when generating code, then they’ll hide all the scary and dangerous things that they make available to the people who’ll think, act and feel corrupt regardless of the tools they wield to impose disadvantages onto others. But I don’t think that will make a difference.

The only way out is hard work in a world bent on making the work easy after it makes you weak.


I think it’s very easy to say people dislike the notion you said cause it goes against DEI (the e stands for equality btw), like it’s such an easy scapegoat.

People just don’t wanna put the work in, or aren’t able to put the work in cause they are busy surviving day to day, y’know, putting food on the table. Cause that is not a given for everyone.


> DEI (the e stands for equality btw),

According to whom? https://en.wikipedia.org/wiki/Diversity%2C_equity%2C_and_inc...


I know I didn’t make it easy to catch, but I think you may have misread me.

I wasn’t referring to “DEI” as in the corpo-state initiative but the concepts themselves as they’re received independent of how they’re packaged in the acronym; in a political context.

In this way, I think to call it “scapegoating” would do a disservice to a legitimate social conflict.

I agree with your final observation in general, but what’s your point?



