Hacker News

I was considering this the other day. AI tools are stuck at a particular point in time, and even if we keep training them on newer material, there's only so much information to train on. I've been exploring the idea that this could be a _good_ thing. In software we spend so much time chasing the latest tooling, language features, frameworks, etc. Maybe it would be a positive if it all stagnated a bit and we just used the tools we have to get work done instead of creating new hammers every six months.



It would be nice if some AI tools could be developed to actually evaluate new libraries and frameworks. For instance, if there are already 10 libraries that do something and I develop a new one that's objectively faster than all of them (true story), could some AI do the work of installing and benchmarking it, incorporating the results into its knowledge base, and periodically updating them? Is there any way to leverage AI for the discovery of solid code? I suspect this is beyond current capabilities, but one can dream.
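The evaluation loop described above (install candidates, benchmark them on the same workload, record a ranking) can be sketched in ordinary Python. This is a minimal, hypothetical harness, not any existing tool: the two "libraries" are stand-in pure-Python functions, and a real system would install packages and persist results rather than just printing them.

```python
import timeit

def sort_baseline(data):
    # Stand-in for an established library's implementation.
    return sorted(data)

def sort_challenger(data):
    # Stand-in for a new library claiming to be faster.
    out = list(data)
    out.sort()
    return out

def benchmark(candidates, workload, repeat=5, number=100):
    """Time each candidate on the same workload; return (name, seconds) pairs, fastest first."""
    results = {}
    for name, fn in candidates.items():
        # Best-of-repeat guards against scheduler noise.
        results[name] = min(
            timeit.repeat(lambda: fn(workload), repeat=repeat, number=number)
        )
    # A knowledge base could store this ranking and refresh it periodically.
    return sorted(results.items(), key=lambda kv: kv[1])

if __name__ == "__main__":
    workload = list(range(1000, 0, -1))
    ranking = benchmark(
        {"baseline": sort_baseline, "challenger": sort_challenger}, workload
    )
    for name, secs in ranking:
        print(f"{name}: {secs:.4f}s")
```

The hard part isn't the timing loop; it's choosing workloads that are representative enough that the ranking generalizes, which is exactly where an AI-driven evaluator would have to earn its keep.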


The problem is that for most domains our hammers are still pretty bad.


Agreed. A lot of the most popular libraries and frameworks are not necessarily the best, and removing more fitness checks will only worsen that problem.



