
I am also building an agent framework and have also used chat coding (not vibe coding) to generate work - I was easily able to save 50% of my time just by asking GPT.

But it makes mistakes roughly 1 in 10 times, and I do not see that getting fixed unless we drastically change the LLM architecture. In the future I am sure we will have much more robust systems, if the current hype cycle doesn't ruin devs' trust.

But the hit is real. I mean, I would hire a lot less if I were hiring now, as I can clearly see the dev productivity boost. The learning curve for most topics is also drastically reduced, as the decline in Google search result quality is now compensated for by LLMs.

But one thing I can vouch for is automation and more streamlined workflows - having normal human tasks augmented by an LLM in a workflow orchestration framework. The LLM can return a confidence % along with the task results, and for anything below the confidence threshold the workflow framework can fall back on a human. If done correctly, with proper testing, guardrails and all, I can see LLMs replacing human agents in several non-critical tasks within such workflows.
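The confidence-gated fallback described above can be sketched in a few lines. This is a hypothetical illustration, not a real framework API - the names (`TaskResult`, `route`, the 0.9 threshold) are all assumptions for the example:

```python
from dataclasses import dataclass

# Illustrative threshold: anything the model is less sure about goes to a human.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class TaskResult:
    output: str
    confidence: float  # 0.0-1.0, reported by the LLM alongside the result

def route(result: TaskResult, human_queue: list) -> str:
    """Accept the LLM's answer only when confidence clears the bar;
    otherwise queue the task for human review."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return result.output           # automated path
    human_queue.append(result)         # fallback path
    return "PENDING_HUMAN_REVIEW"

queue: list = []
print(route(TaskResult("description looks fine", 0.97), queue))  # automated
print(route(TaskResult("possible image mismatch", 0.55), queue))  # escalated
```

The point of the pattern is that the human is only in the loop for the uncertain tail, so headcount scales with the error rate rather than with total task volume.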

The point is not replacing humans but automating most of the work so the team size can shrink. E.g., large e-commerce firms have hundreds of employees manually verifying product descriptions, images etc., scanning for anything from typos to image mismatches, to name a few. I can see LLMs doing that job in the future.



I just left my CTO job for personal reasons. we tried coding agents, agentic coding, LLM-driven coding, whatever. the code any of these generates is subpar (a junior would get their PR rejected for what it produces) and you just waste so much time prompting and not thinking. people don't know the code anymore, don't check the code, and it's all just gotten very sloppy. so not hiring coders because of AI is a dangerous thing to do and I'd advise heavily against it. your risk just got way higher because of hype. maintainability is out the window, people don't know the code, and there are so many hidden deviations from the spec that it's just not worth it.

the truth is that we stop thinking when we code like that.


You are misunderstanding coding vs logic. Coding is making our logic (which is creativity) fit into someone else's syntax. If a machine is able to translate your logic, expressed in your language, into running code, what's wrong with that? Come on, that's our dream, isn't it? It's like you not using a calculator because you are worried kids won't learn how to divide. Is assembly language better than C++? Yes - did that prevent high-level languages from taking over the world? No.

If done right, we'll all code through specs written in English, not code.


Your comment is tangential at best. You're projecting quite a lot tbh. I'm saying: the stuff it produces is shit, and we don't have any connection to the outcome anymore, because our brain goes into "watching TV mode". So the outcome is garbage in any sense.


>the stuff it produces is shit

Not everyone has that opinion. I am not talking about non-programmers jumping on the bandwagon but real technologists using it in real-world programming [1].

> our brain goes into "watching TV mode"

How many people would have thought the same when the calculator came along? We can either think "oh, this tool is making kids dumb", or we can think the new tool can make them faster and more efficient.

[1] https://antirez.com/news/154


the very next post from that page is literally this one: https://antirez.com/news/153 (let me quote a comment from the article you referenced: "only HN noobs get a hardon for that")

I am talking from my point of view. I am quite experienced and know what I'm doing in a lot of areas, having 18 years of experience. I am faster without agents, produce better code, know the code, and can guarantee maintainability and fewer bugs. Why on earth would I change that? That it's going to get better in the future is a hypothetical which has yet to be proven.

It is not a calculator, tho. So stop with that nonsensical comparison already. You know exactly what I'm talking about and you're not arguing against my point. That's why I'm saying that your comment is tangential. LLMs are not a calculator. My point is that LLMs make us neither faster nor more efficient. You trade quality and maintainability (slop) against speed. That's a different trade-off and makes the two outcomes incomparable.



