
I've written code for almost 30 years, and over the last 4 years I've slowly used AI more and more, starting with the GitHub Copilot beta, then ChatGPT, Cursor, Windsurf, Claude, Gemini, Jules, Codex. Now I mostly work with Claude, and I don't write any code myself. Even configuring servers is easier with Claude. I still understand how everything works, but I've changed how I work so I can do a lot more, cover a lot more, and rely less on people.

It isn't much different from how it works with a team. You have an architect who understands the broader landscape, you have developers who implement certain subsystems, you have a testing strategy, you have communication, teaching, management. The only difference now is that I can do all this with my team being LLMs/agents, while I focus on the leadership stuff: docs, designs, tests, direction, vision.

I do miss coding, but it just isn't worth it anymore.


> I still understand how everything works,

That's partly an illusion. Try doing everything manually. After using only inline suggestions for six months a few years ago, I noticed that my skills had gotten way worse. I became way slower. You have to constantly exercise your brain.

This reminds me of people who watch dozens of video courses about programming but can't code anything when it comes to a real job. They have an illusion of understanding how to code.

For AI companies, that's a good thing. People's skills can atrophy to the point that they can't code without LLMs.

I would suggest practicing it from time to time. It helps with code review and keeping the codebase at a decent level. We just can't afford to vibecode important software.

LLMs produce average code, and when you see it all day long, you get used to it. After getting used to it, you start to merge bad code because suddenly it looks good to you.


I disagree. I used to do a lot of math years ago. If you gave me some problems to do now, I probably wouldn't be able to recall exactly how to solve them. But if you give me a written solution, I can still confirm with 100% confidence whether it is correct.

This is what it means to understand something. It's like P vs NP: I don't need to find the solution, I just need to be able to verify _a_ solution.
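The find-versus-verify asymmetry being invoked can be sketched in a few lines (the helper below is hypothetical, using the classic factoring example; it is not from the thread): producing an answer may be hard, but checking a proposed one is cheap.

```python
def verify_factorization(n, p, q):
    """Finding nontrivial factors of a large n is hard;
    checking a proposed factorization is a single multiplication."""
    return p * q == n and p > 1 and q > 1

assert verify_factorization(8633, 89, 97)      # correct answer: trivially confirmed
assert not verify_factorization(8633, 91, 95)  # wrong answer: trivially rejected
```

The same asymmetry is what makes reviewing a written math solution feasible long after one has forgotten how to produce it.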


I have a hard time using languages I know without an LSP when all I've been doing is leaning on the LSP and its suggestions.

I can't imagine how it is for people who try to write code manually after years of heavy LLM usage.


The GP seems to run a decentralized AI hosting company built on top of a crypto chain.

Can you get any faddier than that? Of course they love AI.


Well, I'm still using my brain from morning to evening, but I'm certainly using it differently.

This will without a doubt become a problem if the whole AI thing somehow collapses or becomes very expensive!

But it’s probably the correct adaptation if not.


> That's partly an illusion. Try doing everything manually. After using only inline suggestions for six months a few years ago, I noticed that my skills had gotten way worse. I became way slower. You have to constantly exercise your brain.

YMMV, but I'm not seeing this at all. You might get foggy around things like the particular syntax for some advanced features, but I'll never forget what a for loop is, how binary search works, or how to analyze time complexity. That's just not how human cognition works, assuming you had solid understanding before.

I still do puzzles like Advent of Code or problems from competitive programming from time to time because I don't want to "lose it," but even if you're doing something interesting, a lot of practical programming boils down to the digital equivalent of "file this paper into that filing cabinet": mind-numbingly boring, forgettable code that still has to be written to a reasonable standard of quality, because otherwise everything collapses.


Want to try doing anything more complicated? I have seen a lot of delusional people around who think their skills are still at the same level, but in interviews they bomb even simple technical topics as soon as practical implementation is concerned.

If you don't code, of course you won't be as good at coding; that's a practical fact. Sure, beyond a certain skill level your decline may not be noticeable early on, because of the years of built-up practice and knowledge.

But considering there is so much more interesting technology every year, if you don't keep improving through hands-on learning and don't slow down to take stock, you won't be capable of anything more than delusional thinking about how awesome your skill level is.


I could write the same comment myself. Also >30 years of experience.

I actually think *more* than I used to, because I only get the hardest problems to solve. I mostly work on architectural documents these days.


This is a consequence of introducing LLMs into software development. If you imagine it as a pyramid, from the bottom, the easiest tasks that happen most frequently, to the top, the hardest challenges that happen once in a while, LLMs can definitely help automate the base of that pyramid, leaving the human with a harder job, because they now statistically encounter harder tasks more often.

If this is the price to pay to unlock this productivity boost, so be it, but let's keep in mind that:

- we need to be more careful not to burn out, since our job has become de facto harder (if done at maximum potential);

- we always need a way to control and verify what LLMs are doing even on the easiest tasks, because they can fail even there, if rarely (...but we had to do this with junior devs anyway, didn't we?)


>anyway with Junior devs..

A junior dev is accountable, but an LLM subscription is not.


If your email is on the commit, you are responsible.

> If your email is on the commit, you are responsible.

Humans shouldn't exist as whipping-boys for machines. It's a cop-out for shitty technology. People weren't designed for continuous passive monitoring and do really poorly at that task.


Yeah, pseudo-autonomous driving and all that...

>I only get the hardest problems to solve.

So do you review all that code your LLM generates for you?


Yes, very much so, in detail. Just as I would with programmers. Also, LLM doesn't just "generate code" for me, we work together on design documents first. See, I started saying "we", because I found it to be such a good partner.

So can you please describe the kind of coding that you had success with by using LLMs.

No, because I do not understand what "kinds of coding" there are. Also, given the tone of this discussion, I am not sure I want to invest my time into it.

Using Agile methodology with agents actually works pretty well in my experience. We do sprints and then code reviews, testing and revision, optimization. During code review, I inspect everything the agents created and make corrections and then roll the corrected patterns into the training documentation for the agents so they learn and don't make the same mistakes.

Just curious. What stuff did you make before the LLMs, with regular coding?

> I do miss coding, but it just isn't worth it anymore.

This pretty much sums up my current mood with AI. I also like to think, but it just isn't worth it anymore as a SE at bigCorp. Just ask AI to do it and think for you and the result only has to be "good enough" (=> works, passes tests). Makes sense business wise, but it breaks me, personally.


Sorry, good for you, but how is this relevant?

Imagine somebody writes a blog post "why I bike to work". They detail that they love it, the fresh air, nature experience biking through a forest, yes sometimes it's raining but that's just part of the experience, and they get fit along the way. You respond with "well I take the car, it's just easier". Well, good for you, but not engaging with what they wrote.


The difference is that everyone knows it's faster to take the car, but on the bike you get to exercise your muscles. But imagine it was 1920, when cars were still up for debate, and the post was "why I ride my horse to work". It's still a common argument whether you'll get better results coding manually or using AI.

> It’s still a common argument whether you’ll get better results coding manually or using AI.

Except the post has nothing to do with “better results” of the generated output, it concerns itself with the effect it has on the user’s learning. That’s the theme which is relevant to the discussion.

And we already know LLMs impact your learning. How could they not? If you don't use your brain for a task, it gets worse at that task. We've known that, with studies, since before LLMs.


It boggles my mind how AI discussion is so abrasive that people get their jimmies rustled over just about anything in here.

Ironically your comment looks AI written with that analogy.

I'm sure people used analogies before the invention of LLMs. After all, the very concept of an analogy must have made it into the training data.

Roncesvalles' law: Bad posts have bad comments.

Welcome to Hacker News!

Did you read the post yourself? It doesn't sound like it. It is composed of the title and three mystical-sounding quotes. How is one supposed to engage with this? By doing literary critique? A counterpoint to the statement "I don't use LLMs" would probably count as valid engagement in any circumstance, but especially in this one.

I did. The three quotes clearly express a shared sentiment for enjoyment of building and learning while doing so. That's certainly something one can engage with by providing a counterpoint. But just saying "that's not what I do" isn't one.

The original poster “expresses a shared sentiment” by posting three quotes, but the poster you replied to, who offers a fairly detailed account of the value LLMs bring to their daily work life, and how they feel about it, does not. OK.

Sure.

The original post is a blog post that somebody put on their blog. Its purpose isn't (necessarily) to engage in a discussion or even interact with anybody. It's the root of a discussion tree, if you will, a place to make a bold statement or just express a random thought.

In contrast, the post I replied to is a response, which by definition (and purpose of this forum) is meant to contribute to a discussion. It's an inner node of a discussion tree and thereby needs to engage with the presented argument.

So, this is an apples-vs-oranges situation, not a double-standards situation.


The irony of opening by claiming that someone doesn't engage with an "argument" (put forth by three quotes, and nothing else), and then ending up with this absolute word salad and an irrelevant metaphysical quip about categories.

I've seen this at a few orgs I've visited, where the seniors have leaned into LLM programming more than the juniors, for these reasons.

I don't think the split is along seniority lines. Many juniors have adopted LLMs even faster. In many quarters it has also become a kind of political issue where "all the people I hate love LLMs so I must hate them."

Except your team is full of occasionally insane "people" who hallucinate, lie, and cover things up.

We are trading the long term benefits of truth and correctness for the short term benefits of immediate productivity and money. This is like how some cultures have valued cheating and quick fixes because it's "not worth it" to do things correctly. The damage of this will continue to compound and bubble up.


I agree. The further I have progressed into my career the more I have been focused on the stability, maintainability and "supportability" of the products I work on. Going slower in order to progress faster in the long run. I feel like everyone is disregarding the importance of that at the moment and I feel quite sad about it.

Not only that, there’s this immense drive for “productivity” so they have more time to… Do more work. It’s insanity.

I have not found that to be true on a personal level, but in fairness it does seem to be a widely reported problem. At its core, I think it is an issue of alignment. That is something different than skill.

I agree with you, but considering the state of modern software, I think the values "truth and correctness" have been abandoned by most developers a long time ago.

Be that as it may, we shouldn’t be striving to accelerate the decline, and be recruiting even more people who never learned those values.

It’s the Eternal September of software (lack of) quality.


This is a fair argument but it’s rapidly becoming a non-argument.

LLMs have come a long way since ChatGPT 4.

The idea that they’ll always value quick answers, and always be prone to hallucination seems short-sighted, given how much the technology has advanced.

I’ve seen Claude do iterative problem solving, spot bad architectural patterns in human written code, and solve very complex challenges across multiple services.

All of this capability emerging from a company (Anthropic) that’s just five years old. Imagine what Claude will be capable of in 2030.


> The idea that they’ll always value quick answers, and always be prone to hallucination seems short-sighted, given how much the technology has advanced.

It's not shortsighted; hallucinations still happen all the time with the current models. Maybe not as much if you're only asking it to do the umpteenth React template or whatever that should've already been a snippet, but if you're doing anything interesting with low-level APIs, they still make shit up constantly.


> All of this capability emerging from a company (Anthropic) that’s just five years old. Imagine what Claude will be capable of in 2030.

I don't believe VC-backed companies see monotonic user-facing improvement as a general rule. The nature of VC means you have to do a lot of unmaintainable cool things for cheap, and then slowly heat the water to boil. See google, reddit, facebook, etc...

For all we know, Claude today is the best it will ever be.


The current models had lots and lots of hand written code to train on. Now stackoverflow is dead and github is getting filled with AI generated slop so one begins to wonder whether further training will start to show diminishing returns or perhaps even regressions. I am at least a little bit skeptical of any claim that AI will continue to improve at the rate it has thus far.

If you don't really understand how today's LLMs are made possible, it is really easy to fall into the trap of thinking that perpetual progress is just a matter of time and compute.

> Except your team is full of occasionally insane "people" who hallucinate, lie, and cover things up.

Wait.. are we talking about LLMs or humans here?


Humans are accountable; an LLM subscription is not.

The humans operating the LLM are accountable.

That is the point. It is nonsense to delegate your responsibility to something that is neither accountable nor reliable, if you care about not tanking your reputation.

> even configuring servers is easier with Claude

To what extent is Claude configuring these servers? Is this baremetal deployment with OS configuration and service management? Or is it abstracted by defining Terraform files to use pre-created images offered by a hosting service?


Me:

  codex
  “run my dev server”
My laziness knows no bounds.

> These are the depths of my laziness and I have yet to hit the ground.

I only hope that when you do, you don’t take anyone else with you.

It’s one thing to be careless and delete all your own email; quite another to be careless and screw the lives of people using something you worked on and who had no idea you were YOLOing with their data.


Edited my comment before your response. But yeah, lighten up, it’s a joke! I’m not that lazy.

The only thing I do that I’d consider remotely lazy is put my API keys in my AGENTS.md so I don’t have to keep pasting it in my chat.


> lighten up, it’s a joke! I’m not that lazy.

Maybe you aren’t, but there are definitely people who are and do exactly what you described, including senior staff at companies like Meta and Microsoft, so the point stands.


Fair.

I like this perspective.

"Man spends hundreds of dollars a month on API tokens, claims coding isn't worth it anymore."

Onion articles really write themselves these days. I for one would still rather keep the money and write 25% of it myself.


I'll change my default branches to main when Masterclass change their name to Mainclass


And mastering a subject is changed to maining it?


RIP


If we were able to produce an LLM that takes a seed and produces the same output per input, then we'd be able to do this.


There must be good reasons why we don’t have this. I suspect one reason is that the SOTA providers are constantly changing the harness around the core model, so you’d need to version that harness as well.
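The seed idea can be sketched with a toy sampler (`sample_tokens` and `toy_logits` below are hypothetical stand-ins for a real model and inference harness, not any actual API): once the RNG is seeded, the output is a pure function of seed and input, which is exactly the reproducibility property being asked for.

```python
import numpy as np

def sample_tokens(logits_fn, seed, n_tokens, temperature=1.0):
    """Toy autoregressive sampler: with a fixed seed, the whole
    token sequence is reproducible run-to-run."""
    rng = np.random.default_rng(seed)
    tokens = []
    for _ in range(n_tokens):
        logits = logits_fn(tokens)
        probs = np.exp(logits / temperature)
        probs /= probs.sum()            # softmax over the vocabulary
        tokens.append(int(rng.choice(len(probs), p=probs)))
    return tokens

# Stand-in for a model: logits depend only on the context so far.
def toy_logits(context):
    return np.array([len(context) % 3, 1.0, 0.5])

a = sample_tokens(toy_logits, seed=42, n_tokens=5)
b = sample_tokens(toy_logits, seed=42, n_tokens=5)
assert a == b  # same seed, same input -> same output
```

As the comment notes, even with a seeded model this only pins down one layer: any prompt rewriting or tool scaffolding around the model would need to be versioned too for the end-to-end output to be reproducible.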


And it's broken


The people most likely to analyse books like this are also the ones most likely to read them.


we call them luddites


I'm not entirely sure that's a fair association. The Luddites weren't against technology in general, they were fighting for their livelihoods. There very well could be a fresh luddite movement centered around the use of AI tools, but I don't think "luddite" is the right term in this specific case.


No, that was a labor issue; abusive factory owners got targeted.


The commenter above probably didn't read the post, ironically.


Guess we need “reading across hacker news articles with Claude code.”


For what it's worth, Cowork does run inside a sandbox


that looks pretty good

