
As a preface, I think lots of people will not like this take.

A lot of people are going to have to come to a realization that has been pointed out before but that many find hard to accept.

Your boss, stakeholders, and especially non-technical people literally give 0 fucks about "quality code" as long as it does what they want it to do. They do not care about tests; if it works, it works. Many have no clue, nor do they care, whether something just refetches the world in certain scenarios. And AI, whether we like it or not - whether it repeats the same shit, isn't DRY, doesn't follow patterns, reinvents the wheel, etc. - is already fairly good at that.

This is exactly why all your stakeholders and executives are pushing you to use it. They've been fed the line that it just gets shit done and pumps out code like nothing else.

I really think a lot of the reason some people say it doesn't give them as much productivity as they would like is due largely to a desire to write "clean" code based on years and years of our own training, and due to having to be able to pass code review done by your peers. If these obstacles were entirely removed and we ripped the band-aid all the way off, I do think AI, even in its current state, is fairly capable of replacing plenty of roles. But it does require a competent person steering it to avoid ending up in a complete mess.

If you loosen the guardrails a little and stop obsessing over how nice the code looks, it absolutely will move things along faster than you could before.



A sane person writes clean code because they are going to have to maintain it themselves one day a few years into the future when the baby kept them up all night for three nights in a row and coffee isn't working for them anymore and they can't remember anything about it and it's falling over so it's really urgent and f**** those guys they swore they'd get someone to take this on with a proper handover and surely there was a goddamn sent email somewhere about it but nothing is coming up when they search and goddamnit it used to compile did people ignore their comment about how it won't build with the new version yet so don't update the build tools and and and

You write good code because you own it.

If you get ChatGPT or Copilot or Claude or whatever-the-****-else to write it, you're going to have a whole lot less fun when it's on fire.

The level of irresponsibility that "vibe coding" is introducing to the world is actually worse than the one that had people pouring their savings into a shitcoin. But it's the same arseholes talking it up.


>The level of irresponsibility that "vibe coding" is introducing to the world is actually worse than the one that had people pouring their savings into a shitcoin. But it's the same arseholes talking it up.

It's a broader ethos of irresponsibility and disposability that's infected so much of modern tech. Why polish up a game when you can ship it now and fix it later? Why delay an iPhone release that's "Built for Apple Intelligence (TM)" when you can sell the hype today and maybe deliver next quarter?

Everyone wants the reward but none of the work and accountability. You're not even doing the minimum level of work; you're just putting up the appearance of effort. Who cares if what you built is just a brittle husk? Vibe coding is just the natural progression of that cynical philosophy.


I suspect experienced folks are well aware of the subjectivity of quality. How? Change control.

Quality control exists until The Business deems otherwise. The reasons vary: vulnerability, promotion, whatever. Usually not my place to say.

Personally, my 'product' isn't code. Even the 'code' isn't code. For every 8 hours of meetings I do, I crank out maybe 20 lines of YAML (Ansible). Then, another 4 hours of meetings handing that out/explaining the same basics for Political Points.

The problem(s) relating to speed or job security have remarkably little to do with code, generated or not. The people I work with are functionally co-dependent because they don't use LLMs or... manuals.

All this talk about "left behind"... to survive a bear, one doesn't have to be the fastest. Just not the slowest.


I lost all my snobbery about that years ago, and I just follow the maxim "does it work". But for some reason, even with AI, I'm not on another level.

And the reason is: it all collapses very quickly once the complexity reaches a medium level.

And if I want to rely on things and debug them, I cannot just have a pile of generated garbage that works as long as the sun is shining. For isolated tasks it works for me. For anything complex, I am faster on my own.


Most replies here make the claim that all AI-generated code is "garbage". And I can't help but think most of the people who say that do not actually use it in their day-to-day with the most recent models, or don't actually give it good instructions/requirements.

No, it is not always perfect. Yes, you will have to manually edit some of the code it generates. But yes, it can and will generate good code if you know how to use it and pair sophisticated tools with good guidance. And there are times when it will even write better, more performant code than you could, given the time constraints.


It writes code well enough in simple contexts, sometimes. But that code is also easy to write, indeed often easier to write than to review. It struggles in more complex contexts and with more complex constraints. Unfortunately, it’s the latter case where I most often have any desire to reach for an aid, and it has failed so consistently and so often there that I have largely stopped trying.

It’s nice when you need to do something simple in an unfamiliar but simple context, though.

It seems though that a lot of the narrative here from its proponents is that we’re just not trying hard enough to get it to solve our problems. It’s like vimmers who won’t shut up about how it’s worth the weeks of cratered productivity in order to reach editing nirvana (I say this as one of them).

Like with any tool, the learning curve has to be justified by the results, but the calculation is further complicated by the fact that the AI tooling landscape changes completely every 3-6 months. Do I want to spend all that time getting good at it now? No. I’ll probably spend more time learning to use it when it’s either easier to get results that actually feel useful or when it stops changing so often.

Until then I’ll keep firing it up every once in a while to have it write some bash or try to get it to write a unit test.


I find it very hit or miss, but I definitely use it. I just don't think it's making me vastly more productive yet - maybe 30%. I do think it will get good enough that my job may turn into 80% writing or refining tickets for AI "engineers" and 20% fixing/debugging issues with AI output. But it's not there yet, and I still don't think it will be trustworthy enough to let loose without someone technical in the loop doing that review/fixing. But that might be enough to turn one generalist CRUD engineer into the equivalent of a team of 4 or 5 in a couple of years.

I kind of see it replacing outsourcing so far, not mid-level+ engineers. But I expect it to make mid-level+ engineers about as productive as a small team.


I think the root of the argument is that the AI critics are worried because they have the assumption that 1) you aren't experienced enough to know what "good code" looks like, and/or 2) you only care if it "works" and don't understand all the implications bad code will have downstream, again because of inexperience.


If the code is an isolated module or function, all is well.

Otherwise, often not. But I am not worried. Give me an AI tool that can work with my whole codebase reliably and I'll gladly use it.


This is true. But the cost afterwards is high. Creating code, by hand or with AI, is easier than maintaining or modifying the system.

And this is where a problem (still) appears - except now the AI-assisted authors have even less comprehension of the system.


They will demand use of AI tools for "productivity" and then complain when there are bugs in prod without realizing the root cause.


They won't blame the AI as the root cause. They will blame you.

This is why I mention that you need to be competent enough to understand what is being generated, or they will find someone else who is. There's no two ways around it. AI is here to stay.


Yes, I agree. They will and should blame the human. That's a problem when the human isn't given enough time to complete projects because "AI is SO productive".


The idea that you will actually review all that vibe-coded slop comes across as very naive.

The real question is how many companies have to accidentally expose their databases, suffer business-ruining data losses, and have downtime they are utterly unable to recover from quickly before CxOs start adjusting their opinions?

Last time I saw a "Show HN" of someone showing off their vibecoded project, it leaked their OpenAI API key to all users. If that's how you want to run your business, go right ahead.
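
For what it's worth, the mechanism is usually depressingly simple: the key gets baked into the client-side bundle instead of staying behind a backend. A minimal sketch of the difference in TypeScript (hypothetical names throughout; I don't know what that particular project did internally):

    // WRONG: bundling the key into frontend code ships it to every user;
    // anyone can read it from the JS bundle or the browser's network tab.
    const OPENAI_API_KEY = "sk-..."; // hypothetical hard-coded key
    const messages = [{ role: "user", content: "Hello" }];
    await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${OPENAI_API_KEY}`,
      },
      body: JSON.stringify({ model: "gpt-4o-mini", messages }),
    });

    // BETTER: the browser only talks to your own backend; the key lives in a
    // server-side environment variable and never reaches the client.
    await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
    });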


> you need to be competent enough to understand what is being generated

We're all competent enough to understand what is generated. That's why everyone is doomer about it.

What insight do you have that the rest of us lack when the LLM generates

    true="false"
    while i < 10 {
      i++
    }
What's the deep philosophical understanding you have about this that makes us all sheeple for not seeing how this is actually the business's goose laying golden eggs? Not the engineers'.

Frankly, businesses that use this will drop all their actual engineers, and then fall over when the slightest breeze comes.

I am actually in favour, in an accelerationist sense.


Have you actually tried using AI to generate code? Try it with a hobby project. You might be surprised!


In my experience this isn't the case - it gets many details subtly wrong. Unless someone technical can nanny it, you get weird fixes. For example, I recently had AI decide to remove a database column because it ran into an execution bug related to that column. Without a human in the loop, I have no clue what the endgame would have been. Similarly, it's getting better, but there have been times where it basically wrote security vulnerabilities. Stakeholders in management should be aware of the risk, but they seem oblivious to it. Maybe that's because there's little track record of anyone ever being held accountable for massive screw-ups, but at some point letting AI loose will lead to some kind of major disaster, and then there'll be some reevaluation.


And then you have 0 domain experts because nobody has built the mental model of the code and what it's doing that you inherently build when you're actually doing the problem solving and code-writing yourself.


> Your boss, stakeholders, and especially non-technical people literally give 0 fucks about "quality code" as long as it does what they want it to do.

We've all already heard this message a thousand times. There's no need to keep repeating it. Don't discourage people, especially young people, who want to do great work.


As long as I am going to be the one being called at night when there's a crash, I'll be the one to dictate what's "good enough". Throw away guardrails if you want to, I like sleeping.


> I really think a lot of the reason some people say it doesn't give them as much productivity as they would like is due largely to a desire to write "clean" code based on years and years of our own training, and due to having to be able to pass code review done by your peers.

The machine doesn't get mad when an app takes forever to start or keeps constantly crashing, but we humans do. How "clean" the code is matters least of all when it comes to machine-generated code.


People keep repeating this nonsense that AI-generated code is always bad or slow.

This is so far from the truth that I really think anybody who still says this has not actually used it for anything real in at least a couple years.

To be clear, I'm not saying it will always generate the best code; sometimes it may even be bad.

What I am saying is that it CAN generate code that is reasonably performant - sometimes even more performant than what you would have written given time constraints - and that it fulfills requirements (even if it sometimes requires a little manual effort) much faster than we ever could before.


I can only assume that you have significant experience being a junior engineer


First off, I would hope that everyone had experience being a junior engineer for some time.

But if your assertion is that using AI for code generation, and being successful with it, makes you a junior engineer, then good luck keeping your job in the future. Just look at social media: there is a plethora of examples of prominent engineers using it with success.


It generates reasonably performant code because most of the industry isn’t writing code that’s computationally bound. If you have to wait for network I/O anyway, it doesn’t really matter if your code is optimal, because that wait will dominate everything else.
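
As a rough illustration (endpoint and sizes invented): a single HTTP round trip typically costs tens of milliseconds, while even a deliberately wasteful in-memory pass over 100k items costs around a millisecond, so the wasteful pass simply doesn't show up in the response time:

    // Sketch: time one network round trip against a sloppy in-memory pass.
    const t0 = performance.now();
    await fetch("https://example.com/"); // typically tens of ms
    const t1 = performance.now();

    const xs = Array.from({ length: 100_000 }, (_, i) => i);
    // Deliberately non-optimal: allocates two intermediate arrays.
    const sum = xs
      .filter((x) => x % 2 === 0)
      .map((x) => x * x)
      .reduce((a, b) => a + b, 0);
    const t2 = performance.now();

    console.log(
      `network: ${(t1 - t0).toFixed(1)} ms, compute: ${(t2 - t1).toFixed(1)} ms (sum=${sum})`,
    );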


Clean code is nice.

Working code is a requirement.

You missed the point. AI slop doesn't just fail on point 1. It fails on point 2.


I work at an enterprise company where we just recently got access to Cursor, and before that Copilot.

We have literally stood up entire services built practically entirely with AI that are deployed right now and that consumers are using.

AI does work with competent people behind the wheel. People can't keep hiding behind the claim that it always churns out code that doesn't work. We are way past those days. If you don't adapt, you will end up losing your job. There's no way around it. The problem is we may end up losing our jobs either way.


> entire services built practically entirely with AI

What kind of services and how complex are they?

I've been using Cursor for a year and I struggle to get the agent to make competent changes in a medium-sized codebase.

Even something isolated, like writing a test for a specific function, usually takes multiple rounds of iteration and manual cleanup at the end.


Most services that are written today (not just talking about my company/experience; I'm talking broadly) are not complex. Many are basically a copy-paste of the same types of things that have been built thousands of times before. With that knowledge, and knowing how LLMs work, it should come as no surprise that AI can spin up new ones quite competently and quickly.

Regarding tests: that is also something I, and many of my peers, find that LLMs excel at. Given X inputs and Y outputs, an LLM will spit out a whole suite of tests for every case of your functions without issue, except in complicated scenarios. It may not do quite as well at end-to-end tests, since those usually require a lot of external setup, but given examples it can generate some of that setup and build from there. Of course, this depends on how much you value those tests, since some people don't even think tests are that useful nowadays.
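
To make that concrete, the kind of input/output-driven suite I mean looks like this - a sketch where the slugify function and its cases are invented for illustration:

    // Hypothetical function under test.
    function slugify(s: string): string {
      return s
        .trim()
        .toLowerCase()
        .replace(/[^a-z0-9]+/g, "-")
        .replace(/^-+|-+$/g, "");
    }

    // The kind of table an LLM will happily enumerate from a few examples:
    // happy path, whitespace, non-ASCII input, and the empty-string edge case.
    const cases: Array<[string, string]> = [
      ["Hello World", "hello-world"],
      ["  padded  ", "padded"],
      ["C'est déjà l'été", "c-est-d-j-l-t"],
      ["", ""],
    ];

    for (const [input, expected] of cases) {
      const actual = slugify(input);
      console.assert(
        actual === expected,
        `slugify(${JSON.stringify(input)}) = ${actual}, want ${expected}`,
      );
    }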


> AI does work with competent people behind the wheel

So do extremely junior devs who are really bad, as long as you code review EVERYTHING.

(Except junior programmers can learn; AI models can't, really - they can only be retrained from scratch by big corporations.)


The AI will do it faster and cheaper than the junior does, and that's what the company cares about.

Not to mention you should still be code reviewing it anyway. In fact with AI you should be reviewing even more than you were before.


> The AI will do it faster and cheaper than the junior does, and that's what the company cares about.

Short term, not long term. The AI will never become a staff developer. Shifting review onto the senior developers is shifting responsibility and workload, which will have the expected outcome: slower development cycles, as you have to consider every footgun - especially when the AI can't explain the reasoning behind esoteric changes. If I ask a junior, it's likely they have a test (codified or manual) that led them to the decision.


I’ve had AI generate code in a few hours that would take some juniors I’ve worked with weeks. Yes, the code was not the best (very long methods) and it took some iteration, but everything has trade-offs.


Yep, agreed. I can no longer relate to people who don't recognize how powerful a tool like Cursor can be.

But I also can't relate to people who think they can, today, build fully working software using just AIs, without people who know how software works and are able to understand and debug what is being generated.

Maybe it's true that this will no longer be the case a year from now. I honestly don't know. But at the moment, I think being a skilled practitioner who is also able to effectively use these powerful new tools is actually a pretty sweet spot, despite all the doom and gloom.


Cursor has been confidently incorrect repeatedly when discussing databases at my job. It doesn’t understand how MySQL works, and wants to make terrible indexing decisions because of it. You wouldn’t know that it’s wrong unless you already know the correct answer, because what it recommended will work; it’ll just be bloated and sub-optimal. And therein lies the problem: computers are so fast that people will happily assume it worked, and then later scale the hardware up when the half-baked solution shows its cracks.
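
To make that concrete, the shape of the mistake looks something like this - a sketch using mysql2 in TypeScript, with an invented orders table rather than anything from our actual schema:

    import mysql from "mysql2/promise";

    const conn = await mysql.createConnection({ host: "localhost", database: "shop" });

    // What the assistant suggested: one single-column index per WHERE clause.
    // It "works", but for the query below MySQL will generally pick just one
    // of them and still has to filter the rest and sort the results.
    await conn.query("CREATE INDEX idx_user ON orders (user_id)");
    await conn.query("CREATE INDEX idx_status ON orders (status)");

    // What you actually want: one composite index matching the access path
    // (equality columns first, then the sort column), so the query can be
    // satisfied without a filesort.
    await conn.query(
      "CREATE INDEX idx_user_status_created ON orders (user_id, status, created_at)",
    );

    // Check with EXPLAIN which index is chosen and whether a filesort remains.
    const [plan] = await conn.query(
      "EXPLAIN SELECT * FROM orders WHERE user_id = ? AND status = 'paid' ORDER BY created_at DESC LIMIT 20",
      [42],
    );
    console.log(plan);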


Yeah. But you aren't disagreeing with what I wrote.

I think it's breaking a lot of brains that we have these tools now that are useful but not deterministically useful.


Fair point. I don’t hate AI, and use it sometimes, but I’m always painfully aware that it can and will make mistakes, some subtle, that must be caught by someone who already knows most of the answer.


When I've seen examples of such cases, my impression was that they could be improved by taking the AI out of the picture and using some "low code" solution.


I had AI do some tedious work, like migrating to new APIs, upgrading deprecated calls, fixing warnings in old C code. It worked great. Faster than I could do it myself.


Let us know how those services are doing in 2 years


Then who fixes it when it stops working? Or do you just pat each other on the back and fold the company?


Well, that might be true, but I think the reason they can blatantly act like that is that stakeholders and decision-makers are detached in time from the consequences of their decisions.

That's why I bring topics like maintenance and stability into the discussion very early on and ask those stakeholders how much system downtime they can tolerate, so that they feel the weight of their decision-making; that also gives me an opportunity to explain why quality matters.

Then it's up to them to decide how much crap they tolerate.


Anthropogenic climate change and resource depletion come to mind...



