acedTrex's comments | Hacker News

I love this trend of posting vibe-coded-in-a-day slop apps to every corner of the internet.

Really makes me want to be here.


We're all hustling to stay relevant, caught in a cutthroat game of musical chairs as our industry gets automated. There's little incentive to polish anything before releasing, since 90% of the time you'll get ignored regardless, so you might as well see whether your project/landing page hooks anyone before burning a bunch of time.

I polish stuff before promoting it because I'm averse to reputation damage, but every day I see people skip that and get upvotes anyway, which makes the practice harder to justify.


Man, y'all are in it for all the wrong reasons. Though I suppose this is a Y Combinator site after all, lmao.

These projects would rather miss out on a few good people in order to stop the bad ones than accept the alternative.

I think there are better alternatives; we'll let the market weed things out.

For example, I will keep making them spin their wheels and burn tokens and money: a sort of honeypot, an adversarial shadowban. That's an even better disincentive for them.

I'll automate it if it ever gets bad.


You can already hardcode the SHA of a given action or workflow in the ref, and arguably should do that anyway.
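
For anyone who hasn't done this, the pin is just the full commit SHA in place of the tag after the @. A minimal sketch of a workflow step (the SHA below is a placeholder, not a real commit):

    steps:
      # Mutable tag: resolves to whatever commit the v4 tag points at today
      # - uses: actions/checkout@v4
      # Pinned: always this exact commit, even if the tag is later moved
      - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567  # v4 (placeholder SHA)

The trailing comment is the usual convention for keeping the human-readable version visible next to the hash.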

It doesn't work for transitive dependencies, though, so you're reliant on third-party composite actions doing their own SHA locking.

You can also configure a policy for it [0], and there are many OSS tools for auto-converting your workflows into pinned-hash ones. I guess OP is upset it’s not in the gh CLI? Maybe a valid feature to have there, even if it’s just a nicety.

[0] https://github.blog/changelog/2025-08-15-github-actions-poli...


I will do what I know gives me the best and fastest outcome over the long term, a 5-10 year period.

And that remains largely neovim and writing by hand. The process of typing the code gives me a deeper understanding of the project that lets me deliver future features FASTER.

I'm fundamentally convinced that my investment in deep, long-term grokking of a project will allow me to surpass primarily-LLM-driven projects in raw velocity over the long term.

It also stands to reason that any task I deem NOT to further my goal of learning or deep understanding, and that can be done by an LLM, I will hand to the LLM. And as it turns out there are a TON of those tasks, so my LLM usage is incredibly high.


> I will do what I know gives me the best and fastest outcome over the long term, a 5-10 year period. And that remains largely neovim and writing by hand. The process of typing the code gives me a deeper understanding of the project that lets me deliver future features FASTER. I'm fundamentally convinced that my deep long-term understanding of a project will allow me to surpass primarily-LLM-driven projects over the long term.

I have never thought of that aspect! This is a solid point!


This is exactly what I’m doing, expressed much more succinctly than I could have done myself. Thanks!

I often find myself in the role of the old guy advising a team to slow down a bit and invest in doing things better now.

I generally frame this as: Are you optimizing for where you will be in 6 months, or 2 years?


I love that take and sympathise deeply with it. I have also concluded that I should focus my manual work on the areas I can learn from, and try to automate the rest away as much as possible.

Yea, using agents and having them do the work means not only a lot of context switching; I actually don't have any context at all.

Idk what the median lifespan of a piece of code / project / employee tenure is, but it's probably way less than 10 years, which makes that "long-term investment" pretty pointless in most cases.

Unsuccessful projects: way less than 10 years

Successful projects: quite often much longer than 10 years

Code quality doesn't matter until lots of people start using what you wrote and you need to maintain/extend/change it

God it's a depressing thought that whatever work you do is just a throwaway no-one will use. That shouldn't be your end goal


> God it's a depressing thought that whatever work you do is just a throwaway no-one will use

I didn't say that.

In fact, if your code doesn't significantly change over time, it probably means your project wasn't successful.


Maybe we're talking about different things?

That's one of the biggest benefits of software quality and the long-term investment: how easy is your thing to change?


Right, but that usually means higher-quality software design, and less so the exact low-level details of function A or function B (in most cases).

If anything, I'd claim using LLMs can actually free up your time to really focus on the proper design of the software.

I think the disconnect here is that people bashing LLMs don't understand that any decent engineer isn't just going around vibe coding, but instead creating a well-thought-out design (with or without AI) and using LLMs to speed up the implementation.


This is the way. I think we’re in for some rough years at first, but then what you described will settle in as the “best practice” (I hate that term). I look forward to the really bizarre bugs and incidents that will make the news in the next 2-3 years. …Well, as long as they’re not from my teams, hah :)

> really bizarre bugs and incidents that make the news in the next 2-3 years

I take it that you are not using Windows 11


If you can't deliver features faster with AI assistance, then you're either using it wrong or working on very specialized software that AI can't handle yet.

I haven't seen any evidence yet that using AI is improving developer performance, just a bunch of people who "feel" like it does.

I'm still on the fence about codegen, but it certainly helps explain code quickly without manually stepping through it, and provides quick access to docs.

I've built a SaaS (with paying customers) in a month that would easily have taken me 6 months to build at this level of quality and features. AI wrote, I'd say, 99.9% of the code. Without AI I wouldn't even have attempted it, because it would have been too large a task.

In addition, for my old product, which is 5+ years old, AI now writes 95%+ of the code for me. The programming itself now takes a small percentage of my time, freeing me up for other tasks.


No one serious is claiming 6x productivity improvements at close-to-equal quality.

This is proving GP's point that you're going off feels and/or exaggerating.


Quality is better from both a user and a code perspective.

From a user perspective, I often implement a feature and then just throw it away, no worries, because I can reimplement it again in an hour based on my findings. No sunk cost. Also, I can implement very small details that I'd otherwise have to backlog. This leads to a higher-quality product for the user.

From a code standpoint, I frequently do large refactors that also would never have been worth it by hand. I have a level of test coverage that would be infeasible for a one-man show.


> I have a level of test coverage that would be infeasible for a one-man show.

When a metric becomes a target, it ceases to be a good metric.


Cool. What's the product? Like, do you have a link to it or something?

It's boring, glorified CRUD for SMBs in a certain industry, focused on compliance and workflows specific to my country. Think your typical inventory, ticketing, and CRM, plus industry-specific features.

Boring stuff from a programming standpoint, but stuff that helps businesses, so they pay for it.


Okay, but where's the product? You described it, but didn't share it.

> Anthropic is successfully coding Claude using Claude.

Claude is one of the buggiest pieces of shit I have ever used. They had to BUY the creators of Bun to fix the damn thing. It is not a good example of your thesis.


You and the GP are conflating Claude (the company, or its flagship model Claude Opus) with Claude Code, a state-of-the-art coding assistant that admittedly has a slow and buggy React-based TUI (output quality is still very competitive).

Yep, it's a rather depressing realization, isn't it. Oh well, life moves on, I suppose.

I think we realistically have a few years of runway left though. Adoption is always slow outside of the far right of the bell curve.


I'm sorry if I pulled everybody down... but it's been many months since Gemini and Claude became solid tools, and I regularly have this strong gut feeling. I tried re-evaluating my perception of my work, goals, and value... but I keep going back to nope.

After a multi-decade career that spanned what is rapidly seeming like the golden age of software development, I have two emotions: first, gratefulness; second, a mixture of resignation, maudlin reflection, and bitterness that I am fighting hard to resist.

As someone who’s always wanted to “get home and code something on my own”, I do have a glimmer of hope, and I wonder if others share it. I’ve worked extensively with Claude, and there’s no question I am now a high-velocity “builder”, and my broad experience has some value here. I am sad that I won’t be able to look deeply at all the code I am producing, but I am making sure the LLM and I structure things so that I could eventually dig into modules if needed (unlikely to happen, I suppose).

Anyway, my hope/question: if I embrace my new role as a fast system builder and I am creative in producing systems that solve real problems “first”, is there a path to making that a career (i.e., 4 friends and I cranking out real production software that fills a real niche)? There must be some way for this to succeed; I am not yet buying the “everything will be instantly copyable and so any solution is instantly commodity” argument. If that’s true, then there is no hope. I am still in shape, though, so going pro in pickleball is always an option, ha ha.


Unfortunately, you aren't a high-velocity builder. The velocity curve has now shifted: everyone having Claude blast out LOC after LOC is now a high-velocity builder. And when everyone is a high-velocity builder... nobody is.

“And when everyone’s super, no one will be”.

Fair point, but my hope is that the creativity involved in deciding what to build, with the choice informed by engineering experience (the project/value will not be obvious to everyone), will allow differentiation.


"creativity involved in deciding what to build, with the choice informed by engineering experience (the project/value will not be obvious to everyone) will allow differentiation."

How? Anyone who sees your digital product can just prompt the same thing in no time. If you can prompt it, I can prompt it, and so can a million other people.

Nobody, whether an individual or a business, holds any uniqueness or advantage. All careers and skill sets are leveled and worthless. Implementation skills are worthless. Creativity is worthless.

The only valuable thing is data.


Agreed on the value of data, but as mentioned above I am not yet buying the “everything will be instantly copyable and so any solution is instantly commodity” argument… a CRUD web app, sure; something with significant back-end complexity or a multi-service, systems-level solution, not so much. Perhaps optimistic, admittedly. Cheers.

I hear you. And maybe you're right. Maybe I'm deluding myself, but when I look at my skilled colleagues who vibecode, I can't understand how this is sustainable. They're smart people, but they've clearly switched off. They can't answer non-trivial questions about the details of the stuff they (vibe-)delivered without asking the LLM that wrote it. Whoever uses the code downstream isn't gonna stand (or pay!) for this long-term! And the skills of the (vibe-)authors will rapidly disappear.

Maybe I'm just as naive as those who said that photographs lack the soul of paintings. But I'm not 100% convinced we're done for yet, if what you're actually selling is thinking, reasoning and understanding.


The difference from a purely still photograph is that code is a functional encoding of an intention. An LLM's code could be perfect and still not encode the intended product; I've seen that on many occasions. Many people don't understand what code is really about and think they now have a printer and we no longer need pencils. That's not at all the same thing. Code is intention, logic, and a specific use case all at once. With a non-deterministic system and vague prompting, there will be intentions misinterpreted by the LLM, because the model makes decisions in order to move forward. The problem is the scale of it: we're not talking about 1,000 LOC. In a month you can generate millions of LOC; in a year, hundreds of millions.

Some will have to crash and burn their company before they realize that having no human at all in the loop is nonsense. Let them touch the fire and make up their minds, I guess.


> Code is intention, logic, and a specific use case all at once. With a non-deterministic system and vague prompting, there will be intentions misinterpreted by the LLM, because the model makes decisions in order to move forward. The problem is the scale of it: we're not talking about 1,000 LOC. In a month you can generate millions of LOC; in a year, hundreds of millions.

People are also non-deterministic. When I delegate work to a team of five or six mid-level developers, or God forbid outsourced developers, I’m going to have to check and review their work too.

It had been over a decade since my vision/responsibility could be carried out by just my own two hands and done on time within 40 hours a week... until LLMs.


People are indeed not deterministic. But they are accountable. In the legal sense, of course, but more importantly, in an interpersonal sense.

Perhaps outsourcing is a good analogy. But in that case I'd call it outsourcing without accountability. LLMs feel more like an infinite chain of outsourcing.


As a former tech lead and now a staff consultant who leads cloud implementations plus app dev, I am ultimately responsible for making sure that projects are done on time, on budget, and meet requirements. Neither my manager nor the customer would allow me to say it’s one of my team members’ fault that something wasn’t done correctly, any more than I could say “don’t blame me, blame Codex.”

I’ve said repeatedly over the past couple of days that if a web component was done by someone else, it might as well have been created by Claude; I haven’t done web development in a decade. If something isn’t right or I need modifications, I’m going to have to either Slack the web developer or type a message to Claude.


Ofc people are non-deterministic. But usually we expect machines to be deterministic; that’s why we trust them blindly and don’t check their calculations. We review people’s work all the time, though. Here, people will stop reviewing the LLM’s code, treating it as a source of truth like in other areas. That’s my point: reviewing code takes time, and even more time when no human wrote it. It’s a dangerous path to stop reviews out of trust in the machine, now that the machine is kind of like humans: non-deterministic.

No one who has any knowledge or who has ever used an LLM expects determinism.

And there are no computer professionals who haven’t heard about hallucinations.

Reviewing whether the code meets requirements through manual and automated tests (and that’s all I cared about when I had a team of 8 under me) is the same regardless. I wasn’t checking whether John used a for loop or a while loop in between my customer meetings and my meetings with the CTO. I definitely wasn’t checking the SOQL (not a typo) of the Salesforce consultants we hired. I was testing inputs, outputs, and UX.


Having a team of 8 people producing code is manageable. Having an AI with 8 agents writing code all day long is not the same volume; it can generate more code in a day than one person can review in a week. What you’re saying is that product teams will prompt what they want to a framework, and the framework will take care of spec analysis, development, reviews, and compliance with the spec. Product teams with QA will make sure the delivery is functionally correct; no humans need to verify anything code-related. What we don’t know yet is whether AI will still produce solid code through the years, since it’s all statistical analysis; with the volume of millions of LOC, refactoring needed, data migrations, etc., what will happen?

For context, I just started using coding agents, Codex CLI and Claude Code, in October. Once I saw that you had to be billed by usage, I wasn’t going to use my own money for it when it’s for a company.

Two things changed: Codex CLI now lets you use it with your $20-a-month subscription (and I have never run into quota issues with it), and my employer signed up for the enterprise version of Claude, where we each have an $800-a-month allowance.

My argument, though, is for the most part “why should I care about the code?” If I were outsourcing a project or delegating it to a team lead, I would be asking high-level architecture, security, and scalability questions.

AI generated the code; AI maintains the code. I am concerned with abstractions and architecture.

You shouldn’t have to maintain or refactor “millions of lines of code.” If your code is well modularized with clean interfaces, making a change for $x7 may mean making a change for $x1…$x6, but you should still be working locally in one module at a time. You would do the same for the benefit of human coders. Heck, my little 5-week project has three independently deployable repos in a root folder. My root AGENTS file just has a summary of how all three relate via a clean interface.

In the project I am working on now, besides “does it meet the requirements”, I care about security, scalability, concurrency, user experience for the end user, user experience for the operations folks when they need to make config changes, and user experience for any developers who have to make changes long after I’m off this project. I haven’t looked at a single line of code, besides the CloudFormation templates. But I can answer any architectural question about any of it. The architecture and abstractions were designed by me and dictated to the agents.

On this particular project, at the coding level, there is absolutely nothing that application code like this can do that could be insecure, except hypothetically embedding AWS credentials in the code. But it can’t do that either, since it doesn’t have access to them [1].

In this case the security posture comes from the architecture: S3 Block Public Access, well-scoped IAM roles, not running “in a VPC”. Those are things I check in the infrastructure as code, and I was very specific about them.
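
To make that concrete, here's a minimal CloudFormation sketch of the kind of property such a check looks for (the resource name is hypothetical):

    Resources:
      DocumentsBucket:
        Type: AWS::S3::Bucket
        Properties:
          # S3 Block Public Access, enforced per bucket
          PublicAccessBlockConfiguration:
            BlockPublicAcls: true
            BlockPublicPolicy: true
            IgnorePublicAcls: true
            RestrictPublicBuckets: true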

The user experience has to come from design and manual checking.

I mentioned earlier that my first stab at it scaled poorly. This was caused by my design, and I suspected beforehand that it would be. But building the first version was so fast because of AI tools that I felt no pain in going with my more architecturally complicated plan B and throwing the first version away. I wouldn’t have known that by looking at the code; the code was fine, it was the underlying AWS service. I could only know that by throwing 100K documents at it instead of 1,000.

I designed a concurrent locking mechanism that had a subtle flaw. Throwing the code into ChatGPT in thinking mode, it immediately found the flaw. I might have been better off just telling the coding agents “design a locking mechanism for $x” instead of detailing it myself.

Even maintainability was helped, because I knew that I, or anyone else who touched it, would probably be using an LLM. From the get-go I threw the initial contract, the discovery session transcripts, the design diagrams, the review of the design diagrams, and my project plan and breakdown into ChatGPT and told it to render a detailed markdown file of everything; that was the beginning of my AGENTS.md file.

I asked both Codex and Claude to log everything I was doing and my decisions into separate markdown files.

Any new developer could come into my repo and fire up Claude, and it wouldn’t just know what was coded; it would have the full context of the project, from the initial contract through to delivery.

[1] Code running on AWS never has to explicitly handle AWS credentials; the SDKs can find them on their own, using the IAM role attached to the EC2 instance, Lambda, Docker container, etc.

Even locally, you should be using temporary credentials assigned to environment variables, which the SDK retrieves automatically.
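
To illustrate, a minimal Python sketch with boto3 (the bucket name is hypothetical): no keys appear anywhere, and the SDK resolves them from its default provider chain.

    import boto3

    # No access keys in the code: boto3 walks its default credential chain
    # (env vars, shared config, then the IAM role attached to the EC2
    # instance / Lambda / container) and fetches temporary credentials.
    s3 = boto3.client("s3")

    # "example-bucket" is a hypothetical name for illustration.
    resp = s3.list_objects_v2(Bucket="example-bucket")
    for obj in resp.get("Contents", []):
        print(obj["Key"])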


There are so many types of requirements, though. Security is one; performance is another. No one has cared about while vs. for for a long time.

Okay, and the person ultimately leading the team is still responsible for it, whether you are delegating to more junior developers or to AI. You’re still reviewing someone else’s code against your specs.

I have this nagging feeling that I’m skimming text more and more, not just what the LLMs output, but all types of text. I’m afraid people will get too lazy to read when the LLM is almost always right. Maybe it’s a silly thought. I hope so!

This is my fear too.

People will say "oh, it's the same as when the printing press came, people were afraid we'd get lazy from not copying text by hand", or point to any of a myriad of other innovations that made our lives easier. I think this time it's different, though, because we're talking about offloading the very essence of humanity: thinking. Sure, getting too lazy to walk after cars became widespread was detrimental to our health, but if we get too lazy to think, what are we?


There are some YouTube videos about the topic, be it high-school pupils addicted to LLMs or adults losing skills, and not only devs; society is starting to see strange effects.

Can you provide links to these videos?

This one is in French (hope you don't mind): https://youtu.be/4xq6bVbS-Pw?t=534 mentions the issues for students and other cognitive issues.

> In most teams, coding - reading, writing, and debugging code - used to be the part that took engineers the most time, but that is no longer the bottleneck.

This is just a blatant lie, lmao. It's the core tenet that all these AI takes rely on, and it is just flat-out not true.


Something being successful and something being a high-quality product with good engineering are two completely different questions.

"Create the problem, sell the solution" remains an undefeated business strategy.

"there are security flaws in the 'tell an llm with god perms to do arbitrary things hub'"

Is such an obvious statement it loses all relevant meaning to a conversation. It's a core axiom that no one needs stated.

