Hacker News | Garlef's comments

My last experience was with someone giving me answers that were clearly, in part, LLM generated: they contained payloads/configs that did not even match the actual API.

Yes. And AI is bad at design.

But that's why you tell the AI to refactor.

I started a greenfield project and went 100% AI for learning purposes (in practice it's more like 95%), and my takeaway is:

- it's fully possible

-- but the AI is of no great help with figuring out what the architecture or interfaces should be

- Keep a refactoring backlog

-- Spend 30%-40% of your time on refactoring, aligning patterns, improving architecture

-- depending on your codebase, this can happen in parallel

-- sometimes you need to get your hands dirty and do the cleanup yourself

-- ... but usually, you only need to establish the pattern once

- once the patterns are established, it becomes easy to talk to the AI in the context of your codebase

-- you can reference patterns by name or location


re: your last bullet.

This has been very effective in my experience: "See class foo for an example implementation."


> producing code that’s structurally sound enough for someone responsible for the architecture to sign off on

1. It helps immensely if YOU take responsibility for the architecture. Just tell the agent not only what you want but also how you want it.

2. Refactoring with an agent is fast and cheap.

0. ...given you have good tests

---

Another thing: The agents are really good at understanding the context.

Here's an example of a prompt for a refactoring task that I gave to Codex. It worked out great and took about 15 minutes. It mentions a lot of project-specific concepts, but Codex could make sense of it.

""" we have just added a test backdoor prorogate to be used in the core module.

let's now extract it and move it into a self-contained entrypoint in the testing module (adjust the exports/esbuilds of the testing module as needed and in accordance with the existing patterns in the core and package-system modules).

this entrypoint should export the prorogate and also create its environment

refactor the core module to use it from there then also adjust the ui-prototype and package system modules to use this backdoor for cleanup """
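For readers unfamiliar with the "adjust the exports/esbuilds" part of that prompt: a "self-contained entrypoint" in a module typically means a new entry in the package's exports map (and a matching esbuild entry point) so other modules can import the backdoor directly. A sketch with invented names; the `@acme` scope, `dist` paths, and module layout are assumptions, not the thread's actual project:

```json
{
  "name": "@acme/testing",
  "exports": {
    ".": "./dist/index.js",
    "./backdoor": "./dist/backdoor/index.js"
  }
}
```

The core, ui-prototype, and package-system modules would then import from `@acme/testing/backdoor` instead of carrying their own copies.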


Interesting. Could you provide a bit more detail on how the DAG emerges?


2026 paper titled Evolving Programmatic Skill Networks, operationalized in Claude Code


I recently had the pleasure of reading a few books from the "Thomas the Tank Engine" children's content universe to my nephew.

Slop is definitely everywhere.


> So how can you keep generating disposable software on this layer?

Well... if your "users" are paying customers of an XaaS subscription service, then there's probably little need and/or room for disposable UI.

But if you're building something for internal processes with maybe 2-3 users at most, then you might prefer something disposable over launching an under-budgeted project that could be a full-blown SaaS product on its own.


> If it's affecting only a tiny number of users

A tiny number of users out of such an enormous user base (10-16% desktop share) still means there are thousands of users affected.


Maybe a subscription-based payment model would also work in general?

Similar to a gym membership where only a small part of the paying users actually show up.


I'm referring to these kinds of articles as "Look Ma, I made the AI fail!"


Still, I would agree we need some of these articles when other parts of the internet are saying "AI can do everything, sign up for my coding agent for $200/month".


My thoughts went in a different direction: "Maybe I should buy a small tablet so that I can read code properly without carrying a full laptop?"

(Sure, there might be small laptops of similar dimensions... But as the name "laptop" suggests, these are made for a different UX, and they require more effort to turn on and off.)

