This is a good thing. Native was always a place where gatekeeping and proprietary crap sprout and thrive.
It can't die soon enough. It doesn't all have to be Electron, but web tech wins: push everything to the browser unless you'd like to write it twice.
This is an article that's so ahead of its time that it's likely to be ignored. The TL;DR is that true agentic development doesn't improve the software dev lifecycle, it throws huge chunks of it in the trash.
When your context, environment, and constraints are properly designed, many planning, testing, and review stages can simply be skipped. It's remarkable but true.
Yes, LLMs can basically short-circuit the entire product design and development process if you want them to. You can write "Give me a goal tracking app" and pretty reliably one-shot it. Success?
I think a lot of folks would benefit from re-reading the Agile Manifesto [0]. Unfortunately in the corporate world, "Agile" became almost a perfect inversion of the original 12 principles, but in the age of AI, I think it's more relevant than ever. Back when you could only get through a handful of "user stories" per week, there was tremendous pressure on developers to prioritize the "right" ones, which led to more and more layers of planners, architects and delivery leads.
Now the feedback loop between the customer, business and developer is as tight as it always should have been.
That's decidedly not what the article or I said. This only works with carefully controlled context and that context replaces much of the previous SDLC. No one is talking about one-shot or even vibe coding.
As for agile, it was always ceremony (even says so on the tin) and can't die soon enough. A timed sprint makes no sense in an LLM environment. Just ship your damn software and stop having meetings. AI tools get us closer to XP, not agile.
Agreed. People aren’t ready for this, even (maybe especially) on HN.
Everyone’s hung up on how nobody really does waterfall. Of course. But a LOT of people are vibing their code and making PRs and then getting buried in code reviews. Just like the article says, you can’t keep up that way. Obviously. Only agents can review code as fast as agents write it. But I find as of recently that agents review code better than people now, just like how they write it better. Gotta lean into it!
An article about the "most fun" with a new toy ends after the first prompt response?
It might not be as fun after it makes a terrible mistake, breaks something of yours, spends your money, or runs up your token bill. Folks are reporting thousands of dollars in token costs for playing with OpenClaw.
Okay? I suppose your next soapbox lecture will be on combustion engines: how they're "noxious, noisy, unreliable, and elephantine" and vibrate so much they can "loosen one's dentures"?
You know these can be SELF-HOSTED, running LOCAL MODELS via tools like Ollama? Or maybe you just like putting your two cents in without actually doing your own research.
I appreciate your Boomer humor and I get that you don't want to dampen the enthusiasm for these new tools. Nor do I, really, but I think it's disappointing to write an article with so little depth and perhaps unwise to not include any sort of caveats on its use. But that's just my view.
People can tell their story for themselves with these tools, but making a story or a game that's compelling for other people is an entirely different skill, and one you don't get from AI. Like other UGC tools (Roblox, for example) you'll wind up with a few games that are interesting to other people and millions played only by their maker.
It's a good argument. Canned UI and pre-designed user interactions are already done for. We can't sit around and think of what a user might want to do or how to present it. Those concepts have to be fluid and chosen by the user.
While agents that can book a hair appointment are interesting, that's more of a workaround than the kind of UI I think we're going for. The visual appearance of the software itself must change dynamically according not only to the task but to the user's preferences. This is something we haven't seen demonstrated yet.
Glad it resonated. You hit the nail on the head regarding agents being a 'workaround', that's exactly why I categorize them as the 'Transitional phase' rather than the destination. They are essentially bots trying to navigate a web that wasn't built for them.
Your point about the visual appearance changing dynamically is the 'Holy Grail' I touch on in the 'Generative UI' section. We are currently stuck designing static screens for dynamic problems.
I agree we haven't seen a true demonstration yet. Do you think that shift happens at the App level first (e.g., a dynamic Spotify), or does it require a whole new OS paradigm (a 'Generative OS') to work?
Good question! I'd say it happens at the app level first because the context of the OS is too big a surface to start with. But a RAG app for a specific vertical could have enough context to dynamically draw a custom UI for every user, given the constraints on what the app is generally about.
That makes a lot of sense; it's definitely the safer place to start.
It implies that design systems are about to change fundamentally. Instead of shipping a library of static components, we'll need to ship a set of constraints and rules that tell the RAG model how it's allowed to construct the UI on the fly.
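To make that concrete: a minimal sketch of what "shipping constraints instead of components" could look like. Everything here is hypothetical (the manifest shape, the component names, the `validate` helper); it just illustrates a design system expressed as rules the model's generated UI tree must satisfy, rather than as a fixed library of screens.

```typescript
// Hypothetical sketch: a design system shipped as a constraint manifest.
// The model proposes a UI tree; the runtime validates it against the rules.

type UINode = {
  component: string;
  props: Record<string, unknown>;
  children?: UINode[];
};

// The "design system": which components may be used, how deep nesting
// may go, and which props are mandatory per component.
const manifest = {
  allowedComponents: new Set(["Card", "List", "Button", "Chart"]),
  maxDepth: 4,
  requiredProps: { Button: ["label", "action"] } as Record<string, string[]>,
};

// Walk the generated tree and collect every rule violation.
function validate(node: UINode, depth = 0): string[] {
  const errors: string[] = [];
  if (depth > manifest.maxDepth) {
    errors.push(`depth ${depth} exceeds max ${manifest.maxDepth}`);
  }
  if (!manifest.allowedComponents.has(node.component)) {
    errors.push(`component "${node.component}" not allowed`);
  }
  for (const prop of manifest.requiredProps[node.component] ?? []) {
    if (!(prop in node.props)) {
      errors.push(`${node.component} missing required prop "${prop}"`);
    }
  }
  for (const child of node.children ?? []) {
    errors.push(...validate(child, depth + 1));
  }
  return errors;
}
```

A tree the model generates either passes (`validate(tree).length === 0`) and gets rendered, or fails and gets sent back for regeneration. The point is that the manifest, not a screen mockup, is the shipped artifact.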