Given how AI companies are advertising, we should be able to just tell the AI what we want and have it done with no additional human interaction. So why do we need a new type of development platform? We shouldn't need to collaborate at all.
The why is simple: "because we need the money." The idea is just a vehicle to get that money. Whether or not it works or makes sense is secondary to "but does it make the children dream, pa-pa?"
It won't be called coding soon. Sometime in the future (soon?) we won't be talking about code at all. The few leftovers/managers/CEOs will only be talking about products, not code, not programming, not even operating systems. You won't hear about pull requests, or databases, or HTTP, or any of that. You won't talk about programmers. At least not outside of "hobbies".
>> Why would I spend time babysitting an LLM when I could have just done it myself
Exactly this. From what I understand, an LLM has a limited context, will get that context wrong anyway, and that context is balanced on a knife's edge and easily lost.
I'd rather mentor developers and build a team of living, breathing, thinking, compassionate humans who then in turn can mentor other living, breathing, thinking, compassionate humans.
If we're honest: you are not being stopped from mentoring developers. You're saying you want to do it on time paid for by others, while competing with teams that aren't doing that.
So the actual pain point is compensation, and IMO we should directly call that out and address it.
If it gets it right, that is. I'd like someone to show me their AI coding flow on a brand-new install and see it get things right. I must be broken, because when I use Claude Code it can't get a Gradle build file right.
Serious question: so what then is the value of using an LLM? Just autocomplete? So you can use natural language? I'm seriously asking. My experience has been frustrating. I had the whole thing designed, and the LLM gave me diagrams and code samples; I had to tell it three times to go ahead and write the files, and had to convince it that the files didn't exist so it would actually write them. Then when I went to run it, errors ... in the build file ... the one place there should not have been errors. And it couldn't fix those.
The value is pretty similar to autocomplete, in that sometimes it's more efficient than manually typing everything out. Sometimes selecting the right completion would take longer than typing it manually, so you type it instead; and sometimes what you want isn't something autocomplete can offer at all, so you do it manually for that reason.
Like autocomplete, it's going to work best if you already know what the end state should be and are just using it as a quicker way of getting there. If you don't already know what you're trying to complete, you might get lucky by just tabbing through to see if you find the right result, or you might spend a bunch of time only to find out that what you wanted isn't coming up for what you've typed/prompted and you're back to needing to figure out how to proceed.
I mean, it's not actually autocomplete. But it serves the same role. I know approximately what I want to type, maybe some of the details like argument-order are a bit foggy. When I see the code I recognize it as my own and don't have too much trouble reading it.
But I use LLMs one level higher than autocomplete, at the level of an entire file. My prompts tend to look like "We need a new class to store user pets. Base it on the `person` class but remove Job and add Species. For now, Species is an enum of CAT,DOG,FISH, but we'll probably turn that into a separate table later. Validate the name is just a single word, and indicate that constraint when rendering it. Read Person.js, CODE_CONVENTIONS.md, and DATA_STRUCTURES.md before starting. When complete, read REFACTOR.md"
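For concreteness, here's a sketch of roughly what a prompt like that might yield. This is my own illustration, not output from the thread: the names (`Pet`, `Species`, `render`) are hypothetical, and a real agent following `CODE_CONVENTIONS.md` would differ in the details.

```javascript
// Hypothetical sketch of the kind of class the prompt above describes.
// For now Species is a frozen enum-like object; the commenter notes it
// may become a separate table later.
const Species = Object.freeze({
  CAT: 'CAT',
  DOG: 'DOG',
  FISH: 'FISH',
});

class Pet {
  constructor(name, species) {
    // Validate the name is a single word, per the stated constraint.
    if (!/^\S+$/.test(name)) {
      throw new Error('Pet name must be a single word');
    }
    if (!Object.values(Species).includes(species)) {
      throw new Error(`Unknown species: ${species}`);
    }
    this.name = name;
    this.species = species;
  }

  // Indicate the single-word constraint when rendering, as requested.
  render() {
    return `${this.name} (${this.species}) [name: single word]`;
  }
}
```

The point isn't the code itself, it's that the prompt pins down enough structure (base class, fields to add/remove, validation rules, conventions files to read) that the agent's output lands close to what you'd have typed yourself.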
With the inclusion of code examples and conventions, the agent produces something pretty close to what I'd write myself, particularly when dealing with boilerplate Data or UI structures. Things that share common structure or design philosophy, but not common enough to refactor meaningfully.
I still have to read it through and understand it as if I'd written it myself, but the LLM saves a lot of typing and acts as a second pair of eyes. Codex currently is very defensive. I have to remove some unnecessary guardrails, but it will protect against rare issues I might not have noticed on my first pass.
It does NOT remain to be seen. https://www.cnbc.com/2025/09/26/accenture-plans-on-exiting-s... Big players are already moving in the direction of "join us or leave us". So if you can't keep up and you aren't developing or "reinventing" something faster with the help of AI, it was nice knowing you.
I didn't say don't use AI at all, I said give it the boilerplate, rote work. Developers can still work on more interesting things. Maybe not all the interesting things.
That may be fine ... if it remains your choice. I'm saying companies are outmoding people (programmers, designers, managers, et al) who don't leverage AI to do their job the fastest. If one programmer uses AI to do boilerplate and then codes the interesting bits personally and it takes a week and another does it all with AI (orchestrating agents, etc) and it takes 2 hours and produces the same output (not code but business value), the AI orchestrator/manager will be valued above the former.
Yes! I am not advocating for the 2 hours and the "vision" of managers and CEOs. Quite the contrary. But it is the world we live in for now. It's messy and chaotic and many people may (will?) be hurt. I don't like it. But I'm trying to be one of the "smart people". What does that look like? I hope I find out.
I don't like it, either. I hear people ranting about doing "everything with AI" on one meeting, and what a productivity boost it is, then I get tagged on a dumpster fire PR full of slop and emoji filled log statements. Like did you even look at your code at all? "Oh sorry I don't know how that got in there!"
People will pay for quality craftsmanship they can touch and enjoy and can afford and cannot do on their own, like woodworking. Less so for quality code and apps, because (as the Super Bowl ads showed us) anyone can create an app for their business and it's good enough. The days of high-paid coders are nearly gone. The seniors and principals will hang on a little longer. Those who can adapt to business-analyst or project-manager mode will as well (CEOs have already told us this: adapt or get gone), but eventually even they will be outmoded, because why buy an $8000 couch when I can buy one for $200 and build it myself?
In 2003, the best AI could do was the MS Word grammar check giving unnecessary false positives about sentence fragments and Clippy asking if you wanted help writing $template[n]. 20 years from now, I would not be surprised if the job title "programmer" (etc.) goes the same way as the job title "computer".
https://www.youtube.com/watch?v=aJUuJtGgkQg
* This is snarky. Yes. But seriously.