
Honestly, I think we're way, way off from what you're suggesting about giving it inputs and getting specifications. This blog post shows ChatGPT recreating, with very specific instructions, things it has seen before. The hard part of defining requirements is being specific in them, and that's what programmers do. It's pretty common in these threads to see people talking about ChatGPT and friends being totally wrong without being aware of how wrong they are.

> Is "LLM, go register me an EC2 instance and configure it not to go over $20/mo, here's my credit card number" totally out of the question?

I suspect we are _way_ closer to that than to having it respond to "make me a game like Clash of Clans".
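
For what it's worth, here's roughly what that task looks like today as a boto3 sketch. The AMI ID, account ID, and email are placeholders, and one honest caveat: AWS Budgets alerts on spend rather than hard-capping it, so "not over $20/mo" really means "tell me (or some script) when it's trending over":

    import boto3

    # Launch a small instance. ImageId is a placeholder AMI.
    ec2 = boto3.client("ec2")
    ec2.run_instances(
        ImageId="ami-xxxxxxxx",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )

    # Attach a $20/mo cost budget with an email alert at 80% of the
    # limit. AWS Budgets notifies; it doesn't stop the spend itself.
    budgets = boto3.client("budgets")
    budgets.create_budget(
        AccountId="123456789012",  # placeholder account ID
        Budget={
            "BudgetName": "monthly-cap",
            "BudgetLimit": {"Amount": "20", "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL",
                             "Address": "me@example.com"}],
        }],
    )

Two well-documented API calls, which is exactly the kind of thing these models regurgitate well. "Make me Clash of Clans" is not.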




I don't think your intuition is ambitious enough. We have 12-year-olds making roguelikes and platformers today, with templates and toolkits. Sure, the initial LLM-made apps will suck; the initial everything sucks. But a couple of years of progress on something this new will be considerable. We've barely started tuning LLMs for specific use cases!

> The hard part of defining requirements is being specific in them

True, but I don't think you're accounting for iteration. You know the "idea guy" cliche? Today, if/when this "idea guy" hires a team of devs from India to make his million-dollar idea into reality, his inability to write good requirements will lead to a useless app, and to the idea guy asking for changes (that are also poorly described) until he runs out of money, right?

Now imagine that his "dev team" works for free and can turn around each new set of requirements in minutes rather than months. How long will it take him to learn how to write good acceptance criteria? A day? A week? It's hard but not that hard. And a million or two other "idea guys" are doing the same thing, and sharing notes with each other.


The fact that we have 12-year-olds asset-flipping from tutorials today shows the value of those asset flips. If ChatGPT cannibalises that market, I will then be as scared of ChatGPT taking my work as I am of a 12-year-old who can follow a Unity tutorial right now.

> It's hard but not that hard.

It really is that hard. It's basically 90% of a staff+ engineer's job.

I don't doubt that AI, and ML in particular, will absolutely change our industry, but as an engineer I see ChatGPT as a tool like Copilot, a debugger, or a linter. All those tools make me an order of magnitude more productive, but they're useless without a human making the decisions.


Writing requirements is not usually done by staff+ engineers, is it? An awful lot of working software, the majority I'd guess, is built from requirements written by a non-technical PM who wasn't even in the software industry three years ago. I wonder if you're too good a programmer to have your thumb on the pulse of the kind of software bad programmers make. Because that's where the real disruption will come from. The question is not "will some bad programmers lose their jobs", it's "what will happen when the kind of simple software that used to be made by bad programmers becomes effectively free?"


The working requirements of "when I click X, do Y" come from POs, sure. But the remaining implied requirements (what happens if there's an error in your app? How do you identify sensitive data that needs to be handled differently to "basic" data?) are, in my experience working with teams of varying degrees of experience, defined by engineering.

We're talking about being a couple of years away from ChatGPT being able to copy and paste from tutorials here, and with my understanding of AI (I've done a few ML projects in the last decade, but nothing that stuck), that's the "easy" part. The impressive part of things like Midjourney is the synthesis, and we're not really seeing that, or any signs that it's coming, IMO.


> what happens if there's an error in your app? How do you identify sensitive data that needs to be handled differently to "basic" data?

Nothing happens, and you don't. That's what I meant by "bad software". I'm reminded of a story Clay Shirky told about AT&T analysts from the mid-90s trying to understand the web hosting business[0]:

> The ATT guys had correctly understood that the income from $20-a-month customers wouldn’t pay for good web hosting. What they hadn’t understood, were in fact professionally incapable of understanding, was that the industry solution, circa 1996, was to offer hosting that wasn’t very good. This, for the ATT guys, wasn’t depressing so much as confusing... For a century, ATT’s culture had prized - insisted on - quality of service; they ran their own power grid to keep the dial-tone humming during blackouts. ATT, like most organizations, could not be good at the thing it was good at and good at the opposite thing at the same time. The web hosting business, because it followed the “Simplicity first, quality later” model, didn’t just present a new market, it required new cultural imperatives.

0: https://gwern.net/doc/economics/2010-04-01-shirky-thecollaps... (Shirky's old site has been down for some time)


You're right that it still needs a skilled person to ask the right prompts, and I don't see that changing anytime soon.

But if a few people asking the right prompts is all you need, what happens to the other 50-100+ people a game like Clash of Clans would normally employ?


I think ChatGPT is actually better at creating the right prompts than at answering them.



