
Ever since AI came out I’ve been talking about the prompt-to-output ratio. We naturally assume the prompt will be smaller than the output just because of the particulars of the systems we use, but as you get more and more particular about what you want, the prompt grows while the output stays the same size. This is logical. If instead of writing an essay I just describe what I want the essay to say, the description is necessarily gonna be a larger amount of text than the essay itself. It’s more text to describe what’s said than to just say it. The fact that we expect to put in less effort and get back more output indicates exactly what we’re getting here: a bunch of filler.

In that way, the prompt is more interesting, and I can’t tell you how many times I’ve gone to go write a prompt because I dunno how to write what I wanna say, and then suddenly writing the prompt makes that shit clear to me.

In general, I’d say that AI is way more useful for compressing complex ideas into simple ones than for expanding simplistic ideas into complex ones.



This is why it’s unlikely these systems will effectively replace software development. By the time you’ve specified the novel system you want to build well enough in English that you get exactly the system you want, you might as well have written the code.


Yep. To put it another way: in a scenario where you want to say something, you can’t outsource what you want to say to anyone. It doesn’t matter whether you want to say it in code or in English.


This is simply not true.

I can describe a novel physics model for a video game. I can do a refresher on concepts like friction, air resistance, gravity, etc. that I don't remember well from school. Then I can describe the constraints and generate code to satisfy them.

If I were to go and learn the physics really in depth and then code it myself, it would take 10x longer.
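
For concreteness, here is a minimal sketch of the kind of code such a prompt might produce: a 2D point-mass update with gravity, linear air drag, and ground friction. The constants, names, and integration scheme are illustrative assumptions, not anything from this thread.

  # Minimal 2D point-mass update: gravity, linear air drag, and
  # kinetic ground friction. Constants are illustrative, not tuned.
  GRAVITY = -9.81      # m/s^2, acting on the y axis
  AIR_DRAG = 0.4       # linear drag coefficient (per second)
  FRICTION_MU = 0.6    # kinetic friction coefficient

  def step(pos, vel, dt, on_ground):
      """Advance (pos, vel) by dt seconds with semi-implicit Euler."""
      ax, ay = 0.0, GRAVITY

      # Air resistance opposes velocity on both axes.
      ax -= AIR_DRAG * vel[0]
      ay -= AIR_DRAG * vel[1]

      # Friction opposes horizontal motion only while touching the ground.
      if on_ground and vel[0] != 0.0:
          ax -= FRICTION_MU * abs(GRAVITY) * (1.0 if vel[0] > 0 else -1.0)

      vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
      return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)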


The comparison is to using a physics library. Only in the LLM case are you trying to write the physics engine yourself. And if it's not the kind of physics that's in a library, yes, you will need to learn it to ship a game.
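
For comparison, the library route this reply describes might look something like the sketch below. pymunk is used only as one example of an off-the-shelf 2D engine, and the scene setup is an illustrative assumption.

  import pymunk

  # A space with downward gravity (units are arbitrary/illustrative).
  space = pymunk.Space()
  space.gravity = (0, -900)

  # A dynamic ball; the library handles integration, contacts, and friction.
  mass, radius = 1.0, 10.0
  body = pymunk.Body(mass, pymunk.moment_for_circle(mass, 0, radius))
  body.position = (50, 300)
  ball = pymunk.Circle(body, radius)
  ball.friction = 0.6
  space.add(body, ball)

  # A static ground segment for the ball to land on.
  ground = pymunk.Segment(space.static_body, (0, 0), (400, 0), 1.0)
  ground.friction = 0.8
  space.add(ground)

  # Step the simulation at 60 Hz for three seconds.
  for _ in range(180):
      space.step(1 / 60.0)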


Well… you’re forgetting the part where you can cut out the middleman. Currently a leader has to ask an engineer to build a system, and has to communicate effectively with the engineer until all of the novel details have been ironed out in the specification, and only then does the engineer build it.

In a world where the LLM can do the building, the engineer is no longer required.


More often than not, the building is the easy part once the specifics are ironed out.

In my experience, an ideas leader (you know the type) will fail at telling a machine exactly what to do and get bored with the inevitable edge cases, computers saying no, and non-ideas drudgery. This is where I believe every no-code, low-code, and WYSIWYG platform, and now LLMs, fall apart.

A major aspect of programming is translating the messy meatspace into something an extremely fast moron (a computer; I wish I had coined this term) understands. And as much of a step change as LLMs are for writing code, I have yet to see them take this step.


You just turned your leader into an engineer, is all.


They've tried this before and they made COBOL. Turns out you still need programmers to write COBOL because it's still programming even if the program looks Englishy.


I don't think it's the prompt/output ratio that matters, it's the speed of getting to the output.

If I spend 1 hour writing 500 words of prompt, attach X additional rows of data (e.g. rows from a table), and the LLM returns X rows of perfect answers, it shouldn't matter that the prompt-to-output ratio is worse than if I had typed those characters myself.

The important thing is whether, within that 1 hour (plus a few minutes of LLM processing), I managed to get the job done quicker or not.

It's similar to programming: using LLMs is not necessarily about writing better code than I personally could, but about writing good enough code much faster than I ever would.


> In that way, the prompt is more interesting, and I can’t tell you how many times I’ve gone to go write a prompt because I dunno how to write what I wanna say, and then suddenly writing the prompt makes that shit clear to me.

Bingo. It can be a rubber duck that echoes your mistakes back. Unfortunately, as other commenters have pointed out, the prompt may not be as interesting/iterative as we might suppose: "Here's the assignment, what's the answer".



