I get that you are being sarcastic, but let's actually consider your idea more broadly.
- Machine code
- Assembly code
- LLVM IR
- C code (high level)
- VM IR (byte code)
- VHLL (e.g. Python/JavaScript/etc.)
So, we already have hierarchical stacks of structured text. The fact that we are extending this to higher tiers is in some sense inevitable. Instead of snark, we could genuinely explore this phenomenon.
LLMs are allowing us to extend this pattern to domains other than specifying instructions to processors.
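One concrete way to see two adjacent tiers of this stack is Python's own `dis` module, which prints the VM bytecode a VHLL function compiles down to. This is just a minimal sketch; the exact opcodes it prints vary between CPython versions.

```python
import dis

def add(a, b):
    # A one-line function at the VHLL tier...
    return a + b

# ...and the VM IR (bytecode) tier it compiles down to.
dis.dis(add)
```

The same program exists simultaneously at both tiers; we just rarely look below the one we write in.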
And then we basically re-invent the wheel. You have to use very specific prompts to make the computer do what you want, so why not just, you know... program it? It's not that hard.
Natural language is trying to be a new programming language, one of many, but it's the least precise one imho.
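To make the imprecision concrete, consider a hypothetical English instruction (invented purely for illustration): "apply a 10% discount to orders over $100, plus 8% tax". Both functions below are faithful readings of that sentence, yet they disagree on whether the $100 threshold is checked before or after tax:

```python
def total_discount_then_tax(amount, tax_rate=0.08):
    # Reading 1: check the threshold and discount the pre-tax amount,
    # then apply tax to the result.
    if amount > 100:
        amount *= 0.90
    return amount * (1 + tax_rate)

def total_tax_then_discount(amount, tax_rate=0.08):
    # Reading 2: apply tax first, then check the threshold and
    # discount the taxed total.
    amount *= (1 + tax_rate)
    if amount > 100:
        amount *= 0.90
    return amount

# A $95 order crosses the $100 threshold only after tax,
# so the two readings diverge:
print(round(total_discount_then_tax(95), 2))   # 102.6
print(round(total_tax_then_discount(95), 2))   # 92.34
```

Code forces you to pick one reading; the English sentence quietly permits both.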
I think we can agree these are both documents written in natural language. They underpin the very technology we are using to have this discussion. It doesn't matter to either of us what platform we are on, or what programming language was used to implement them. That is not a flaw.
Biological evolution shows us how far you can get with "good enough". Perfection and precision are highly overrated.
Let's imagine a wild future, one where you copy-and-paste the HTML spec (a natural language doc) into a coding agent and it writes a complete implementation of an HTML agent. Can you say with 100% certainty that this will not happen within your own lifetime?
In such a world, I would prefer to be an expert in writing specs rather than to be an expert in implementing them in a particular programming language.
> Biological evolution shows us how far you can get with "good enough".
Those are valid points when speaking about human-human interaction, where we can allow ourselves much more freedom in forming thoughts. But still, written text will never be as expressive as face-to-face communication, where you can hear, see and feel emotions. So even for humans, raw text is not enough. But it's good enough for biological brains.
When speaking about human-computer interaction, you are just being ignorant. Programming is engineering, and engineering is not biology. Biology allows itself freedom and randomness; in engineering we avoid that kind of thing unless it's part of the job. Would you use a "good enough" banking system? Would you drive a car with "good enough" safety features? Would you happily cross a bridge that is just "good enough"?
We invented computers to replace humans in critical jobs requiring fast and precise actions. Natural language is not sufficient to give precise instructions concisely because it was never meant for that. Sure, you can hammer nails with pliers, but... we have hammers.
> In such a world, I would prefer to be an expert in writing specs rather than to be an expert in implementing them in a particular programming language.
Words of a man who didn't choose his career path properly.
> Natural language is not sufficient to give precise instructions
On consideration, this is the core of our disagreement and it is probably one that we will not see past.
I have to assume that you see human/computer interactions as master-slave relations: you are the master providing instructions, and the computer is a slave that MUST do precisely what you instruct it to do. I would wager that you feel frustrated when you deal with other people who choose to do things differently from how you want them done.
One thing I was taught when I became a manager was to focus on outcomes. I should explain to people what result I wanted, rather than being really picky about the implementation. I found that sometimes people could realize my desired outcomes in a way that was better than I would have done it myself. It was enlightening.
There is an idiom in English about winning the battle but losing the war: a person becomes so obsessed with controlling tiny details that they lose sight of the overall goal.
I doubt we will see eye-to-eye on this, but I am arguing that we will have better outcomes if we obsess less about the details of how computers implement our goals and we spend more time focusing on the goals we want to achieve.
> In this world where the LLM implementation has a bug in it that impacts a human negatively (the app could calculate a person's credit score, for example)
I couldn't even tell you who is liable right now for bugs that impact humans negatively. Can you? If I was an IC at an airplane manufacturer and a bug I wrote caused an airplane crash - who is legally responsible? Is it me? The QA team? The management team? Some third-party auditor? Some insurance underwriter? I have a strong suspicion it is very complicated as it is, without even considering LLMs.
What I can tell you is that the last time I checked, laws are written in natural language, and they are argued for/against and interpreted in natural language. I'm pretty confident that there is applicable precedent and that the court system is already well equipped to deal with autonomous systems.
> If I was an IC at an airplane manufacturer and a bug I wrote caused an airplane crash - who is legally responsible?
I am not sure it is that complicated, from a legal perspective. It is the company hiring you that would be legally responsible. If you are an external consultant, things may get more complicated, but I am pretty sure that for mission-critical software, companies wouldn't use external consultants (for this particular reason, but also many others).
I agree with this. There's so much snake oil at the moment. Coding isn't the hard part of software development, and we already have unambiguous languages for describing computation. Human language is a bad choice for that, as we already find when writing specs for other humans. Adding more humanness to the loop isn't a good thing, IMHO.
At best an LLM is a new UI model for data. The push to get them writing code is bizarre.
> Coding isn't the hard part of software development
It actually is a relief when, after hours and days of attending meetings and writing documentation, I can finally sit in front of my IDE and let my technical brain enjoy being pragmatic.