Appears to align with good old Ironies of Automation [1]. If humans just review and rubber-stamp results, they do a pretty terrible job of it.
I've been thinking for a while now that in order to truly make augmented workflows work, the mode of engagement is central. Reviewing LLM code? Bah. Having an LLM watch over my changes and give feedback? Different story. It's probably gonna be difficult and not particularly popular, but if we don't stay in the driver's seat somehow, I guess things will get pretty bleak.
Didn't realise the pedigree of the idea went back to 1983.
I read about this in the book "Our Robots, Ourselves", which discussed airline pilots' experience with auto-land systems introduced in the late 1990s/early 2000s.
As you'd expect after having read Ironies of Automation, after a few near misses and not-misses, auto-land is no longer used. Instead, pilots are augmented with head-up displays.
What is the programming equivalent of a head-up display?
Certainly a relatively tight feedback loop, but not too tight. Syntax errors are very tight, but non-negotiable: fix it now.
Test failures are more explicit: you run tests when you want to and deal with the results.
Code review often has a horrible feedback loop: feedback arrives days after you last thought about the change. I think LLMs can help tighten this. But it can't be Clippy; it can't interrupt you with things that _may_ be problems. You have to be able to stay in the flow.
For most things that make programmers faster, I think deterministic tooling is absolutely key, so you can trust it rather blindly. I think LLMs _can_ be really helpful for understanding what you changed and why, and what you may have missed.
Just some random ideas. LLMs are amazing. Incorporating them well is amazingly difficult. What tooling we have now (agentic and all that) feels like early tech demos to me.
We should be able to do a lot more than that. I for one would love to have UML as the basis for system design and architecture, have "pseudo-code repositories" that can be used as a "pattern book", and leave that as the context for LLM-based code generation tools. We could then define a bunch of constraints (maximum cyclomatic complexity, strict type checking, acceptance tests that must pass, removal of dead code) to reduce the chances of the LLM going rampant and hallucinating.
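A minimal sketch of what such a constraint gate might look like, assuming a crude stdlib-only cyclomatic complexity count (the `accept_generated_code` function and its threshold are hypothetical, not an existing tool):

```python
import ast

# Crude cyclomatic complexity: 1 plus the number of branch points.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

def accept_generated_code(source: str, max_complexity: int = 10) -> bool:
    """Hypothetical gate: reject LLM output that breaks the complexity budget."""
    try:
        complexity = cyclomatic_complexity(source)
    except SyntaxError:
        # Hallucinated or malformed code never makes it past the gate.
        return False
    return complexity <= max_complexity

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"
"""
print(accept_generated_code(snippet))  # a small, simple function passes
```

In a real pipeline you'd chain several such checks (type checker, acceptance test run, dead-code detector) and only merge output that clears all of them.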
This way I'd still be forced to think about the system, without having to waste time with the tedious part of writing code, fixing typos, etc.
Bonus point: this could become a two-way system between different programming languages, with UML as the intermediate representation, which would make it a lot easier to port applications to different languages and would eliminate concerns about premature optimization. People could still experiment with new ideas in languages that are more accessible (Python/JavaScript) and later port them to more performant systems (Rust/D/C/C++).
> We must negate the machines-that-think. Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program! Our Jihad is a "dump program." We dump the things which destroy us as humans!
That's not surprising but also bleak.