> Many tasks have easier verifications than doing the task.
In the software world (which is what the article is about), this is the logic that has ruthlessly cut software QA teams over the years. I think quality has declined as a result.
Verifiers are hard because the possible states of the internal system + of the external world multiply rapidly as you start going up the component chain towards external-facing interfaces.
That coordination is the sort of thing that really looks appealing for LLMs - do all the tedious stuff to mock a dependency, pre-fill a database, etc. - but those set-up steps have an unfortunate tendency to need to be 100% correct for the verification test that depends on them to be worth anything. So you can go further down the rabbit hole and build verifiers for each of those pre-conditions. This might recurse a few times. Now the math works against you - if you need 20 things to all be 100% right, then even a high chance for each individual one degrades cumulatively.
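A back-of-the-envelope sketch of that compounding, where the 95% per-pre-condition figure is just an illustrative assumption:

```python
# Illustrative only: assume each of 20 pre-conditions (mocks, fixtures,
# seeded data, etc.) is independently 95% likely to be set up correctly.
per_step_reliability = 0.95
steps = 20

# Probability that the whole chain of pre-conditions is sound.
chain_reliability = per_step_reliability ** steps
print(f"P(all {steps} pre-conditions correct) = {chain_reliability:.2f}")  # ~0.36
```

So even with 95% reliability per step, the verification built on top of 20 such steps is only trustworthy about a third of the time.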
A human generally wouldn't bother with perfect verification of every case; it's too expensive. A human would make judgement calls about which specific things to test in which ways, based on their intimate knowledge of the code. White-box testing is far more common than black-box testing: test a bunch of specific internals instead of 100% of the permutations of every external interface + every possible state of the world.
But if you let enough of the code that solves the task be LLM-generated, you stop being in a position to do white-box testing unless you take the time to internalize all the code the machine wrote for you. Now your time savings have shrunk dramatically. And in the current state of the world, I find myself having to correct it more often than not, further reducing my confidence and taking up more time. In some places you can try to work around this by adjusting your interfaces to match what the LLM predicts, but this isn't universal.
---
In the non-software world the situation is even more dire. Often verification is impossible without doing the task. Consider "generate a report on the five most promising gaming startups" - there's no canonical source to reference. Yet these are things people are starting to blindly hand off to machines. If you're an investor doing that to pick companies, you won't even find out if you're wrong until it's too late.
This is not an NxM verifier hell. I explicitly talked about one way, which is parallel generation + a classifier. You can also use majority voting here. Both would give you the right answer at each step without having to write code or test cases, just a simple prompt. There are more ways to do the same, e.g. verifier blocks, layering, backtracking search (end-to-end assertions, then see which step went wrong), simple generative verifiers with simpler prompts, and so on.
For the non-software world, people use majority voting most of the time.
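For concreteness, a minimal sketch of that majority-voting step; `generate` here is a hypothetical stand-in for whatever LLM sampling call you use, not any particular API:

```python
from collections import Counter

def majority_vote(prompt: str, generate, n: int = 5) -> str:
    """Sample n candidate answers and return the most common one.

    `generate` is assumed to sample independently each call
    (e.g. temperature > 0), so repeated calls can disagree.
    """
    candidates = [generate(prompt) for _ in range(n)]
    answer, _count = Counter(candidates).most_common(1)[0]
    return answer
```

Exact-string matching only works when answers are short and canonical; in practice you would normalize the outputs or use a classifier to group equivalent answers before counting votes.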