The article addresses this at the end, then says:
> Let models talk to each other directly, making their own case and refining each others’ answers. Exemplified in patterns like Multi-Agent Debate, this is a great solution for really critical individual actions. But XBOW is basically conducting a search, and it doesn’t need a committee to decide for each stone it turns over whether there might not be a better one.
In general, this seems reasonable to me as a good approximation of what works with humans, but with _much_ faster feedback loops in communication.
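For readers unfamiliar with the pattern, here is a minimal sketch of what a Multi-Agent Debate loop can look like. This is not XBOW's implementation, and `query_model` is a hypothetical stand-in for a real LLM API call; the sketch just shows the shape the quote describes: independent answers first, then rounds where each agent sees the others' answers and revises its own.

```python
def query_model(name: str, prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError("wire up your provider here")


def debate(question: str, agents: list[str], rounds: int = 2) -> dict[str, str]:
    # Round 0: each agent answers independently.
    answers = {a: query_model(a, question) for a in agents}

    for _ in range(rounds):
        revised = {}
        for agent in agents:
            # Each agent sees the others' current answers...
            peers = "\n\n".join(
                f"{other}: {ans}"
                for other, ans in answers.items()
                if other != agent
            )
            # ...and is asked to critique them and refine its own.
            prompt = (
                f"Question: {question}\n\n"
                f"Your previous answer: {answers[agent]}\n\n"
                f"Other agents' answers:\n{peers}\n\n"
                "Critique the other answers, then give your revised answer."
            )
            revised[agent] = query_model(agent, prompt)
        answers = revised
    return answers
```

The quote's point is about cost: this per-decision committee is worth paying for a single high-stakes action, but running it for every stone a search turns over would be wasteful.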