That part actually does work & makes sense. LLMs can't (yet) detect live mistakes as they make them, but they can review their past responses.
That's also why there is experimentation with not showing users the output straight away & instead letting it work on a scratch pad of sorts first.
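To make the scratch-pad idea concrete, here's a minimal sketch of the draft → self-review → final pattern being described. The `complete` function is a hypothetical stand-in for whatever single-completion LLM call you actually use, not a real library API.

```python
def complete(prompt: str) -> str:
    """Placeholder for one LLM completion call -- wire up to your provider of choice."""
    raise NotImplementedError

def answer_with_scratchpad(question: str) -> str:
    # 1. The model writes a first draft on a "scratch pad" the user never sees.
    draft = complete(f"Answer the question below. Think step by step.\n\n{question}")

    # 2. It then reviews its own past output -- this is the part that works,
    #    since the draft is now just text it can read back.
    review = complete(
        "Review the draft answer below for mistakes or unsupported claims. "
        f"List any problems.\n\nQuestion: {question}\n\nDraft:\n{draft}"
    )

    # 3. Only the revised answer is shown to the user.
    return complete(
        "Rewrite the draft to fix the problems listed in the review.\n\n"
        f"Question: {question}\n\nDraft:\n{draft}\n\nReview:\n{review}"
    )
```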