... we can't train a model to be 100% correct. There will always be false positives and false negatives. Another very hard task is confidence estimation - models tend to be highly confident in many of their bad predictions.
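To make the overconfidence point concrete, here is a minimal sketch (purely illustrative, not any specific detector's code) that compares a classifier's average softmax confidence on its wrong predictions versus its right ones; the `probs` array stands in for whatever model you actually run:

```python
import numpy as np

def overconfidence_report(probs: np.ndarray, labels: np.ndarray) -> dict:
    """probs: (n_samples, n_classes) softmax outputs; labels: (n_samples,) true classes."""
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)          # the model's claimed confidence
    wrong = preds != labels
    return {
        "accuracy": float((~wrong).mean()),
        "mean_conf_when_wrong": float(conf[wrong].mean()) if wrong.any() else float("nan"),
        "mean_conf_when_right": float(conf[~wrong].mean()) if (~wrong).any() else float("nan"),
    }

# Toy numbers: a badly calibrated model that stays ~90%+ confident even when it is wrong.
probs = np.array([[0.92, 0.08], [0.91, 0.09], [0.10, 0.90], [0.95, 0.05]])
labels = np.array([0, 1, 1, 1])  # two of the four predictions are wrong
print(overconfidence_report(probs, labels))
```

If the "mean_conf_when_wrong" number is nearly as high as the one for correct predictions, the confidence score tells you almost nothing about whether a given match is a false one.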
In this particular case you're talking about distinguishing human-written text from stochastic text generation. If you wanted to test whether the model regurgitates its training data, that would be easy. But the other way around - checking that its output differs from text humans will write in the future - is a hard, open-ended problem, especially once you take into account the prompts and the additional information they can contain.
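The "easy" direction looks roughly like this sketch: check whether an output reuses verbatim n-grams from a corpus you already have. The function names and the n-gram length are illustrative assumptions, not a real tool's API:

```python
def ngrams(text: str, n: int = 8) -> set:
    tokens = text.split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def regurgitation_score(output: str, training_corpus: list[str], n: int = 8) -> float:
    """Fraction of the output's n-grams that appear verbatim somewhere in the corpus."""
    out_grams = ngrams(output, n)
    if not out_grams:
        return 0.0
    corpus_grams = set()
    for doc in training_corpus:
        corpus_grams |= ngrams(doc, n)
    return len(out_grams & corpus_grams) / len(out_grams)
```

The reverse problem has no such finite reference set: there is no corpus of "all future human-written text" to compare against, which is what makes it open-ended.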
It's like testing whether my keys are in the house versus testing whether my keys are not anywhere outside the house (you can't prove an open-ended negative). On top of that, the prompts would be like letting unsupervised random strangers into the house.