No, it is absolute nonsense. You cannot possibly use NLP to evaluate novel claims for truthfulness; the model has no way of knowing whether a claim it has never seen is true. Social engineering is detectable because it usually includes surface cues, like a request for your personal information.
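To be concrete about why that case is tractable: the giveaway is in the surface text itself, so even a crude pattern match gets you somewhere. A minimal sketch (the cue list here is made up for illustration; a real system would use a trained classifier):

```python
import re

# Hypothetical surface cues for personal-information requests.
# These patterns are illustrative, not an actual production rule set.
CUE_PATTERNS = [
    r"\b(verify|confirm)\b.*\b(account|identity)\b",
    r"\bsocial security number\b",
    r"\b(password|pin|credit card)\b",
    r"\bsend\b.*\b(gift card|wire transfer)\b",
]

def looks_like_social_engineering(message: str) -> bool:
    """Flag messages containing typical personal-information requests."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in CUE_PATTERNS)

print(looks_like_social_engineering(
    "Please confirm your account by replying with your password."))  # True
print(looks_like_social_engineering(
    "Lunch at noon tomorrow?"))  # False
```

The point is that the signal lives in the message itself, which is exactly what a novel factual claim does not give you.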
A novel fake news claim would not look anything like prior fake news claims. You'll have false positives, yes. You'll also have false negatives. Probably a lot of them.
A generic fake news detector is very different from, say, a detector for the specific claim that Ukraine is a Nazi state. The latter is easy, because you already know exactly which claim you are looking for.
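For illustration, a detector for that one known claim reduces to a trivial text-matching problem. A minimal sketch, assuming you have a handful of paraphrases of the target claim (the variants and threshold below are made up for the example):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical paraphrases of the one known claim we want to catch;
# a real detector would use many labeled examples and a proper classifier.
known_claim_variants = [
    "Ukraine is a nazi state",
    "the Ukrainian government is run by nazis",
    "Ukraine is controlled by a nazi regime",
]

vectorizer = TfidfVectorizer(stop_words="english").fit(known_claim_variants)
claim_matrix = vectorizer.transform(known_claim_variants)

def matches_known_claim(text: str, threshold: float = 0.4) -> bool:
    """Flag text that closely resembles any known variant of the target claim."""
    similarities = cosine_similarity(vectorizer.transform([text]), claim_matrix)
    return similarities.max() >= threshold

print(matches_known_claim("They say Ukraine is a nazi state"))      # True: paraphrase of the known claim
print(matches_known_claim("The election results were fabricated"))  # False: novel claim, nothing to match
```

The second call is the whole problem in miniature: a genuinely new false claim shares no vocabulary with the known one, so a detector built on prior examples has nothing to latch onto.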