When ChatGPT came out, one of the things we learned is that human society generally assumes a strong correlation between intelligence and the ability to string together grammatically correct sentences. As a result, many people assumed that even GPT-3.5 was wildly more "intelligent" than it actually was.
I think Deep Research (and tools like it) offers an even stronger illustration of that same effect. Anything that can produce a well-formatted multi-page report with headings and citations surely must be of PhD-level intelligence, right?
(Clearly not.)