I don't necessarily see the "team of automated googlers" as a fundamental or damning problem with GPT-like approaches. First, I think people may have far fewer truly original ideas than they are willing to admit. Original thought is sought after and celebrated in the arts as a rare commodity. But unlike in the arts, where there are almost no constraints, in science and engineering almost every incremental step is of the form Y = Fn(X0, ..., Xn), where X0, ..., Xn are widely known and proven to be true. With sufficient logical reasoning and/or experimental data, and after numerous peer reviews, we can accept Fn(...) as a valid transform, and Y becomes Xn+1, and so on. Before the internet and Google, one had to go to a library and read books and magazines, or ask other people, to find the inputs from which new ideas could be synthesized. I think GPT-like systems are a small step towards automating and speeding up this general synthesis process in the post-Google world.
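A minimal sketch of the incremental-synthesis loop described above, as I read it; the `transform` (the candidate Fn) and `accept` (the validation step) callables are my own hypothetical framing, not anything from the comment:

```python
from typing import Callable, List

def synthesize(known: List[str],
               transform: Callable[[List[str]], str],
               accept: Callable[[str], bool],
               steps: int) -> List[str]:
    """Toy model of incremental synthesis: derive Y = Fn(X0, ..., Xn)
    from the pool of accepted facts; if review accepts Y, it is
    promoted to Xn+1 and feeds the next step."""
    for _ in range(steps):
        y = transform(known)   # Y = Fn(X0, ..., Xn)
        if accept(y):          # peer review / experimental validation
            known.append(y)    # Y becomes Xn+1
    return known
```

The point of the sketch is that the loop only moves forward as fast as `accept` does, which is exactly the bottleneck the next paragraph is about.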
But if we are looking to replace end-to-end intelligence at scale, it's not just about synthesis. We also need to automate the peer review process so that its bandwidth matches the increased rate of synthesis. Most good researchers and engineers are able to self-critique their work (and the degree to which they can do that well is really what makes one good, IMHO). We then rely on our colleagues and peers to review our work and form a consensus on its quality. Currently, GPT-like systems can easily overwhelm humans with such peer review requests. Even if a model is capable of writing the next great literary work, predicting exactly what happened on Jan 6, or formulating new laws of physics, the sheer amount of crap it will produce alongside makes it very unlikely that anyone will notice.
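To make the bandwidth mismatch concrete, a back-of-the-envelope sketch; the production and review rates below are illustrative assumptions, not figures from the comment:

```python
def review_backlog(days: int,
                   produced_per_day: int = 10_000,   # assumed model output rate
                   reviewed_per_day: int = 50) -> int:  # assumed human review capacity
    """Unreviewed items pile up at the difference between the
    synthesis rate and the review bandwidth."""
    return max(0, (produced_per_day - reviewed_per_day) * days)

# After one month, ~298,500 items sit unreviewed, so even a great
# result is buried in output no reviewer will ever get to.
print(review_backlog(30))
```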
I call it the "Prior-Units" theorem. Given that you are able to articulate an idea useful to many people, there exist prior units of that idea. The only way, then, to come up with a "new idea" is to come up with an idea useful only to yourself (plenty of those) or to small groups, or to translate an old idea into a new language.
The reason is that your adult life makes up just a tiny, tiny, tiny fraction of the total time lived by all adults, so the more people an idea is relevant to, the faster the odds drop that no one has thought of it before; the decrease is exponential in the size of the relevant audience.
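A worked version of that exponential claim: if each of N prior people independently had some small probability p of hitting the same idea, the chance it is genuinely new is (1 - p)^N, which decays exponentially in N. The p and N below are made-up illustrative values:

```python
def chance_idea_is_new(p: float, n_people: int) -> float:
    """Probability that none of n_people independently had the idea,
    given each had probability p of having it."""
    return (1.0 - p) ** n_people

# Even with a one-in-a-million chance per person, ten million
# relevant people make a genuinely new idea very unlikely:
print(chance_idea_is_new(1e-6, 10_000_000))  # ~4.5e-05
```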
There are always new languages, though, so a great strategy is to take old ideas and bring them to new languages. I count new high-level, non-programming languages as new languages as well.
Art (music, literature, ...) involves the satisfaction of constraints. For instance, you need to tune your guitar like the rest of the band, write the 800 words the editor told you to, tell a story with a beginning, middle, and end, and hopefully not use the cheap red pigments that were responsible for so many white, blue, and gray flags I saw in December 2001.
"team of automated googlers" where google is baked-in. Google results, and content behind it, changes. Meaning, GPT would have to be updated as well. Could be a cool google feature, a service.