No, because they aren't the same. Those things are tools that reallocate cognitive burden. LLMs destroy cognitive burden. LLMs cause cognitive decline; a spinning jenny doesn't.
Gonna have to disagree there. A lot of models are being used to reallocate cognitive burden.
A PhD-level biologist with access to the models we can envision in the future will probably be exponentially more valuable than entire bio startups are today. This is because s/he will be using the model to reallocate cognitive burden.
At the same time, I'm not naive. I know there will be many, many non-PhD-level biologist wannabes who attempt to use models to remove cognitive burden entirely. But what they'll discover is that they can't hold a candle to the domain expert reallocating cognitive burden.
Models don't cause cognitive decline. They make cognitive labor exponentially more valuable than it is today. The problem is that this creates an even more extreme "winner take all" economic environment that a growing population has to live in. What happens when a startup really only needs a few business types and a small team of domain experts? Today, a successful startup might mean hundreds of jobs. What happens when it's just a couple dozen? Or not even a dozen? (Other than the founders and investors capturing even more wealth than they do presently.)
I'd totally agree with this point if we assume that efficiency/performance growth will flatten at some point. For example, if it turns logarithmic soon, then progress will come slowly over the next decades. In that case, yes, it will likely turn out that current software developers, engineers, scientists, etc. have simply gained an enormously powerful tool, one that knows many languages almost perfectly and _briefly_ knows the entire internet.
Yet, if we trust all these VC-backed AI startups and assume that it will keep growing rapidly, e.g. at least linearly, over the next few years, I'm afraid it may indeed reach a superhuman _intelligence_ level (let's say p99 or maybe even p999 of the population) in most areas. And then why do you need this top-notch smart-ass human biologist when you could just buy a few racks of TPUs instead?
Because only the biologist knows which assays to ask the superhuman intelligence for, and how the results bear on the biomolecular process you want to look at.
If you can't ask the right questions, like everyone without a PhD in biology, you're kind of out of luck. The superhuman intelligence will just spin forever trying to figure out what you're talking about.
It doesn't really matter what something can be used for; it matters what it will be used for most of the time. Television can be used for reading books, but people mostly don't use it that way. Smartphones can be used for creation, but people mostly don't use them that way. You've got Satya Nadella on a stage saying AI makes you a better friend because it can reply to messages from your friends for you. We are creating, and to a large extent have created, a world that we will not want to live in, as evidenced by skyrocketing depression and the loneliness epidemic.
Read Neil Postman or Daniel Boorstin or Marshall McLuhan or Sherry Turkle. The medium is the message.