That’s a good point. I don’t think anyone is denying that GPT will be useful, though. I’m more worried that, for commercial reasons and out of public laziness / ignorance, it’s going to get shoehorned into use cases it’s not meant for and create a lot of misinformation. So a similar problem to search, but amplified.
There are some real concerns about a technology like ChatGPT, or Bing's version, or similar AI. However, a lot of the criticism is about the inaccuracy of the model's results. Saying "ChatGPT got this simple math problem wrong" isn't a particularly useful or meaningful criticism when the product isn't being marketed as a calculator or some oracle of truth. It's being marketed as an LLM that you can chat with.
If the majority of criticism were about how it could be abused to spread misinformation, enable manipulation of people at scale, or similar, there would be less pushback against that criticism.
It's nonsensical to say that ChatGPT doesn't have value because it gets things wrong. What makes much more sense is to say that it could be leveraged to harm people, or to manipulate them in ways they cannot prevent. Personally, I find it more concerning that MS can embed high-value ad spots in responses through this integration, while farming very high-value data from users for advertising and digital surveillance.
> It's being marketed as an LLM that you can chat with.
... clearly not, right? It isn't just being marketed to those of us who understand what an "LLM" is. It is being marketed to a mainstream audience as "an artificial intelligence that can answer your questions". And often it can! But it also "hallucinates" totally made-up BS, and the people asking it arbitrary questions largely aren't going to have the discernment to tell when that is happening.