I partly share the author's point that ChatGPT users (myself included) can "walk away not just misinformed, but misinformed with conviction". Sometimes I want to criticise it aloud, to write a post blaming this technology for the colourful, sophisticated, yet empty bullshit I hear from a colleague or read in an online post.
But I always resist the urge, because I think: won't there always be some people like that, with or without this LLM thing?
If there is anything to blame this technology for, given the growing amount of bullshit we see and hear in daily life, it is:
(1) Its reach: more people of all ages, backgrounds, levels of expertise, and intents are using it, and some are heavily misusing it.
(2) Its (ever-increasing) capability: it has already become pretty easy for ChatGPT or any other LLM to produce a sophisticated but wrong answer on a difficult topic. And I think the trend is that with later, more advanced versions, it will become harder and take more effort to spot a hidden failure lurking in an ever more information-dense answer.
I have been following your application and I think it is a wonderful use of AI and this exciting new LLM thing. Sadly, I'm not on any of the platforms you currently support.
It would be very cool if someone could create an open-source version of this for Linux users :)