I think part of the solution is to start discussing the specific limitations of LLMs rather than speaking broadly about AI/AGI. For example, many people assume these models can understand arbitrarily long inputs, but every LLM has a fixed context window measured in tokens. And even when a large input does fit within that window, the model may not reason effectively over the entire content: its attention is spread across all tokens, and its ability to maintain coherence and focus tends to degrade as the input grows. These constraints, along with hardware limitations like those in NPUs, are not always obvious to everyday users.
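As a rough illustration of the hard limit (not the softer attention-degradation problem), here is a minimal Python sketch that counts tokens with OpenAI's tiktoken library and checks them against an assumed 128k context window. The window size and the reserved output budget are placeholders I picked for the example; real limits vary by model.

    # A minimal sketch of the hard token limit, using OpenAI's tiktoken
    # tokenizer. The 128k window and the output budget are illustrative
    # assumptions; check your model's documented limits.
    import tiktoken

    CONTEXT_WINDOW = 128_000  # assumed; varies by model

    def fits_in_context(text: str, reserved_for_output: int = 1_000) -> bool:
        """True if `text` plus a reserved output budget fits the window."""
        enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era tokenizer
        n_tokens = len(enc.encode(text))
        return n_tokens + reserved_for_output <= CONTEXT_WINDOW

    doc = "some very long document... " * 50_000
    print(fits_in_context(doc))  # False: roughly 250k tokens exceed the assumed window

Note that a check like this only catches the hard cutoff; the quieter failure mode, where the input fits but the model reasons poorly over it, can't be detected by counting tokens.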
I agree, but unfortunately it falls flat IME. The hype is too strong, and the push from the Fab Five creates an unbearable wall in these conversations.
I have these conversations on a day-to-day basis, and you get labeled a hater or an idiot because XYZ CEO says AI should be in everything and will make things 100x easier.
There is a constant stream of "What if we use an LLM/AI for this?" even when it's a terrible tool for the job.