The hype around this tech strongly promotes the narrative that we're close to exponential growth and that AGI is right around the corner — that pretty soon AI will be curing diseases, eradicating poverty, and powering humanoid robots. These are the scenarios featured in the AI 2027 predictions.
I'm very skeptical of this, based on my own experience with these tools and my rudimentary understanding of how they work. Frankly, I'm even opposed to labeling them as intelligent in the same sense that we think about human intelligence. There are certainly many potentially useful applications of this technology worth exploring, but the current ones are awfully underwhelming, and the hype to make them seem like more than they are is exhausting. Not to mention that their biggest potential — to further degrade public discourse and overwhelm all our communication channels with even more spam and disinformation — is largely being ignored. AI companies love to talk about alignment and safety, yet these more immediate threats are never addressed.
Anyway, it's good to know that there are disagreements about the impact and timelines even inside OpenAI. If nothing else, it will be interesting to see how this plays out.