And AI in general* poses existential-scale questions (ones that could go either way, good or bad) about military applications, medical research, economic benefits, quality of life, human flourishing, etc.
The claim that the future is going to be “more or less predictable” and “within the realm of normal” is a pretty bold one when you look at history! Paradigm shifts happen. And many people think we’re in the middle of one, including people who don’t necessarily have an economic interest in saying so.
* I’m not taking a position here on predicting which particular AI technologies will come next, at what price, with what efficiency and capabilities, or when. Lots of things could happen that we can’t predict: economic cycles, overinvestment, energy constraints, war, popular pushback, policy choices, etc. But I’d bet that LLMs are just the beginning.
I believe it's the main topic because VCs have been trying to solve the problem of "expensive software developers" for a long time. The AI start-up hype train is real simply because that's how you get VC money these days. VC funding contracted severely along with the economy post-Covid, and what remains seems to be flowing into AI something-or-other. Somehow, the VC-oriented startup hype train has become the dominant voice in the zeitgeist of software development.