...everyone here saying "someday AI will <fill in the blank> but not today" while failing to acknowledge that for a lot of things "someday" is 2026, for an even larger number of things it's 2027, and we can't even predict whether by 2028 AI will handle nearly everything...
The problem is that it's hard to pin down any job that's been eliminated by AI even after years of having LLMs. I'm sure it will happen. It just seems like the trajectory of intelligence defies any simple formula.
There's definitely an element of what we saw in the '90s -- software didn't always make people faster; it made the quality of their output better (WYSIWYG page layout, better database tools/validation, spell check in email, etc.).
But we're going to get to a point where "the quality goes up" means the quality exceeds what I can do in a reasonable time frame, and then what I can do in any time frame...
I am literally in the process of firing someone we no longer need because of efficiencies tied to GenAI. I work at a top-10 tech company. So, there you go. That's one job.
That's really interesting. Can you offer any insight into the type of role this efficiency made unnecessary, or why firing made more sense than augmenting?