I think this conversation is dancing around the relationship between memory and knowledge. Simply storing information is different from knowing it. One of you is thinking book learning while the other is thinking street smarts.
If we could design a human, would we design them with menstrual cycles? Why would we even target human intelligence? Feels like setting the bar low and not being very creative...
Seriously, the human brain is susceptible to self-stroking patterns that result in disordered thinking. We spend inordinate amounts of energy daydreaming and processing visual and auditory stimuli. We require sleep and don't fully understand why. So why would we target human intelligence? Propaganda. Anyone worried about losing their livelihood to automation is going to take notice. AI occupies the place in the zeitgeist today that robots occupied in the 1980s, and for the same reason. The wealthy and powerful can see the power it has socially right now, and they are doing whatever they can to leverage it. It's why they call it AI instead of LLMs: AI is scarier. It's why all the tech-bro CEOs signed the "pause" letter.
If this were about human flight, we would be building an airplane instead of talking about how the next breakthrough will make us all into angels. LLMs are clever inventions; they're just not independently clever.
I feel this way about TypeScript too. There are a lot of people in engineering these days who don't think critically or observe carefully when using popular technologies. I don't feel like it was like this 15 years ago, but it probably was...
And this is a great example of something I rarely see LLMs doing. I think we're approaching a point where we will use LLMs to manage code the way we use React to manage the DOM. You need an update to a feature? The LLM will just recode it wholesale. All of the problems we have in software development will dissolve in mountains of disposable code. I could see enterprise systems being replaced hourly for security reasons: there's less chance of a vulnerability being abused if it only exists for an hour in which to find and exploit it. Since the popularity of LLMs proves that as a society we've stopped caring about quality, I have a hard time seeing any other future.
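To make the React analogy concrete, here is a minimal TypeScript/JSX sketch (the component and names are illustrative, not from any real system): in React you never patch the DOM by hand; you re-declare what the UI should look like and let the library reconcile the difference.

    // Illustration of the declarative pattern the analogy invokes: when
    // state changes, the whole tree is re-declared, and React discards
    // and re-renders whatever no longer matches.
    import { useState } from "react";

    export function Counter() {
      const [count, setCount] = useState(0);
      // No imperative document.createElement / appendChild calls here;
      // we just describe the desired result for the current state.
      return (
        <button onClick={() => setCount(count + 1)}>
          Clicked {count} times
        </button>
      );
    }

The speculation, then, is that the prompt becomes the declaration and the source code becomes the disposable render target.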
But all the senior business folks think AI can do no wrong and want to push it out the door anyway, assuming the experienced engineers are just trying to get more money or something.
Worth noting that there are business leaders who see high LOC counts and commit volume as metrics of good programmers. To them, the 2000-LOC commits from offshore are proof that it's working. Sadly, the proof that it isn't will show up in their sales and customer satisfaction if they keep producing their product long enough. For too long, the business model in tech has been to get bought out, so this often doesn't matter to the business.
This study raises the question: why do we play games? Do we play to win or to enjoy ourselves? Why design a machine to do what we should be enjoying? This goes for writing, creating art, and coding. Wanting a machine to win is the desire to achieve a goal without doing the work to earn it; the same goes for making art or writing novels. The point of these things (growth and achievement) is lost when a machine does them. I want to see this done with investment, legal strategy, or business management. Those are better suited to LLMs than what we're making them do, but I'd venture that the people profiting from LLMs right now would profit less if their boards replaced them with LLMs.
I imagine that pitting LLMs against computer games is itself an enjoyable activity.
Generally speaking, people play games for fun, and I suspect that will continue. Even if an LLM can beat all humans at computer games, it doesn't matter; we will continue to enjoy playing them. Computers, pre-LLM, could already outplay humans in many cases.
Other activities mentioned -- writing, art, coding, etc. -- can indeed be fun, but they are also activities that people have been paid to do. It seems there is an incentive to create LLMs that can do an at least adequate job of these tasks for less money than humans are paid, so that the money is rerouted to LLM companies instead of human workers. I imagine humans will continue to write, create art, and even code without any financial incentive, though probably less.
(I personally remain unpersuaded that LLMs will do away with paid creative work altogether, but there's clearly a lot of interest in trying to maximize what LLMs can do.)
This is a bit harsh on the HN community, IMO. This was all nostalgia to me and not about overprotective parents. That said, looking at it in that light, my kids had none of the experiences you did. I think the overprotective instinct of this generation's parents has been steadily teaching kids to be more risk-averse, shielding them from learning how to deal with undesirable outcomes to the point of irrational fear. My kids are in this generation, and despite my holding this opinion, they're surrounded by other adults and media that teach them not how to deal with mistakes but to avoid them at all costs. I'm not advocating death and dismemberment, but there has to be an in-between.
1000% agree and that's exactly the point of my comment. I didn't mean that all of HN is like this, mostly just the linked post, so I'll edit accordingly.