
I believe the societal-level risks are reversed. Because so many people will use AI to produce content, we will be overwhelmed with even more shady content and mid-quality fake news.

Reading comprehension, critical thinking, and source verification will be much harder than today. This is not filling in blanks. This encourages students to verify and analyze, and that is great! It also gives them a framework to understand the limitations of AI so they can use it more effectively.




>I believe the societal-level risks are reversed.

Your scenario is not a contradiction. We could have a lower ability to express ourselves, combined with a flood of generic AI content affecting reading comprehension, combined with lower-quality AIs (because they end up being trained on generic AI content), combined with style being stuck in a certain form and never evolving (because AIs will keep generating new generic content in the same default style).

Deskilling is one of the things that worries me more than AIs 'taking over the world' - more likely we'll eventually hand the world to AIs willingly, or evolve ourselves somehow.

>This encourages students to verify and analyze, and that is great!

Very reasonable; I'm not against this exercise per se. I am just wondering where this will lead.


So in other words, better consumers. But we're at our best when we create...



