
The issue with giving everyone open access to uncontrolled everything is obvious, and the concern does have merit. The terrible example of unrestricted social media as an "information superconductor" is alive and well: it reportedly contributed to at least one actual, physical genocide within the last decade. The question that is less obvious to some is: do these safety concerns ultimately lead us into a future controlled by a few, who will then inevitably exploit everyone to far worse effect? That this is already more or less the status quo is not an excuse; it needs to be discussed, not dismissed out of hand.

It's a very political question, and HN somewhat despises politics. But OpenAI is not an apolitical company either: it is ideologically driven and has AGI (defined as "capable of replacing humans in economically important jobs") as its stated target. Your distant ancestors (assuming they were from Europe) were able to escape totalitarianism and feudalism, starting in the Middle Ages, when the margins were mile-wide compared to what we have now. AI controlled by a few is far more efficient and optimized; will you even have a chance before your entire way of thinking is steered in the desired direction?

I'm from a country that lives in your possible future (Russia). I've seen a remarkably similar process from the inside, so this question seems very natural to me.


