
I mean, aren't they self appointed because they got there first?



No. Knew the right people, had the right funds, and said and did and thought the things compatible with getting investment from people with even more influence than them.

Unless you're saying my only option is to pick and choose between different sets of people like that?


There is a political-economy aspect as well as a technical aspect to this, and both present inherent issues. Even if we can address the former by, say, regime change, the latter issue remains: the domain is technical and cognitively demanding. Thus the practitioners will generally sound sane and rational (they are smart people, but that is no guarantee of anything other than technical ability), and non-technical policy types (like most of the remaining board members at OpenAI) are practically compelled to take policy positions based either on 'abstract models' (which may be incorrect) or on after-the-fact reactions to observation of the mechanisms (which may be too late).

The thought occurs that, just as humanity is really not ready (we remain concerned) to live with WMD technologies, it is possible that we have again stumbled on another technology that taxes our ethical, moral, educational, political, and economic understanding. We would be far less concerned if we were part of a civilization of generally thoughtful and responsible specimens, but we're not. This is a cynical appraisal of the situation, I realize, but the tl;dr is "it is a systemic problem".


In the end my concern comes down to that those who rise to power in our society are those who are best at playing the capitalist game. That's mostly, I guess, fine if what they're doing is being most efficient making cars or phones or grocery store chains or whatever.

Making intelligent machines? Colour me disturbed.

Let me ask you this re: "the domain is technical and cognitively demanding": do you think Sam Altman (or a Steve Jobs, Peter Thiel, etc.) would pass a software engineer technical interview at e.g. Google? (Not saying those interviews are perfect; they suck. But we'll use that as a gatekeeper for now.) I'm betting the answer is quite strongly "no."

So the selection criterion here is not the ability to perform technically. Unless we're redefining technical. Which leaves us with "intellectually demanding" and "smart", which, well, frankly also applies to lawyers, politicians, etc.

My worry right now is that the farther you go up in any of these organizations, the more the requisite intelligence and skills trend towards the "is good at manipulating and convincing others" end of the spectrum vs the "is good at manipulating and convincing machines" end. And it is in the former that we're concentrating more and more power.

(All that said, it does seem like Sutskever would definitely pass said interview, and he's likely much smarter than I am. But I remain unconvinced that that kind of smarts is the kind of smarts that should be making governance-of-humanity decisions.)

As terrible as politicians and various "abstract model" applying folks might be, at least they are nominally subject to being voted out of power.

Democracy isn't a great system for producing excellence.

But as a citizen I'll take it over a "meritocracy" which is almost always run by bullshitters.

What we need is accountability and legitimacy, and the only way we've found to produce them at a mass-society level is through democratic institutions.


> What we need is accountability and legitimacy, and the only way we've found to produce them at a mass-society level is through democratic institutions.

The problem is that our democratic institutions are not doing a good job of producing accountability and legitimacy. Our politics and government bureaucracies are just as corrupted as our private corporations. Sure, in theory we can vote the politicians out of power, but in practice that rarely happens: Congress has incumbent reelection rates above 90 percent.

The unfortunate truth is that nobody is really qualified to be making governance-of-humanity decisions. The real problem is that we have centralized power so much in governments and megacorporations that the few elites at the top end up making decisions that impact everyone even though they aren't qualified to do it. Historically, the only fix for that has been to decentralize power: to give no one the power to make decisions that impact large numbers of people.




