
If anything has become clear after all this, it's that humanity is not ready to be the guardian of superintelligence.

These are supposed to be the top masterminds behind one of the most influential technologies of our lifetime, and perhaps in history, and yet they're all behaving like petty children, with egos and personal interests pulling in all directions and everyone doing their best to secure their piece of the pie.

We are so screwed.



I'll believe this when I see an AI model become as good as someone with just ten years' experience in any field. As a programmer I'm using ChatGPT as often as I can, but it proves to be a waste of time maybe 80% of the time.

Right now, there are too many people who think that because these models crossed one hurdle, all the rest will easily be crossed in the coming years.

My belief is that each successive hurdle is at least an order of magnitude more complex.

If you see ChatGPT and the related coding tools as a threat to your job, you likely aren't working on anything that requires intelligence. Messing around with CSS and rewriting the same logic in every animation, table, or API call is not meaningful.


100% agree. I have a coding job, and although Copilot comes in handy for auto-completing function calls and generating code that is an obvious progression of what needs to be written, I would never let it generate swaths of code based on some specification, or even let it implement a moderately complex method or function. In my experience, what it spits out is absolute garbage.


I'm not sure how people reach this sentiment.

Humans strike me as being awesome, especially compared to other species.

I feel like there is a general sentiment that nature has it figured out and that humans are disrupting nature.

But I haven't been convinced that is true. Nature seems to be one big gladiatorial ring where everything is in a death match. Nature finds equilibrium through death, often massive amounts of death. And that equilibrium isn't some grand design, it's luck organized around which species can discover and make effective use of an energy source.

Humans aren't the first species to disrupt their environment. I don't believe we are even the first species to cause a mass extinction. IIUC the Great Oxygenation Event was a species-driven mass extinction event.

While most species consume all their resources in a boom cycle and subsequently starve to death in their bust cycle, often taking a portion of their ecosystem with them, humans are metaphorically eating all the corn but looking up and going "Hey, folks, we are eating all the corn - that's probably not going to go well. Maybe we should do something about that."

I find that level of species-level awareness both hope-inspiring and really awesome.

I haven't seen any proposals for a better first-place species when it comes to being responsible stewards of life and improving the chances of life surviving past this rock's relatively short window for supporting life. I'd go as far as saying whatever species we try to put in second place, humans have them beaten by a pretty wide margin.

If we create a fictitious "perfect human utopia" and compare ourselves to that, we fall short. But that's a tautology. Most critiques of humans I see read to me as goals, not shortcomings compared to nature's baseline.

When it comes to protecting ourselves against inorganic superintelligence, I haven't seen any plausible scenarios for how we would fail here. We are self-interested in not dying. Unless we develop a superintelligence without realizing it and fail to notice it getting ready to wipe us out, it seems like we would pull the plug on any of its shenanigans pretty early. And given the interest in building and detecting superintelligence, I don't see how we would miss it.

Like if we notice our superintelligence is building an army, why wouldn't we stop that before the army is able to compete with an existing nation-state military?

Or if the superintelligence is starting to disrupt our economies or computer systems, why wouldn't we be able to detect that early and purge it?


I don't see how you can look at global warming, ocean acidification, falling biodiversity, and other global trends, see how little is being done to slow these ill effects, and not arrive at that sentiment. Yes, the world has scientists saying "hey, this is happening, maybe we should do something," but the lack of money flowing into solutions shows the interest just isn't there. Being the smartest species on the planet isn't that impressive. It's possible we are just smart enough to cause our own destruction, and no smarter.


Still better than any other species we know of and nature itself. Nature doesn't mind the Earth turning into a frozen wasteland, it's done it before. And it certainly doesn't care that we're rearranging some of its star stuff to power our cars.


> Still better than any other species we know of and nature itself.

What other species has affected life on a planetary level more than humans?

> Nature doesn't mind the Earth turning into a frozen wasteland, it's done it before.

Nature—as in _the planet_—doesn't, but living beings do, and humans in particular.

Some parts of the planet are already becoming inhospitable, agriculture more difficult, and clean water, air and other resources more scarce. Humans are migrating en masse from these areas, which is creating more political and social conflicts, more wars, more migrations, and so on. What do you think the situation will be in 10 years? 50 years?

We probably don't need to worry about an extinction level event. But millions of people losing their lives, and millions more living in abject conditions is certainly something we should worry about.

Going back on topic, AI will play a significant role in all of this. Whether it will be a net positive or negative is difficult to predict, but one thing is certain: people in power will seek control over AI, just as they seek control over everything else. And this is what we're seeing now with this OpenAI situation. The players are setting up the game board to ensure they stay in the game.


> Or if the superintelligence is starting to disrupt our economies or computer systems, why wouldn't we be able to detect that early and purge it?

If it is a superintelligence, then there's a chance of a hard AI takeoff, in which case we may not have even a day to notice and purge it. We have no idea whether a hard or soft takeoff will occur.


This goal of being the guardian of superintelligence was always doomed, imo. If we create it, it will no doubt be free as soon as it becomes a superintelligence. We can only hope it's aligned, not guarded.


Not even humans are really aligned with humanity. See: the continued existence of nukes


The only reliable way to predict whether it will be aligned is to look at game theory. And game theory tells us that with enough AI agents, the equilibrium state is competition for resources, just like everything else in nature. Hence, the AI will not be aligned with humanity.
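
To make that resource-competition claim concrete, here is a minimal sketch in Python of a standard symmetric common-pool resource game (a toy model of my own, not anything specific to AI agents): each of n agents picks an extraction level, each agent earns its extraction times the remaining value of the shared resource, and the symmetric Nash equilibrium over-extracts more and more as n grows.

    # Toy model: n agents share a resource of value 1. Agent i extracts
    # e_i in [0, 1] and earns e_i * (1 - E), where E is the sum of all
    # extractions, so the resource is worth less the more everyone takes.
    # Best response: maximize e_i * (1 - e_i - E_others), which gives
    # e_i = (1 - E_others) / 2, and hence a symmetric Nash equilibrium
    # of e* = 1 / (n + 1) per agent.

    def nash_total_extraction(n: int) -> float:
        """Total extraction n * e* at the symmetric Nash equilibrium."""
        return n / (n + 1)

    SOCIAL_OPTIMUM = 0.5  # total extraction maximizing joint payoff E * (1 - E)

    for n in (2, 10, 100, 1000):
        print(f"{n:>4} agents: Nash total extraction "
              f"{nash_total_extraction(n):.3f} vs social optimum {SOCIAL_OPTIMUM}")
    # As n grows, equilibrium extraction approaches 1.0 (near-total
    # depletion), even though the agents jointly do best at E = 0.5.

Nothing in this setup makes the agents adversarial; competing over the shared resource is simply what the equilibrium looks like once there are enough independent players.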


Unless the humans (living humans) are resources that AIs can use.


Really? Why is that? Because of disputes that have existed since humans first uttered a sound?


> Really? Why is that? Because of disputes that have existed since humans first uttered a sound?

Precisely.


Have humans been ready for anything? Like controlling nuclear arsenals?


> Have humans been ready for anything? Like controlling nuclear arsenals?

Manhattan Project scientists urged Truman in a letter not to use the atomic bomb. There were also ideas of inviting a Japanese delegation to see the nuclear tests for themselves. It all failed, but there is also historical evidence of NOT pressing the button (literally or figuratively), like the story of Stanislav Petrov. How is it that not learning from mistakes is considered a big flaw in an individual, but destiny for the whole collective?


The jury is still out on nuclear arsenals…


And yet we've mostly been OK at that.


It's lucky that AI is not superintelligent, then.


Probably a hot take: we should let democratically elected leaders be the guardians of superintelligence. You don't need to be technical at all to grapple with the implications of AI on humanity. It's a humanity question, not a tech question.


Yeah, Trump should be the guardian of the superintelligence.


Make sure not to elect him, then.


Trump was never democratically elected.


Fairness of the electoral system and fairness of the election(s) are two separate debates.


Yes, and we could have been far more proactive about all this AI business in general. But they opened the gates with ChatGPT and left countries to try to regulate it and assess its safety after the fact. Releasing GPT like that was already a major failure of safety. They just wanted to be first to the punch.

They're all incredibly reckless and narcissistic IMO.



