
America did nuclear weapons research to get there before the Nazis and Japan, and we were able to use the resulting weapons to stop Japan


Has the US ever stated or followed a policy of neutrality and openness?

OpenAI positioned itself like that, much the same way Switzerland does in global politics.


Openness sure, but neutrality? I thought they had always been very explicitly positioned on the "ethical AGI" side.


> Has the US ever stated or followed a policy of neutrality

Yes, most of the time from the founding until the First World War.

> and openness?

Not sure what sense of "openness" is relevant here.


Not at all. Prior to WWI, the US was aggressively and intentionally cleaning European interests out of the Western hemisphere. It was in frequent wars, often with one European power or another. It just didn't distract itself too much with squabbles between European powers over matters outside its claimed dominion.

Establishing a hemispheric sphere of influence was no act of neutrality.


> Not sure what sense of "openness" is relevant

It is in the name OpenAI… not that I think the Swiss are especially transparent, but neither is the USA.


I’m not sure you can call Manifest Destiny neutral.


You're completely right. Neither can the Monroe Doctrine be called neutral, nor can:

- the Mexican-American War

- Commodore Perry's forced reopening of Japan

- President Franklin Pierce's recognition of William Walker's[1] regime as legitimate

- the Spanish-American War

[1]: https://en.wikipedia.org/wiki/William_Walker_(filibuster)


So the first AGI is going to be used to kill other AGIs in the cradle?


The scenario usually bandied about is AGI self-improving at an accelerating rate: once you cross the threshold to self-improvement, you quickly get superintelligence with God-like powers beyond human comprehension (a.k.a. the Singularity) as AGI v1 creates a faster AGI v2 which creates a faster AGI v3 etc.

Any AI researchers still plodding along at mere human speed are then doomed: they won't be able to catch up even if they manage to reproduce the original breakthrough, since the head start enjoyed by AGI #1 guarantees that its latest iteration is always further along the exponential self-improvement curve and therefore superior to any would-be competitor. Being rational(ists), they give up and welcome their new AI overlord.
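
A toy way to see the head-start claim (a hedged sketch; the per-cycle doubling rate and the 10-cycle head start are arbitrary assumptions, not part of the scenario):

    # Two self-improvers with the same per-cycle growth rate r,
    # the leader starting k cycles earlier. Capability: c(n) = c0 * r**n.
    c0, r, k = 1.0, 2.0, 10   # assumed base capability, growth factor, head start
    for n in (0, 10, 20, 30):
        leader, follower = c0 * r ** (n + k), c0 * r ** n
        print(n, leader / follower, leader - follower)

The ratio stays fixed at r**k (1024x here) while the absolute gap keeps widening, so merely reproducing the original breakthrough never closes it; the follower would need a strictly higher growth rate.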

And if not, the AI god will surely make them see the error of their ways.


What if AI self improvement is not exponential?

We assume a self-improving AI will lead to some runaway intelligence improvement, but if it grows at 1% per year, or even per month, that's something we can adapt to.
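
Back-of-the-envelope on those rates (a sketch assuming simple compounding; the 1% figures are just the hypotheticals above):

    import math
    # Doubling time at 1% per period: ln(2)/ln(1.01) ~ 69.7 periods
    # ("rule of 70"): ~70 years at 1%/year, ~5.8 years at 1%/month.
    print(math.log(2) / math.log(1.01))   # ~69.66
    print(1.01 ** 12 - 1)                 # 1%/month compounds to ~12.7%/year

Still exponential, but slow enough that people and institutions could plausibly iterate alongside it.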


Assume the AGI has access to a credit card and goes ahead and reserves every GPU cycle in existence, so its one month is turned into a day, and now we're back to being fucked.

Maybe an ongoing GPU shortage is the only thing that'll save us!


How would an AGI gain access to an unlimited credit card that immediately gives it remote access to all GPUs in the world?


It could hack into NVIDIA and AMD, compromise their firmware build machines silently, then publish a GPU vulnerability that requires firmware updates.

After a couple months, turn on the backdoor.


E.g. by convincing 35% of this website's users to "subscribe" to its "service"?

¯\_(ಠ_ಠ)_/¯


It seems to me that non-general AI would typically outcompete AGI, all else held equal. In such a scenario even a first-past-the-post AGI would have trouble becoming an overlord if non-general AIs were marshaled against it.


This makes no sense at all.



uhm, wat?


This is just the thesis that paperclip optimizers win over general intelligence, because they optimize.


Or contain, or counter, or be used as a deterrent. At least, I think that's the idea being espoused here (in general, if not in the GP comment).

I think U.S. vs. Japan is not necessarily the right model to be thinking of here, but U.S. vs. U.S.S.R., where we'd like to believe that neither nation would actually launch against the other, but both having the weapon meant they couldn't without risking severe damage in response, making it a losing proposition.

That said, I'm sure anyone with an AGI in their pocket/on their side will attempt to use it as a big stick against those that don't have one, in the Teddy Roosevelt sense.


I think that was part of the LessWrong eschatology.

It doesn't make sense with modern AI, where improvement (be it learning or model expansion) is separated from its normal operation, but I guess some beliefs can persist very well.


Modern AI also isn't AGI. We seem to get a revolution at the frontier every 5 years or so; it's unlikely the current LLM transformer architecture will remain the state of the art for even a decade. Eventually something more capable will become the new modern.


Which reminds me, I really need to finish Person of Interest someday.



