
In short - what’s stopping a computer that has the resources to improve itself from improving itself extremely quickly? See https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...

Less excitingly, an LLM with access to the web could do things with your online persona or IP that you’d find embarrassing or illegal. Maybe not when it’s slowed down and watched at all times, but will that always be the case once we start doing this?

Anyway, the genie's out of the bottle, and “that’s an unsafe use of technology” is basically antithetical to the Silicon Valley ethos, so objecting at this point seems futile.



Theoretically an agent exposed to the internet could improve itself. But this one cannot do that. There is no way (as far as we know) for anyone or anything with internet access to change the code running on GPT-4, short of finding out who works at OpenAI and blackmailing them, and that would be easily detected.

You’re right that it could do something bad with your IP, but it’s not accurate to say that GPT-4 could improve itself if given internet access. It’s just not hooked up that way.



