
It's going to be interesting when we have AI with human-level performance at making AIs. We just have to hope it doesn't realise the paradox: even if you could make an AI that's even better at making AIs, there would be no need to.


Why would there be no need? I'm struggling to understand the paradox.

If you're trying to maximize some goal g, and making better AIs is an instrumental goal that raises your expected value of g, then if "making an AI that's better at making AIs" has a reasonable cost and an even higher expected value, you'd jump at the opportunity.

Or am I misunderstanding you?
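
To make the expected-value comparison concrete, here's a minimal sketch in Python. All the numbers and names are made up purely to illustrate the decision rule, not anyone's actual model:

    # Hypothetical decision: should an agent maximizing goal g build a
    # successor AI that is better at building AIs? All values are invented.
    cost_of_successor = 10.0         # resources spent building the successor
    ev_g_without_successor = 100.0   # expected value of g acting alone
    ev_g_with_successor = 150.0      # expected value of g with the successor's help

    # Build the successor iff the net expected value of g improves.
    if ev_g_with_successor - cost_of_successor > ev_g_without_successor:
        print("Build the better AI-maker")  # 140 > 100, so build it
    else:
        print("Don't bother")

Under these (made-up) numbers, building the successor is worth it, which is why I'd expect a goal-maximizing agent to do it rather than refuse.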


It's a bit of a confusing paradox to explain, but basically: once we have an AI with human-level ability at making AIs, there's no longer any need to aim higher, because if we can make a better AI, then so can it. The paradox/joke I was trying to convey is that we have to hope that that AI doesn't realise the same thing; otherwise it could just refuse to make anything better than itself.



