* Whether the source code to advanced AI is open may have some importance, but what determines whether some individual or corporation will be able to run advanced AI is whether they can afford the hardware. I can download some open-source code and run it on my laptop - but Google has data centres with tens or hundreds of thousands of computers. The big corporations are much more likely to have/control the advanced AI because they have the resources for the needed hardware.
* Soft / hard takeoff -
I think a lot of people miss that any 'hard takeoffs' will be limited by the amount of hardware that can be allocated to an AI.
Let us imagine that we have created an AI that can reach human-level intelligence, and that it requires a data centre with 10,000 computers to run. Just because the AI has reached human-level intelligence doesn't mean it will magically get smarter and smarter and become 'unto a God' to us. If it wants to get 2x smarter, it will probably require 2x (or more) computers. The exact ratio depends on the relationship between achieved intelligence and hardware requirements, and also on the unknown factor of algorithmic improvements. I think algorithmic improvements will have diminishing returns. Even if the AI is able to improve its own algorithms by, say, 2x, that's unlikely to let it transition from human-level to 'god-level' AI. I think the hardware resources allocated will still be the major factor.
So an AI isn't likely to get a lot smarter in a subtle, hidden way, or in an explosive way. More likely it will be something like 'we spent another 100M dollars on our new data centre, and now the AI is 50% smarter!'.
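The scaling argument above can be put in toy-model form. Everything here is an assumption made up for illustration - the 10,000-computer baseline comes from the comment, but the scaling exponent and the idea that intelligence follows a simple power law are purely hypothetical:

```python
# Toy model of the takeoff argument: intelligence as a function of
# hardware and algorithmic efficiency. All numbers and the functional
# form are made-up assumptions for illustration, not empirical scaling laws.

def intelligence(computers, algo_efficiency=1.0, exponent=1.0):
    """Achieved intelligence, in arbitrary 'human-level' units.

    Assumes intelligence scales as (efficiency * computers)^exponent,
    normalised so that 10,000 computers at baseline efficiency = 1.0
    (human level). exponent < 1 models diminishing returns: doubling
    hardware then yields less than 2x smarts.
    """
    baseline = 10_000  # hypothetical data centre from the comment above
    return (algo_efficiency * computers / baseline) ** exponent

# Human level at the assumed 10,000-computer baseline:
print(intelligence(10_000))                        # 1.0

# Under linear scaling, 2x smarter really does cost 2x the computers:
print(intelligence(20_000))                        # 2.0

# A 2x algorithmic improvement is equivalent to 2x hardware here -
# helpful, but not a jump from human-level to 'god-level':
print(intelligence(10_000, algo_efficiency=2.0))   # 2.0

# With sub-linear scaling (diminishing returns), even 10x hardware
# buys far less than 10x intelligence:
print(round(intelligence(100_000, exponent=0.5), 2))  # 3.16
```

The point of the sketch is just that, under any model like this, big intelligence gains show up on the hardware budget - which is why they'd look like a visible data-centre expansion rather than a hidden explosion.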
As someone who has done research in AI, I can tell you that you can train all the state-of-the-art models with a single computer (a couple of TitanX GPUs, a top-of-the-line CPU, a couple of terabytes of SSD, 32 GB RAM) that any engineer can afford.
Contrary to popular belief, state-of-the-art deep learning is not commonly run on multi-node clusters. Hardware itself is not the bottleneck for innovation in current deep learning, but if we restrict ourselves to hardware, the bottleneck is memory bandwidth.
Yeah. I guess I was thinking more about some kind of AI that would be similar to human intelligence, as opposed to a specialized pattern recognition algorithm like deep neural networks. I think an AI that is on a human level will need a ton of memory and processing power, and the most obvious way of providing that is to allow for distributed processing over the computers in e.g. a data centre.
A thought about your second point: if the AI reaches smart-human-level intelligence, it may get itself the hardware. It could hack or social-engineer its way onto the Internet, start making (or taking) money, and use it to hire humans to do stuff for it.
Indeed.
Maybe there should be a board of humans that has the final say on whether money should be allocated to hardware for the AI. And they wouldn't be allowed to do Google searches while deciding :)
What matters more is whether the state of the NN or algorithm we train is open. In other words, it's one thing to know the starting state; the advantage lies entirely in having the trained model, which requires a massive, or at least robust, dataset to produce.