> My take is that it's worth taking pragmatic steps towards studying AI safety measures (i.e. OpenAI), but not going so far as to talk about the likes of 'AI research regulation'.
Sometimes it makes more sense to be cautiously optimistic (proactive) rather than reactive. We have already gone down that reactive slope before, and it's better to act now than wait until it's too late [0].
I think the kind of people who would fill these AI regulation roles would be pseudo-technical, bureaucratic types who would prove to have offered no value if sudden, unexpected AGI really did come about.
Ignoring the topic of the linked article*, I'd argue that there are examples of being too cautious as well. There's a lot of good we could have done with GMO that is not being done because of very restrictive regulation. Ironically, that means GMO is mostly used for things that are not as obviously good, because that's where there's enough short-term profit to make the research worthwhile.
I'm a bit afraid that this will happen with self-driving cars and AI: that politicians will create draconian policies and laws to protect against the threat of AGI etc., without understanding or knowing what the real threats even are (just look at the trolley dilemma debate...). This could make it economically prohibitive to develop many technologies that have the potential to save lives as well as improve quality of life overall.
* It seems to be more about how rules and policies can be unfair, and only to a small extent about how policies can be made opaque by being internal to some ML system.
There's a lot more money going into making plants resistant to pesticides than into making plants better adapted to harsh conditions or more nutritious, things that could potentially have a huge effect for poor people.
If AI scientists actually believed that the general public would take the talk about existential threats seriously, they would be afraid of activist groups sabotaging and occasionally firebombing their laboratories, as sometimes happens with GMO research. Clearly they are not.
[0] https://blogs.scientificamerican.com/roots-of-unity/review-w...