The issue is that ML is so bleeding-edge and paradigm-breaking that no set of prior credentials is really a valid marker for being right or wrong about this, especially credentials from a traditional academic research background.
The best way to describe this issue is with a hypothetical scenario:
A well-respected Ph.D. physicist comes out with a paper describing a warp drive that can be used to travel faster than light. He hands the paper to a very talented multidisciplinary engineer who has built everything from microcircuits to spacecraft. The engineer says: "Ok, this is cool, what do I need to do to start building this?" The researcher says: "Ok, so first you need to find some negative mass and gather it." The engineer says: "How do I do that?" The researcher answers: "I don't know, but once you do, find a way to arrange it into a ring around the spaceship."
It's the same story with AI. Just because someone has research experience doesn't mean that he/she knows how the technology will evolve in the real world. There may be theories floating around about how AI development could cause danger, but no concrete paths showing what has to happen for those theories to come true - and without those paths, they remain just theories.
Alternatively, we can draw a more fitting parallel to Robert Oppenheimer, who, upon recognizing the devastating potential of his creation, dedicated himself to halting the spread of nuclear weapons worldwide.
Robert Oppenheimer knew the entire domain space of nuclear weapons. Current researchers don't. It's not like future neural networks are just going to be stacks and stacks of transformers on top of each other.
Warning against potential dangers is meaningless. Any significant piece of tech has potential danger. Some innocuous microprocessor can be used as a guidance chip for a missile, or run a smart toaster oven.
There is more to it, though. Geoffrey isn't just warning about potential danger. He is looking at current research and wrongly extrapolating AI power into the future. Sure, AI can and will be misused, but most of the warnings about sentient AI, or its ability to solve complex problems like making deadly viruses, are all hypothetical.
If they offered, I would take it. Not going to put an ounce of effort into convincing anyone to give me the position.
Jokes aside, the statement stands on its own regardless of credentials or the lack thereof. A lot of the hypothetical AI danger relies on the assumption that AI will somehow internally prove, by proxy, that P=NP, and be able to produce, via some arbitrary algorithm, information that traditional methods could only find through brute-force iteration. Or, alternatively, that it will somehow figure out how to search for those tasks more efficiently, despite there being no evidence whatsoever that a more efficient search algorithm exists for a given task.
Everything "simpler" then that is already possible to do, albeit with more steps, which is irrelevant for someone with capital or basic knowledge.
Maybe... they're better positioned to opine on this topic than you are?