They're exaggerating the efficacy of their technology, and under the pretext that it could be misused, they're demanding a regulatory moat that doesn't currently exist. Notice the singular focus on "extinction risk" rather than on the flaws the technology actually has right now, like hallucination, training bias, and prompt injection, or the looming harm of AI adoption both displacing workers and substantially degrading the quality of anything automated with it. Part of it is marketing, and part of it is ego on the part of people who grew up on fictional narratives about AI destroying the world and want to feel like they wield the power to make that happen. But whether any individual's belief is sincere or not, the overall push is a product of the massive incentives everyone involved in AI has to promote their product and entrench their market position.
You're saying that AI researchers are not immune to the power of incentives. I'm saying there's no evidence that this sort of incentive has ever produced this sort of behavior before.