I think the article dealt pretty well with risk: you survive by focusing finite resources on the X% chance of stopping a Y% probability of the near elimination of humanity, not on the A% chance of stopping a B% probability of something even worse than the near elimination of humanity, where X is large, Y is a small fraction, and the product of A and B is barely distinguishable from zero, despite the latter getting more column inches than most of the rest of the near-negligible-probability proposed solutions to exceptionally low probability extinction events put together.
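To make the arithmetic concrete, here's a minimal sketch of that expected-value comparison; the specific values for X, Y, A and B are purely illustrative assumptions of mine, not figures from the article:

```python
# Illustrative expected-value comparison; all numbers are made up for the sketch.
x, y = 0.50, 0.05    # 50% chance of stopping a 5% probability near-extinction event
a, b = 0.01, 0.0001  # 1% chance of stopping a 0.01% probability worse-than-extinction event

# Expected reduction in catastrophe probability from each intervention
ev_first = x * y     # 0.025
ev_second = a * b    # 0.000001

print(f"focus on first:  {ev_first:.6f}")
print(f"focus on second: {ev_second:.6f}")
print(f"ratio: {ev_first / ev_second:,.0f}x")
```

With numbers anywhere in that ballpark, the first kind of intervention dominates by several orders of magnitude, which is the point the article is making.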
I also tend to agree with Maciej that the argument for focusing on the A% chance of B isn't rescued by making the AI threat seem even worse with appeals to human utility like "but what if, instead of simply killing off humanity, they decided to enslave us or keep us alive forever to administer eternal punishment..." either.
But most resources are not spent on risk mitigation, and the priority given to risk mitigation naturally goes up (or should) as more credible existential risks are identified.