FYI, the two-hundred-year figure that this article mentions appears to be in dispute. Wikipedia (https://en.wikipedia.org/wiki/Cascadia_subduction_zone#Earth...) includes much more specific, cited information, which puts the interval between significant quakes at 300 to 600 years, with the earthquake of 1700 having occurred after a 780-year lull.
...the raw data we are looking for? That data looks to have a mean of 530 years and a standard deviation of 262 years, although the distribution doesn't look very normal (then again, there are only 18 samples, so maybe that isn't too surprising). See the histogram I plotted at:
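For what it's worth, here is roughly how those summary statistics come out of the interval data. The values below are placeholders I made up for illustration, not the actual 18 inter-event intervals, so the printed numbers will not match the 530/262 figures:

```python
import statistics

# Placeholder inter-event intervals in years -- illustrative only, NOT the
# actual 18 Cascadia intervals behind the 530/262 figures above.
intervals = [230, 250, 310, 330, 390, 410, 430, 480, 500,
             520, 540, 560, 600, 640, 700, 780, 820, 900]

mean = statistics.mean(intervals)   # arithmetic mean of the intervals
sd = statistics.stdev(intervals)    # sample std dev (n - 1 denominator)
print(f"n = {len(intervals)}, mean = {mean:.0f} yr, sd = {sd:.0f} yr")
```

With only 18 samples, the sample standard deviation itself has a lot of uncertainty, which is part of why the histogram's shape is hard to judge.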
I would say overall the situation looks ... not great, in the medium term.
"Perhaps more striking than the probability numbers is that we can now say that we have already gone longer without an earthquake than 75 percent of the known times between earthquakes in the last 10,000 years," Goldfinger said. "And 50 years from now, that number will rise to 85 percent."
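The 75 percent claim in that quote reads like a simple empirical percentile: the fraction of known inter-event intervals that are shorter than the time already elapsed since the 1700 quake (roughly 320 years now). A sketch of that calculation with placeholder intervals (not Goldfinger's actual data, so the fraction printed here won't match his 75 percent):

```python
# Fraction of (placeholder, illustrative) inter-event intervals that are
# shorter than the approximate elapsed time since the 1700 earthquake.
intervals = [230, 250, 310, 330, 390, 410, 430, 480, 500,
             520, 540, 560, 600, 640, 700, 780, 820, 900]
elapsed = 325  # approximate years since January 1700

frac = sum(t < elapsed for t in intervals) / len(intervals)
print(f"Elapsed time already exceeds {frac:.0%} of these illustrative intervals")
```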
I am really interested to learn how they calculate the probability of 1/3. I think there is a legitimate question here about the methods used. In technical terms: what is the sample space, and which event within that sample space are we measuring? How strong is the justification for treating the sequence of earthquakes as a sequence of independent, identically distributed random variables? It leaves me scratching my head. I looked at Professor Goldfinger's papers and couldn't find a clear answer. Perhaps it is more fundamental knowledge (e.g. something one would find in geology textbooks)?
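To make my question concrete: under the i.i.d. reading, a "1 in 3 over the next 50 years" number would be something like an empirical conditional probability, P(interval ends within the next 50 years | it has already lasted ~320 years). I have no idea whether that is what they actually did; here is the naive version with placeholder intervals (again, not the real data):

```python
# Naive empirical conditional probability under an i.i.d. assumption.
# Placeholder intervals, illustrative only -- not the actual Cascadia record.
intervals = [230, 250, 310, 330, 390, 410, 430, 480, 500,
             520, 540, 560, 600, 640, 700, 780, 820, 900]
elapsed, window = 320, 50

at_risk = [t for t in intervals if t > elapsed]          # intervals still "running"
ending = [t for t in at_risk if t <= elapsed + window]   # ...that end in the window
p = len(ending) / len(at_risk) if at_risk else float("nan")
print(f"P(quake within {window} yr | {elapsed} yr elapsed) ~ {p:.2f}")
```

Whether conditioning like this is even valid depends on exactly the independence assumption I'm asking about; if quakes cluster (as the paleoseismic literature suggests they might), the i.i.d. model could be badly wrong in either direction.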