I think you're misreading the article, or at least taking an oddly limited "developer-only" view instead of considering the audience. NASA isn't trying to answer your question about how many digits of pi they use. They're trying to answer a non-technical person on Facebook (likely a kid) who is vaguely wondering whether NASA needs a highly precise representation of pi to fly spaceships, or whether something coarse will work.
The answer the kid is looking for: NASA's most precise representation of pi is more precise than 3.14, but nowhere close to the 500 digits the question suggested. 15 digits is more than enough for most engineers at NASA, and anything an astronomer might conceivably want to do would take at most 40 digits of pi to achieve almost arbitrary precision. The fact that the current representation is architecturally convenient for modern FPUs is basically immaterial to the person's question, even if it's interesting to people with detailed knowledge of such things.
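To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The 12.5-billion-mile radius is roughly Voyager 1's distance and the universe radius is a ballpark figure; both are my own plug-in numbers, not pulled verbatim from the article:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60

# pi to 50 decimal places (well-known reference value)
PI_REF = Decimal("3.14159265358979323846264338327950288419716939937510")
PI_15 = Decimal("3.141592653589793")  # pi cut off at 15 decimal places

# Ballpark radius: Voyager 1 is on the order of 12.5 billion miles out
radius_miles = Decimal("12.5e9")

# How far off is a circumference computed with the truncated pi?
error_miles = 2 * radius_miles * (PI_REF - PI_15)
error_inches = error_miles * 5280 * 12  # miles -> feet -> inches
print(f"error at Voyager distance: {error_inches:.3f} inches")  # well under an inch

# Same game at cosmic scale: with 40 decimal places of pi, the circumference
# of a circle the size of the visible universe (~4.4e26 m radius) is off by
# less than the width of a hydrogen atom (~1e-10 m)
PI_40 = Decimal("3.1415926535897932384626433832795028841971")
radius_m = Decimal("4.4e26")
print(f"error at universe scale: {2 * radius_m * (PI_REF - PI_40):.2E} m")
```

Point being, 15 decimal places already buys sub-inch accuracy across the whole solar system, and 40 buys atom-scale accuracy across the observable universe.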
They're still only answering half the question. Okay, 15 digits is more than enough. But how much more? What would the minimum be, and why are we using more than that?
Ideally the article would talk about what "more than enough" looks like (which it does), but also what the minimums look like (which it doesn't), and then mention that they chose that specific size because most computers handle two specific float sizes (32-bit single and 64-bit double precision) really fast, and the 64-bit one, at roughly 15-16 significant digits, is the more accurate of the two.
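For the curious, here's what those two hardware sizes actually carry. This is a quick Python illustration of IEEE 754 round-tripping, nothing NASA-specific:

```python
import math
import struct

# Round-trip pi through the two hardware float sizes (IEEE 754 binary32/binary64)
pi_f64 = math.pi                                           # Python floats are 64-bit doubles
pi_f32 = struct.unpack("f", struct.pack("f", math.pi))[0]  # squeeze through 32 bits

PI_REF = "3.14159265358979323846"  # pi to 20 digits, for eyeballing

print(f"reference : {PI_REF}")
print(f"float64   : {pi_f64:.20f}")  # ~15-16 significant digits survive
print(f"float32   : {pi_f32:.20f}")  # only ~7 significant digits survive
```

The double carries about 15-16 significant digits, which is exactly why "about 15 digits of pi" falls out as the convenient-and-sufficient choice.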