The axiom holds for all numbers and all single operations. The final section presents a gotcha, known as catastrophic cancellation, that arises when reasoning about the amount of _propagated_ error in certain calculations.
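To make the "propagated error" point concrete, here's a small illustrative sketch (not from the article): every individual operation below is correctly rounded, yet the subtraction exposes the rounding error introduced earlier and the _relative_ error of the final result is enormous compared to machine epsilon.

```python
# Each step is correctly rounded, but the error propagates badly.
a = 1.0 + 1e-15   # 1e-15 isn't representable, and the sum is rounded,
                  # so `a` already carries a tiny absolute error
b = 1.0
diff = a - b      # this subtraction is exact -- but it strips away the
                  # leading digits, leaving only the earlier rounding error
rel_error = abs(diff - 1e-15) / 1e-15
print(rel_error)  # roughly 0.11: ~11% relative error, vs eps ~= 1.1e-16
```

The subtraction itself introduces no new error; the catastrophe is that cancellation magnifies the relative size of errors already present in the inputs.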
The "exact" guarantee that you mention is precisely the same as saying that it's "within some relative epsilon". Using machine epsilon tells you _how_ close "as close as possible" is.
Your proposed improvement to the inequality is true, but is not really useful for computing the amount of error you can expect from numerical algorithms. For that, you want some quantification of how close is "as close as possible". Having the relative error bounded by machine epsilon is an extremely strong guarantee, as the article shows.
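That guarantee can be checked empirically. The following sketch (my own, using Python doubles and exact rational arithmetic as the reference) verifies that a correctly rounded sum satisfies fl(a + b) = (a + b)(1 + d) with |d| bounded by the unit roundoff, 2^-53 for IEEE 754 doubles:

```python
import random
from fractions import Fraction  # exact rational arithmetic as a reference

u = 2.0 ** -53  # unit roundoff for IEEE 754 double precision

random.seed(0)
for _ in range(1000):
    a, b = random.random(), random.random()
    computed = a + b                   # one correctly rounded operation
    exact = Fraction(a) + Fraction(b)  # the true mathematical sum
    rel = abs(Fraction(computed) - exact) / exact
    assert rel <= u                    # relative error bounded by u
print("all sums within unit roundoff")
```

One correctly rounded operation never exceeds that bound; the interesting analysis is how the bound compounds across a whole algorithm.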
The key thing I think your article is missing, though, is an explanation of which numbers are representable exactly as floating-point values. That is very important information for other reasons too, and it tells you how close "as close as possible" is. When I started learning about floating-point numbers, I knew there was some relative error and believed that operations like addition and square root were just "close enough". In fact they're as good as possible, and the only difficulty is which numbers we're allowed to write down! Enlightenment! Everything else falls out from there.
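A quick way to see which numbers are exactly representable (my own illustration, not from the article): a double holds exactly the values of the form m * 2^e with |m| < 2^53, and `Fraction(x)` in Python recovers the exact value a float actually stores.

```python
from fractions import Fraction

print(Fraction(0.5))     # 1/2 -- a power of two, stored exactly
print(Fraction(0.1))     # 3602879701896397/36028797018963968 -- not 1/10!
print(0.1 + 0.2 == 0.3)  # False: all three literals were already rounded
                         # before any arithmetic happened
```

The famous `0.1 + 0.2 != 0.3` surprise is entirely about representability: the addition itself is as good as possible, but none of the three decimal literals can be written down as a double in the first place.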
I do think your edit is a good clarification to make. My point about the "in mathematical notation" part, though, is that it assumes a and b are exactly represented as floating-point numbers (otherwise catastrophic cancellation can happen), while the sentence after the equation implies they are not representable.