Right, it isn't correct, but many programs get it wrong. If you only do a couple of additions the problem will never be noticed. It's easy to write a program that sums up 0.01 until the result is no longer equal to n * 0.01. I'm not at my computer now, so I can't reproduce it, but I remember n was too large to matter for any supermarket cashier. Of course, applications exist where it does matter.
> It's easy to write a program that sums up 0.01 until the result is not equal to n * 0.01.
It's not easy to do that if you use a floating-point decimal type, as I recommended. For instance, with C#'s decimal, that will take you somewhere in the neighborhood of 10^26 iterations. With a binary floating-point number, it takes fewer than 10.
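For anyone who wants to try it, here's a rough sketch of the binary-float version in C# (the exact n where it first diverges may depend on how n * 0.01 is evaluated, so the loop just reports whatever it finds):

```csharp
using System;

class FloatDrift
{
    static void Main()
    {
        // Repeatedly add 0.01 as a binary double and compare against n * 0.01.
        double sum = 0.0;
        for (int n = 1; n <= 1000; n++)
        {
            sum += 0.01;
            if (sum != n * 0.01)
            {
                Console.WriteLine($"double diverges at n = {n}: sum = {sum:R}, n * 0.01 = {n * 0.01:R}");
                return;
            }
        }
        Console.WriteLine("no divergence within 1000 additions");
    }
}
```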
Of course, with a decimal type there is no rounding issue here; that's not what 0.30000000000000004 is about.
Many languages have no built-in decimal support, or at least it is not the default type. With a binary type, the rounding error already becomes visible after 10959 additions of 1 cent.
That is simply not true. The C# decimal type doesn't accumulate errors when adding, unless you exceed its ~28 digits of precision. E.g. see here: https://rextester.com/RMHNNF58645
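For comparison, a sketch of the decimal version along the lines of what that link demonstrates (the 1,000,000 cutoff is arbitrary, just to keep the run short):

```csharp
using System;

class DecimalNoDrift
{
    static void Main()
    {
        // Same loop with decimal: 0.01m is represented exactly, so the running
        // sum should match n * 0.01m until ~28 digits of precision are exceeded.
        decimal sum = 0m;
        for (int n = 1; n <= 1_000_000; n++)
        {
            sum += 0.01m;
            if (sum != n * 0.01m)
            {
                Console.WriteLine($"decimal diverged at n = {n}");
                return;
            }
        }
        Console.WriteLine("decimal: no divergence within 1,000,000 additions");
    }
}
```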