My mental model of floating-point types is that they're useful for scientific/numeric computations, where values are sampled from a probability distribution and there's inherent noise, and not really useful for discrete/exact logic.
Yep, absolutely (and increasingly often people are using 16-bit floats on GPUs to go even faster).
But the person you replied to said programming logic, not programming anything.
Honestly I think if you care about the difference between `<` and `<=`, or if you ever use `==`, it's a red flag that floating-point numbers might be the wrong data type.
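
A quick illustration of why `==` on floats is a red flag (a minimal Python sketch; `math.isclose` is just one common alternative when an approximate comparison is genuinely what you want):

```python
import math

# 0.1 and 0.2 have no exact binary representation, so their sum
# is not exactly equal to the literal 0.3.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# If an approximate comparison is really intended, an explicit
# tolerance makes that intent visible:
print(math.isclose(0.1 + 0.2, 0.3))  # True
```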