Floating point math is imprecise because of the challenges of storing decimal values in a binary representation. In base 10, the fraction 1/3 is represented as 0.333…, which, for any given number of significant digits, will never exactly equal 1/3. The same problem happens when trying to represent 1/10 in base 2, which leads to the infinitely repeating fraction 0.0001100110011…. This makes floating point representations inherently imprecise.
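One way to see this truncation directly is float.hex, which shows the bits actually stored: each 9 hex digit encodes a repeated 1001 bit group, and the trailing a marks where the infinite pattern was rounded off:
>>> (0.1).hex()
'0x1.999999999999ap-4'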
Even worse, floating point math is not associative: push a float through a series of simple mathematical operations and the answer will differ based on the order of those operations, because of the rounding that takes place at each step.
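For example, summing the same three literals with a different grouping produces different results:
>>> (0.1 + 0.2) + 0.3
0.6000000000000001
>>> 0.1 + (0.2 + 0.3)
0.6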
Even simple floating point assignments are not simple, as can be visualized using the format function to check for significant digits:
>>> format(0.1, ".17g")
'0.10000000000000001'
This can also be visualized as a fraction using the as_integer_ratio method:
>>> my_float = 0.1
>>> numerator, denominator = my_float.as_integer_ratio()
>>> f"{numerator} / {denominator}"
'3602879701896397 / 36028797018963968'
Therefore, the use of the equality (==) and inequality (!=) operators on float values is almost always erroneous.
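A safer alternative for such comparisons is math.isclose from the standard library, which checks whether two values agree within a tolerance (a relative tolerance of 1e-09 by default):
>>> import math
>>> 0.1 + 0.2 == 0.3
False
>>> math.isclose(0.1 + 0.2, 0.3)
True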