BigDecimal represents immutable, arbitrary-precision signed decimal numbers.
Unlike BigDecimal, the double primitive type and the Double wrapper type have limited precision because they
use double-precision 64-bit IEEE 754 floating point. Due to floating-point imprecision, the BigDecimal(double) constructor can
be somewhat unpredictable.
For example, writing new BigDecimal(0.1) doesn't create a BigDecimal exactly equal to 0.1; instead, it is equal to
0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that matter, as a
binary fraction of any finite length).
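The difference is easy to demonstrate: compare the double constructor with the String constructor and with BigDecimal.valueOf(double), both of which produce the expected value.

```java
import java.math.BigDecimal;

public class BigDecimalExample {
    public static void main(String[] args) {
        // The double literal 0.1 is already inexact before BigDecimal sees it,
        // so the constructor faithfully preserves the binary approximation:
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        // The String constructor parses the exact decimal representation:
        System.out.println(new BigDecimal("0.1"));
        // prints 0.1

        // BigDecimal.valueOf uses Double.toString, which produces the shortest
        // decimal string that round-trips to the same double:
        System.out.println(BigDecimal.valueOf(0.1));
        // prints 0.1
    }
}
```

When converting from a double, prefer BigDecimal.valueOf(double) or the String constructor; the BigDecimal(double) constructor is only appropriate when you genuinely need the exact binary value of the double.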