Numbers in JavaScript are stored in the double-precision 64-bit binary format of IEEE 754. Like any other number encoding that occupies a finite number of bits, it cannot represent all numbers.

The values are stored using 64 bits in the following form:

- 1 bit for the sign (positive or negative)
- 11 bits for the exponent (2^n), where -1022 ≤ n ≤ 1023
- 52 bits for the significand (or mantissa)

The actual value of the stored number is `(-1)^sign * (1 + significand) * 2^exponent`.
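The layout can be inspected directly by reinterpreting a number's 64 bits. The sketch below (the `decodeDouble` helper is illustrative, not a built-in) extracts the three fields for a normal number:

```javascript
// Decode the sign, exponent, and significand fields of a double
// by viewing its 8 bytes as a 64-bit unsigned integer.
function decodeDouble(x) {
  const buf = new ArrayBuffer(8);
  const view = new DataView(buf);
  view.setFloat64(0, x);
  const bits = view.getBigUint64(0);
  const sign = Number(bits >> 63n);                          // 1 bit
  const exponent = Number((bits >> 52n) & 0x7ffn) - 1023;    // 11 bits, stored with a bias of 1023
  const significand = Number(bits & 0xfffffffffffffn) / 2 ** 52; // 52-bit fraction
  return { sign, exponent, significand };
}

decodeDouble(6); // { sign: 0, exponent: 2, significand: 0.5 }
```

For example, 6 is stored as `(-1)^0 * (1 + 0.5) * 2^2`.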

Given this structure, there are limits in both **magnitude** and **precision**.

Due to the 52 bits used for the significand, any arithmetic requiring more precision than 2^-52 (available as `Number.EPSILON`) is subject to rounding.
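The classic consequence of this rounding is that decimal fractions which are not exactly representable in binary fail direct equality checks; comparing within `Number.EPSILON` is one common workaround:

```javascript
// 0.1 and 0.2 are both rounded when stored, and the error survives addition.
0.1 + 0.2 === 0.3; // false — the sum is 0.30000000000000004

// Compare within the smallest representable relative difference instead.
function nearlyEqual(a, b) {
  return Math.abs(a - b) < Number.EPSILON;
}
nearlyEqual(0.1 + 0.2, 0.3); // true
```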

In terms of magnitude, the largest number the format can store is (2 - 2^-52) * 2^1023 (`Number.MAX_VALUE`).
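Values beyond this magnitude cannot be stored and overflow to `Infinity`:

```javascript
// The largest representable double, approximately 1.7976931348623157e+308.
Number.MAX_VALUE;

// Doubling it exceeds the representable range and yields Infinity.
Number.MAX_VALUE * 2; // Infinity
```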

However, because the significand has only 52 bits, only integers between -(2^53 - 1) (`Number.MIN_SAFE_INTEGER`) and 2^53 - 1 (`Number.MAX_SAFE_INTEGER`) can be represented exactly and compared reliably.

```js
Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2; // true
```

JavaScript provides the `bigint` primitive to represent values too large to be represented by the number primitive. BigInts are created by appending `n` to the end of an integer literal, or by calling the `BigInt()` function (without the `new` operator) with an integer or a string.

```js
const myBigInt = BigInt(Number.MAX_SAFE_INTEGER);
myBigInt + 1n === myBigInt + 2n; // false
```
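One caveat worth noting: BigInt and number values cannot be mixed in arithmetic; one side must be converted explicitly. A short sketch:

```javascript
// Constructing a BigInt from a string preserves digits that would be
// rounded if parsed as a number first (this value is MAX_SAFE_INTEGER + 2).
const big = BigInt("9007199254740993");

big + 1n;         // 9007199254740994n
// big + 1;       // TypeError: cannot mix BigInt and other types
big + BigInt(1);  // 9007199254740994n — convert the number explicitly
```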