How JavaScript's Number and Python's float store numbers
Most statically typed languages like Java or C have different data types for numbers.
For example, if you need to store an integer in the range [-128, 127], you can use byte in Java or signed char in C — both take up only 1 byte.
If you need to store a larger integer, you can use int or long data types which take up 4 and 8 bytes respectively.
There are also separate data types for numbers with a fractional part — float, which takes up 4 bytes, and double, which takes 8. These are floating-point formats, and we'll see later where that name comes from.
JavaScript and Python don’t follow that pattern. JavaScript for a long time had just one generic Number type that handled every numeric value — integers and fractions alike — with BigInt added later as a separate arbitrary-precision integer type.
Python’s equivalents are int for arbitrary-precision integers and float for fractional values.
You might have seen this weird behavior before — type 0.1 + 0.2 into a JavaScript console and the answer comes back as 0.30000000000000004, not 0.3. There are countless StackOverflow threads, GitHub issues, and blog posts on the topic, almost all of them pinning the blame on JavaScript. But the same answer comes back in Python, Java, C, and every other language that uses IEEE-754 binary64 for its floating-point type — the 64-bit double-precision floating point that underlies both JavaScript’s Number and Python’s float.
The same underlying mechanics manifest in other behaviors that look like language bugs but aren’t — two large-but-not-that-large integers comparing as equal, or a value not equaling itself:
> 9007199254740992 === 9007199254740993
true
> NaN === NaN
false
Same expressions in Python — same answers, down to the bit:
>>> 9007199254740992.0 == 9007199254740993.0
True
>>> float('nan') == float('nan')
False
In this article we’ll walk through how those 64 bits actually work according to IEEE-754, run the same experiments in both languages to see they produce identical bit patterns, and pin down which behaviors come from the float and which come from the language wrapped around it.
The number types each language exposes
Before opening up the bits, it helps to lay out what each language gives you at the surface. Both have separate types for integers and fractional numbers, with different names and different defaults:
| Language | Kind | Type | Details |
|---|---|---|---|
| Python | Integer (no decimals, exact) | int | Arbitrary-precision. CPython grows the backing storage as needed, so 2**1000 just works and the result is exact. |
| Python | Float (64-bit IEEE-754 double) | float | The same format we'll spend the rest of the article unpacking. A literal 0.5 in a Python script is 8 bytes. |
| JavaScript | Integer (no decimals, exact) | BigInt | Arbitrary-precision integer, added later as a separate type. Written with an n suffix (e.g. 5n). The closest equivalent to Python's int. |
| JavaScript | Float (64-bit IEEE-754 double) | Number | The same format as Python's float. JavaScript uses it for both integers and fractions; there is no separate integer type by default. |
Each language also defaults to a different type when you write a numeric literal. Python defaults to int and only switches to float when a decimal point appears in the literal:
>>> type(5)
<class 'int'>
>>> type(5.2)
<class 'float'>
>>> type(5 + 0.0) # int + float promotes to float
<class 'float'>
JavaScript defaults the other way — every numeric literal is a Number (the float type) unless you tag it with n to make it a BigInt. typeof reports 'number' for integers and fractions alike:
> typeof 5
'number'
> typeof 1.5
'number'
> typeof 5n
'bigint'
And once you have a BigInt, you can’t mix it with Number in arithmetic — the engine refuses rather than silently coercing:
> 5n + 1
TypeError: Cannot mix BigInt and other types, use explicit conversions
> 5n + BigInt(1)
6n
> Number(5n) + 1 // the other direction works too, at the cost of precision
6
The split between exact-integer and float-default behavior is roughly mirrored across the two languages, but the defaults are reversed:
in Python you have to opt into float by writing a decimal point; in JavaScript you have to opt out of float by writing n for BigInt.
Everything we’ll discuss from here on lives on the float side — Python’s float, JavaScript’s Number, and the same IEEE-754 binary64 format underneath both.
Representing numbers in scientific notation
Before we start talking about floating point and the IEEE-754 standard, we first need to look at what it means to represent a number in scientific notation. You've probably seen values like 6.022 × 10^23 (Avogadro's number) or 6.022e23 (the e-notation form often used in code) — these typically represent quantities too large or too small to write out digit by digit, which is exactly the problem scientific notation solves.
In its general form, a number in scientific notation looks like this:

significand × base^exponent

- Significand holds the significant digits of the number — it's also often referred to as the mantissa or the precision. Zeros are not considered significant; they just hold a place.
- Base specifies the numeral system base, i.e. 10 for the decimal system and 2 for binary.
- Exponent defines how many places the radix point must be moved to the left or right to obtain the original number.
Any number can be represented in scientific notation. For example, the number 7 in the decimal and binary systems can be represented like this:

7 = 7 × 10^0 (decimal)
111 = 111 × 2^0 (binary)
An exponent of 0 simply shows us that no additional operations should be done to obtain the original number. Let's see another example — the number 0.00000022. The significant digits here are 22, so let's strip the zeros:

0.00000022
= 0.00000022 × 1
= 0.00000022 × (10^8 × 10^-8)
= (0.00000022 × 10^8) × 10^-8
= 22 × 10^-8

The line in the middle looks like a no-op, and that's the point — it's a placeholder. In the very next line we replace 1 with the equivalent expression 10^8 × 10^-8, which is still 1 but in a form we can split. The 10^8 then absorbs the eight-place rightward shift of the radix point in 0.00000022 (turning it into the integer 22), and the leftover 10^-8 becomes the new exponent. The two factors are inverses of each other, so the value of the whole expression is unchanged — we've just rearranged it into the form we wanted.

The calculation above demonstrates why the exponent of the base decreases when the radix point moves to the right. By performing the multiplication we refined our original number down to just its significant digits:

0.00000022 = 22 × 10^-8

Since we multiplied by 10^8, we had to compensate with a division by 10^8, and that is where the negative exponent of -8 comes from. The same process — only this time using division to isolate the significant digits — can be performed on the number 22300000:

22300000
= (22300000 / 10^5) × 10^5
= 223 × 10^5
This time the radix point was moved to the left, and hence the exponent increased. As you can see, scientific notation makes it easy to work with very large or very small numbers. Depending on the exponent, the significand may represent an integer or a number with a fractional part. When converting back to the original number, a negative exponent requires shifting the radix point to the left; a positive exponent requires shifting it to the right and usually denotes a large integer.
It's also important to understand the normalized form of a number. A number is normalized when it is written in scientific notation with exactly one nonzero digit before the radix point. If we take our numbers from above and normalize them, they look like this:

0.00000022 = 2.2 × 10^-7
22300000 = 2.23 × 10^7
Having numbers represented in the normalized form enables easy comparison of numbers by order of magnitude.
As you may have guessed, normalized binary numbers always have a 1 before the radix point: binary has only two digits — 0 and 1 — and since normalization requires the leading digit to be nonzero, that digit can only be 1.
Scientific notation can be thought of as a floating point representation of a number. The term floating point refers to the fact that a number’s radix point can “float” — it can be put anywhere relative to the significant digits of the number. And as we’ve learnt, the original position is indicated by the exponent.
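If you want to see this decomposition programmatically, Python's math.frexp pulls a float apart into a significand and a base-2 exponent. One convention difference to note: frexp normalizes the significand into the range [0.5, 1) rather than [1, 2), but the idea is the same. A quick sketch:

import math

# math.frexp(x) returns (m, e) such that x == m * 2**e,
# with the significand m normalized into [0.5, 1)
m, e = math.frexp(22300000.0)
print(m, e)       # 0.6645917892456055 25
print(m * 2**e)   # 22300000.0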
Floating point according to IEEE-754
The IEEE Standard for Floating-Point Arithmetic (IEEE-754) defines many things related to floating-point arithmetic, but for the purposes of our exploration we're interested only in how numbers are stored, rounded, and added. I've written a very detailed article explaining how to round binary numbers. Rounding is a frequent operation: it occurs whenever the selected format doesn't have enough bits to store a number. It's an important topic, so get a good grasp of its mechanics. Now let's take a look at how numbers are stored. From here on, the examples will mostly use binary numbers.
Understanding how numbers are stored
There are two formats defined by the standard that are used most often — single and double precision. They differ in the number of bits each takes up and consequently in the range of numbers each format can store. The approach to translating a number in scientific notation into IEEE754 form is the same for all formats, only the number of bits allocated for mantissa (significand digits) and the exponent differ.
IEEE-754 floating point allocates bits to store a number's sign, its mantissa (significant digits), and its exponent. Here is how it distributes those bits in the double-precision format (64 bits per number) used by both JavaScript's Number and Python's float:
63 62 52 51 0 ← bit position
0 00000000000 0000000000000000000000000000000000000000000000000000
│ │ │
sign(1) exponent(11) mantissa(52) (significand)
The sign gets 1 bit, the exponent 11 bits, and the remaining 52 bits are allocated to the mantissa (significand). Here is a table showing the number of bits allocated for each format:
| Name | Total bits | Exponent | Significand |
|---|---|---|---|
| Single precision | 32 | 8 | 23 |
| Double precision | 64 | 11 | 52 |
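To feel the precision difference between the two formats, you can round-trip the same value through each of them with Python's struct module: 'f' packs as single precision, 'd' as double. A small sketch (the article itself works exclusively with double precision):

import struct

# round-tripping 0.1 through single precision loses digits;
# through double precision it comes back unchanged
print(struct.unpack('f', struct.pack('f', 0.1))[0])  # 0.10000000149011612
print(struct.unpack('d', struct.pack('d', 0.1))[0])  # 0.1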
The exponent is stored in the offset binary format. I’ve written a detailed article explaining this format and its differences against two’s complement. Please take some time to understand this topic as I’ll be using it when translating numbers into floating point format.
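As a quick illustration of the offset encoding (the details are in that article), the stored exponent of a double is the true exponent plus the bias 1023, written out in 11 bits. Here's a tiny helper; the name biased_exponent is mine, not a standard function:

BIAS = 1023  # exponent bias for double precision

def biased_exponent(true_exp):
    # stored form = true exponent + bias, as an 11-bit binary string
    return format(true_exp + BIAS, '011b')

print(biased_exponent(0))   # 01111111111  (the exponent of 1.0)
print(biased_exponent(2))   # 10000000001  (the exponent of 7.0)
print(biased_exponent(-4))  # 01111111011  (the exponent of 0.1, as we'll see)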
Examples of how integers are stored
Before walking through specific cases bit by bit, here’s a concrete end-to-end picture for the integer 7 (111 in binary) — original bits, normalized scientific form, and the final 64-bit IEEE-754 packed layout:
111 → 1.11 × 2^2 → 0 10000000001 11 0...0
^^^ ^ ^^ ^ ^ ^^^^^^^^^^^ ^^ ^^^^^
│ │ │ │ │ │ │ └── 50 padding zeros (mantissa is 52 bits total)
│ │ │ │ │ │ └── stored mantissa: "11" (bits after the leading 1)
│ │ │ │ │ └── stored exponent: 2 + 1023 (bias) = 1025 = 10000000001
│ │ │ │ └── sign bit: 0 (positive)
│ │ │ └── exponent: how far to shift the radix back
│ │ └── mantissa (the bits after the implicit leading 1)
│ └── the implicit leading 1 (not stored)
└── original integer bits
Now let’s go step by step and see how the integers 1 and 3 are stored using the same scheme.
The number 1 is written the same way in every numeral system, so no conversion is required. In scientific form it can be represented as:

1 = 1.0 × 2^0

Here we have a mantissa of 1 and an exponent of 0. Using this information, you might assume that the number is stored in floating point like this:
63 62 52 51 0 ← bit position
0 00000000000 0000000000000000000000000000000000000000000000000001
│ │ │
sign exponent mantissa (significand)
Let's see if that's really the case. Unfortunately, neither JavaScript nor Python has a built-in function for printing the raw bits of a stored float. But it's easy to write one in either language. Here's a JavaScript helper that sidesteps your computer's endianness entirely — it writes the float into a DataView in explicit big-endian byte order, then walks the 8 bytes to assemble the bit string:
function to64bitFloat(number) {
    // DataView lets us choose the byte order explicitly; setFloat64
    // writes big-endian by default, so the output is identical on any machine
    const view = new DataView(new ArrayBuffer(8));
    view.setFloat64(0, number);
    let result = "";
    for (let i = 0; i < 8; i++) {
        result += view.getUint8(i).toString(2).padStart(8, "0");
    }
    return result;
}
The Python equivalent is a one-liner using the struct module — pack the float as 8 big-endian bytes (>d), then unpack those same bytes as a 64-bit unsigned integer (>Q) and format the integer as a 64-character binary string:
import struct
def to64bit_float(x):
return f"{struct.unpack('>Q', struct.pack('>d', x))[0]:064b}"
Both produce the same 64-character binary string for the same input — to64bitFloat(1) in JavaScript and to64bit_float(1.0) in Python give identical bits. We’ll use them interchangeably from here on.
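Since we'll be reading the sign, exponent, and mantissa off these strings repeatedly, here's an optional convenience on top of the Python helper. The slicing simply mirrors the 1/11/52 layout shown above; the function name is mine:

def float_fields(x):
    # 1 sign bit, 11 exponent bits, 52 mantissa bits
    bits = to64bit_float(x)
    return bits[0], bits[1:12], bits[12:]

print(float_fields(1.0))
# ('0', '01111111111', '0000000000000000000000000000000000000000000000000000')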
So, using either one you can see that the number 1 is stored like this:
63 62 52 51 0 ← bit position
0 01111111111 0000000000000000000000000000000000000000000000000000
│ │ │
sign exponent mantissa (significand)
It's completely different from the assumption above: there are no digits set in the mantissa, and there are 1s in the exponent. Now let's see why.
The key insight is that IEEE-754 doesn’t store the number directly — it first converts it to normalized scientific form (one nonzero digit before the radix point, the rest after), and then stores those parts.
And as we established earlier, the leading digit before the radix point in normalized binary is always 1 — so the format doesn’t bother to store it.
This is the hidden bit trick: the implicit 1 is prepended back by hardware whenever the value is read, which buys us one extra bit of precision essentially for free.
For the number 1 specifically, the normalized form is 1.0 × 2^0 — there are no digits after the radix point, and the digit before it (the implicit 1) isn’t stored.
So the mantissa has nothing left to record, which is why it’s all zeros.
Now, let's see where the 1s in the exponent come from. I mentioned earlier that the exponent is stored in offset binary. If we calculate the offset:

0 + 1023 = 1023 = 01111111111 (binary)

we can see that this is exactly what we have in the representation. So under offset binary, the value stored there is really 0. If it's unclear how the offset gives us 0, read my article on offset binary.
Let's use the information we've learnt above and try to represent the number 3 in floating point form. In binary it is 11 — if you don't remember why, check out my very detailed article on decimal-binary conversion algorithms. Upon normalization, the number 3 has this form (numbers in binary):

11 = 1.1 × 2^1

After the radix point we have a single 1, which will be stored in the mantissa. As explained earlier, the digit before the radix point is not stored. Normalization also gave us an exponent of 1. Let's calculate its representation in offset binary, and then we have all the information required:

1 + 1023 = 1024 = 10000000000 (binary)
One thing to remember about the mantissa is that its digits are stored in the exact order they appear in the scientific form — left to right from the radix point. With that in mind, let's assemble the full floating point representation:
63 62 52 51 0 ← bit position
0 10000000000 1000000000000000000000000000000000000000000000000000
│ │ │
sign exponent mantissa (significand)
If you run either helper from above — to64bitFloat(3) in JavaScript or to64bit_float(3.0) in Python — you’ll see we came up with the correct representation.
A note on bit ordering
You may have noticed the mantissa bits sit left-anchored — for 3 we got 1000…000, and for 7 (which we worked through earlier) we’d get 1100…000. This can feel wrong, because the same numbers stored as plain 8-bit integers are 00000011 and 00000111 — 1s on the right. The two formats aren’t mirrored; they just anchor bits to different reference points.
- Integer storage: bit positions represent powers going up from the right (2^0, 2^1, 2^2, …). Whole-number bits cluster on the right because that's where the smallest magnitudes live.
- Float mantissa: bits sit after the radix point in normalized form, representing powers going down from the radix point. The leftmost stored bit is 2^-1, the next is 2^-2, and so on. So the mantissa for 7 (1.11 × 2^2) stores 11 left-to-right, placing the first 1 at 2^-1 and the second at 2^-2 — left-anchored because that's where the largest fractional magnitudes live.
A decimal analogy makes the same point: the integer 123 is right-anchored (the 3 is the ones place). The fraction 0.123 is left-anchored (the 1 is the tenths place). Both still treat left as most significant — they just measure from different reference points (the right edge vs. the radix point).
Why 0.1+0.2 is not 0.3
Now that we know how numbers are stored, let’s see what happens in this often-cited example. The short explanation comes down to which fractions can be represented exactly in binary at all.
Only fractions whose denominator is a power of two can be finitely represented in binary. Since the denominators of 0.1 (1/10) and 0.2 (1/5) are not powers of two, these numbers can't be finitely represented in binary. To store them as an IEEE-754 floating point number, they have to be rounded to the number of bits available for the mantissa — 10 bits for half precision, 23 bits for single precision, or 52 bits for double precision. Depending on how many bits of precision are available, the floating-point approximations of 0.1 and 0.2 land slightly below or slightly above their decimal values, but never exactly on them. Because of that, you're never going to have 0.1 + 0.2 == 0.3.
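You can verify the power-of-two rule from Python directly: constructing a Fraction from a float recovers the exact value the bits encode, so an exactly-representable number keeps its tidy denominator while 0.1 does not:

from fractions import Fraction

print(Fraction(0.5))  # 1/2: denominator is a power of two, stored exactly
print(Fraction(0.1))  # 3602879701896397/36028797018963968: not 1/10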
This explanation may be sufficient for some developers, but the best way to see what is going on under the hood is to perform all the calculations the computer does yourself. That's what we're about to do now.
Representing 0.1 and 0.2 in the floating point format
Let's see the bit pattern for 0.1 in floating point form. The first thing we need to do is convert 0.1 to binary, using the multiplication-by-2 algorithm (I explain its mechanics in my article on decimal-binary conversion algorithms). Converting 0.1 to binary yields an infinite fraction:
0.1 · 2 = 0.2 → 0    0.0…
0.2 · 2 = 0.4 → 0    0.00…
0.4 · 2 = 0.8 → 0    0.000…
0.8 · 2 = 1.6 → 1    0.0001…
0.6 · 2 = 1.2 → 1    0.00011…
0.2 · 2 = 0.4 → 0    0.000110…   ← we're back where line two started; the cycle repeats forever

0.1 (decimal) = 0.000110011001100110011… (binary)
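Here's the same algorithm as code, a sketch that runs on an exact Fraction rather than a float so the repeating pattern isn't polluted by the very rounding we're about to discuss (the function name is mine):

from fractions import Fraction

def frac_to_binary(x, bits):
    # multiplication-by-2 algorithm: at each step the integer part
    # of the doubled value becomes the next binary digit
    digits = []
    for _ in range(bits):
        x *= 2
        if x >= 1:
            digits.append('1')
            x -= 1
        else:
            digits.append('0')
    return '0.' + ''.join(digits)

print(frac_to_binary(Fraction(1, 10), 24))  # 0.000110011001100110011001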
The next step is to represent this number in normalized scientific notation:

0.000110011001100110011… = 1.100110011001100110011… × 2^-4

Since the mantissa can only hold 52 bits, we need to round the infinite fraction to 52 bits after the radix point. Using the rounding rules defined by the IEEE-754 standard and explained in my article on rounding binary numbers, the number rounds up to:

1.1001100110011001100110011001100110011001100110011010 × 2^-4

The last thing left is to calculate the exponent in offset binary:

-4 + 1023 = 1019 = 01111111011 (binary)
And when put into floating point representation, the number 0.1 has the following bit pattern:
63 62 52 51 0 ← bit position
0 01111111011 1001100110011001100110011001100110011001100110011010
│ │ │
sign exponent mantissa (significand)
I encourage you to calculate the floating point representation of 0.2 on your own. You should end up with the following scientific notation and bit pattern:

0.2 = 1.1001100110011001100110011001100110011001100110011010 × 2^-3
63 62 52 51 0 ← bit position
0 01111111100 1001100110011001100110011001100110011001100110011010
│ │ │
sign exponent mantissa (significand)
Calculating the result of 0.1 + 0.2
If we assemble the numbers back from their floating point representations into scientific form, here is what we have:

0.1 ≈ 1.1001100110011001100110011001100110011001100110011010 × 2^-4
0.2 ≈ 1.1001100110011001100110011001100110011001100110011010 × 2^-3

To add numbers, they need to have equal exponents. The rule says to adjust the number with the smaller exponent to match the larger one. So let's raise the first number's exponent from -4 to -3, shifting its mantissa one place to the right:

1.1001100110011001100110011001100110011001100110011010 × 2^-4 = 0.1100110011001100110011001100110011001100110011001101 × 2^-3
Now we can add the numbers:
0.1100110011001100110011001100110011001100110011001101
+ 1.1001100110011001100110011001100110011001100110011010
─────────────────────────────────────────────────────────
10.0110011001100110011001100110011001100110011001100111
Now, the result of the calculation has to be stored in floating point format, so we need to normalize it, round if necessary, and compute the exponent in offset binary. Normalizing shifts the radix point one place to the left and bumps the exponent from -3 to -2:

10.0110011001100110011001100110011001100110011001100111 × 2^-3 = 1.00110011001100110011001100110011001100110011001100111 × 2^-2

The fraction is now 53 bits long — one bit too many — and the part being dropped (a single trailing 1) falls right in the middle between the two rounding options, so we apply the tie-breaking rule and round to even. The exponent in offset binary is -2 + 1023 = 1021 = 01111111101. This gives the following resulting number in normalized scientific form:

1.0011001100110011001100110011001100110011001100110100 × 2^-2
And when converted to floating point format for storage, it has the following bit pattern:
63 62 52 51 0 ← bit position
0 01111111101 0011001100110011001100110011001100110011001100110100
│ │ │
sign exponent mantissa (significand)
This is exactly the bit pattern stored when you execute the statement 0.1 + 0.2.
To get it, the computer has to round three times — once for each operand and a third time for their sum. When 0.3 itself is stored, the computer rounds only once. These rounding operations produce different bit patterns for 0.1 + 0.2 and for the standalone 0.3.
When the language compares 0.1 + 0.2 to 0.3 — using === in JavaScript, == in Python — it's these bit patterns that are compared, and since they differ, the result is false. If the roundings had happened to produce identical bit patterns, the comparison would evaluate to true even though 0.1 and 0.2 are not finitely representable in binary.
Try checking the bits of the number 0.3 using the helpers shown above — to64bitFloat(0.3) in JavaScript or to64bit_float(0.3) in Python. The pattern will be different from the one we calculated above for the result of 0.1 + 0.2.
To recover the actual decimal value those bits represent, take the binary scientific form (implicit 1 followed by the 52 mantissa bits, multiplied by 2 to the true-exponent power), shift the radix point to absorb the exponent, and convert the resulting binary fraction to decimal. Doing that arithmetic for 0.1 + 0.2 gives 0.3000000000000000444089209850062616169452667236328125. Doing it for 0.3 gives 0.299999999999999988897769753748434595763683319091796875. They’re close, but not equal — which is exactly why the comparison returns false.
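You don't have to do that conversion by hand. In Python, passing a float to Decimal prints the exact value its 64 bits encode, which reproduces both of those long decimals:

from decimal import Decimal

# Decimal(float) shows the exact stored value, not the rounded display
print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850062616169452667236328125
print(Decimal(0.3))        # 0.299999999999999988897769753748434595763683319091796875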
Where languages diverge: wrappers on top of the same float
The float itself is shared, but the languages don’t match on every operation that uses a float. These divergences are policy choices in the language layer, not differences in the underlying arithmetic. A few worth knowing about:
Division by zero. JavaScript returns Infinity.
> 1 / 0
Infinity
> -1 / 0
-Infinity
Python raises ZeroDivisionError.
>>> 1 / 0
Traceback (most recent call last):
...
ZeroDivisionError: division by zero
To get Infinity in Python, you ask for it explicitly: math.inf or float('inf'). The float format has a perfectly good encoding for infinity (we’ll see it below) — Python just chooses not to produce it from a literal 1/0.
Modulo of negatives. JavaScript’s % follows the sign of the dividend.
> -7 % 3
-1
Python’s % follows the sign of the divisor.
>>> -7 % 3
2
Half-integer rounding. JavaScript's Math.round rounds half up (toward +Infinity).
> Math.round(2.5)
3
> Math.round(3.5)
4
Python’s built-in round uses “banker’s rounding” — ties to even — which matches the IEEE-754 default and the rule the FPU itself uses internally.
>>> round(2.5)
2
>>> round(3.5)
4
Default decimal display. JavaScript shows just enough digits to round-trip back to the same float. Python’s repr() does the same. Both hide the trailing …44089… of 0.1 + 0.2 until you ask for more precision with .toFixed(20) or an f"{x:.20f}" format spec.
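Asking for more digits makes the hidden tail visible in either language; here's the Python side (JavaScript's (0.1 + 0.2).toFixed(20) shows the same tail):

x = 0.1 + 0.2
print(repr(x))      # 0.30000000000000004, the shortest round-tripping form
print(f"{x:.20f}")  # 0.30000000000000004441, the tail repr hides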
The rule of thumb: when two languages disagree about a number, suspect the wrapper before suspecting the float. The 64 bits underneath are the same.
The boundary, in either language
Storing integers in a float runs into a hard limit. The mantissa holds 52 bits plus the implicit leading 1, for 53 bits of precision. So every integer up to 2^53 can be stored exactly. Past that, there aren't enough mantissa bits to encode every integer — the format has to skip some.
JavaScript exposes the boundary as a constant:
> Number.MAX_SAFE_INTEGER
9007199254740991 // 2^53 - 1
> 2 ** 53
9007199254740992
> 2 ** 53 + 1
9007199254740992 // not 9007199254740993!
> 9007199254740992 === 9007199254740993
true
The expression 2 ** 53 + 1 doesn’t produce 9007199254740993. The format can’t represent that value, so it rounds to the nearest representable one — which is 9007199254740992.
It's worth being precise about what MAX_SAFE_INTEGER means: it is not the largest integer JavaScript can hold. JavaScript can store much larger values — Number.MAX_VALUE reaches all the way to 1.7976931348623157e+308, the largest finite double. What MAX_SAFE_INTEGER marks is the largest integer N for which both N and N + 1 are exactly representable. Past it, gaps start to appear: MAX_SAFE_INTEGER + 3 (which is 9007199254740994) is fine, but MAX_SAFE_INTEGER + 2 (which is 9007199254740993) is the first integer the format can't represent — type it into a console and you get 9007199254740992 back, silently rounded down by 1. As the exponent grows, the gaps widen, so by the time you're up near MAX_VALUE consecutive representable values are enormous distances apart.
If you actually need exact integer arithmetic past MAX_SAFE_INTEGER in JavaScript, that’s exactly what BigInt is for — it’s arbitrary-precision, so the boundary disappears entirely:
> 9007199254740993n
9007199254740993n // BigInt, stays exact — the n suffix matters
> 9007199254740993n === 9007199254740992n
false // distinct values, unlike with Number
> 2n ** 53n + 1n
9007199254740993n // every BigInt arithmetic op stays exact
> 2n ** 1000n // and there's no upper limit
107150860718626732094842504906000181056140481170553360744375038837035105112493612249319837881569585812759467291755314682518714528569231404359845775746985748039345677748242309854210746050623711418779541821530464749835819412673987675591655439460770629145711964776865421676604298316526243868372056680693100n
The same boundary for storing integers exactly in a float exists in Python too, but it’s invisible unless you cross from int to float.
Just like JavaScript’s BigInt, Python’s int is arbitrary precision, so integer arithmetic is exact:
>>> 2**53 + 1
9007199254740993 # exact, because both sides are int
>>> 2**53 + 1 == 2**53
False
But the moment you cast to float, the same 64-bit format applies and the same boundary kicks in:
>>> 2.0**53 + 1.0
9007199254740992.0
>>> 2.0**53 + 1 == 2.0**53
True
>>> float(9007199254740993) == float(9007199254740992)
True
Same bits, same boundary. JavaScript exposes it as a top-level constant because every plain numeric literal in JavaScript is a float — BigInt was added later as an escape hatch, but it requires the explicit n suffix to opt into — so the language flags the boundary with a name to warn you. Python flips the defaults: a bare integer literal is already an arbitrary-precision int, so the boundary is invisible until you explicitly cast to float. That’s why MAX_SAFE_INTEGER doesn’t appear in Python’s standard library — in int-only code it never matters.
Why exactly 2^53?
Look at the bit pattern for 9007199254740991 (which is 2^53 - 1). It's an integer whose binary representation is fifty-three 1s in a row, normalized as

1.1111111111111111111111111111111111111111111111111111 × 2^52

which fills the mantissa completely with 1s. To store the next integer up — 2^53 — we add 1, which cascades a carry through all 52 mantissa bits (each 1 flips to 0), leaving us with

10.0000000000000000000000000000000000000000000000000000 × 2^52

Re-normalizing shifts the radix point one place left and bumps the exponent by one, giving an all-zero mantissa:

1.0000000000000000000000000000000000000000000000000000 × 2^53

For the next integer, 2^53 + 1, we'd need a mantissa bit set at position 53 — but the mantissa is only 52 bits wide. There's no room. The format silently rounds to the nearest representable value (either 2^53 or 2^53 + 2 depending on tie-breaking; for 2^53 + 1 it lands on 2^53).
The mechanical reason this happens: at exponent 52 the mantissa exactly captures every integer from 2^52 up to 2^53 - 1. To store anything bigger, we have to bump the exponent — but raising it to 53 means the radix point shifts 53 places to the right while the mantissa still gives us only 52 stored bits, so the last bit position is always implicitly 0. At exponent 54 the format appends two implicit zeros; at 55, three; and so on.
To see this concretely, look at the bit positions of the first few integers past 2^53. Each is 54 bits wide (bit 53 down to bit 0), but the format has slots for only 53 of them — the implicit leading 1 plus 52 stored mantissa bits.
The least significant bit (the rightmost one, bit 0 — call it LSB) has no slot, so the format physically can’t put a 1 there. Two consequences fall out of that:
- Only integers with LSB = 0 (i.e. the evens) are representable in this range.
- Neighbors are 2 apart, so an increment of +1 rounds back to the same value — you need at least +2 to land on the next representable integer. (And +1 is exactly halfway between two neighbors, so the tie-to-even rule picks the one whose mantissa ends in 0.)
2^53 = 9007199254740992 = 1 0000…0000 0 ← LSB=0 (even) ✓ representable
2^53 + 1 = 9007199254740993 = 1 0000…0000 1 ← LSB=1 (odd) ✗ no slot for the LSB
2^53 + 2 = 9007199254740994 = 1 0000…0001 0 ← LSB=0 (even) ✓ representable
2^53 + 3 = 9007199254740995 = 1 0000…0001 1 ← LSB=1 (odd) ✗ no slot for the LSB
↑ └─── 52 ───┘ ↑
│ mantissa │
│ bits │
│ └── the LSB has no mantissa slot
│ → always 0 → integer always even
└── implicit leading 1 (not stored)
The implication is striking: every integer larger than MAX_SAFE_INTEGER ends in at least one implicit zero, so no odd integer above MAX_SAFE_INTEGER can be represented at all — only evens.
Climb the exponent further and only multiples of 4 are representable, then multiples of 8, then 16, doubling the gap at every step. You can watch this directly in either REPL:
2^53 → 9007199254740992
2^53 + 1 == 2^53 → true (gap of 2 here, +1 collapses)
2^53 + 2 > 2^53 → true (+2 is the next representable)
2^54 + 2 == 2^54 → true (gap of 4, even +2 collapses)
2^54 + 4 > 2^54 → true (+4 is next)
2^55 + 4 == 2^55 → true (gap of 8 now)
2^55 + 8 > 2^55 → true (+8 is next)
Each step doubles the gap. By the time you reach the largest finite double (Number.MAX_VALUE or sys.float_info.max, both about 1.8 × 10^308), consecutive representable values are enormous distances apart.
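Python 3.9+ can report these gaps directly: math.ulp(x) returns the distance from x to the next representable float, a compact way to verify the doubling. A sketch, assuming a recent Python:

import math
import sys

# the gap ("unit in the last place") doubles with each power of two
for k in (52, 53, 54, 55):
    print(k, math.ulp(2.0 ** k))  # 1.0, 2.0, 4.0, 8.0

print(math.ulp(sys.float_info.max))  # ~2e292, the gap at the very top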
The for-loop trap
The widening-gap behavior we just walked through has a famous practical consequence — a loop that looks like it should terminate but never does. Take the simplest possible “increment until the reciprocal is zero” loop, written in both languages.
If we run it in JavaScript:
for (let i = 1; 1/i > 0; i++) {
console.log("Count is: " + i);
}
or Python:
i = 1.0
while 1/i > 0:
print(f"Count is: {i}")
i += 1
The loop never stops, and this behavior is shared by both languages, because both counters are 64-bit floats subject to the safe-integer boundary. The condition 1/i > 0 is true for any finite positive i, so for the loop to stop, the counter would have to reach Infinity. But once i reaches 2^53, i + 1 rounds back to 2^53 — the gap to the next representable value is 2, and +1 falls exactly in the middle and rounds to the even neighbor. The counter is stuck at 9007199254740992 forever, 1/i stays a tiny positive number, and the loop spins.
You might think incrementing by 2 would dodge this — and for a while it does. With i += 2, the counter clears 2^53 and keeps stepping through the representable evens (2^53, 2^53 + 2, 2^53 + 4, …). But once it reaches 2^54 the gap doubles to 4, so i + 2 rounds back to i and the loop is stuck again. Switching to i += 4 just defers the trap to 2^55, where the gap becomes 8, and so on. The fundamental issue isn't +1 — it's that consecutive integers stop being representable as you climb, and any constant-step counter will eventually fall into a gap larger than its step.
The point: this trap is a property of Number / float, not of the language. Any language that uses an IEEE-754 double as its counter type can hit it.
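You can watch the counter get stuck without running an infinite loop: start it just below the boundary and take a few bounded steps. A sketch in Python; the JavaScript version behaves identically:

i = 2.0 ** 53 - 2  # start just below the boundary
for _ in range(5):
    print(i)
    i += 1
# 9007199254740990.0
# 9007199254740991.0
# 9007199254740992.0
# 9007199254740992.0  <- stuck: +1 rounds back to the same float
# 9007199254740992.0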
NaN and Infinity
The format reserves two exponent values for special encodings:
- An all-zero exponent is reserved for ±0 (when the mantissa is also zero) and for subnormals (when the mantissa is nonzero — values too small to represent in normalized form).
- An all-ones exponent is reserved for ±Infinity (mantissa zero) and NaN (mantissa nonzero).
That’s why typeof NaN is 'number' in JavaScript and type(float('nan')) is <class 'float'> in Python — NaN is a perfectly valid bit pattern of the float format, just one with the all-ones exponent.
The bit pattern for positive infinity is
0 11111111111 0000000000000000000000000000000000000000000000000000
A typical NaN looks like
0 11111111111 1000000000000000000000000000000000000000000000000000
We can see that any non-zero mantissa paired with the all-ones exponent encodes a NaN — which means there are quite a few distinct NaN bit patterns, not just one.
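You can even build a NaN by hand from its bit pattern by reversing the struct trick from earlier: pack the 64 bits as an integer, then unpack them as a double. A sketch using the pattern shown above:

import math
import struct

# all-ones exponent + nonzero mantissa = NaN
bits = 0b0_11111111111_1000000000000000000000000000000000000000000000000000
x = struct.unpack('>d', struct.pack('>Q', bits))[0]
print(x, math.isnan(x))  # nan True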
The IEEE-754 rule is that NaN compares unequal to everything, including itself. Both languages honor this. Here’s JavaScript:
> NaN === NaN
false
> NaN > 0
false
> NaN < 0
false
> Number.isNaN(NaN)
true
And the same behavior in Python:
>>> import math
>>> float('nan') == float('nan')
False
>>> float('nan') > 0
False
>>> float('nan') < 0
False
>>> math.isnan(float('nan'))
True
Both languages provide a dedicated check — Number.isNaN / math.isnan — because, by the IEEE-754 rule, x == x returns false only when x is NaN. The x !== x idiom common in older JavaScript code exploits exactly that to detect NaN, but it's a reverse-logic trick rather than a clear API.
Infinity follows ordinary comparison rules — it's larger than every finite number and equal to itself — and you can mix it into arithmetic without surprises (Infinity + 1 is Infinity, 1 / Infinity is 0, Infinity - Infinity is NaN).
> Infinity > 1e308
true
> Infinity + 1 === Infinity
true
> Infinity - Infinity
NaN
And the same behavior in Python:
>>> math.inf > 1e308
True
>>> math.inf + 1 == math.inf
True
>>> math.inf - math.inf
nan
What to take away
- Almost every modern language uses the same 64 bits for fractional numbers. JavaScript's Number, Python's float, Java's double, C's double — all IEEE-754 binary64. Same layout, same arithmetic, same quirks. Differences between languages live in the wrapper, not the float.
- 0.1 + 0.2 != 0.3 is a property of the format. It happens because 0.1 and 0.2 aren't finite in binary and accumulate three rounding errors during the addition; storing 0.3 directly accumulates only one. The bit patterns differ.
- The "safe integer" boundary is at 2^53. Beyond it, not every integer can be represented, and adjacent integers collapse onto the same float. JavaScript names the boundary Number.MAX_SAFE_INTEGER because its default numeric type is the float; Python users only see it after casting to float.
- NaN and Infinity are real bit patterns, not exceptions. They live inside the format, with the all-ones exponent reserved for them. NaN's "compares unequal to everything, including itself" rule is IEEE-754, not a language choice.
Knowing these four things lets you predict what a float will do in any language that uses one — which is, in practice, all of them.