0.1 + 0.2 returns 0.30000000000000004

[Super Mario 64 menu theme]

When we write

const x = 0.1

in a JavaScript source file and execute it, JavaScript does not interpret the 0.1 as the real number 0.1, because JavaScript numbers cannot represent every single possible real number, and 0.1 is not one of the real numbers which they can represent. Instead, 0.1 is interpreted as the closest JavaScript number to 0.1, which in binary is the number

0.0001100110011001100110011001100110011001100110011001101

or in decimal is

0.1000000000000000055511151231257827021181583404541015625

Note that there's nothing stopping us from writing all of those decimal digits out in our JavaScript source file if we want to:

const x = 0.1000000000000000055511151231257827021181583404541015625

JavaScript will always interpret what we wrote, no matter how (im)precise, as the closest available JavaScript number. Sometimes this reinterpretation will be absolutely precise. But sometimes this reinterpretation will lose some precision.
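We can check this directly in Node: the short literal and the full 55-digit literal parse to the very same JavaScript number.

```javascript
// Both literals are interpreted as the closest representable double,
// which happens to be the same number in both cases.
const short = 0.1;
const long = 0.1000000000000000055511151231257827021181583404541015625;

console.log(short === long); // true — they are the same JavaScript number
```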

For the same reason, when we write

const y = 0.2

JavaScript does not interpret this as the real number 0.2 but as the real number

0.200000000000000011102230246251565404236316680908203125

And if we write

const z = 0.3

JavaScript does not interpret this as the real number 0.3 but as the real number

0.299999999999999988897769753748434595763683319091796875
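None of these decimal expansions need to be taken on trust. `toFixed(n)` prints a double rounded to n decimal places, and the stored values above terminate within 55 digits, so we can recover them whole (a quick Node check):

```javascript
// The double closest to 0.1 has 55 decimal digits; the one closest
// to 0.3 has 54. toFixed prints them out exactly.
console.log((0.1).toFixed(55));
// → "0.1000000000000000055511151231257827021181583404541015625"
console.log((0.3).toFixed(54));
// → "0.299999999999999988897769753748434595763683319091796875"
```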

*

This means that when we write

const sum = 0.1 + 0.2

(or const sum = x + y), what JavaScript actually computes is the precise sum

0.1000000000000000055511151231257827021181583404541015625
+
0.200000000000000011102230246251565404236316680908203125
=
0.3000000000000000166533453693773481063544750213623046875

JavaScript numbers cannot represent this precise result either, so the value returned is the closest available JavaScript number, which is

0.3000000000000000444089209850062616169452667236328125

Again, we have lost a little precision, although for a different reason. At first, we lost some precision in the interpretation of the source code. Now, we have lost some more precision in the calculation.

Notice that this sum value, which we got by writing 0.1 + 0.2, is a different JavaScript number from what we got when we simply typed 0.3.
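This difference is directly observable:

```javascript
// The sum's double and 0.3's double are two distinct numbers,
// and the sum's double is the slightly larger of the two.
console.log(0.1 + 0.2 === 0.3); // false
console.log(0.1 + 0.2 > 0.3);   // true
```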

*

Now, what happens when we try to log any of these values at the console?

JavaScript does not log every last decimal place of a number. Instead, JavaScript logs out the minimum number of digits necessary to uniquely identify that JavaScript number from the other JavaScript numbers near it.

So, if we try to log the value

0.1000000000000000055511151231257827021181583404541015625

we'll see the much shorter three-character string

> 0.1

at our console, because this is all that is necessary.

Note that yet again, we have lost some precision! That's three times now!

Strictly speaking, the only reason why console.log(0.1) logs 0.1 is because of two different precision-loss events which cancel one another out. There is no 0.1 in the JavaScript programming language. One would be forgiven for thinking that there is.

Similarly, if we try to log

0.200000000000000011102230246251565404236316680908203125

we'll get

> 0.2

out. And if we try to log

0.299999999999999988897769753748434595763683319091796875

we'll get

> 0.3

out. And finally, if we try to log the result of 0.1 + 0.2, which we remember is

0.3000000000000000444089209850062616169452667236328125

we'll get [drum roll]...

> 0.30000000000000004

So that's why 0.1 + 0.2 equals 0.30000000000000004, and does not equal 0.3. It's because we lost precision in three different places:

  • when the code was interpreted,
  • when the sum was calculated and
  • when the result was output.

This all makes perfect sense now.

But why do JavaScript numbers work like this in the first place?

Because JavaScript numbers are IEEE 754 double-precision (i.e. 64-bit) floating-point numbers, or "doubles".

A double cannot represent every single possible real number. It can only represent approximately 2^64 distinct real numbers, all of them integer multiples of powers of 2. This includes, say, 0.125, but not 0.1. Instead we get the approximation behaviour seen above.
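Numbers which are integer multiples of powers of 2 really are exact, which we can confirm by contrasting 0.125 (which is 1/8, i.e. 2^-3) with 0.1:

```javascript
// 0.125 is exactly representable, so arithmetic on it is exact...
console.log(0.125 + 0.125 === 0.25); // true — no rounding anywhere
console.log((0.125).toFixed(20));    // "0.12500000000000000000"
// ...but 0.1 is not a multiple of a power of 2, so it is only approximated.
console.log((0.1).toFixed(20));      // "0.10000000000000000555"
```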

This behaviour is not unique to JavaScript. It is seen in every programming language where doubles are available, including C, C++, C#, Erlang, Java, Python and Rust.


Discussion (18)

2018-08-13 00:30:22 by qntm:

Leaving this one off the RSS feed for now because oh my goodness I need to fix my site's formatting. This is close to illegible.

2018-08-13 23:28:05 by FeepingCreature:

Amusingly, "1 + 2 = 3" really can be precisely expressed in Javascript. Generally speaking, all the numbers that Javascript can represent have the form (1 + [a fraction with a power of two in the denominator]) * a power of two. So you can get

1: (1 + 0) * 1
2: (1 + 0) * 2
3: (1 + 0.5) * 2
4: (1 + 0) * 4
5: (1 + 0.25) * 4
6: (1 + 0.5) * 4
7: (1 + 0.75) * 4
8: (1 + 0) * 8

And so on. And going down from 1, you can precisely express 0.5, 0.25, 0.125, and the fractions-of-a-power-of-two numbers in between them, like 0.75.

Observe how this sequence takes chunks of space with a size that's a successive exponent of 2 - [1-2), [2-4), [4-8) - and subdivides them further into fractional segments.

But you will never be able to express 1/10 with that method. Follow the sequence backwards, to find the chunk that would contain 0.1: [0.5-1), [0.25-0.5), [0.125-0.25), [0.0625-0.125) would be the one. In other words, we have to reach 0.1 by adding some fraction that has a power of two in the denominator to 0.0625.

However: 0.1 - 0.0625 = 0.0375, or 3/80, a denominator with a prime factor of 5. In other words, a fraction that cannot be expressed with a denominator that's a power of 2.

This pattern arises because floating-point numbers, which your computer uses to do non-integer math, try to be useful both for very large and for very small numbers. For that reason, they use the trick of taking a chunk in the power-of-two sequence from above and subdividing it, which means that you get fine precision around 1, but proportionally equally fine precision around 0.00001 or 100000. The number that selects the correct chunk of number line is the exponent, and the number that subdivides the chunk is the mantissa. In other words, 1 and 2 have exactly as many floating point numbers between them as 1024 and 2048, or 0.125 and 0.25.
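[The equal-spacing claim above is easy to check in Node. The gap between 1 and the next double up is Number.EPSILON (2^-52), and in the chunk [1024, 2048) the gap is proportionally 1024 times larger:]

```javascript
// Near 1, consecutive doubles are Number.EPSILON (2^-52) apart.
console.log(1 + Number.EPSILON > 1);       // true — a distinct double
console.log(1 + Number.EPSILON / 4 === 1); // true — too small, rounds back to 1
// Near 1024, the gap scales up by 1024: it is 1024 * Number.EPSILON.
console.log(1024 + 1024 * Number.EPSILON > 1024);  // true — a distinct double
console.log(1024 + 256 * Number.EPSILON === 1024); // true — rounds back to 1024
```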

Now you might reasonably ask: "well, why do they use powers of two for their chunks? If they used powers of ten, like humans do, regular common human numbers would be much more cleanly representable." Well, for one, computers simply naturally work most efficiently when you're working with powers of two. But also, if you need reliable math in a bounded range you *should* be using an integer type anyways, if you need reliable math in an arbitrary range you should be using a bignum type, and if you need reliable fast math in an arbitrary range you should hope for an afterlife cause you ain't getting it here on Earth. Them's the breaks.

(While 'Decimal Floating Point' does exist, its uses are niche. Ultimately, if a programmer whips out a floating point number it's because they either don't care overmuch about accuracy or want things to go really fast, or both - neither of which requires powers of ten.)

2018-08-14 16:21:16 by OleenaNatiras:

I think you could write an entire postgraduate thesis on all the ways JavaScript is utter garbage.

2018-08-14 18:34:28 by qntm:

Everything written here is also true of Python, C#, Java...

2018-08-15 22:48:47 by Sid:

> (0.4+0.5 == 0.3+0.6)
false

While there are fraction libraries, without type classes (to add new summable types) or operator overloading, they just look ugly.

2018-08-16 15:16:15 by Ben:

"> (0.4+0.5 == 0.3+0.6)
false"

I’m freaking the hell out man!

2018-08-16 16:43:02 by OleenaNatiras:

@qntm - that is a fair point. At least other languages you can drop to higher precision fairly easily. Say, in C#:

(0.4M+0.5M == 0.3M+0.6M) == true;

It's some of the other stuff JS does that's garbage. (Although there's a special level of Hell reserved for PHP type coercion). And there seem to be a hell of a lot of developers using it when other tools are better for the job at hand.

2018-08-19 12:12:21 by Sid:

"I’m freaking the hell out man!" - Ben

Creeping floating point errors and their accumulation have been known to professionals for decades, so it's okay by now.
The Patriot Missile Failure is routinely taught in numerical computation courses as an example, too.

Just gotta be aware you're using imprecise numbers and always carry a hidden error variable in your mental math :)

2018-08-19 14:11:54 by qntm:

I mean, that's kind of the point I'm trying to make. Floating point numbers themselves are actually always exactly precise. When you write `0.1` it's not "0.1 plus or minus a bit". It's *precisely* 0.1000000000000000055511151231257827021181583404541015625, plus or minus 0.

The errors come from parsing the source code, carrying out calculations, and printing results.

2018-08-19 14:39:40 by Sid:

Oh yeah, sorry, good point.

It's not the numbers, it's the transformations on them.
Readers and printers lie to you in many langs, and calculations round the results off.

I'll still be keeping "error holder variables" for the read/write conversions and those nasty subtractions and divisions when I'm too lazy to look into ground truth in bits, haha.

2018-08-19 15:12:07 by Sid:

in my low-precision node.js, by the way :

> z = 10000000000000000555
10000000000000000000
> z.toPrecision(21)
'10000000000000000000.0'
> z.toPrecision(30)
RangeError: toPrecision() argument must be between 1 and 21
    at Number.toPrecision (native)

So when you write `10000000000000000555`, it's precisely ten quintillion :D

2018-08-24 16:27:57 by Phantom Hoover:

It honestly saddens me a bit that we're at the point where you just call floats "Javascript numbers" with no elaboration anywhere.

2018-08-24 17:16:42 by qntm:

Not all floats are JavaScript numbers. Anyway, like I said, it's time for bed.

2018-09-02 11:05:00 by aitap:

Obligatory reference: "What Every Computer Scientist Should Know About Floating-Point Arithmetic" https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf
https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html (just in case someone needs to read more about this problem)

2018-09-03 13:22:39 by Ingvar:

Hilariously, in Common Lisp, it is true that 1/10 + 2/10 is exactly 3/10, but that is because it also has rational numbers (in parallel to fixed-precision integers, bignums, small floats and larger floats, possibly a few more real-type numbers, as well as the complex correspondents of these).

Numbers are basically complicated, but still easier than date-time representations (if nothing else, the latter also needs to have a calendar system attached to make any sense).

2018-10-02 07:59:42 by Antistone:

"JavaScript numbers cannot represent every single possible real number"

A bit of trivia: In fact, NO computerized system can represent every single possible real number. The number of possible computer programs is infinite, but it's only COUNTABLY infinite, and there are uncountably many possible reals.

2018-10-10 17:09:18 by donpdonp:

The most detailed yet easy to understand writeup of JavaScript's treatment of number values - thank you! I'll be pointing people to this in the future. Pointing out both steps of error loss, interpreting the source and performing the calculation, is the first I've seen that written about.

2018-10-11 22:15:37 by Java Dope:

As mentioned, this is IEEE 754 and you can see it in many languages, for example Java.