0.1 + 0.2 returns 0.30000000000000004

[Super Mario 64 menu theme]

When we write

const x = 0.1

in a JavaScript source file and execute it, JavaScript does not interpret the 0.1 as the real number 0.1, because JavaScript numbers cannot represent every single possible real number, and 0.1 is not one of the real numbers which they can represent. Instead, 0.1 is interpreted as the closest JavaScript number to 0.1.

In decimal, 0.1 is exactly:

0.1000000000000000000... = 0.10̅

In binary, that is an infinite recurring expansion:

0.00011001100110011001100... = 0.00̅0̅1̅1̅

However, all JavaScript numbers are finite binary expansions. So the closest available JavaScript number is:

0.0001100110011001100110011001100110011001100110011001101

Note the absence of a "..." or an overline! This expansion stops after 55 bits; the recurring tail has been rounded up to the nearest representable value, which is why it ends in ...1101 instead of continuing ...11001100... forever. In decimal, this number is exactly:

0.1000000000000000055511151231257827021181583404541015625
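
By the way, we can inspect that bit pattern for ourselves. (This snippet is my own illustration, not part of the original workings: it reinterprets the double's eight bytes as a 64-bit integer and prints them in binary.)

const view = new DataView(new ArrayBuffer(8))
view.setFloat64(0, 0.1)
console.log(view.getBigUint64(0).toString(2).padStart(64, '0'))
// '0011111110111001100110011001100110011001100110011001100110011010'
// sign (1 bit), then exponent (11 bits), then the 52 stored significand bits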

Note that there's nothing stopping us from writing all of those decimal digits out in our JavaScript source file if we want to:

const x = 0.1000000000000000055511151231257827021181583404541015625

JavaScript will always interpret whatever decimal number we wrote, no matter how (im)precise, as the closest available JavaScript number. Sometimes this reinterpretation will be absolutely precise. But sometimes this reinterpretation will lose some precision.
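
We can check this for ourselves (a quick sanity check of my own, reusing the exact expansion from above):

console.log(0.1 === 0.1000000000000000055511151231257827021181583404541015625)
// true: both literals parse to the same JavaScript number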

For the same reason, when we write

const y = 0.2

JavaScript does not interpret this as the real number 0.2 but as the real number

0.200000000000000011102230246251565404236316680908203125

And if we write

const z = 0.3

JavaScript does not interpret this as the real number 0.3 but as the real number

0.299999999999999988897769753748434595763683319091796875
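
The same check works for these two expansions; each long literal parses back to the very double it describes:

console.log(0.2 === 0.200000000000000011102230246251565404236316680908203125)
// true
console.log(0.3 === 0.299999999999999988897769753748434595763683319091796875)
// true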

*

This means that when we write

const sum = 0.1 + 0.2

(or const sum = x + y), what JavaScript actually computes is the precise sum

0.1000000000000000055511151231257827021181583404541015625
+
0.200000000000000011102230246251565404236316680908203125
=
0.3000000000000000166533453693773481063544750213623046875

JavaScript numbers cannot represent this precise result either, so the value returned is the closest available JavaScript number, which is

0.3000000000000000444089209850062616169452667236328125

Again, we have lost a little precision, although for a different reason. At first, we lost some precision in the interpretation of the source code. Now, we have lost some more precision in the calculation.

Notice that this sum value, which we got by writing 0.1 + 0.2, is a different JavaScript number from what we got when we simply typed 0.3.
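
Both facts are easy to confirm (my own quick checks, again reusing the exact expansions from above):

console.log(0.1 + 0.2 === 0.3000000000000000444089209850062616169452667236328125)
// true: the sum is exactly this JavaScript number...
console.log(0.1 + 0.2 === 0.3)
// false: ...which is not the JavaScript number we get by writing 0.3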

*

Now, what happens when we try to log any of these values at the console?

JavaScript does not log every last decimal place of a number. Instead, JavaScript logs out the minimum number of digits necessary to uniquely identify that JavaScript number from the other JavaScript numbers near it.

So, if we try to log the value

0.1000000000000000055511151231257827021181583404541015625

we'll see the much shorter three-character string

> 0.1

at our console, because this is all that is necessary.
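
We can also ask for more digits than the minimum, for example with the standard toFixed method (20 decimal places is the most that every engine accepts):

console.log((0.1).toFixed(20))
// '0.10000000000000000555'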

Note that yet again, we have lost some precision! That's three times now!

Strictly speaking, the only reason console.log(0.1) logs 0.1 is that two different precision-loss events cancel one another out. There is no 0.1 in the JavaScript programming language. One would be forgiven for thinking that there is.
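
In fact this cancellation is guaranteed: JavaScript's number-to-string algorithm always produces a string which parses back to exactly the same double. A quick spot check:

const sum = 0.1 + 0.2
console.log(Number(String(sum)) === sum)
// true: '0.30000000000000004' reads back as the identical double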

Similarly, if we try to log

0.200000000000000011102230246251565404236316680908203125

we'll get

> 0.2

out. And if we try to log

0.299999999999999988897769753748434595763683319091796875

we'll get

> 0.3

out. And finally, if we try to log the result of 0.1 + 0.2, which we remember is

0.3000000000000000444089209850062616169452667236328125

we'll get [drum roll]...

> 0.30000000000000004

So that's why 0.1 + 0.2 equals 0.30000000000000004, and does not equal 0.3. It's because we lost precision in three different places:

  • when the code was interpreted,
  • when the sum was calculated and
  • when the result was output.

This all makes perfect sense now.

But why do JavaScript numbers work like this in the first place?

Because JavaScript numbers are IEEE 754 double-precision (i.e. 64-bit) floating-point numbers, or "doubles".

A double cannot represent every single possible real number. It can only represent approximately 2^64 distinct real numbers, all of them integer multiples of powers of 2. This includes, say, 0.125, but not 0.1. Instead we get the approximation behaviour seen above.
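
Conversely, when every value in a calculation is one of those integer multiples of powers of 2, no precision is lost anywhere. For example (my own illustration):

console.log(0.125 + 0.125 === 0.25)
// true: 0.125 and 0.25 are exactly representable, and so is the exact sum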

This behaviour is not unique to JavaScript. It is seen in every programming language where doubles are available, including C, C++, C#, Erlang, Java, Python and Rust.

Also

To explore further, you may find this snippet of code useful.

import { stringify } from './xact.js'

console.log(stringify(0.1))
// '0.1000000000000000055511151231257827021181583404541015625'

console.log(stringify(0.2))
// '0.200000000000000011102230246251565404236316680908203125'

console.log(stringify(0.3))
// '0.299999999999999988897769753748434595763683319091796875'

console.log(stringify(0.1 + 0.2))
// '0.3000000000000000444089209850062616169452667236328125'
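
And if you don't have xact.js to hand, here is a rough equivalent (my own sketch, not the real xact.js; it handles finite, non-negative doubles only):

const stringify = x => {
  const view = new DataView(new ArrayBuffer(8))
  view.setFloat64(0, x)
  const bits = view.getBigUint64(0)
  const biasedExponent = Number((bits >> 52n) & 0x7FFn)
  const stored = bits & 0xFFFFFFFFFFFFFn
  // A double's value is significand * 2^e for integers significand and e;
  // 1075 is the exponent bias (1023) plus the stored significand width (52)
  const significand = biasedExponent === 0
    ? stored // subnormal: no implicit leading 1 bit
    : stored | (1n << 52n)
  const e = Math.max(biasedExponent, 1) - 1075
  if (e >= 0) return (significand << BigInt(e)).toString()
  // significand / 2^-e is exactly significand * 5^-e / 10^-e,
  // so scale by 5^-e and then place the decimal point by hand
  const digits = (significand * 5n ** BigInt(-e)).toString().padStart(1 - e, '0')
  const point = digits.length + e
  const raw = digits.slice(0, point) + '.' + digits.slice(point)
  return raw.replace(/0+$/, '').replace(/\.$/, '')
}

console.log(stringify(0.1))
// '0.1000000000000000055511151231257827021181583404541015625'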

Discussion (31)

2018-08-13 01:30:22 by qntm:

Leaving this one off the RSS feed for now because oh my goodness I need to fix my site's formatting. This is close to illegible.

2018-08-14 00:28:05 by FeepingCreature:

Amusingly, "1 + 2 = 3" really can be precisely expressed in Javascript. Generally speaking, all the numbers that Javascript can represent have the form (1 + [a fraction with a power of two in the denominator]) * a power of two. So you can get

1: (1 + 0) * 1
2: (1 + 0) * 2
3: (1 + 0.5) * 2
4: (1 + 0) * 4
5: (1 + 0.25) * 4
6: (1 + 0.5) * 4
7: (1 + 0.75) * 4
8: (1 + 0) * 8

And so on. And going down from 1, you can precisely express 0.5, 0.25, 0.125, and the fractions-of-a-power-of-two numbers in between them, like 0.75. Observe how this sequence takes chunks of space with a size that's a successive exponent of 2 - [1-2), [2-4), [4-8) - and subdivides them further into fractional segments.

But you will never be able to express 1/10 with that method. Follow the sequence backwards, to find the chunk that would contain 0.1: [0.5-1), [0.25-0.5), [0.125-0.25), [0.0625-0.125) would be the one. In other words, we have to reach 0.1 by adding some fraction that has a power of two in the denominator to 0.0625. However: 0.1 - 0.0625 = 0.0375, or 3/80, a denominator with a prime factor of 5. In other words, a fraction that cannot be expressed with a denominator that's a power of 2.

This pattern arises because floating-point numbers, which your computer uses to do non-integer math, try to be useful both for very large and for very small numbers. For that reason, they use the trick of taking a chunk in the power-of-two sequence from above and subdividing it, which means that you get fine precision around 1, but proportionally equally fine precision around 0.00001 or 100000. The number that selects the correct chunk of number line is the exponent, and the number that subdivides the chunk is the mantissa. In other words, 1 and 2 have exactly as many floating point numbers between them as 1024 and 2048, or 0.125 and 0.25.

Now you might reasonably ask: "well, why do they use powers of two for their chunks? If they used powers of ten, like humans do, regular common human numbers would be much more cleanly representable." Well, for one, computers simply naturally work most efficiently when you're working with powers of two. But also, if you need reliable math in a bounded range you *should* be using an integer type anyways, if you need reliable math in an arbitrary range you should be using a bignum type, and if you need reliable fast math in an arbitrary range you should hope for an afterlife cause you ain't getting it here on Earth. Them's the breaks.

(While 'Decimal Floating Point' does exist, its uses are niche. Ultimately, if a programmer whips out a floating point number it's because they either don't care overmuch about accuracy or want things to go really fast, or both - neither of which requires powers of ten.)

2018-08-14 17:21:16 by OleenaNatiras:

I think you could write an entire postgraduate thesis on all the ways JavaScript is utter garbage.

2018-08-14 19:34:28 by qntm:

Everything written here is also true of Python, C#, Java...

2018-08-15 23:48:47 by Sid:

> (0.4+0.5 == 0.3+0.6)
false

While there are fraction libraries, without type classes (to add new summable types) or operator overloading, they just look ugly.

2018-08-16 16:16:15 by Ben:

"> (0.4+0.5 == 0.3+0.6) false" I’m freaking the hell out man!

2018-08-16 17:43:02 by OleenaNatiras:

@qntm - that is a fair point. At least in other languages you can drop to higher precision fairly easily. Say, in C#: (0.4M+0.5M == 0.3M+0.6M) == true; It's some of the other stuff JS does that's garbage. (Although there's a special level of Hell reserved for PHP type coercion). And there seem to be a hell of a lot of developers using it when other tools are better for the job at hand.

2018-08-19 13:12:21 by Sid:

"I’m freaking the hell out man!" - Ben Creeping floating point errors and their accumulation have been known to professionals for decades, so it's okay by now. The Patriot Missile Failure is routinely taught in numerical computation courses as an example, too. Just gotta be aware you're using imprecise numbers and always carry a hidden error variable in your mental math :)

2018-08-19 15:11:54 by qntm:

I mean, that's kind of the point I'm trying to make. Floating point numbers themselves are actually always exactly precise. When you write `0.1` it's not "0.1 plus or minus a bit". It's *precisely* 0.1000000000000000055511151231257827021181583404541015625, plus or minus 0. The errors come from parsing the source code, carrying out calculations, and printing results.

2018-08-19 15:39:40 by Sid:

Oh yeah, sorry, good point. It's not the numbers, it's the transformations on them. Readers and printers lie to you in many langs, and calculations round the results off. I'll be still keeping "error holder variables" for the read/write conversions and those nasty subtractions and divisions when I'm too lazy to look into ground truth in bits, haha.

2018-08-19 16:12:07 by Sid:

in my low-precision node.js, by the way:

> z = 10000000000000000555
10000000000000000000
> z.toPrecision(21)
'10000000000000000000.0'
> z.toPrecision(30)
RangeError: toPrecision() argument must be between 1 and 21
    at Number.toPrecision (native)

So when you write `10000000000000000555`, it's precisely ten quintillions :D

2018-08-24 17:27:57 by Phantom Hoover:

It honestly saddens me a bit that we're at the point where you just call floats "Javascript numbers" with no elaboration anywhere.

2018-08-24 18:16:42 by qntm:

Not all floats are JavaScript numbers. Anyway, like I said, it's time for bed.

2018-09-02 12:05:00 by aitap:

Obligatory reference: "What Every Computer Scientist Should Know About Floating-Point Arithmetic"
https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf
https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
(just in case someone needs to read more about this problem)

2018-09-03 14:22:39 by Ingvar:

Hilariously, in Common Lisp, it is true that 1/10 + 2/10 is exactly 3/10, but that is because it also has rational numbers (in parallel to fixed-precision integers, bignums, small floats and larger floats, possibly a few more real-type numbers, as well as the complex correspondents of these). Numbers are basically complicated, but still easier than date-time representations (if nothing else, the latter also needs to have a calendar system attached to make any sense).

2018-10-02 08:59:42 by Antistone:

"JavaScript numbers cannot represent every single possible real number" A bit of trivia: In fact, NO computerized system can represent every single possible real number. The number of possible computer programs is infinite, but it's only COUNTABLY infinite, and there are uncountably many possible reals.

2018-10-10 18:09:18 by donpdonp:

The most detailed yet easy to understand writeup of javascript's treatment of number values - thank you! I'll be pointing people to this in the future. Pointing out both steps of error loss, interpreting the source and performing the calculation, was the first time I've seen that written about.

2018-10-11 23:15:37 by Java Dope:

As mentioned, this is IEEE 754 and you can see it in many languages, for example Java.

2019-02-20 06:21:36 by qntnn:

Lol

2019-02-25 00:37:59 by David S:

> [Super Mario 64 menu theme] Looks like someone's been watching pannenkoek explain obscure inner details of a video game... He's kind of like the Carl Sagan of Mario 64: a popularizer of a scientific field that explains not just the facts but also the burning desire for knowledge that drives their discovery, however difficult or tedious.

2019-04-13 01:42:02 by Lambda Fairy:

> A bit of trivia: In fact, NO computerized system can represent every single possible real number. The number of possible computer programs is infinite, but it's only COUNTABLY infinite, and there are uncountably many possible reals. Note that while the set of real numbers can't be represented on a computer, the set of *computable* real numbers can. And this set has all the real numbers that normal people care about. The technique is called "exact real arithmetic" if you'd like to learn more yourself.

2019-06-25 01:15:36 by @rskurat:

but exact 1 divided by exact 10 (which is 8 * 1.25) gives what? the 0.1 approximation?

2019-06-25 01:24:25 by qntm:

Yes, `1 / 10 === 0.1` returns `true` so that's the exact same number again. (10 is exact, because it is an integer multiple of a power of 2. The integer is 10, and the power of 2 is 2^0 = 1.)

2019-08-27 14:06:56 by George Langham:

I do always hate when people come into a thread just to say "and that's why X language is garbage". All languages are garbage, cause they are trying to work with imperfect systems. Heck, it's an often-stated problem of binary computing that you can have accuracy or speed, but you can't have both.

2019-09-06 17:45:06 by John:

I feel like the realization of the nature of floating point is something every programmer goes through once in their lives. When designing their first app that deals with money they think, "Well, I need to represent $1.33, and that has a decimal point, so I need a float". Then they start to encounter all of these errors and think their language sucks.

The reality, that we have all learned through school or through life, is that floating point math needs careful thought before you can just use it; even then, you can run into issues like this. If you want to represent decimals exactly, you need to find libraries that will help you do this, or you need to "roll your own", for example, store money as 'cents' in an int.

I've still encountered this in industry too: "I couldn't find a decimal field in Redshift so I've been importing all of our daily clicks sales for the last year into a float field, there's 10 billion rows worth"

2020-05-17 18:23:25 by 0.30000000000000004:

All hardware sucks, all software sucks

2021-03-30 08:43:52 by Cas722ey:

Doing a physics assessment and you have no idea how much this stuffed with my head until I found this. Thanks for the help!

2021-03-30 08:44:22 by Did this work?:

<strong>Strong!</strong>

2021-04-12 02:18:59 by me:

Cool

2021-04-12 02:20:15 by me (again):

Didn't think that would work. :)

2021-04-12 15:28:06 by another me:

https://0.30000000000000004.com/
