I imagine this is a classic floating point precision question, but I am trying to wrap my head around this result: running 1//0.01 in Python 3.7.5 yields 99.0. I imagine it is an expected result, but is there any way to decide when it is safer to use int(1/f) rather than 1//f?
If this were division with real numbers, 1//0.01 would be exactly 100. Since they are floating-point approximations, though, 0.01 is slightly larger than 1/100, meaning the quotient is slightly smaller than 100. It is this 99.something value that is then floored to 99.
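The claim that 0.01 is slightly larger than 1/100 can be verified directly with the fractions module, which exposes the exact rational value a float holds (a small sketch using only the standard library):

```python
from fractions import Fraction

# The literal 0.01 becomes the nearest representable binary fraction,
# which is slightly larger than the true rational 1/100.
print(Fraction(0.01) > Fraction(1, 100))   # True

# So the true quotient 1/0.01 is just below 100,
# and floor division lands on 99.
print(1 // 0.01)                           # 99.0
```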
The reasons for this outcome are as you state, and are explained in Is floating point math broken? and many other similar Q&As.
When you know the number of decimals of both numerator and denominator, a more reliable way is to multiply those numbers first so they can be treated as integers, and then perform integer division on them. So in your case, 1//0.01 should first be converted to 1*100 // (0.01*100), which is 100.
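As a quick check of this scaling trick (plain Python, no extra assumptions):

```python
# Direct floor division floors 99.999... down to 99.
print(1 // 0.01)                 # 99.0

# Scaling both operands by 100 first: 0.01 * 100 rounds to exactly 1.0,
# so the division becomes exact.
print(1 * 100 // (0.01 * 100))   # 100.0
```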
In more extreme cases you can still get "unexpected" results, and it might be necessary to add a round call to the numerator and denominator before performing the integer division:

```python
1 * 100000000000 // round(0.00000000001 * 100000000000)
```
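This idea can be packaged into a small helper; the function name and signature here are my own invention for illustration, not part of any library:

```python
def floor_div_decimals(a, b, decimals):
    """Floor-divide a by b, treating both as numbers with at most
    `decimals` decimal places. round() strips the tiny float error
    introduced by the scaling. (Hypothetical helper, not stdlib.)"""
    scale = 10 ** decimals
    return round(a * scale) // round(b * scale)

print(floor_div_decimals(1, 0.01, 2))            # 100
print(floor_div_decimals(1, 0.00000000001, 11))  # 100000000000
```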
But, if this is about working with fixed decimals (money, cents), then consider working with cents as unit, so that all arithmetic can be done as integer arithmetic, and only convert to/from the main monetary unit (dollar) when doing I/O.
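A minimal sketch of the cents-as-unit approach (the variable names and figures are illustrative):

```python
# Keep money as integer cents; convert only at the I/O boundary.
price_cents = 1999                        # $19.99
quantity = 3
total_cents = price_cents * quantity      # exact integer arithmetic
dollars, cents = divmod(total_cents, 100)
print(f"${dollars}.{cents:02d}")          # $59.97
```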
Or alternatively, use a library for decimals, like decimal, which:
...provides support for fast correctly-rounded decimal floating point arithmetic.
```python
from decimal import Decimal

cent = Decimal(1) / Decimal(100)  # Contrary to floating point, this is exactly 0.01
print(Decimal(1) // cent)         # 100
```
What you have to take into account is that // is the floor operator, and as such you should first think as if you had an equal probability of landing on 100 as on 99 (*), because the operation yields 100 ± epsilon with epsilon > 0, given that the chances of getting exactly 100.00..0 are extremely low.
You can actually see the same with a minus sign:

```python
>>> 1//.01
99.0
>>> -1//.01
-100.0
```

and you should be as (un)surprised.
On the other hand, int(-1/.01) performs the division first and then applies int() to the number, which is not a floor but a truncation toward 0! Meaning that in that case:

```python
>>> 1/.01
100.0
>>> -1/.01
-100.0
>>> int(1/.01)
100
>>> int(-1/.01)
-100
```
Rounding, though, would give you your expected result for this operator because, again, the error is small for those figures.
(*) I am not saying that the probability is the same; I am just saying that, a priori, when you perform such a computation with floating-point arithmetic, 100 ± epsilon is an estimate of what you are getting.
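The three behaviors discussed here (flooring, truncation toward zero, rounding to nearest) can be compared side by side:

```python
import math

print(-1 // 0.01)        # -100.0  floor of -99.999...
print(int(-1 / 0.01))    # -100    -1/0.01 rounds to exactly -100.0 first
print(math.floor(-1.5))  # -2      floor goes toward negative infinity
print(int(-1.5))         # -1      int() truncates toward zero
print(round(1 / 0.01))   # 100     rounding gives the expected result
```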
If you execute the following:

```python
from decimal import *

num = Decimal(1) / Decimal(0.01)
print(num)
```

The output will be:

```
99.99999999999999791833182883
```

This is how 1/0.01 is internally represented, so rounding it down with // will give 99.
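Note that the artifact comes from constructing the Decimal from the float 0.01; building it from a string avoids it (a small sketch using only the decimal module):

```python
from decimal import Decimal

# Decimal(0.01) inherits the float's binary approximation:
print(Decimal(0.01))      # 0.01000000000000000020816681711721685...

# Decimal('0.01') is exactly one hundredth, so the quotient is exact:
exact = Decimal(1) / Decimal('0.01')
print(float(exact))       # 100.0
```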
Floating point numbers can't represent most decimal numbers exactly, so when you type a floating point literal you actually get an approximation of that literal. The approximation may be larger or smaller than the number you typed.
You can see the exact value of a floating point number by casting it to Decimal or Fraction.
```python
>>> from decimal import Decimal
>>> Decimal(0.01)
Decimal('0.01000000000000000020816681711721685132943093776702880859375')
>>> from fractions import Fraction
>>> Fraction(0.01)
Fraction(5764607523034235, 576460752303423488)
```
We can use the Fraction type to find the error caused by our inexact literal.
```python
>>> float((Fraction(1)/Fraction(0.01)) - 100)
-2.0816681711721685e-15
```
We can also find out how granular double precision floating point numbers around 100 are by using nextafter from numpy.
```python
>>> from numpy import nextafter
>>> nextafter(100, 0) - 100
-1.4210854715202004e-14
```
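If numpy is not at hand, the same probe is available in the standard library as math.nextafter (Python 3.9+):

```python
import math

# Distance from 100.0 to the next representable double toward 0:
print(math.nextafter(100.0, 0.0) - 100.0)   # -1.4210854715202004e-14
```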
From this we can surmise that the nearest floating point number to 1/0.01000000000000000020816681711721685132943093776702880859375 is in fact exactly 100.
The difference between 1//0.01 and int(1/0.01) is the rounding. 1//0.01 rounds the exact result down to the next whole number in a single step, so we get a result of 99. int(1/0.01), on the other hand, rounds in two stages: first it rounds the exact result to the nearest double precision floating point number (which is exactly 100), then it rounds that floating point number down to the next integer (which is again exactly 100).
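The two code paths can be compared directly:

```python
# One-step flooring of the (slightly-below-100) exact quotient:
print(1 // 0.01)       # 99.0

# Two-step: 1/0.01 first rounds to exactly 100.0, then int() truncates:
print(int(1 / 0.01))   # 100
```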