I'm a PHP developer and entirely new to Python. I installed it (version
2.5.2, from the Debian repos) today at the persuasion of a friend, who is
a Python addict.
The first thing I typed into it was 3.2*3 (don't ask why I typed *that*,
I don't know, I just did). And the answer wasn't 9.6.
Here it is:
>>> 3.2*3
9.6000000000000014
So I became curious...
>>> 3.21*3
9.629999999999999
>>> (3.2*3)*2
19.200000000000003
... and so on ...
After that I tried the Windows version (3.1rc2), and...
>>> 3.2*3
9.600000000000001
I wasn't particularly good in math in school and university, but I'm
pretty sure that 3.2*3 is 9.6.
Cheers,
Bojan
Hi Bojan,
This is a FAQ. Take a look at:
http://docs.python.org/tutorial/floatingpoint.html
and let us know whether that explains things to your
satisfaction.
Mark
>
> I wasn't particularly good in math in school and university, but I'm
> pretty sure that 3.2*3 is 9.6.
>
It's not math, it's the floating-point representation of numbers - and
its limited accuracy.
Type 9.6 and you'll get 9.5999999999999996
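For the curious, you can see the exact value that actually gets stored for
3.2 with the standard fractions module (a quick sketch, using recent
Pythons where Fraction accepts a float directly):

```python
from fractions import Fraction

# Fraction(float) recovers the exact value the hardware stores:
# the closest 53-bit binary fraction to 3.2. Note the denominator
# is a power of 2 (2**50), not a power of 10.
print(Fraction(3.2))
# -> 3602879701896397/1125899906842624
```

Since that fraction isn't exactly 16/5, every arithmetic result built
from it carries the same tiny error.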
--
Tomasz Zielinski
http://pyconsultant.eu
<snip>
>>>> 3.2*3
> 9.600000000000001
>
> I wasn't particularly good in math in school and university, but I'm
> pretty sure that 3.2*3 is 9.6.
Yes -- in this world. But in the inner workings of computers, 3.2 isn't
exactly representable in binary. This is a FAQ.
ActivePython 2.6.2.2 (ActiveState Software Inc.) based on
Python 2.6.2 (r262:71600, Apr 21 2009, 15:05:37) [MSC v.1500 32 bit
(Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> 3.2
3.2000000000000002
>>>
Emile
This is almost certainly nothing to do with Python per se, but with the
floating point implementation of your hardware. Floating point
arithmetic on computers is not accurate to arbitrary precision. If you
want such precision, use a library that supports it or make your own
translations to and from appropriate integer sums (but it's going to be
slower).
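The integer-sums trick looks something like this (a sketch, with made-up
money amounts; the idea is to keep values in integer cents and only
format at the edges):

```python
# Exact "decimal" arithmetic using scaled integers (hundredths).
price_cents = 320              # 3.20 stored as 320 hundredths
total_cents = price_cents * 3  # integer math is exact: 960

# Format back to a decimal string only for display.
print('%d.%02d' % divmod(total_cents, 100))  # prints 9.60
```

Addition, subtraction and multiplication stay exact this way; division
needs an explicit rounding decision, just as with any fixed-point scheme.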
It looks like 3.2*3 == 9.6 is false in PHP too, by the way (not
that I know any PHP, so I could well be missing
something...)
bernoulli:py3k dickinsm$ php -a
Interactive mode enabled
<?
$a = 3.2*3;
$b = 9.6;
var_dump($a);
float(9.6)
var_dump($b);
float(9.6)
var_dump($a == $b);
bool(false)
Mark
Hi Mark,
Yes, that explains things to my satisfaction. Now I'm embarrassed that I
didn't know that before.
Thanks,
Bojan
I'm surprised how often people encounter this and wonder about it. Since I
began programming back in the day using C, this is just something I grew
up with (grudging acceptance).
I guess PHP artificially rounds the results or something to make it seem
like it's doing accurate calculations, which is a bit surprising to me.
We all know that IEEE floating point is a horribly inaccurate
representation, but I guess I'd rather have my language not hide that
fact from me. Maybe PHP is using BCD or something under the hood (slow
but accurate).
If you want accurate math, check out other types like what is in the
decimal module:
>>> import decimal
>>> a=decimal.Decimal('3.2')
>>> print a * 3
9.6
> If you want accurate math, check out other types like what is in the
> decimal module:
>
>>>> import decimal
>>>> a=decimal.Decimal('3.2')
>>>> print a * 3
> 9.6
I wish people would stop representing decimal floating point arithmetic as "more
accurate" than binary floating point arithmetic. It isn't. Decimal floating
point arithmetic does have an extremely useful niche: where the inputs have
finite decimal representations and either the only operations are addition,
subtraction and multiplication (e.g. many accounting problems) OR there are
conventional rounding modes to follow (e.g. most of the other accounting problems).
In the former case, you can claim that decimal floating point is more accurate
*for those problems*. But as soon as you have a division operation, decimal
floating point has the same accuracy problems as binary floating point.
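A sketch of the "conventional rounding modes" case (the invoice figures
here are made up for illustration): decimal's quantize lets you pin the
rounding convention explicitly instead of hoping the representation saves
you.

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical invoice line: 19.99 at an 8.25% tax rate.
tax = Decimal('19.99') * Decimal('0.0825')   # exact product: 1.649175

# The accounting convention, stated explicitly: round half up to cents.
rounded = tax.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
print(rounded)  # 1.65
```

That explicitness, not some inherent extra accuracy, is what makes
decimal the right tool for those problems.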
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
After a bit of experimentation on my machine, it *looks* as though PHP
is using the usual hardware floats internally (no big surprise there),
but implicit conversions to string use 14 significant digits. If
Python's repr used '%.14g' internally instead of '%.17g' then we'd see
pretty much the same thing in Python.
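You can check this from Python directly (assuming the usual 64-bit IEEE
doubles):

```python
x = 3.2 * 3

print('%.17g' % x)  # 9.6000000000000014 -- enough digits to round-trip
print('%.14g' % x)  # 9.6 -- what a PHP-style 14-digit display shows

# The two values *print* the same at 14 digits...
print('%.14g' % x == '%.14g' % 9.6)  # True
# ...but they are still different doubles underneath.
print(x == 9.6)                      # False
```

So PHP isn't computing anything more accurately; it's just printing fewer
digits.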
> We all know that IEEE floating point is a horribly inaccurate
> representation [...]
That's a bit extreme! Care to elaborate?
, but I guess I'd rather have my language not hide that
> fact from me. Maybe PHP is using BCD or something under the hood (slow
> but accurate).
>
> If you want accurate math, check out other types like what is in the
> decimal module:
As Robert Kern already said, there really isn't any sense in which
decimal floating-point is any more accurate than binary floating-point,
except that---somewhat tautologically---it's better at representing
decimal values exactly.
The converse isn't true, though, from a numerical perspective: there
are some interesting examples of bad things that can happen with
decimal floating-point but not with binary. For example, given any
two Python floats a and b, and assuming IEEE 754 arithmetic with
default rounding, it's always true that a <= (a+b)/2 <= b, provided
that a+b doesn't overflow. Not so for decimal floating-point:
>>> import decimal
>>> decimal.getcontext().prec = 6 # set working precision to 6 sig figs
>>> (decimal.Decimal('7.12346') + decimal.Decimal('7.12348'))/2
Decimal('7.12345')
Similarly, sqrt(x*x) == x is always true for a positive IEEE 754
double x (again
assuming the default roundTiesToEven rounding mode, and assuming that
x*x neither overflows nor underflows). But this property fails for
IEEE 754-compliant decimal floating-point.
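With Python's decimal module you can exhibit a concrete failure (a sketch
at an artificially small precision to make the effect visible; the same
phenomenon exists at any fixed decimal precision):

```python
from decimal import Decimal, getcontext

getcontext().prec = 3      # tiny working precision, for illustration

x = Decimal('3.34')
y = (x * x).sqrt()         # x*x = 11.1556 rounds up to 11.2,
print(y)                   # and sqrt(11.2) rounds to 3.35, not 3.34
print(y == x)              # False
```

The two rounding errors (in the square and in the square root) line up in
the same direction and push the result across to the next representable
value.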
Mark
--Scott David Daniels
Scott....@Acm.Org
You may not. I do.
http://code.google.com/p/mpmath/
We have the gmpy module, which can do arbitrary-precision floats.
>>> gmpy.pi(600)
mpf('3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446e0',600)
>
> --Scott David Daniels
> Scott.Dani...@Acm.Org
gmpy?
--
Aahz (aa...@pythoncraft.com) <*> http://www.pythoncraft.com/
"as long as we like the same operating system, things are cool." --piranha
And while that's true, to a point, that isn't what Michael or the many others
are referring to when they claim that decimal is more accurate (without any
qualifiers). They are misunderstanding the causes and limitations of the example
"3.2 * 3 == 9.6". You can see a great example of this in the comparison between
new Cobra language and Python:
http://cobra-language.com/docs/python/
In that case, they have a fixed-precision decimal float from the underlying .NET
runtime but still make the claim that it does more accurate arithmetic. While
you may make (completely correct) claims that decimal.Decimal can be more
accurate because of its arbitrary-precision capabilities, that is not the claim
others are making or the one I am arguing against.
Those that failed, learned. You only see those that haven't learnt yet.
Dialogue between two teachers:
T1: Oh, those pupils, I've told them a hundred times! When will they learn?
T2: They did, but there are always new pupils.
TGIF
Uli
(wave and smile)
--
Sator Laser GmbH
Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932
> If you want accurate math, check out other types like what is in the
> decimal module:
>
>>>> import decimal
>>>> a=decimal.Decimal('3.2')
>>>> print a * 3
> 9.6
Not so. Decimal suffers from the exact same problem, just with different
numbers:
>>> import decimal
>>> x = decimal.Decimal('1')/decimal.Decimal('3')
>>> 3*x == 1
False
Some numbers can't be represented exactly in base 2, and some numbers
can't be represented exactly in base 10.
--
Steven
>> We all know that IEEE floating point is a horribly inaccurate
>> representation [...]
>
> That's a bit extreme! Care to elaborate?
Well, 0.1 requires an infinite number of binary places, and IEEE floats
only have a maximum of 53 or so, so that implies that floats are
infinitely inaccurate...
*wink*
--
Steven
But since 10 = 2 * 5, all numbers that can be finitely represented in
binary can be represented finitely in decimal as well, with the exact
same number of places for the fractional part (and no more digits
than the binary representation in the integer part).
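A quick way to see this in Python (2.7 or later, where Decimal accepts a
float directly): converting a float to Decimal is exact, with no rounding,
so it reveals the finite decimal expansion of the stored binary value.

```python
from decimal import Decimal

# The exact decimal value of the binary double nearest to 0.1 --
# finite, as promised, just long:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```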
OK, so base 30 is the obvious choice, digits and letters, and 1/N works
for n in range(1, 7) + range(8, 11). Gödel numbers, anyone? :-)
--Scott David Daniels
Scott....@Acm.Org
To get even more working, use real rational numbers: p/q represented
by the pair of numbers (p, q) with p, q natural numbers. Then 1/N works
for every N, and up to any desired precision.
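Python ships exactly this as the standard fractions module (a quick
sketch):

```python
from fractions import Fraction

x = Fraction(1, 3)
print(3 * x == 1)           # True: rationals are exact under + - * /

print(Fraction('3.2') * 3)  # 48/5, i.e. exactly 9.6
```

The trade-off is speed and growth: numerators and denominators can get
large, which is why floats remain the default.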
--
André Engels, andre...@gmail.com
Unfortunately, I keep seeing people who claim to be old hands at floating point
making these unlearned remarks. I have no issue with neophytes like the OP
expecting different results and asking questions. It is those who answer them
with an air of authority that need to take a greater responsibility for knowing
what they are talking about. I lament the teachers, not the pupils.
True. Poor choice of words on my part. No matter what representation
one chooses for numbers, we can remember that digits != precision.
That's why significant digits were drilled into our heads in physics!
That's the reason IEEE actually works out for most things that we need
floating point for.