This really has nothing to do with Python - you'd see the same behavior in any language using your hardware's binary floating-point arithmetic. First read the tutorial section on floating-point arithmetic (https://docs.python.org/3/tutorial/floatingpoint.html).
After you read that, you'll better understand that you're not adding one one-hundredth in your code. This is exactly what you're adding:
>>> from decimal import Decimal
>>> Decimal(.01)
Decimal('0.01000000000000000020816681711721685132943093776702880859375')
That string shows the exact decimal value of the binary floating-point ("double precision" in C) approximation to the exact decimal value 0.01. The thing you're really adding is a little bigger than 1/100.
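You can watch that tiny excess pile up. On a typical IEEE-754 double setup, adding .01 a hundred times does not land exactly on 1.0:

>>> total = 0.0
>>> for i in range(100):
...     total += .01
...
>>> total
1.0000000000000007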
Controlling floating-point numeric error is the field called "numerical analysis", and it's a very large and complex topic. So long as you're startled by the fact that floats are just approximations to decimal values, use the decimal module. That will take away a world of "shallow" problems for you. For example, given this small modification to your function:
from decimal import Decimal as D

def sqrt(num):
    root = D(0)
    while root * root < num:
        root += D("0.01")  # a string converts exactly - no binary float error
    return root
then:
>>> sqrt(4)
Decimal('2.00')
>>> sqrt(9)
Decimal('3.00')
It's not really more accurate, but may be less surprising in simple examples because now it's adding exactly one one-hundredth.
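Note the quoted string in D("0.01"): constructing a Decimal from a string converts exactly, while D(.01) would inherit the float's error shown at the start:

>>> D("0.01")
Decimal('0.01')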
An alternative is to stick to floats and add something that is exactly representable as a binary float: values of the form I/2**J. For example, instead of adding 0.01, add 0.125 (1/8) or 0.0625 (1/16).
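Here's a sketch of the same brute-force loop using a step of 1/16 (the name sqrt_float is just for illustration). Every multiple of 1/16 in this range is exactly representable, so no error accumulates and the loop lands exactly on 2.0 and 3.0:

def sqrt_float(num):
    root = 0.0
    step = 0.0625  # 1/16 == 1/2**4 - exactly representable as a binary float
    while root * root < num:
        root += step  # every partial sum here is exact
    return root

>>> sqrt_float(4)
2.0
>>> sqrt_float(9)
3.0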
Then look up "Newton's method" for computing square roots ;-)
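To whet your appetite, here's a minimal sketch (the name newton_sqrt and the stopping tolerance are my own illustrative choices): repeatedly replace a guess with the average of the guess and num / guess, which roughly doubles the number of correct digits per iteration:

def newton_sqrt(num, tol=1e-12):
    # assumes num > 0; the tolerance is an arbitrary choice for illustration
    guess = num / 2.0  # any positive starting guess works
    while True:
        better = (guess + num / guess) / 2.0  # Newton step for f(x) = x*x - num
        if abs(better - guess) < tol:
            return better
        guess = better

>>> newton_sqrt(9)
3.0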