
c - Why do I need 17 significant digits (and not 16) to represent a double?

Can someone give me an example of a double-precision floating-point number that needs more than 16 significant decimal digits to represent it?

I have found in this thread that sometimes you need up to 17 digits, but I am not able to find an example of such a number (16 seems enough to me).

Can somebody clarify this?

Thanks a lot!


1 Answer


My other answer was dead wrong.

#include <stdio.h>

int
main(int argc, char *argv[])
{
    /* A double's significand holds 53 bits, so every integer up to
       2^53 is exactly representable; between 2^53 and 2^54 only the
       even integers are. a and b are both even and below 2^54. */
    unsigned long long n = 1ULL << 53;
    unsigned long long a = 2*(n-1);  /* 2*(2^53 - 1) = 2^54 - 2 */
    unsigned long long b = 2*(n-2);  /* 2*(2^53 - 2) = 2^54 - 4 */
    printf("%llu\n%llu\n%d\n", a, b, (double)a == (double)b);
    return 0;
}

Compile and run to see:

18014398509481982
18014398509481980
0

a and b are just 2*(2^53-1) and 2*(2^53-2).

Those are 17-digit base-10 numbers, and rounded to 16 significant digits they become identical. Yet each fits exactly in the 53 bits of a double's significand. So cast a and b to double and you have your counter-example: two distinct doubles that 16 decimal digits cannot tell apart.
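To see the effect directly, here is a minimal sketch (not part of the original answer, and assuming IEEE-754 binary64 doubles) that prints both values with printf's %g conversion at 16 and then 17 significant digits. At 16 digits the two doubles produce the same string; at 17 they differ:

#include <stdio.h>

int
main(void)
{
    /* assumes IEEE-754 binary64: both values are exactly representable */
    double a = 18014398509481982.0;  /* 2*(2^53 - 1) */
    double b = 18014398509481980.0;  /* 2*(2^53 - 2) */

    /* 16 significant digits: both print as 1.801439850948198e+16 */
    printf("%.16g\n%.16g\n", a, b);

    /* 17 significant digits: 18014398509481982 vs 18014398509481980 */
    printf("%.17g\n%.17g\n", a, b);

    return 0;
}

This is why round-tripping a double through decimal text requires 17 significant digits in the worst case (DBL_DECIMAL_DIG in C11's <float.h>): with only 16, distinct doubles like these collapse to the same decimal string.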

