We are working on a library of numeric routines in C. We are not sure yet whether we will work with single precision (float) or double precision (double), so we've defined a type SP as an alias until we decide:
typedef float SP;
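For context, a routine in the library might look roughly like the sketch below. The function sp_dot and its body are invented for illustration; only the SP alias itself is from our code.

#include <stddef.h>

typedef float SP;  /* the alias in question */

/* Invented example: a dot product written against SP, so that switching
 * between single and double precision is a one-line change. */
static SP sp_dot(const SP *a, const SP *b, size_t n)
{
    SP acc = (SP)0;
    for (size_t i = 0; i < n; ++i)
        acc += a[i] * b[i];
    return acc;
}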
When we run our unit tests, they all pass on my machine (a 64-bit Ubuntu) but they fail on my colleague's (a 32-bit Ubuntu that was mistakenly installed on a 64-bit machine).
Using Git's bisect command, we found the exact diff that began yielding different results between his machine and mine:
-typedef double SP;
+typedef float SP;
In other words, going from double precision to single precision yields results that differ between our two machines (about 1e-3 relative difference in the worst cases).
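For reference, the ~1e-3 figure is measured roughly as follows; rel_diff is a hypothetical helper, shown only to pin down what we mean by relative difference between the two machines' outputs.

#include <math.h>

/* Hypothetical helper: relative difference |a - b| scaled by the
 * magnitude of b, with a guard for b == 0. The worst case over all
 * test outputs is about 1e-3. */
static double rel_diff(double a, double b)
{
    if (b == 0.0)
        return fabs(a);
    return fabs(a - b) / fabs(b);
}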
We are fairly certain that we are never comparing unsigned ints to negative signed ints anywhere.
Why would a library of numeric routines yield different results on a 32-bit operating system and on a 64-bit system?
CLARIFICATION
I'm afraid I might not have been clear enough: we have Git commit 2f3f671, which uses double precision and whose unit tests pass equally well on both machines. Then we have Git commit 46f2ba, where we changed to single precision; there the tests still pass on the 64-bit machine but fail on the 32-bit one.
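In case it helps to picture the setup, the comparison between the two machines amounts to something like the sketch below: build the identical source on each machine, run it, and compare the printed values. The arithmetic here is made up; our real routines are more involved.

#include <stdio.h>

/* Made-up stand-in for one of our routines: accumulate some
 * single-precision products and print the result with enough digits
 * that any discrepancy between the two machines shows up. */
int main(void)
{
    float x = 1.0f / 3.0f;
    float sum = 0.0f;
    for (int i = 0; i < 1000; ++i)
        sum += x * x;
    printf("%.9g\n", sum);
    return 0;
}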