I'm writing a specification that describes some arithmetic that will be performed by software. The intention is that I can hand this spec to two different programmers (who may use different languages and/or architectures), and when I feed the same input into both programs, they will always spit out the same result.
For instance, if the spec says "add 0.1 to the result", this can be a problem. Depending on the floating-point storage method, 0.1 could be stored as 0.100000001490116... (32-bit) or 0.100000000000000005551... (64-bit), so two implementations can end up with slightly different results.
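To illustrate (Python here purely for demonstration; any language using IEEE 754 doubles shows the same thing), printing the stored value of 0.1 with enough digits reveals the rounding:

```python
# The literal 0.1 is rounded to the nearest IEEE 754 binary64 value,
# which is not exactly one tenth.
print(f"{0.1:.20f}")     # 0.10000000000000000555
print(0.1 + 0.2 == 0.3)  # False: accumulated rounding differs from the literal 0.3
```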
What is the best way to specify the rules here so that things are consistent across the board? Can I just say "All numbers must be represented in IEEE 754 64-bit format"? Or is it better to say something like "All numbers must be first scaled by 1000 and computed using fixed-point arithmetic"?
This is a little different from most floating-point questions I've come across since the issue is repeatability across platforms, not the overall precision.
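As a sketch of the second option (the names and the scale factor of 1000 are just placeholders from my own question, not a finished spec), fixed-point arithmetic can be written entirely in terms of integers, which every mainstream platform handles identically:

```python
from decimal import Decimal

SCALE = 1000  # spec-defined: all quantities carried as integer thousandths


def to_fixed(x: str) -> int:
    """Parse a decimal string into scaled integer units.

    Parsing from the string (not from a float) avoids any binary
    rounding before the scaling happens.
    """
    return int(Decimal(x) * SCALE)


def fixed_add(a: int, b: int) -> int:
    return a + b


def fixed_mul(a: int, b: int) -> int:
    # The raw product carries SCALE^2, so it must be rescaled.
    # The spec would have to pin down the rounding rule; here the product
    # is offset by half the scale and floor-divided (round half up for
    # non-negative values), which is only one of several possible choices.
    return (a * b + SCALE // 2) // SCALE


print(fixed_add(to_fixed("2.25"), to_fixed("0.5")))   # 2750, i.e. 2.75 in thousandths
print(fixed_mul(to_fixed("1.5"), to_fixed("0.3")))    # 450,  i.e. 0.45 in thousandths
```

The point of the sketch is that once every operation is defined over integers (with an explicit rounding rule for anything that rescales), there is nothing left for the platform to decide, which is the repeatability property I'm after.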