The format is actually quite simple, and not particularly different from the IEEE 754 binary32 format (it's actually simpler: it doesn't support any of the "magic" NaN/Inf values and has no subnormal numbers, because the mantissa here has an implicit 0 on the left instead of an implicit 1).
As Wikipedia puts it,
> The number is represented as the following formula: (−1)^sign × 0.significand × 16^(exponent − 64).
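For instance, −118.625 (the worked example from the Wikipedia article) is stored as the bytes 0xC2 0x76 0xA0 0x00: the sign bit is 1, the biased exponent is 0x42 = 66, and the fraction is 0x76A000, so the value is −(0x76A000 / 16^6) × 16^(66 − 64) = −0.46337890625 × 256 = −118.625.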
If we imagine that the bytes you read are in a `uint8_t b[4]`, then the resulting value should be something like:
```c
uint32_t mantissa = (b[1]<<16) | (b[2]<<8) | b[3];  /* 24-bit hexadecimal fraction */
int exponent = (b[0] & 127) - 64;                   /* 7-bit exponent, biased by 64 */
double ret = mantissa * exp2(-24 + 4*exponent);     /* == mantissa/16^6 * 16^exponent */
if(b[0] & 128) ret *= -1.;                          /* top bit of b[0] is the sign */
```
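Putting it all together, here is a minimal self-contained sketch (the wrapper name `ibm32_to_double` is just my choice) that decodes the −118.625 example from above:

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

double ibm32_to_double(const uint8_t b[4])
{
    uint32_t mantissa = ((uint32_t)b[1] << 16) | ((uint32_t)b[2] << 8) | b[3];
    int exponent = (b[0] & 127) - 64;
    double ret = mantissa * exp2(-24 + 4*exponent);
    return (b[0] & 128) ? -ret : ret;
}

int main(void)
{
    const uint8_t b[4] = { 0xC2, 0x76, 0xA0, 0x00 };  /* -118.625 in IBM single precision */
    printf("%g\n", ibm32_to_double(b));               /* prints -118.625 */
    return 0;
}
```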
Notice that here I calculated the result in a `double`, as the range of an IEEE 754 `float` is not enough to represent a same-sized IBM single-precision value (and the opposite holds as well). Also, keep in mind that, due to endianness issues, you may have to reverse the indices in my code above.
Edit: @Eric Postpischil correctly points out that, if you have C99 or POSIX 2001 available, instead of `mantissa * exp2(-24 + 4*exponent)` you should use `ldexp(mantissa, -24 + 4*exponent)`, which should be more precise (and possibly faster) across implementations.
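With that change, the computation line of the snippet above becomes:

```c
/* scales by a power of two exactly; the 24-bit mantissa also converts to double exactly */
double ret = ldexp(mantissa, -24 + 4*exponent);
```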