Floating point numbers are usually normalized. Consider, for example, scientific notation as most of us learned it in school. You always scale the exponent so there's exactly one digit before the decimal point. For example, instead of 123.456, you write 1.23456x10^2.
Floating point on a computer is normally handled (almost¹) the same way: numbers are normalized so there's exactly one digit before the binary point (a binary point, since computers work in binary rather than decimal). There's one difference though: in binary, the digit before the binary point must be a 1. Since it's always a 1, there's no real need to store that bit. To save a bit of storage in each floating point number, that 1 bit is implicit instead of being stored.
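To make the implicit bit concrete, here's a minimal C sketch that pulls apart the bit pattern of an IEEE 754 single-precision float and reconstructs the significand, adding back the leading 1 that isn't stored. The field widths (1 sign bit, 8 exponent bits, 23 fraction bits) and the bias of 127 are the standard single-precision layout:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 6.5f;                   /* 6.5 = 1.101 x 2^2 in binary */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* view the raw bit pattern safely */

    uint32_t sign     = bits >> 31;
    uint32_t exponent = (bits >> 23) & 0xFF;   /* stored with a bias of 127 */
    uint32_t fraction = bits & 0x7FFFFF;       /* the 23 bits actually stored */

    /* For a normalized number, the real significand is 1.fraction --
       the leading 1 is implicit, so we add it back here. */
    double significand = 1.0 + fraction / (double)(1 << 23);

    printf("sign=%u  exponent=%d  significand=%.6f\n",
           (unsigned)sign, (int)exponent - 127, significand);
    /* prints: sign=0  exponent=2  significand=1.625000
       and indeed 1.625 x 2^2 == 6.5 */
    return 0;
}
```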
As usual, there's just a bit more to the situation than that though. The main difference is denormalized numbers. Consider, for example, if you were doing scientific notation but you could only use exponents from -99 to +99. If you wanted to store a number like, say, 1.234*10^-102, you wouldn't be able to do that directly, so it would probably just get rounded down to 0.
Denormalized numbers give you a way to deal with that. Using a denormalized number, you'd store that as 0.001234*10^-99. Assuming (as is normally the case on a computer) that the number of digits for the mantissa and exponent are each limited, this loses some precision, but still avoids throwing away all the precision and just calling it 0.
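A short C sketch of the same idea: dividing the smallest normal float by 2 produces a denormalized (subnormal) value instead of rounding to 0. This assumes the platform actually supports subnormals; some FPUs flush them to zero in fast-math modes. FLT_TRUE_MIN was added in C11, so on older compilers you may need to drop that line:

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    float smallest_normal = FLT_MIN;        /* about 1.17549e-38 */
    float subnormal = smallest_normal / 2;  /* too small to keep normalized */

    /* Instead of rounding to zero, the hardware stores it as a
       denormalized number: exponent field all zeros, and the
       significand is 0.fraction -- no implicit leading 1. */
    printf("%g\n%g\n", smallest_normal, subnormal);
    /* prints roughly: 1.17549e-38 and 5.87747e-39 */

    /* C11: the smallest positive subnormal single, about 1.4e-45 */
    printf("smallest positive subnormal: %g\n", FLT_TRUE_MIN);
    return 0;
}
```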
¹ Technically, there are minor differences, but they don't affect the basic understanding involved.