I'm wondering what happens when casting from a floating point type to an unsigned integer type in C when the value can't be accurately represented by the integer type in question. Take for instance
void func(void)
{
    float a = 1E10;
    unsigned b = a;
}
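For completeness, here is a self-contained version of that test (my own sketch, with a printf added so the converted value is visible; the 1E10 literal and the variable names come from the snippet above):

#include <stdio.h>

int main(void)
{
    float a = 1E10;     /* far larger than UINT_MAX for a 32-bit unsigned */
    unsigned b = a;     /* the out-of-range conversion this question is about */
    printf("%u\n", b);  /* prints 1410065408 on the system described below */
    return 0;
}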
The value of b I get on my system (where unsigned can represent values from 0 to 2^32 - 1) is 1410065408. This seems sensible to me because it's simply the lowest-order bits of the result of the cast.
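That reasoning checks out arithmetically: 1E10 is exactly representable as a float (10^10 = 9765625 x 2^10, and 9765625 fits in a float's 24-bit significand), and 10^10 reduced modulo 2^32 is indeed 1410065408. A quick check (my own sketch, not part of the original test):

#include <stdio.h>

int main(void)
{
    unsigned long long value   = 10000000000ULL;  /* 10^10, the value of a */
    unsigned long long modulus = 4294967296ULL;   /* 2^32 */
    printf("%llu\n", value % modulus);            /* prints 1410065408 */
    return 0;
}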
I believe the behavior of operations such as these is undefined by the standard. Am I wrong? What can I expect in practice if I do things like this?
Also, what happens with signed types? If b is of type int, I get -2147483648, which doesn't really make sense to me.
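For what it's worth, -2147483648 is INT_MIN when int is 32 bits, so the result looks less like "the low-order bits" and more like some fixed out-of-range value. A small sketch of the signed variant (my addition, assuming a 32-bit int):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    float a = 1E10;
    int b = a;  /* out of range for int as well */
    /* On the system described in the question both values are -2147483648 */
    printf("b = %d, INT_MIN = %d\n", b, INT_MIN);
    return 0;
}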