When you pass a float as an argument to a variadic function (like printf()), it is promoted to a double, which is twice as large as a float (at least on most platforms).
One way to get around this is to reinterpret the bytes of the float as an unsigned int when passing it to printf():
printf("hex is %x", *(unsigned int*)&f);
This is also more correct, since printf() uses the format specifiers to determine the size of each argument.
Technically, this solution violates the strict aliasing rule. You can avoid that by copying the bytes of the float into an unsigned int with memcpy() (declared in <string.h>) and then passing that to printf():
unsigned int ui;
memcpy(&ui, &f, sizeof(ui));
printf("hex is %x", ui);
Both of these solutions rely on the assumption that sizeof(unsigned int) == sizeof(float), which is the case on many platforms but isn't guaranteed by the standard.