
C question: off_t (and other signed integer types) minimum and maximum values

I occasionally come across an integer type (e.g. the POSIX signed integer type off_t) where it would be helpful to have macros for its minimum and maximum values, but I don't know how to write ones that are truly portable.


For unsigned integer types I had always thought this was simple: 0 for the minimum and ~0 for the maximum. I have since read several SO threads which suggest using -1 instead of ~0 for portability. An interesting thread with some contention is here:
c++ - Is it safe to use -1 to set all bits to true? - Stack Overflow

However, even after reading about this issue I'm still confused. Also, I'm looking for something that is both C89 and C99 compliant, so I don't know if the same methods apply. Say I had a type uint_whatever_t. Couldn't I just cast 0 to the type first and then take the bitwise complement? Would this be OK?:

#define UINT_WHATEVER_T_MAX ( ~ (uint_whatever_t) 0 )
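A quick sketch of why the cast alone doesn't settle it (my own illustration, assuming a typical platform where int is 32 bits and <stdint.h> is available): for a type narrower than int, the operand of ~ promotes to int before the complement is taken, so the macro's value is the int -1 rather than the type's maximum. The (t)-1 form does not have this problem.

#include <stdint.h>
#include <stdio.h>

#define COMPLEMENT_MAX(t) (~(t)0)   /* the proposed macro  */
#define CAST_MAX(t)       ((t)-1)   /* the -1 alternative  */

int main(void) {
    /* unsigned int does not promote, so the two forms agree */
    printf("%d\n", COMPLEMENT_MAX(uint32_t) == CAST_MAX(uint32_t)); /* 1 */
    /* uint16_t promotes to int; ~0 is then -1, not 0xFFFF */
    printf("%d\n", COMPLEMENT_MAX(uint16_t) == CAST_MAX(uint16_t)); /* 0 */
    return 0;
}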


Signed integer types look like they'll be a tougher nut to crack. I've seen several possible solutions, but only one appears to be portable; either that, or it's incorrect. I found it while googling for an OFF_T_MAX and OFF_T_MIN. Credit to Christian Biere:

#define MAX_INT_VAL_STEP(t) \
    ((t) 1 << (CHAR_BIT * sizeof(t) - 1 - ((t) -1 < 1)))

#define MAX_INT_VAL(t) \
    ((MAX_INT_VAL_STEP(t) - 1) + MAX_INT_VAL_STEP(t))

#define MIN_INT_VAL(t) \
    ((t) -MAX_INT_VAL(t) - 1)

[...]
#define OFF_T_MAX MAX_INT_VAL(off_t)
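As a sanity check (my own sketch, not from the original source), these macros can be compared against the known limits in <limits.h>; this assumes a two's complement machine with no padding bits:

#include <limits.h>
#include <stdio.h>

#define MAX_INT_VAL_STEP(t) \
    ((t) 1 << (CHAR_BIT * sizeof(t) - 1 - ((t) -1 < 1)))
#define MAX_INT_VAL(t) \
    ((MAX_INT_VAL_STEP(t) - 1) + MAX_INT_VAL_STEP(t))
#define MIN_INT_VAL(t) \
    ((t) -MAX_INT_VAL(t) - 1)

int main(void) {
    /* ((t)-1 < 1) is 1 for signed types, so the step shifts into the
       highest value bit instead of the sign bit; MAX is then built as
       (step - 1) + step so no intermediate result overflows. */
    printf("%d %d\n", MAX_INT_VAL(int)  == INT_MAX,
                      MIN_INT_VAL(int)  == INT_MIN);   /* expect: 1 1 */
    printf("%d %d\n", MAX_INT_VAL(long) == LONG_MAX,
                      MIN_INT_VAL(long) == LONG_MIN);  /* expect: 1 1 */
    return 0;
}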


I couldn't find anything regarding the allowable signed integer representations in C89, but C99 has notes for integer portability issues in §J.3.5:

Whether signed integer types are represented using sign and magnitude, two’s complement, or ones’ complement, and whether the extraordinary value is a trap representation or an ordinary value (6.2.6.2).

That would seem to imply that only those three listed signed number representations can be used. Is the implication correct, and are the macros above compatible with all three representations?


Other thoughts:
It seems that the function-like macro MAX_INT_VAL_STEP() would give an incorrect result if there were padding bits, since CHAR_BIT * sizeof(t) counts padding bits as if they were value bits. I wonder if there is any way around this.

Reading through signed number representations on Wikipedia, it occurs to me that for all three representations any signed integer type's MAX would be:

- sign bit off, all value bits on (all three)

And its MIN would be either:

- sign bit on, all value bits on (sign and magnitude)
- sign bit on, all value bits off (ones'/two's complement)

I think I could test for sign and magnitude by doing this:

#define OFF_T_MIN                                       \
    ( ( ( (off_t)1 | ( ~ (off_t) -1 ) ) != (off_t)1 )   \
      ? /* sign and magnitude minimum value here */     \
      : /* ones'/two's complement minimum value here */ )

Then, since the sign-and-magnitude minimum is sign bit on and all value bits on, wouldn't the minimum for off_t in that case be ~ (off_t) 0 ? And for the ones'/two's complement minimum I would need some way to turn all the value bits off but leave the sign bit on; I have no idea how to do that without knowing the number of value bits. Also, is the sign bit guaranteed to always be one position more significant than the most significant value bit?
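For what it's worth, there is a classic probe (my own sketch of a well-known trick, not from the original thread) that distinguishes the three encodings by the low bits of -1, since each representation stores -1 differently:

#include <stdio.h>

int main(void) {
    switch (-1 & 3) {
    case 3: puts("two's complement");   break; /* -1 is all bits one        */
    case 2: puts("ones' complement");   break; /* -1 is ...11111110         */
    case 1: puts("sign and magnitude"); break; /* -1 is sign bit plus bit 0 */
    }
    return 0;
}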

Thanks, and please let me know if this is too long a post.



EDIT 12/29/2010 5PM EST:
As ephemient answered below, (unsigned type)-1 is more correct than ~0, or even ~(unsigned type)0, for the maximum value of an unsigned type. From what I can gather, -1 behaves just like 0-1: conversion to an unsigned type is defined modulo one more than the type's maximum (C99 §6.3.1.3), so it always yields the maximum value.
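To illustrate the conversion rule (my own minimal sketch, not part of the original edit):

#include <stdio.h>

int main(void) {
    /* Conversion of -1 to an unsigned type wraps modulo (MAX + 1),
       landing on MAX regardless of the signed representation. */
    unsigned char  uc = -1;  /* UCHAR_MAX, e.g. 255   */
    unsigned short us = -1;  /* USHRT_MAX, e.g. 65535 */
    printf("%u %u\n", (unsigned)uc, (unsigned)us);
    return 0;
}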

Also, because the maximum value of an unsigned type can be determined, it is possible to determine how many value bits the type has. Credit to Hallvard B. Furuseth for the IMAX_BITS() function-like macro he posted in reply to a question on comp.lang.c:

/* Number of bits in inttype_MAX, or in any (1<<b)-1 where 0 <= b < 3E+10 */
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
                  + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))

IMAX_BITS(INT_MAX) computes the number of bits in INT_MAX (the value bits of an int), and IMAX_BITS((unsigned_type)-1) computes the number of bits in an unsigned_type. Until someone implements 4-gigabyte integers, anyway :-)
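For example (my own usage sketch; note the macro's result has type long because of the 0x3fffffffL constants):

#include <limits.h>
#include <stdio.h>

#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
                  + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))

int main(void) {
    printf("%ld\n", (long) IMAX_BITS(INT_MAX));      /* e.g. 31 for 32-bit int */
    printf("%ld\n", (long) IMAX_BITS((unsigned)-1)); /* e.g. 32                */
    return 0;
}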

The heart of my question remains unanswered, however: how to determine the minimum and maximum values of a signed type via a macro. I'm still looking into this. Maybe the answer is that there is no answer.


1 Answer


I believe I have finally solved this problem, but the solution is only available at configure time, not compile time or runtime, so it's still not ideal. Here it is:

# Probe the width of off_t: keep widening a bit-field of that type
# until the compiler rejects it; the last accepted width is the answer.
HEADERS="#include <sys/types.h>"
TYPE="off_t"
i=8
while : ; do
    printf '%s\nstruct { %s x : %d; };\n' "$HEADERS" "$TYPE" $i > test.c
    $CC $CFLAGS -o /dev/null -c test.c || break
    i=$(($i+1))
done
rm test.c
echo $(($i-1))

The idea comes from §6.7.2.1 paragraph 3:

The expression that specifies the width of a bit-field shall be an integer constant expression with a nonnegative value that does not exceed the width of an object of the type that would be specified were the colon and expression omitted. If the value is zero, the declaration shall have no declarator.

I would be quite pleased if this leads to any ideas for solving the problem at compile-time.
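As a follow-up illustration (my own sketch, not from the original answer): if the probe above reported, say, a 64-bit width, a configure-generated header could then define the limits without overflow at any intermediate step, assuming two's complement and no padding bits:

#include <sys/types.h>

/* Hypothetical output of the configure probe; names are illustrative. */
#define OFF_T_WIDTH 64

/* Build 2^(width-1) - 1 without ever shifting into the sign bit. */
#define OFF_T_MAX (((((off_t)1 << (OFF_T_WIDTH - 2)) - 1) << 1) + 1)
#define OFF_T_MIN (-OFF_T_MAX - 1)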

