
gcc - why is char's signedness not defined in C?

The C standard states:

ISO/IEC 9899:1999, 6.2.5.15 (p. 49)

The three types char, signed char, and unsigned char are collectively called the character types. The implementation shall define char to have the same range, representation, and behavior as either signed char or unsigned char.

And indeed GCC defines it according to the target platform.
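A quick way to see which choice a given implementation made is to inspect CHAR_MIN from <limits.h>, which is 0 when plain char is unsigned and negative when it is signed. A minimal sketch:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        if (CHAR_MIN < 0)
            printf("plain char has the range of signed char: %d..%d\n",
                   CHAR_MIN, CHAR_MAX);
        else
            printf("plain char has the range of unsigned char: %d..%d\n",
                   CHAR_MIN, CHAR_MAX);
        return 0;
    }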

My question is: why does the standard do that? I can see nothing good coming out of an ambiguous type definition, only hideous and hard-to-spot bugs.
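As a sketch of the kind of hard-to-spot bug this ambiguity invites, the comparison below succeeds where plain char is unsigned and fails where it is signed:

    #include <stdio.h>

    int main(void)
    {
        char c = '\xFF';
        /* Promoted to int, c is 255 where plain char is unsigned,
         * but -1 where it is signed, so the comparison flips. */
        if (c == 0xFF)
            printf("matched: plain char is unsigned here\n");
        else
            printf("did not match: plain char is signed here\n");
        return 0;
    }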

More than that, in ANSI C (before C99) the only byte-sized type is char, so using char for math is sometimes inevitable. So saying "one should never use char for math" is not entirely true. If that were the case, a saner decision would have been to include three types: "char", "ubyte", and "sbyte".
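For illustration, the "ubyte"/"sbyte" idea can be approximated with typedefs over the explicitly signed and unsigned character types, whose signedness is defined; the names here are hypothetical:

    typedef signed char   sbyte;   /* hypothetical name: always signed   */
    typedef unsigned char ubyte;   /* hypothetical name: always unsigned */

    /* Arithmetic on these behaves the same on every platform, because
     * their signedness is fixed; both operands promote to int. */
    int average(ubyte a, ubyte b)
    {
        return (a + b) / 2;
    }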

Is there a reason for this, or is it just some weird backwards-compatibility gotcha, intended to allow bad (but common) compilers to be considered standard-compatible?


1 Answer


"Plain" char having unspecified signed-ness allows compilers to select whichever representation is more efficient for the target architecture: on some architectures, zero extending a one-byte value to the size of "int" requires less operations (thus making plain char 'unsigned'), while on others the instruction set makes sign-extending more natural, and plain char gets implemented as signed.

