Why is the sizeof of built-in types (except char) compiler-dependent in C & C++?

Why are fundamental types in C and C++ not strictly defined, as they are in Java, where an int is always 4 bytes, a long is 8 bytes, and so on? To my knowledge, in C and C++ only char is defined as 1 byte; everything else is defined differently by different compilers. So an int in C and C++ does not necessarily have to be 4 bytes, as long as it is longer than a short and a short is longer than a char.
I was just wondering why that is, and whether it serves any purpose.
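
For illustration, here is a minimal C sketch that simply prints what sizeof reports for the built-in types; the output is not fixed by the language and will differ between compilers and target platforms (the values in the comments are only typical, not guaranteed):

    #include <stdio.h>

    int main(void)
    {
        /* sizeof yields a size_t, printed with %zu (C99 and later) */
        printf("char:      %zu\n", sizeof(char));      /* always 1, by definition */
        printf("short:     %zu\n", sizeof(short));     /* typically 2 */
        printf("int:       %zu\n", sizeof(int));       /* often 4, but not guaranteed */
        printf("long:      %zu\n", sizeof(long));      /* commonly 4 or 8, platform-dependent */
        printf("long long: %zu\n", sizeof(long long)); /* at least 64 bits of range */
        return 0;
    }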

1 Answer


The reason is largely that C is portable to a much wider variety of platforms. There are many reasons why the different data types have ended up with the sizes they have on various platforms, but at least historically, int has been adapted to be the platform's native word size. On the PDP-11 it was 16 bits (and long was originally invented for 32-bit numbers), while some embedded platform compilers even have 8-bit ints. When 32-bit platforms came around and started having 32-bit ints, short was invented to represent 16-bit numbers.

Nowadays, most 64-bit architectures use 32-bit ints simply to be compatible with the large base of C programs that were originally written for 32-bit platforms, but there have been 64-bit C compilers with 64-bit ints as well, not least of which were some early Cray platforms.
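
As an aside, code that needs an exact width can sidestep the variability of the plain types. A minimal sketch, assuming a C99-or-later compiler whose <stdint.h> provides the (optional) exact-width types:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        int32_t a = 2000000000;      /* exactly 32 bits wherever int32_t exists */
        int64_t b = (int64_t)a * 4;  /* exactly 64 bits, so this cannot overflow */
        printf("a = %" PRId32 ", b = %" PRId64 "\n", a, b);
        printf("sizeof(int) = %zu, sizeof(int32_t) = %zu\n",
               sizeof(int), sizeof(int32_t));
        return 0;
    }

(The exact-width types are optional precisely because of oddball word sizes; the "least" types such as int_least32_t are always available.)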

Also, in the earlier days of computing, floating-point formats and sizes were generally far less standardized (IEEE 754 didn't come along until 1985), which is why float and double are even less well-defined than the integer data types. The standards generally don't even presume the presence of such peculiarities as infinities, NaNs or signed zeroes.
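
A program can, however, ask the implementation what floating-point model it actually got. A small sketch using only standard facilities (the __STDC_IEC_559__ macro is a C99 conformance signal; implementations that don't define it make no IEEE 754 promise):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
    #ifdef __STDC_IEC_559__
        puts("implementation promises IEEE 754 (IEC 60559) arithmetic");
    #else
        puts("no IEEE 754 guarantee from this implementation");
    #endif
        printf("float:  %d significand digits (base %d), max exponent %d\n",
               FLT_MANT_DIG, FLT_RADIX, FLT_MAX_EXP);
        printf("double: %d significand digits (base %d), max exponent %d\n",
               DBL_MANT_DIG, FLT_RADIX, DBL_MAX_EXP);
        return 0;
    }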

Furthermore, it should perhaps be said that a char is not defined to be 1 byte, but rather to be whatever sizeof returns 1 for, which is not necessarily 8 bits. (For completeness, it should also be added that "byte" as a term is not universally defined to be 8 bits; there have been many historical definitions of it, and in the context of the ANSI C standard, a "byte" is actually defined to be the smallest unit of storage that can store a char, whatever the nature of char.)
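
A minimal sketch of that distinction, using only the standard <limits.h> header: sizeof(char) is 1 by definition, while the number of bits in that byte is whatever CHAR_BIT reports (at least 8 on a conforming implementation, but larger on some DSPs):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        printf("sizeof(char)   = %zu (always 1, by definition)\n", sizeof(char));
        printf("CHAR_BIT       = %d (bits per byte on this implementation)\n", CHAR_BIT);
        printf("bits in an int = %zu\n", sizeof(int) * CHAR_BIT);
        return 0;
    }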

There are also architectures such as the 36-bit PDP-10 and the 18-bit PDP-7 that have run C programs. They may be quite rare these days, but they do help explain why C data types are not defined in terms of 8-bit units.

Whether this, in the end, really makes the language "more portable" than languages like Java can perhaps be debated, but it would surely be suboptimal to run Java programs on 16-bit processors, and quite weird indeed on 36-bit processors. It is perhaps fair to say that it makes the language more portable, but programs written in it less portable.

EDIT: In reply to some of the comments, I just want to add, as an opinion piece, that C as a language is unlike languages such as Java, Haskell, or Ada, which are more or less "owned" by a corporation or standards body. There is ANSI C, sure, but C is more than ANSI C; it is a living community, and there are many implementations that aren't ANSI-compatible but are "C" nevertheless. Arguing whether implementations that use 8-bit ints are C is similar to arguing whether Scots is English, in that it's mostly pointless. They use 8-bit ints for good reasons, no one who knows C well enough would be unable to reason about programs written for such compilers, and anyone who writes C programs for such architectures would want their ints to be 8 bits.

