The standard is clear: when performing arithmetic on an integral type smaller than `int`, the value is first promoted to a signed `int`, unless `int` cannot represent the full range of values of the original type, in which case the promotion is to `unsigned int` instead.
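To illustrate the rule, here is a minimal sketch (assuming a typical platform where `short` is 16 bits and `int` is 32 bits, so `int` can hold every `unsigned short` value) showing that both operands are promoted to signed `int` before the arithmetic happens:

```c
#include <stdio.h>

int main(void) {
    unsigned short a = 1, b = 2;

    /* Both operands are promoted to (signed) int before the
       subtraction, so the result is -1, not the large value
       that unsigned wraparound would produce. */
    printf("%d\n", a - b);                       /* prints -1 */

    /* C11 _Generic confirms the type of the expression. */
    printf("%s\n", _Generic(a - b,
                            int: "int",
                            unsigned int: "unsigned int",
                            default: "other")); /* prints "int" */
    return 0;
}
```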
My question is: what is (or was) the motivation for this policy? Why are small unsigned types promoted to signed `int`, rather than always to `unsigned int`?
Of course, in practice there's almost no difference, since the underlying assembly instruction is the same (just a zero-extension). But promotion to signed `int` has a key downside with no obvious upside: overflow is undefined behavior in signed arithmetic but well-defined in unsigned arithmetic.
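To make that downside concrete, here is a minimal sketch (again assuming 16-bit `unsigned short` and 32-bit `int`) where the promotion turns an apparently all-unsigned multiplication into undefined behavior:

```c
#include <stdio.h>

int main(void) {
    unsigned short a = 0xFFFF;  /* 65535 */
    unsigned short b = 0xFFFF;  /* 65535 */

    /* Both operands are promoted to (signed) int, so this computes
       65535 * 65535 = 4294836225 in int arithmetic. That exceeds
       INT_MAX (2147483647), so the multiplication overflows a signed
       type -- undefined behavior, even though the operands and the
       destination are all unsigned. */
    unsigned int r = a * b;

    printf("%u\n", r);
    return 0;
}
```

Had the operands been promoted to `unsigned int` instead, the multiplication would simply wrap modulo 2^32 and the program would be well-defined.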
Were there historical reasons for preferring signed `int`? Are there architectures that don't use two's-complement arithmetic on which promoting small unsigned types to signed `int` rather than `unsigned int` is easier or faster?
EDIT: I would think it's obvious, but here I'm looking for facts (i.e. some documentation or references that explain the design decision), not "primarily opinion-based" speculation.