c - Why does converting an 'out of range integer to integer' lead to IB, while converting an 'out of range floating-point to integer' leads to UB?

Follow-up question for:

  1. Type casting: double to char: multiple questions
  2. Assigning an unsigned value to a signed char

Context: ISO/IEC 9899:202x (E) working draft — February 5, 2020 C17..C2x N2479 (emphasis added):

J.3 Implementation-defined behavior, J.3.5 Integers

— The result of, or the signal raised by, converting an integer to a signed integer type when the value cannot be represented in an object of that type (6.3.1.3).

6.3.1.4 Real floating and integer

When a finite value of standard floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e., the value is truncated toward zero). If the value of the integral part cannot be represented by the integer type, the behavior is undefined.
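
For concreteness, here is a minimal sketch of the two conversions these clauses cover (the types and values are my own illustration, assuming a typical 32-bit int):

#include <limits.h>

int main(void)
{
  /* 6.3.1.3: out-of-range integer -> signed integer conversion.
     The result is implementation-defined, or an implementation-defined
     signal is raised (IB). */
  unsigned int u = UINT_MAX;
  signed char sc = u;   /* value cannot be represented in signed char */

  /* 6.3.1.4: out-of-range floating-point -> integer conversion.
     The integral part cannot be represented in a 32-bit int, so the
     behavior is undefined (UB). */
  double d = 1.0e10;
  int i = d;

  (void)sc;
  (void)i;
  return 0;
}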

Question: Why does converting an 'out of range integer to integer' lead to IB, while converting an 'out of range floating-point to integer' leads to UB? That is, why is the behavior not consistent (e.g. IB in both cases)?

UPD. Answer from user P.P. to the duplicate question:

I doubt it's reasonably answerable. It's mainly because of history, and based on the implementations, the behaviour of hardware, etc. when C was standardized. So "consistency" wasn't possible/practical (it's not as if the committee arbitrarily decided to classify certain behaviours as IB, UB, or unspecified).


1 Answer


From the point of view of the Standard, the question of whether to classify something as Implementation-Defined Behavior or Undefined Behavior depends on whether all implementations should be required to document a behavior generally consistent with the semantics of the language, regardless of cost or usefulness. There was no need to mandate that implementations process actions in ways their customers would find useful, because implementations that could usefully behave in such fashion were expected to do so with or without a mandate. Consequently, it was seen as better to characterize as Undefined Behavior useful actions which implementations might process 100% consistently anyway, than to characterize as Implementation-Defined actions which might sometimes be impractical to implement consistently.

Note that requiring an implementation to treat an action as having documented behavior can sometimes impose costs that are not obvious. Consider, for example:

int f1(int x, int y);
int f2(int x, int y, int z);

void test(int x, unsigned char y)
{
  /* The value of x/(y+1) may be outside the range of short; per 6.3.1.3
     the converted result (or the signal raised) is implementation-defined. */
  short temp = x/(y+1);
  if (f1(x,y))
    f2(x,y,temp);
}

On platforms where the conversion to short would always execute without side effects, or on implementations that were allowed to treat out-of-range conversions as Undefined Behavior, the computation of x/(y+1) and its conversion to short could be deferred until after the call to f1, and skipped altogether if f1 returned zero. Such a transformation could, however, affect the behavior of a signal raised by the conversion, and would thus not appear to be allowable under the Standard on implementations where the conversion could raise a signal.
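
To make that concrete, here is a hypothetical sketch of what such an optimizer might effectively produce (my illustration; it is only a valid transformation if the conversion cannot raise a signal):

int f1(int x, int y);
int f2(int x, int y, int z);

void test_transformed(int x, unsigned char y)
{
  /* The division and the conversion to short are deferred until f1 is
     known to return nonzero, and skipped entirely otherwise. */
  if (f1(x,y))
    f2(x,y,(short)(x/(y+1)));
}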

On the other hand, while it may be useful to have implementations raise a signal on an out-of-range conversion, such signals would mainly be useful in situations where the quality of diagnostics was viewed as more important than performance. Implementations where performance was more important would be free to make optimizations like the one above if they processed the conversion as having no side effects, and it seemed likely that the latter course of action would be practical on all platforms.

There were platforms where the fastest way of converting a float to an int would trap; as noted, the possibility that an action may trap makes classification as Implementation-Defined Behavior expensive. While it is unlikely that there were any platforms where it would have been impractical to process a conversion from, e.g., float to short as a conversion from float to int followed by a conversion from int to short, there are platforms where that may not be the most useful behavior (e.g. if a platform can, at no extra cost, peg the result of such a conversion to the range of the target type, that may be more useful than converting to int and then to the target type). Even if the authors of the Standard expected and intended that conversions from floating-point types to small integer types would never yield unsequenced traps for any values within range of int, the Standard classifies as UB general actions which might behave unpredictably in some cases but in a predictable implementation-specific fashion in others, without any effort to identify the specific cases where they should behave predictably.
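
As a sketch of the two behaviors contrasted above, here are two hypothetical ways an implementation might process an out-of-range float-to-short conversion (the helper names are mine; the saturating version assumes a 16-bit short, so SHRT_MAX and SHRT_MIN are exactly representable in float):

#include <limits.h>

/* Process float -> short as float -> int followed by int -> short:
   UB if f is outside the range of int, IB if the resulting int value
   is outside the range of short. */
short via_int(float f)
{
  int i = (int)f;
  return (short)i;
}

/* Peg ("saturate") the result to the range of short, as some hardware
   can do at no extra cost. */
short saturating(float f)
{
  if (f != f)               return 0;         /* NaN: arbitrary choice */
  if (f >= (float)SHRT_MAX) return SHRT_MAX;
  if (f <= (float)SHRT_MIN) return SHRT_MIN;
  return (short)f;          /* in range: truncation toward zero is defined */
}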

The latter principle is perhaps best illustrated by examining the way left shift was described in C89 and C99. There is no reason why x << 0 shouldn't yield x for all integer values of x, and the way C89 specified the behavior, it did precisely that. The C89 spec, however, also specified the behavior in some cases where it may be useful to allow some implementations to behave in a different, and not necessarily predictable, fashion. C99 makes no effort to identify the situations where all implementations should treat left shifts of negative numbers the same way as C89 did, because the authors expected that all implementations would treat such cases in C89 fashion with or without a mandate.
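
A pair of shifts illustrating the point (my example; the result for the negative operand assumes a two's-complement int):

#include <stdio.h>

int main(void)
{
  int x = -1;
  int a = x << 0;   /* -1 under C89's bit-position wording; UB as of C99 */
  int b = 5 << 2;   /* 20: well defined in both C89 and C99 */
  printf("%d %d\n", a, b);
  return 0;
}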

