To understand why, you probably need to remember that the current "x86" CPUs, with their 32- and 64-bit values, started life as much more limited 8-bit machines, going back to the Intel 8008. (I coded in this world back in 1973; I still remember (ugh) it!)
In that world, registers were precious and small. You needed `INC`/`DEC` for various purposes, the most common being loop control. Many loops involved doing "multi-precision arithmetic" (e.g., 16 bits or more!). By having `INC`/`DEC` set the Zero flag (`Z`), you could use them to control loops pretty nicely; by insisting the loop-control instructions not change the Carry flag (`CF`), the carry is preserved across loop iterations and you can implement multiprecision operations without writing tons of code to remember the carry state.
This worked pretty well, once you got used to the ugly instruction set.
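To make that concrete, here's a minimal sketch of the classic pattern (NASM-style 16-bit syntax; the register assignments and the word count in `CX` are my own illustrative choices, not from any particular program):

```
; Multiprecision add: dst += src, both CX words long, little-endian.
; SI -> src, DI -> dst. The whole trick is that MOV, INC, and DEC
; never touch CF, so the carry flows from one ADC to the next.
        clc                 ; no carry into the lowest word
.next:  mov  ax, [si]       ; fetch next word of src (MOV leaves flags alone)
        adc  [di], ax       ; add it, plus the carry from the previous word
        inc  si
        inc  si             ; advance both pointers; INC preserves CF
        inc  di
        inc  di
        dec  cx             ; DEC sets Z when the count hits zero, CF untouched
        jnz  .next
```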
On more modern machines with larger word sizes, you don't need this as much, so `INC` and `DEC` could be semantically equivalent to `ADD ..., 1` etc. That in fact is what I use when I need the carry set :-}
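A minimal illustration of the flag difference (32-bit x86, NASM-style syntax; the constants are just for demonstration):

```
        mov  eax, 0xFFFFFFFF
        add  eax, 1         ; EAX = 0 and CF = 1: ADD reports the carry out
        mov  eax, 0xFFFFFFFF
        clc                 ; force CF = 0
        inc  eax            ; EAX = 0 again, but CF is still 0: INC leaves it alone
```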
Mostly, I stay away from `INC` and `DEC` now, because they do partial condition-code updates, and this can cause funny stalls in the pipeline; `ADD`/`SUB` don't. So where it doesn't matter (most places), I use `ADD`/`SUB` to avoid the stalls. I use `INC`/`DEC` only when keeping the code small matters, e.g., fitting in a cache line where the size of one or two instructions makes enough difference to matter. This is probably pointless nano[literally!]-optimization, but I'm pretty old-school in my coding habits.
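For reference, the size difference is real, if tiny. These are the standard encodings (the one-byte short form is a big part of why `INC` stuck around; in 64-bit mode that opcode was repurposed as a REX prefix):

```
        inc  eax            ; 40         (1 byte in 32-bit mode; FF C0, 2 bytes, in 64-bit mode)
        add  eax, 1         ; 83 C0 01   (3 bytes, using the sign-extended imm8 form)
```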
My explanation tells us why `INC`/`DEC` set the Zero flag (`Z`). I don't have a particularly compelling explanation for why `INC`/`DEC` set the Sign (and Parity) flags.
**EDIT April 2016:** It seems that the stall problem is handled better on modern x86s. See *INC instruction vs ADD 1: Does it matter?*