This is the function that I use for this calculation:
public static int binlog( int bits ) // returns 0 for bits=0
{
    int log = 0;
    // binary search for the highest set bit, treating the argument as unsigned
    if( ( bits & 0xffff0000 ) != 0 ) { bits >>>= 16; log = 16; }
    if( bits >= 256 ) { bits >>>= 8; log += 8; }
    if( bits >= 16 ) { bits >>>= 4; log += 4; }
    if( bits >= 4 ) { bits >>>= 2; log += 2; }
    return log + ( bits >>> 1 );
}
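For illustration, a few spot checks of the values it produces (these examples are mine, not from the original post):
// binlog( 1 )          -> 0
// binlog( 2 )          -> 1
// binlog( 8 )          -> 3
// binlog( 0x7fffffff ) -> 30
// binlog( -1 )         -> 31  (the argument is treated as unsigned, i.e. 0xffffffff)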
It is slightly faster (by 20-30%) than Integer.numberOfLeadingZeros() and almost 10 times faster (JDK 1.6 x64) than a Math.log()-based implementation like this one:
private static final double log2div = 1.000000000001 / Math.log( 2 );

public static int log2fp0( int bits )
{
    if( bits == 0 )
        return 0; // or throw exception
    return (int) ( Math.log( bits & 0xffffffffL ) * log2div );
}
Both functions return the same results for all possible input values.
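To back that up, here is a minimal sketch of an exhaustive check (my own harness, not part of the original answer); it assumes binlog and log2fp0 are defined as above and simply compares them for every int value:
static void checkEquivalence()
{
    int x = 0;
    do
    {
        if( binlog( x ) != log2fp0( x ) )
            throw new AssertionError( "mismatch at x=" + x );
    } while( ++x != 0 ); // ++x wraps back to 0, so the loop visits all 2^32 values
}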
Update:
The Java 1.7 server JIT is able to replace a few static math functions with alternative implementations based on CPU intrinsics. One of those functions is Integer.numberOfLeadingZeros(). So with a 1.7 or newer server VM, an implementation like the one in the question is actually slightly faster than the binlog above. Unfortunately the client JIT doesn't seem to have this optimization.
public static int log2nlz( int bits )
{
    if( bits == 0 )
        return 0; // or throw exception
    return 31 - Integer.numberOfLeadingZeros( bits );
}
This implementation also returns the same results for all 2^32 possible input values as the other two implementations I posted above.
Here are the actual runtimes on my PC (Sandy Bridge i7):
JDK 1.7 32-bit client VM:
binlog: 11.5s
log2nlz: 16.5s
log2fp: 118.1s
log(x)/log(2): 165.0s
JDK 1.7 x64 server VM:
binlog: 5.8s
log2nlz: 5.1s
log2fp: 89.5s
log(x)/log(2): 108.1s
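For reference, the log(x)/log(2) row presumably measures the straightforward variant that divides by Math.log(2) on every call instead of multiplying by the precomputed reciprocal; a sketch of what that might look like (my reconstruction, name hypothetical):
public static int log2naive( int bits )
{
    if( bits == 0 )
        return 0; // or throw exception
    // divides by Math.log(2) on every call instead of using a precomputed constant
    return (int) ( Math.log( bits & 0xffffffffL ) / Math.log( 2 ) );
}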
This is the test code:
int sum = 0, x = 0;
long time = System.nanoTime();
// visits every int value exactly once (++x wraps back to 0); summing the results keeps the JIT from discarding the calls
do sum += log2nlz( x ); while( ++x != 0 );
time = System.nanoTime() - time;
System.out.println( "time=" + time / 1000000L / 1000.0 + "s -> " + sum );