It's usually a case of setting the block size for optimal performance, and the grid size according to the total amount of work. Most kernels have a "sweet spot" number of warps per MP where they work best, and you should do some benchmarking/profiling to see where that is. You will probably still need over-spill logic in the kernel, because problem sizes are rarely round multiples of block sizes.
EDIT:
To give a concrete example of how this might be done for a simple kernel (in this case a custom BLAS level 1 dscal type operation done as part of a Cholesky factorization of packed symmetric band matrices):
// Fused square root and dscal operation
__global__
void cdivkernel(const int n, double *a)
{
    __shared__ double oneondiagv;

    int imin = threadIdx.x + blockDim.x * blockIdx.x;
    int istride = blockDim.x * gridDim.x;

    // One thread computes the reciprocal square root of the diagonal
    // element; every thread in the block then reads it from shared memory
    if (threadIdx.x == 0) {
        oneondiagv = rsqrt( a[0] );
    }
    __syncthreads();

    // Grid-stride loop: iterates when n exceeds the grid's total thread count
    for(int i = imin; i < n; i += istride) {
        a[i] *= oneondiagv;
    }
}
To launch this kernel, the execution parameters are calculated as follows:
- We allow up to 4 warps per block (so 128 threads). Normally you would fix this at an optimal number, but in this case the kernel is often called on very small vectors, so having a variable block size made some sense.
- We then compute the block count according to the total amount of work, up to 112 total blocks, which is the equivalent of 8 blocks per MP on a 14-MP Fermi Tesla. The kernel will iterate if the amount of work exceeds the grid size.
The resulting wrapper function containing the execution parameter calculations and kernel launch looks like this:
// Fuse the diagonal element square root and dscal operation into
// a single "cdiv" operation
void fusedDscal(const int n, double *a)
{
    // The semibandwidth (column length) determines
    // how many warps are required per column of the
    // matrix.
    const int warpSize = 32;
    const int maxGridSize = 112; // this is 8 blocks per MP for a Tesla C2050

    int warpCount = (n / warpSize) + (((n % warpSize) == 0) ? 0 : 1);
    int warpPerBlock = max(1, min(4, warpCount));

    // For the cdiv kernel, the block size is allowed to grow to
    // four warps per block, and the block count becomes the warp
    // count over four or the GPU "fill", whichever is smaller
    int threadCount = warpSize * warpPerBlock;
    int blockCount = min(maxGridSize, max(1, warpCount / warpPerBlock));

    dim3 BlockDim = dim3(threadCount, 1, 1);
    dim3 GridDim = dim3(blockCount, 1, 1);

    cdivkernel<<< GridDim, BlockDim >>>(n, a);
    errchk( cudaPeekAtLastError() );
}
Perhaps this gives some hints about how to design a "universal" scheme for setting execution parameters against input data size.