
gpgpu - Linking with 3rd party CUDA libraries slows down cudaMalloc

It is no secret that on CUDA 4.x the first call to cudaMalloc can be ridiculously slow (which has been reported several times), seemingly due to a bug in the CUDA drivers.

Recently, I noticed some weird behaviour: the running time of cudaMalloc directly depends on how many 3rd-party CUDA libraries are linked to my program (note that I do NOT use these libraries, I just link my program against them).

I ran some tests using the following program:

#include <cuda_runtime.h>

int main() {
  cudaSetDevice(0);
  unsigned int *ptr = 0;
  cudaMalloc((void **)&ptr, 2000000 * sizeof(unsigned int));
  cudaFree(ptr);
  return 0;
}
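
The program above doesn't show the timing itself; as a minimal sketch (assuming the reported numbers are wall-clock seconds measured around the cudaMalloc call with gettimeofday; error checking omitted), the measurement could look like this:

#include <stdio.h>
#include <sys/time.h>
#include <cuda_runtime.h>

int main() {
  struct timeval start, stop;
  unsigned int *ptr = 0;

  cudaSetDevice(0);

  gettimeofday(&start, NULL);
  cudaMalloc((void **)&ptr, 2000000 * sizeof(unsigned int));
  gettimeofday(&stop, NULL);

  // Elapsed wall-clock time in seconds
  printf("cudaMalloc took %f s\n",
         (stop.tv_sec - start.tv_sec) +
         (stop.tv_usec - start.tv_usec) / 1000000.0);

  cudaFree(ptr);
  return 0;
}

Since the program never calls into the extra libraries, switching between the test cases only means changing the -l flags on the link line.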

The results are as follows (running times in seconds):

  • Linked with -lcudart -lnpp -lcufft -lcublas -lcusparse -lcurand: 5.852449

  • Linked with -lcudart -lnpp -lcufft -lcublas: 1.425120

  • Linked with -lcudart -lnpp -lcufft: 0.905424

  • Linked with -lcudart: 0.394558

According to gdb, the time indeed goes into my cudaMalloc call, so it's not caused by some library initialization routine.

I wonder if somebody has a plausible explanation for this?


1 Answer


In your example, the cudaMalloc call initiates lazy context establishment on the GPU. When runtime API libraries are linked in, their binary payloads have to be inspected and the GPU ELF symbols and objects they contain merged into the context. The more libraries there are, the longer you can expect this process to take. Further, if there is an architecture mismatch in any of the cubins and you have a backwards-compatible GPU, it can also trigger driver recompilation of the device code for the target GPU. In a very extreme case, I have seen an old application linked with an old version of CUBLAS take tens of seconds to load and initialise when run on a Fermi GPU.

You can explicitly trigger the lazy context establishment at a point of your choosing by issuing a cudaFree call like this:

#include <cuda_runtime.h>

int main() {
  cudaSetDevice(0);
  cudaFree(0); // context establishment happens here
  unsigned int *ptr = 0;
  cudaMalloc((void **)&ptr, 2000000 * sizeof(unsigned int));
  cudaFree(ptr);
  return 0;
}

If you profile or instrument this version with timers, you should find that the first cudaFree call consumes most of the runtime and that the cudaMalloc call becomes almost free.
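
As a rough illustration (the timing harness here is a sketch, not part of the original answer), instrumenting both calls might look like this:

#include <stdio.h>
#include <sys/time.h>
#include <cuda_runtime.h>

// Wall-clock time in seconds via gettimeofday
static double seconds(void) {
  struct timeval tv;
  gettimeofday(&tv, NULL);
  return tv.tv_sec + tv.tv_usec / 1000000.0;
}

int main() {
  unsigned int *ptr = 0;
  double t0, t1, t2;

  cudaSetDevice(0);

  t0 = seconds();
  cudaFree(0); // context establishment (and payload inspection) happens here
  t1 = seconds();
  cudaMalloc((void **)&ptr, 2000000 * sizeof(unsigned int));
  t2 = seconds();

  printf("cudaFree(0): %f s\n", t1 - t0); // expected to dominate
  printf("cudaMalloc:  %f s\n", t2 - t1); // expected to be almost free
  cudaFree(ptr);
  return 0;
}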

