
python - Using tensorflow on a GPU but also numpy arrays in same code: memory overhead delay?

I am calculating Fourier transforms with TensorFlow using tf.signal.fft. I have successfully installed tensorflow-gpu with the right drivers and versions, so my code actually uses my CUDA-enabled GPU. Indeed, I can check that the GPU is being used (its utilization is always only about 1-2%, but its memory is usually at 80%).

I am solving a partial differential equation with the Fourier split-step method where each time increment looks like psi(t+dt) = InverseFourier [ potential(t) * Fourier( psi(t) ) ].
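For concreteness, a minimal sketch of what one such increment might look like in code (the grid, the Gaussian initial state, and the harmonic potential below are placeholders of my own, not from the original post):

```python
import numpy as np
import tensorflow as tf

N = 1024
x = np.linspace(-10.0, 10.0, N)
psi = tf.constant(np.exp(-x**2).astype(np.complex64))   # placeholder psi(t)

def split_step(psi, potential_np):
    # potential_np is a NumPy array built on the CPU each step; converting it
    # to a tensor here is what moves it from RAM into GPU memory before the FFTs run.
    potential = tf.constant(potential_np.astype(np.complex64))
    return tf.signal.ifft(potential * tf.signal.fft(psi))   # psi(t+dt)

potential_np = 0.5 * x**2          # placeholder potential(t), recomputed with NumPy each step
psi = split_step(psi, potential_np)
```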

While InverseFourier and Fourier are TensorFlow methods, the potential is just a NumPy array that also needs to be calculated at each step. My doubt now is: does this NumPy calculation actually run on the CPU? If so, before the GPU computation can be carried out, the array must be moved from RAM to GPU memory. Maybe this causes overhead and hence a time delay?
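Not from the original post, but one quick way to see where things actually live and run is TensorFlow's device-placement logging (the array size here is arbitrary):

```python
import numpy as np
import tensorflow as tf

tf.debugging.set_log_device_placement(True)      # log the device each op executes on

potential_np = np.random.rand(1024)              # plain NumPy: computed by the CPU, lives in RAM
potential = tf.constant(potential_np.astype(np.complex64))
print(potential.device)                          # where TensorFlow placed the tensor

spectrum = tf.signal.fft(potential)              # with a GPU visible, this op should be logged on GPU:0
print(spectrum.device)
```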

Am I completely wrong? Is there a way to check how much time this overhead costs? Should I just do everything with TensorFlow functions?
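One crude way to check is to time many steps with the potential computed by NumPy on the CPU against the same steps with the potential built from TensorFlow ops, so it never leaves the GPU. A rough sketch with made-up placeholder arrays and a toy time-dependent potential:

```python
import time
import numpy as np
import tensorflow as tf

N, dt = 1024, 0.001
x_np = np.linspace(-10.0, 10.0, N)
x_tf = tf.constant(x_np, dtype=tf.float32)
psi0 = tf.constant(np.exp(-x_np**2).astype(np.complex64))

def step_numpy(psi, t):
    # Potential computed by NumPy on the CPU, copied to the GPU every step.
    v = (np.cos(t) * 0.5 * x_np**2).astype(np.complex64)
    return tf.signal.ifft(tf.constant(v) * tf.signal.fft(psi))

def step_tf(psi, t):
    # Same update, but the potential is built from TensorFlow ops and stays on the GPU.
    v = tf.cast(tf.cos(t) * 0.5 * x_tf**2, tf.complex64)
    return tf.signal.ifft(v * tf.signal.fft(psi))

for name, step in [("NumPy potential", step_numpy), ("TF potential", step_tf)]:
    psi, start = psi0, time.perf_counter()
    for i in range(200):
        psi = step(psi, i * dt)
    psi.numpy()                                   # block until queued GPU work has finished
    print(name, time.perf_counter() - start, "seconds")
```

Keep in mind that the first few calls include one-off CUDA/cuFFT setup, so a warm-up loop before timing gives fairer numbers; tf.profiler.experimental.start/stop can produce a detailed trace if wall-clock timing is not enough.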

question from: https://stackoverflow.com/questions/65713541/using-tensorflow-on-a-gpu-but-also-numpy-arrays-in-same-code-memory-overhead-de

1 Answer

Waiting for answers


...