I am calculating Fourier transforms with TensorFlow using tf.signal.fft. I have successfully installed tensorflow-gpu and have the right drivers and versions for my code to actually use my CUDA-enabled GPU. Indeed, I can check that the GPU is being used (though utilization sits at only about 1–2%, while its memory is usually at about 80%).
I am solving a partial differential equation with the Fourier split-step method, where each time increment looks like psi(t + dt) = InverseFourier[ potential(t) * Fourier( psi(t) ) ].
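For concreteness, here is a minimal sketch of one such split-step update in TensorFlow. The grid, the Gaussian initial state, and the trivial potential are all illustrative placeholders, not part of the original question:

```python
import numpy as np
import tensorflow as tf

# Illustrative 1-D grid and initial wavefunction (placeholders).
n = 256
x = np.linspace(-10, 10, n)
psi = tf.constant(np.exp(-x**2).astype(np.complex64))

def step(psi, potential_np):
    # potential_np is a NumPy array computed on the CPU each step;
    # tf.constant copies it into device memory before the multiply.
    potential = tf.constant(potential_np.astype(np.complex64))
    return tf.signal.ifft(potential * tf.signal.fft(psi))

# With potential == 1 the step is an FFT round-trip: psi is unchanged.
psi_next = step(psi, np.ones(n))
```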
While InverseFourier and Fourier are TensorFlow methods, the potential is just a NumPy array that also needs to be recalculated at each step. My doubt now is: does this NumPy calculation actually run on the CPU? If so, before the GPU step can be carried out, the array must be moved from RAM to GPU memory. Does this cause an overhead and hence a time delay?
Am I completely wrong? Is there a way to check for overhead times? Should I just do everything with TensorFlow functions?
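If "everything in TensorFlow" is the route, the potential could be computed with TensorFlow ops inside a tf.function, so the whole step stays on the device with no per-step NumPy copy. The quadratic form below is a made-up stand-in for whatever potential(t) actually is:

```python
import numpy as np
import tensorflow as tf

x = tf.constant(np.linspace(-10, 10, 256), dtype=tf.float32)

@tf.function  # traced into a graph; ops run on the device
def potential(t):
    # Hypothetical time-dependent potential, purely illustrative.
    real = 0.5 * x**2 * tf.cos(t)
    return tf.complex(real, tf.zeros_like(x))

v = potential(tf.constant(0.0))  # shape (256,), complex64
```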
question from:
https://stackoverflow.com/questions/65713541/using-tensorflow-on-a-gpu-but-also-numpy-arrays-in-same-code-memory-overhead-de