My computer has the following software installed: Anaconda 3, TensorFlow (GPU version), and Keras.
There are two Anaconda virtual environments: one with TensorFlow for Python 2.7 and one for Python 3.5, both the GPU build, installed according to the TF instructions. (I previously had the CPU version of TensorFlow in a separate environment, but I've deleted it.)
When I run the following:

source activate tensorflow-gpu-3.5
python code.py
and then check nvidia-smi, it shows only 3 MiB of GPU memory in use by the Python process, so it looks like the GPU is not being used for the computation.
(code.py is a simple deep Q-learning algorithm implemented with Keras.)
Any ideas what could be going wrong?