I am trying to build a Docker image with Flask (API) and TensorFlow (ML model).
Loading the model into a tf session took very long, so I decided to do it once at startup and keep it in RAM. That is when my problem started: my application stops working inside Docker, although it works fine when I run it outside of Docker.
Script that keeps the tf session alive (imports added for completeness; create_mtcnn comes from the facenet repo's align/detect_face.py):

    import tensorflow as tf
    from align.detect_face import create_mtcnn  # from the facenet repo

    class TensorHelperAlign():
        def __init__(self):
            graph = tf.Graph()
            with graph.as_default():
                # Reserve the whole GPU for this process and build the session once.
                gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=1.0)
                sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options,
                                                        log_device_placement=False))
                sess.as_default().__enter__()  # keep this session as the default
                pnet, rnet, onet = create_mtcnn(sess, None)
                self.pnet = pnet
                self.rnet = rnet
                self.onet = onet

    # Module-level singleton: the networks are created once at import time.
    align_mtcnn_helper = TensorHelperAlign()
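The pattern above (build the expensive object once at import time, then reuse it on every request) can be sketched without TensorFlow. The names ExpensiveModel, model_helper, and handle_request below are illustrative stand-ins, not names from the facenet code:

```python
# Minimal sketch of the "load once, reuse per request" pattern the question uses.
# ExpensiveModel stands in for the TF graph/session that is slow to construct.
class ExpensiveModel:
    load_count = 0  # track how many times the slow load actually runs

    def __init__(self):
        ExpensiveModel.load_count += 1  # simulates the slow model load
        self.weights = "loaded"

    def predict(self, x):
        return (x, self.weights)

# Module-level singleton: constructed once when the module is imported,
# then shared by every request handler (like align_mtcnn_helper above).
model_helper = ExpensiveModel()

def handle_request(x):
    # Each request reuses the already-loaded model instead of rebuilding it.
    return model_helper.predict(x)
```

Every call to handle_request reuses the same instance, so the expensive construction happens exactly once per process.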
I based my code on the facenet repo: https://github.com/davidsandberg/facenet/blob/master/src/align/detect_face.py
When I run the main function from https://github.com/davidsandberg/facenet/blob/master/src/align/align_dataset_mtcnn.py inside Docker, my application hangs at the step that uses pnet.
My only difference is at line 51, where I commented out the per-call session setup and use the pre-built networks instead:

    # with graph.as_default():
    #     gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=1.0)
    #     sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options,
    #                                             log_device_placement=False))
    #     sess.as_default().__enter__()
    #     pnet, rnet, onet = create_mtcnn(sess, None)
    pnet, rnet, onet = align_mtcnn_helper.pnet, align_mtcnn_helper.rnet, align_mtcnn_helper.onet
Instead of rebuilding the session every time, I keep it in RAM. Does anyone know why this works outside of a container but not inside one? If I roll back this change, my container works fine.
question from:
https://stackoverflow.com/questions/65600485/keep-tensorflow-session-inside-docker