
python - Why is inference of a SavedModel slower with tf.keras than with tf.saved_model?

What I did:

  • convert the .weights file to a SavedModel (.pb) using tf.keras.models.save_model() (a sketch of this step follows the code below)
  • load the new model file in two ways below:
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

# 1st way: tf.saved_model.load
model = tf.saved_model.load(model_path, tags=[tag_constants.SERVING])
infer = model.signatures['serving_default']
res = infer(tf.constant(batch_data))

# 2nd way: tf.keras.models.load_model
model = tf.keras.models.load_model(model_path, compile=False)
res = model.predict(batch_data)

The first way runs at about 15 FPS, while the second runs at about 10 FPS, i.e. 1.5x slower.
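For reference, the conversion step in the first bullet was roughly the following (a minimal sketch; build_model, weights_path, and model_path are illustrative placeholders, not names from the original post):

model = build_model()                           # hypothetical model-building code
model.load_weights(weights_path)                # load the original .weights file
tf.keras.models.save_model(model, model_path)   # writes a SavedModel directory in TF2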

My ultimate goal is to output both intermediate layer outputs and the final predictions. The only (simple) way to achieve this in TF2 is tf.keras.Model(model.inputs, [<layer>.output, model.output]), so I need the loaded model to be a Keras object (see the sketch below).
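A minimal sketch of that multi-output construction, assuming the loaded object is a tf.keras.Model (the layer name 'conv2d_1' is purely illustrative):

# Build a new model exposing an intermediate activation alongside the
# final prediction; this only works if `model` is a Keras model.
layer = model.get_layer('conv2d_1')   # hypothetical layer name
multi_model = tf.keras.Model(model.inputs, [layer.output, model.output])
intermediate, preds = multi_model.predict(batch_data)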

So how can I load the model the Keras way while keeping the same inference speed?
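One direction worth trying (an assumption on my part, not a confirmed fix for this case): much of model.predict's per-call overhead comes from eager execution and its internal data handling, so calling the Keras model directly inside a tf.function often recovers graph-mode speed. Reusing multi_model from the sketch above:

# Hedged sketch: wrap the direct call in tf.function so it is traced once
# and the compiled graph is reused on subsequent calls.
@tf.function
def infer_keras(x):
    return multi_model(x, training=False)

intermediate, preds = infer_keras(tf.constant(batch_data))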

Question from: https://stackoverflow.com/questions/65938203/why-inference-of-savedmodel-slower-on-tf-keras-than-tf-saved-model


1 Answer

Waiting for answers.
