I am applying quantization to an SSD model. The gist is attached. There is a custom layer called "AnchorBoxes" which is passed as a custom object while loading the model. This works fine when I don't apply quantization, but when I do, the custom object is not recognized.
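For reference, the plain (non-quantized) load works along these lines (a minimal sketch; the file name and the import path for AnchorBoxes are placeholders, the real ones are in the gist):

import tensorflow as tf
import tensorflow_model_optimization as tfmot  # used for the quantization steps below
from keras_layer_AnchorBoxes import AnchorBoxes  # hypothetical import path for the custom layer

# Loading with the custom object registered works fine without quantization.
model = tf.keras.models.load_model(
    'ssd_model.h5',  # placeholder file name; the real path is in the gist
    custom_objects={'AnchorBoxes': AnchorBoxes},
)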
I tried a workaround.
def apply_quantization_to_conv2D(layer):
    # Annotate only the Conv2D layers for quantization.
    if isinstance(layer, tf.keras.layers.Conv2D):
        return tfmot.quantization.keras.quantize_annotate_layer(layer)
    return layer

# Use `tf.keras.models.clone_model` to apply `apply_quantization_to_conv2D`
# to the layers of the model.
annotated_model = tf.keras.models.clone_model(
    model,
    clone_function=apply_quantization_to_conv2D,
)

#annotated_model.save('quantize_ready_model_20_01_Conv2D.h5', include_optimizer=True)
annotated_model.summary()

# Now that the Conv2D layers are annotated,
# `quantize_apply` actually makes the model quantization aware.
#quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
I commented out the line quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model) in the code above because it was throwing the error ValueError: Unknown layer: AnchorBoxes.
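For completeness, the step that fails, uncommented, would be the following (adam and ssd_loss are the optimizer and SSD loss objects defined in the gist):

# This is the call that raises ValueError: Unknown layer: AnchorBoxes.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.compile(optimizer=adam, loss=ssd_loss.compute_loss)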
Instead, I saved the model after annotating the Conv2D layers for quantization, as below:
def apply_quantization_to_conv2D(layer):
    # Annotate only the Conv2D layers for quantization.
    if isinstance(layer, tf.keras.layers.Conv2D):
        return tfmot.quantization.keras.quantize_annotate_layer(layer)
    return layer

# Use `tf.keras.models.clone_model` to apply `apply_quantization_to_conv2D`
# to the layers of the model.
annotated_model = tf.keras.models.clone_model(
    model,
    clone_function=apply_quantization_to_conv2D,
)

annotated_model.summary()
annotated_model.save('quantize_ready_model_20_01_Conv2D_1.h5', include_optimizer=True)

# Now that the Conv2D layers are annotated,
# `quantize_apply` actually makes the model quantization aware.
#quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
#quant_aware_model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
#quant_aware_model.summary()
Then I loaded the saved model as below, hoping that the loaded annotated model would have the custom_objects attached to it.
with tfmot.quantization.keras.quantize_scope():
    loaded_model = tf.keras.models.load_model('./quantize_ready_model_20_01_Conv2D_1.h5', custom_objects={'AnchorBoxes': AnchorBoxes})
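The load itself goes through (the error below comes from quantize_apply, not from load_model). As an illustrative sanity check, one can confirm that the annotations survived the save/load round trip by looking for the quantize-annotate wrapper layers:

# Illustrative check: annotated Conv2D layers appear wrapped (e.g. QuantizeAnnotate) after loading.
for layer in loaded_model.layers:
    if 'quant' in type(layer).__name__.lower():
        print(layer.name, type(layer).__name__)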
Finally, I applied quantize_apply to the new loaded_model, which has the annotated layers:
quant_aware_model = tfmot.quantization.keras.quantize_apply(loaded_model)
This again resulted in the same error:
ValueError: Unknown layer: AnchorBoxes
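As far as I can tell, quantize_apply rebuilds the model from its serialized config, so the custom layer has to be resolvable by Keras at that point as well. Since quantize_scope is documented to accept dictionaries of custom objects, one variant worth trying (a sketch, not something I have verified here) is to register AnchorBoxes in the scope that wraps quantize_apply itself:

# Sketch: register the custom layer in the scope around quantize_apply.
with tfmot.quantization.keras.quantize_scope({'AnchorBoxes': AnchorBoxes}):
    quant_aware_model = tfmot.quantization.keras.quantize_apply(loaded_model)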
System information
TensorFlow version (installed from source or binary): TF 2.0.0
TensorFlow Model Optimization version (installed from source or binary): 0.5.0
Describe the expected behavior
When I run quantize_apply(model), the model should become quantization aware.
Describe the current behavior
It throws ValueError: Unknown layer: AnchorBoxes on the custom object.
Code to reproduce the issue
gist
Question from: https://stackoverflow.com/questions/65829552/valueerror-unknown-layer-anchorboxes-quantization-tensorflow