
python - How should I convert a TF2 Object Detection model in order to use it in TFjs?

I'm currently trying to convert one of the object detection models from the TF2 Object Detection Model Zoo to TF.js, in particular SSD MobileNet V2 FPNLite 320x320.

When I convert the pre-existing SavedModel from the saved_model folder to TF.js, I'm able to import it in my browser and run it through executeAsync(). However, if I keep the original pipeline.config and create another SavedModel from the provided checkpoint with this command

python exporter_main_v2.py --input_type image_tensor \
    --pipeline_config_path ./pre-trained-models/ssd320/pipeline.config \
    --trained_checkpoint_dir ./pre-trained-models/ssd320/checkpoint_0 \
    --output_directory ./pre-trained-models/ssd320/exported_model

and then convert it to TF.js with the following command

tensorflowjs_converter \
    --input_format=tf_saved_model \
    --saved_model_tags=serve \
    ./pre-trained-models/ssd320/path-to-savedmodel-folder \
    ./pre-trained-models/tfjs_test

I encounter the following error when I try to run inference in my browser:

util_base.js?a6b2:141 Uncaught (in promise) Error: TensorList shape mismatch:  Shapes -1 and 3 must match
    at Module.assert (util_base.js?a6b2:141)
    at assertShapesMatchAllowUndefinedSize (tensor_utils.js?74aa:24)
    at TensorList.setItem (tensor_list.js?41f7:182)
    at Module.executeOp (control_executor.js?de9e:188)
    at eval (operation_executor.js?be85:52)
    at executeOp (operation_executor.js?be85:94)
    at GraphExecutor.processStack (graph_executor.js?33ef:390)
    at GraphExecutor.executeWithControlFlow (graph_executor.js?33ef:350)
    at async GraphExecutor._executeAsync (graph_executor.js?33ef:285)
    at async GraphModel.executeAsync (graph_model.js?9724:316)
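
For context, the browser-side code is essentially the standard TF.js graph-model pattern: loadGraphModel() followed by executeAsync(). The following is a minimal sketch of that call site (the model URL, function name, and preprocessing here are simplified assumptions, not my exact code):

import * as tf from '@tensorflow/tfjs';

// Assumed path: wherever the converted model.json from tfjs_test is served.
const MODEL_URL = './pre-trained-models/tfjs_test/model.json';

async function detect(image: HTMLImageElement) {
  // Load the converted graph model (the output of tensorflowjs_converter).
  const model = await tf.loadGraphModel(MODEL_URL);

  // The exported model expects an image_tensor input of shape [1, h, w, 3].
  const input = tf.tidy(() => tf.browser.fromPixels(image).expandDims(0).toInt());

  // Object detection graphs contain control-flow ops, so executeAsync() is
  // required instead of execute(); the TensorList error above is thrown here.
  const outputs = await model.executeAsync(input);

  input.dispose();
  return outputs;
}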

I'm currently working in Colab with the standard modules (TF 2.4.1, Python 3.6.9, and tensorflowjs 3.0.0) and haven't managed to find information on similar issues elsewhere.

I also tried SSD MobileNet V2 320x320 (no FPN this time) and the outcome is the same. I'm starting to think it may be connected to the use of exporter_main_v2.py, but I don't know how to export the model without it.

Could you please help me figure out more about the cause of this issue?

Question from: https://stackoverflow.com/questions/66046092/how-should-i-convert-a-tf2-object-detection-model-in-order-to-use-it-in-tfjs


1 Answer

Waiting for answers.
