# Mobile Video Object Detection

Code for the paper *Mobile Video Object Detection with Temporally-Aware Feature Maps*, Mason Liu, Menglong Zhu, CVPR 2018.

Repository: https://github.com/vikrant7/mobile-vod-bottleneck-lstm (Python 100.0%)

## Introduction

The paper introduces an online model for object detection in videos, designed to run in real time on low-powered mobile and embedded devices. The proposed approach combines fast single-image object detection with convolutional long short-term memory (LSTM) layers to create an interweaved recurrent-convolutional architecture. Additionally, the authors propose an efficient Bottleneck-LSTM layer that significantly reduces computational cost compared to regular LSTMs. The network achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate feature maps across frames. This approach is substantially faster than existing video detection methods, outperforming the fastest single-frame models in model size and computational cost while attaining accuracy comparable to much more expensive single-frame models on the ImageNet VID 2015 dataset. The model reaches a real-time inference speed of up to 15 FPS on a mobile CPU.

## Dependencies
## Dataset

Download the ImageNet VID 2015 dataset from [link]. This is the link for ILSVRC2017, as the link for ILSVRC2015 seems to be down now. To get the lists of training, validation and test data, run the dataset scripts (make sure to change the dataset path in the scripts):
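For reference, here is a minimal sketch of what such a list-generation script typically does. The directory layout and output format below are assumptions for illustration, not the repo's actual scripts:

```python
import os

def write_image_list(vid_root, split, out_path):
    """Walk an ILSVRC-style frame directory tree and write one
    'split/sequence/frame_id' line per frame.

    The 'Data/VID/<split>/<sequence>/<frame>.JPEG' layout assumed
    here is illustrative; adjust it to the real dataset structure.
    """
    split_dir = os.path.join(vid_root, "Data", "VID", split)
    with open(out_path, "w") as out:
        for seq in sorted(os.listdir(split_dir)):
            seq_dir = os.path.join(split_dir, seq)
            if not os.path.isdir(seq_dir):
                continue
            for frame in sorted(os.listdir(seq_dir)):
                if frame.endswith(".JPEG"):
                    frame_id = os.path.splitext(frame)[0]
                    out.write(f"{split}/{seq}/{frame_id}\n")
```

A Dataset class can then read this list once and index frames by line number, which keeps sequence order intact for the recurrent layers.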
Note: The output of these scripts is already in the repo, so there is no need to run them again. Two custom PyTorch Dataset classes are provided in the repo.

## Training

Make sure to be in a Python 3.6+ environment with all the dependencies installed. As described in section 4.2 of the paper, the model has two types of LSTM layers: the Bottleneck LSTM layer, which reduces the number of channels by a factor of 0.25, and the normal Conv LSTM, which has the same number of output channels as input channels. Training of the multiple Conv LSTM layers is done in sequential order, i.e. fine-tune and then fix all the layers before the newly added LSTM layer. Make sure to keep the batch size the same across lstm1, lstm2, lstm3, lstm4 and lstm5 training, as the sizes of the hidden and cell states of the LSTM layers must stay consistent throughout. Also, make sure to keep the width multiplier the same. By default, the GPU is used for training. The freeze_net command line argument freezes the model as described in the paper. Before each checkpoint is saved, the model is validated on the validation set; all checkpoint models are saved as training progresses.

### Basenet

Basenet is MobileNet V1 with SSD. Train the basenet by executing the following command:

```
python train_mvod_basenet.py --datasets {path to ILSVRC2015 root dir} --batch_size 60 --num_epochs 30 --width_mult 1
```

If you want to train with any other width multiplier, change the width_mult command line argument accordingly. For more help on command line args, execute:

```
python train_mvod_basenet.py --help
```

### Basenet with 1 Bottleneck LSTM

As described in section 4.2 of the paper, the first Bottleneck LSTM layer is placed after the Conv13 layer, and we freeze all the layers up to and including Conv13.
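The Bottleneck-LSTM step described above can be sketched in NumPy as follows. This is a simplified illustration, not the repo's implementation: all convolutions are reduced to 1x1 (pointwise) products over the channel axis, whereas the actual layer uses 3x3 depthwise-separable convolutions. The key property shown is the bottleneck: the gates operate on `alpha * C` channels rather than `C`.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bottleneck_lstm_step(x, h, c, params):
    """One Bottleneck-LSTM step on a (C, H, W) feature map.

    params = (Wb, Wi, Wf, Wo, Wg):
      Wb: (Cb, C + Cb)  bottleneck projection, Cb = 0.25 * C
      Wi/Wf/Wo/Wg: (Cb, Cb)  gate weights, computed from the bottleneck
    All convs are simplified to 1x1, i.e. per-pixel channel matmuls.
    """
    Wb, Wi, Wf, Wo, Wg = params
    # Bottleneck: project concat(input, hidden) down to Cb channels
    z = np.concatenate([x, h], axis=0)              # (C + Cb, H, W)
    b = np.tanh(np.einsum("oc,chw->ohw", Wb, z))    # (Cb, H, W)
    # Gates are computed from the cheap bottleneck features
    i = sigmoid(np.einsum("oc,chw->ohw", Wi, b))
    f = sigmoid(np.einsum("oc,chw->ohw", Wf, b))
    o = sigmoid(np.einsum("oc,chw->ohw", Wo, b))
    g = np.tanh(np.einsum("oc,chw->ohw", Wg, b))
    # Standard LSTM state update, on Cb-channel state maps
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Because the hidden and cell states keep the reduced `Cb` channel count, every subsequent layer (and every gate) is cheaper than in a regular Conv LSTM, which is where the computational savings come from.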
To train the model with one Bottleneck LSTM layer, execute:

```
python train_mvod_lstm1.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained basenet model} --width_mult 1 --freeze_net
```

Refer to the script's docstring and inline comments for details.

### Basenet with 2 Bottleneck LSTMs

As described in section 4.2 of the paper, the second Bottleneck LSTM layer is placed after the Feature Map 1 layer, and we freeze all the layers up to and including Feature Map 1. To train the model with two Bottleneck LSTM layers, execute:

```
python train_mvod_lstm2.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained lstm 1} --width_mult 1 --freeze_net
```

Refer to the script's docstring and inline comments for details.

### Basenet with 3 Bottleneck LSTMs

As described in section 4.2 of the paper, the third Bottleneck LSTM layer is placed after the Feature Map 2 layer, and we freeze all the layers up to and including Feature Map 2. To train the model with three Bottleneck LSTM layers, execute:

```
python train_mvod_lstm3.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained lstm 2} --width_mult 1 --freeze_net
```

Refer to the script's docstring and inline comments for details.

### Basenet with 3 Bottleneck LSTMs and 1 LSTM

As described in section 4.2 of the paper, a normal LSTM layer is placed after the Feature Map 3 layer, and we freeze all the layers up to and including Feature Map 3. To train the model with 3 Bottleneck LSTM layers and 1 LSTM layer, execute:

```
python train_mvod_lstm4.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained lstm 3} --width_mult 1 --freeze_net
```

Refer to the script's docstring and inline comments for details.

### Basenet with 3 Bottleneck LSTMs and 2 LSTMs

As described in section 4.2 of the paper, the second normal LSTM layer is placed after the Feature Map 4 layer, and we freeze all the layers up to and including Feature Map 4.
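The stage-wise freezing scheme used across all five stages (fine-tune and fix everything before the newly added LSTM) can be sketched as a simple split over an ordered list of layers. The layer names below are illustrative, not the repo's exact module names:

```python
def split_frozen(layers, freeze_upto):
    """Split an ordered list of layer names into (frozen, trainable),
    freezing everything up to and including `freeze_upto`: the scheme
    that --freeze_net applies at each training stage."""
    idx = layers.index(freeze_upto) + 1
    return layers[:idx], layers[idx:]

# Illustrative stage table: each new LSTM freezes the preceding backbone.
STAGE_FREEZE_POINT = {
    "lstm1": "conv13",
    "lstm2": "feature_map_1",
    "lstm3": "feature_map_2",
    "lstm4": "feature_map_3",
    "lstm5": "feature_map_4",
}
```

In PyTorch this split would translate to setting `requires_grad = False` on the frozen layers' parameters and passing only the trainable ones to the optimizer.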
To train the model with 3 Bottleneck LSTM layers and 2 normal LSTM layers, execute:

```
python train_mvod_lstm5.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained lstm 4} --width_mult 1 --freeze_net
```

Refer to the script's docstring and inline comments for details.

## Evaluation

For help on the evaluation script's options, run:

```
python evaluate.py --help
```

## Results

Main results according to the paper:
Reported metrics: TODO: train the model and report the metric scores. Due to limited GPU resources and the huge size of the ImageNet VID 2015 dataset, training the model is taking a very long time. I will report the metric scores here once training is done.

Update: I have trained the basenet, and training of lstm1 is now in progress.

## References

1. Mason Liu, Menglong Zhu. Mobile Video Object Detection with Temporally-Aware Feature Maps. CVPR 2018.
## Contributors

Thanks a lot to [Pichao Wang] for training the model and suggesting several changes.

## License

BSD