Open-source project: HYPJUDY/Decouple-SSAD
Repository URL: https://github.com/HYPJUDY/Decouple-SSAD
Language: Python 86.9%

# Decouple-SSAD

Code, models and data for Decouple-SSAD: Decoupling Localization and Classification in Single Shot Temporal Action Detection.

Download the preprocessed data, final models and result files for evaluation from OneDrive.

An improved version of SSAD (Single Shot Temporal Action Detection) is also provided in this repository.

## Introduction

Video temporal action detection aims to temporally localize and recognize actions in untrimmed videos. Existing one-stage approaches mostly focus on unifying the two subtasks, i.e., localization of action proposals and classification of each proposal, through a fully shared backbone. However, encapsulating all components of both subtasks in one single network may restrict training by ignoring the specialized characteristics of each subtask. In this paper, we propose a novel Decoupled Single Shot temporal Action Detection (Decouple-SSAD) method that mitigates this problem by decoupling localization and classification in a one-stage scheme. In particular, two separate branches are designed in parallel so that each component owns its representations privately for accurate localization or classification. Each branch produces a set of action anchor layers by applying deconvolution to the feature maps of the main stream, incorporating high-level semantic information from deeper layers to enhance the feature representations. We conduct extensive experiments on THUMOS14 and demonstrate superior performance over state-of-the-art methods.
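To make the decoupling described above concrete, here is a minimal sketch (TensorFlow 1.x, matching the environment below) of how two private branches could rebuild their own anchor layers by deconvolving main-stream feature maps. All layer names, channel counts and kernel sizes are illustrative assumptions, not the repository's actual configuration.

```python
# Minimal sketch of the decoupling idea: the main stream produces a pyramid of
# temporal feature maps; the classification and localization branches each
# rebuild private anchor layers by deconvolving deeper main-stream features
# and fusing them with shallower ones. Illustrative only.
import tensorflow as tf

def temporal_conv(x, filters, stride, name):
    # 1D temporal convolution implemented as conv2d on [N, 1, T, C] tensors
    return tf.layers.conv2d(x, filters, kernel_size=(1, 3), strides=(1, stride),
                            padding='same', activation=tf.nn.relu, name=name)

def temporal_deconv(x, filters, name):
    # Deconvolution (transposed conv) that doubles the temporal length
    return tf.layers.conv2d_transpose(x, filters, kernel_size=(1, 4),
                                      strides=(1, 2), padding='same',
                                      activation=tf.nn.relu, name=name)

def decoupled_branch(main_feats, scope):
    """Build one private branch (e.g. 'cls' or 'loc') from main-stream features.

    main_feats: list of feature maps ordered shallow -> deep,
                e.g. temporal lengths [T/2, T/4, T/8].
    Returns a list of branch anchor layers, one per pyramid level.
    """
    with tf.variable_scope(scope):
        branch_layers = [temporal_conv(main_feats[-1], 512, 1, 'top')]
        for i in range(len(main_feats) - 2, -1, -1):
            up = temporal_deconv(branch_layers[-1], 512, 'deconv_%d' % i)
            fused = up + temporal_conv(main_feats[i], 512, 1, 'lateral_%d' % i)
            branch_layers.append(fused)
        return branch_layers[::-1]  # shallow -> deep order

# Usage: x is a [batch, 1, T, C] snippet-level feature sequence.
x = tf.placeholder(tf.float32, [None, 1, 128, 1024])
f1 = temporal_conv(x, 512, 2, 'main_1')   # T/2
f2 = temporal_conv(f1, 512, 2, 'main_2')  # T/4
f3 = temporal_conv(f2, 512, 2, 'main_3')  # T/8
cls_layers = decoupled_branch([f1, f2, f3], 'cls_branch')
loc_layers = decoupled_branch([f1, f2, f3], 'loc_branch')
```

Classification and localization heads would then be attached to `cls_layers` and `loc_layers` respectively, so neither subtask shares its anchor-layer representations with the other.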
## Environment

Tested on Ubuntu 16.04.

For example, I use the following commands to set up the environment:

```bash
export PATH=/usr/local/cuda-9.0/bin:$PATH # select cuda version
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64/
conda create -n Decouple-SSAD pip python=3.6 # select python version
source activate Decouple-SSAD
pip install --upgrade packageURL # select tensorflow version
# e.g. tensorflow_gpu-1.12.0-cp36-cp36m-linux_x86_64.whl
# download from https://www.tensorflow.org/install/pip#package-location
# or
# pip install tensorflow-gpu --user
# this command will install the latest version, which may not match with your cuda version
git clone https://github.com/HYPJUDY/Decouple-SSAD.git
cd Decouple-SSAD
# ... follow "Run code" section ...
conda deactivate
```
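After installation, a quick sanity check (not part of the repository) can confirm that the installed TensorFlow build matches the CUDA setup and can see a GPU:

```python
# Sanity check (illustrative, not repository code): verify the TensorFlow
# version and that a GPU device is visible before training.
import tensorflow as tf

print(tf.__version__)              # e.g. 1.12.0
print(tf.test.is_gpu_available())  # True if CUDA/cuDNN are set up correctly
print(tf.test.gpu_device_name())   # e.g. /device:GPU:0
```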
## Run code

## Compare with SSAD

Our implementation of Decouple-SSAD is based on SSAD (Single Shot Temporal Action Detection). We thank Tianwei Lin for generously sharing his code. We re-train the original SSAD as our baseline model (i.e., akin to the single main stream of Decouple-SSAD); it follows the overall structure and adopts most of the parameters of SSAD, but with the following improvements:
Besides these, some code of SSAD has been re-structured and modified, which may also influence the performance. Our ablation study shows that our baseline model (the main stream pretrained on UCF101) improves the mAP@0.5 performance of SSAD from 24.6% to 31.2% (a 6.6% gain). Moreover, Decouple-SSAD further enhances the performance significantly, providing a 4.6% improvement over the baseline.

## Performance

Note that the performance may fluctuate by 0.5~2% between training runs, even with the same environment and code. Occasionally, the mAP of the single temporal stream is higher than the fused result (e.g. Decouple-SSAD (KnetV3) in my last training). Here's the detailed performance report. Each of the following experiments is run with 2 GeForce RTX 2080 Ti GPUs.
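As a reference for the numbers above, detections on THUMOS14 are matched to ground truth by temporal IoU; the snippet below is a small illustration of that overlap criterion at threshold 0.5, not the repository's evaluation code.

```python
# Illustration only (not the repository's evaluation code): temporal IoU
# between a predicted segment and a ground-truth segment, the overlap
# criterion behind the mAP@0.5 numbers reported above.
def temporal_iou(pred, gt):
    """pred, gt: (start, end) tuples in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# A prediction counts as a true positive at threshold 0.5 if its IoU with an
# unmatched ground-truth instance of the same class is >= 0.5.
print(temporal_iou((10.0, 20.0), (12.0, 22.0)))  # 8 / 12 = 0.667 -> matched at 0.5
```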
The detection AP over different action categories with overlap threshold 0.5 on THUMOS14 is also reported for those who are interested.

## Citation

If you like this paper or code, please cite us:
## Contact

Feel free to open an issue or email me for problems with or help using Decouple-SSAD. Any feedback is welcome!