
Repository name:

liuwei16/ALFNet

Repository URL:

https://github.com/liuwei16/ALFNet

Primary language:

Jupyter Notebook 83.8%

Repository introduction:

Learning Efficient Single-stage Pedestrian Detectors by Asymptotic Localization Fitting

Keras implementation of ALFNet accepted in ECCV 2018.

Introduction

This paper takes a step forward in pedestrian detection, in both speed and accuracy. Specifically, it proposes a structurally simple but effective module called Asymptotic Localization Fitting (ALF), which stacks a series of predictors that progressively evolve the default anchor boxes into improved detection results. As a result, the later predictors enjoy more positive samples of higher quality during training, while harder negatives can be mined with increasing IoU thresholds. On top of this module, an efficient single-stage pedestrian detection architecture (denoted ALFNet) is designed, achieving state-of-the-art performance on CityPersons and Caltech. For more details, please refer to our paper.

[Figure: img01]
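The step-by-step anchor refinement described above can be sketched in a few lines. The following is an illustrative numpy sketch of the ALF idea, not the authors' Keras implementation; the predictor functions and the increasing IoU thresholds are assumptions for illustration.

```python
import numpy as np

def apply_offsets(boxes, offsets):
    """Refine [x1, y1, x2, y2] boxes with standard (dx, dy, dw, dh) deltas."""
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    cx = boxes[:, 0] + 0.5 * w
    cy = boxes[:, 1] + 0.5 * h
    cx += offsets[:, 0] * w
    cy += offsets[:, 1] * h
    w *= np.exp(offsets[:, 2])
    h *= np.exp(offsets[:, 3])
    return np.stack([cx - 0.5 * w, cy - 0.5 * h,
                     cx + 0.5 * w, cy + 0.5 * h], axis=1)

def alf_refine(anchors, predictors, iou_thresholds=(0.5, 0.6, 0.7)):
    """Run stacked predictors; each one regresses deltas from the
    previous step's boxes, so anchors evolve step by step."""
    boxes = anchors
    for predictor, thr in zip(predictors, iou_thresholds):
        offsets = predictor(boxes)            # per-box (dx, dy, dw, dh)
        boxes = apply_offsets(boxes, offsets)
        # during training, positives would be re-matched at IoU >= thr here,
        # giving later predictors better-quality samples
    return boxes
```

With zero-offset predictors the boxes pass through unchanged; a non-zero predictor shifts or rescales them, which is exactly what the stacked steps exploit.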

Dependencies

  • Python 2.7
  • Numpy
  • Tensorflow 1.x
  • Keras 2.0.6
  • OpenCV

Contents

  1. Installation
  2. Preparation
  3. Models
  4. Training
  5. Test
  6. Evaluation

Installation

  1. Get the code. We will refer to the cloned directory as '$ALFNet'.
  git clone https://github.com/liuwei16/ALFNet.git
  2. Install the requirements.
  pip install -r requirements.txt

Preparation

  1. Download the dataset. We trained and tested our model on the recent CityPersons pedestrian detection dataset, so you should first download it. By default, we assume the dataset is stored in '$ALFNet/data/cityperson/'.

  2. Dataset preparation. We have provided the cache files of the training and validation subsets. Optionally, you can follow ./generate_data.py to create the cache files for training and validation yourself. By default, we assume the cache files are stored in '$ALFNet/data/cache/cityperson/'.

  3. Download the initialization models. We use the ResNet-50 and MobileNet_v1 backbones in our experiments. By default, we assume the weight files are stored in '$ALFNet/data/models/'.
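As an illustration of step 2 above, a cache file is essentially a pickled list of annotation records. The record layout shown here (a 'filepath' plus a 'bboxes' list) is a hypothetical example; the real format is defined by ./generate_data.py.

```python
import pickle

# Hypothetical annotation records; the real layout is produced by
# ./generate_data.py and stored under '$ALFNet/data/cache/cityperson/'.
annotations = [
    {'filepath': 'data/cityperson/images/val/frankfurt_000000.png',
     'bboxes': [[100, 80, 140, 180]]},   # [x1, y1, x2, y2] per pedestrian
]

# Write the cache file (shown here in the working directory).
with open('val_cache.pkl', 'wb') as f:
    pickle.dump(annotations, f)

# Training code would read it back like this.
with open('val_cache.pkl', 'rb') as f:
    records = pickle.load(f)
```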

Models

We have provided models trained on the training subset with different ALF steps and backbone architectures, to help reproduce the results in our paper:

  1. For ResNet-50:

ALFNet-1s: city_res50_1step.hdf5

ALFNet-2s: city_res50_2step.hdf5

ALFNet-3s: city_res50_3step.hdf5

  2. For MobileNet:

MobNet-1s: city_mobnet_1step.hdf5

MobNet-2s: city_mobnet_2step.hdf5

Training

Optionally, adjust the training parameters in ./keras_alfnet/config.py.
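The two parameters this section refers to, 'self.network' and 'self.steps', can be pictured as plain attributes on the config object. The class body below is a hypothetical sketch, not the actual contents of config.py; only the two attribute names come from this README.

```python
# Hypothetical sketch of the two attributes train.py reads from
# ./keras_alfnet/config.py; all other details of the real Config are omitted.
class Config(object):
    def __init__(self):
        self.network = 'resnet50'   # backbone: 'resnet50' or 'mobilenet'
        self.steps = 2              # number of stacked ALF predictors

C = Config()
C.network = 'mobilenet'             # switch to the MobileNet backbone
C.steps = 3                         # train a 3-step variant
out_dir = 'output/valmodels/%s/%dsteps' % (C.network, C.steps)
```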

  1. Train with different backbone networks.

Run ./train.py to start training. You can modify the parameter 'self.network' in ./keras_alfnet/config.py to select the backbone network. By default, the output weight files will be saved in '$ALFNet/output/valmodels/(network)/'.

  2. Train with different ALF steps.

Run ./train.py to start training. You can modify the parameter 'self.steps' in ./keras_alfnet/config.py to set the number of ALF steps. By default, the output weight files will be saved in '$ALFNet/output/valmodels/(network)/(num of)steps'.

  3. Update: train with the weight moving average (WMA) strategy.

Optionally, we provide an example of training ALFNet-2s with WMA (./train_2step_wma.py).

WMA was first proposed in Mean-Teacher.

We find that WMA helps achieve more stable results; one trial is given in ./results_2step_wma.txt.
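The WMA update itself is a simple exponential moving average of the network weights, as in Mean-Teacher: a shadow copy of the weights is nudged toward the current weights after every training step. A minimal numpy sketch, assuming a decay factor alpha (the value used in ./train_2step_wma.py may differ):

```python
import numpy as np

def wma_update(shadow_weights, model_weights, alpha=0.999):
    """In-place EMA update: shadow <- alpha * shadow + (1 - alpha) * current.

    With Keras, both lists would come from model.get_weights(); the
    averaged (shadow) copy is the one evaluated at test time.
    """
    for s, w in zip(shadow_weights, model_weights):
        s *= alpha
        s += (1.0 - alpha) * w
```

Because the shadow weights average over many recent checkpoints, they smooth out the step-to-step noise of SGD, which is the stability effect noted above.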

Test

Run ./test.py to get the detection results. By default, the output .txt files will be saved in '$ALFNet/output/valresults/(network)/(num of)steps'.
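The exact column layout of the output .txt files is defined by test.py and the evaluation scripts; purely as a hypothetical illustration, Caltech-style detection files store one box per line as an image id, the (x, y, w, h) box, and a confidence score:

```python
# Hypothetical detection tuples: (image_id, x, y, w, h, score).
detections = [(1, 210.5, 180.0, 35.2, 88.1, 0.97),
              (1, 400.0, 150.0, 30.0, 75.0, 0.55)]

# One comma-separated detection per line, written to a per-set .txt file.
lines = ['%d,%.4f,%.4f,%.4f,%.4f,%.4f' % det for det in detections]
with open('val_det.txt', 'w') as f:
    f.write('\n'.join(lines) + '\n')
```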

Evaluation

  1. Run ./evaluation/dt_txt2json.m to convert the '.txt' files to '.json'.
  2. Run ./evaluation/eval_script/eval_demo.py to get the Miss Rate (MR) results of the models. By default, the models are evaluated under the Reasonable setting. Optionally, you can modify the parameters in ./evaluation/eval_script/eval_MR_multisetup.py to evaluate the models under different settings, such as different occlusion levels and IoU thresholds.
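For reference, the Miss Rate reported on Caltech/CityPersons is conventionally the log-average miss rate over 9 FPPI points evenly spaced in log space between 1e-2 and 1e0. The sketch below illustrates that metric on made-up curve data; it is not the project's evaluation code (which lives in eval_MR_multisetup.py).

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate):
    """Log-average MR over 9 FPPI reference points in [1e-2, 1e0].

    Assumes fppi is sorted ascending, with miss_rate aligned to it.
    """
    ref = np.logspace(-2.0, 0.0, num=9)      # 9 reference FPPI points
    mrs = []
    for r in ref:
        # take the miss rate at the largest FPPI not exceeding r
        idx = np.where(fppi <= r)[0]
        mrs.append(miss_rate[idx[-1]] if len(idx) else miss_rate[0])
    # geometric mean, as in the standard Caltech evaluation protocol
    return np.exp(np.mean(np.log(np.maximum(mrs, 1e-10))))
```

Lower is better: a model that keeps the miss rate low across all 9 FPPI points gets a low log-average MR.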

Citation

If you find our work useful in your research, please consider citing:

@InProceedings{Liu_2018_ECCV,
author = {Liu, Wei and Liao, Shengcai and Hu, Weidong and Liang, Xuezhi and Chen, Xiao},
title = {Learning Efficient Single-stage Pedestrian Detectors by Asymptotic Localization Fitting},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}


