
Repository name:

kyu-sz/WPAL-network

Repository URL:

https://github.com/kyu-sz/WPAL-network

Primary language:

Python 91.4%

Introduction:

Weakly-supervised Pedestrian Attribute Localization Network

This repository is no longer maintained. Please refer to the latest version.

For the RAP dataset, please contact Dangwei Li ([email protected]).

By Ken Yu, under the guidance of Dr. Zhang Zhang and Prof. Kaiqi Huang.

The Weakly-supervised Pedestrian Attribute Localization Network (WPAL-network) is a Convolutional Neural Network (CNN) structure designed to recognize attributes of objects and to localize them. Currently it is developed to recognize attributes of pedestrians only, using the Richly Annotated Pedestrian (RAP) database or the PETA database.
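As a rough illustration of the multi-label setup (this is not the actual WPAL-network code; the attribute names and the 0.5 threshold are assumptions for the sketch), attribute recognition can be framed as an independent sigmoid score per attribute, so one pedestrian may carry any subset of attributes:

```python
import numpy as np

# Hypothetical attribute subset; RAP annotates many more attributes.
ATTRIBUTES = ["Female", "AgeLess16", "Backpack", "LongHair"]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_attributes(logits, threshold=0.5):
    """Map per-attribute logits to binary predictions.

    Each attribute gets an independent sigmoid score (multi-label),
    rather than competing in a single softmax (multi-class).
    """
    scores = sigmoid(np.asarray(logits, dtype=float))
    return {name: bool(s > threshold) for name, s in zip(ATTRIBUTES, scores)}
```

With logits `[2.0, -3.0, 0.1, 4.0]`, for example, every attribute except `AgeLess16` would be predicted present.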

Installation

  1. Clone this repository

    # Make sure to clone with --recursive
    git clone --recursive https://github.com/kyu-sz/Weakly-supervised-Pedestrian-Attribute-Localization-Network.git
  2. Build Caffe and pycaffe

    This project uses Python layers for input, among other things. When building Caffe, set the WITH_PYTHON_LAYER option to true.

    WITH_PYTHON_LAYER=1 make all pycaffe -j 8
  3. Download the RAP database

    To get the Richly Annotated Pedestrian (RAP) database, please visit rap.idealtest.org to learn how to download a copy.

    You should obtain two zip files:

    $RAP/RAP_annotation.zip
    $RAP/RAP_dataset.zip
    
  4. Unzip both into that directory.

    cd $RAP
    unzip RAP_annotation.zip
    unzip RAP_dataset.zip
  5. Create symlinks for the RAP database

    cd $WPAL_NET_ROOT/data/dataset/
    ln -s $RAP RAP
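Before training, it can help to verify that the symlinked dataset has the expected layout. The sketch below assumes each zip extracts to a directory of the same name; the helper name and layout check are illustrative, not part of the project:

```python
import os

# Directories expected under $RAP after unzipping both archives
# (assumption: each zip extracts to a directory named after itself).
EXPECTED = ["RAP_annotation", "RAP_dataset"]

def check_rap_layout(root):
    """Return the expected entries that are missing under the RAP root."""
    return [name for name in EXPECTED
            if not os.path.isdir(os.path.join(root, name))]
```

An empty return value means both directories were found under the root (e.g. `data/dataset/RAP`).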

Usage

To train the model, first fetch a pretrained VGG_CNN_S model by:

./data/scripts/fetch_pretrained_vgg_cnn_s_model.sh

Then run the experiment script for training:

./experiments/examples/VGG_CNN_S/train_vgg_s_rap_0.sh

An experiment script for testing is also available:

./experiments/examples/VGG_CNN_S/test_vgg_s_rap.sh

Acknowledgements

The project layout and some code are derived from Ross Girshick's py-faster-rcnn.

We use VGG_CNN_S as the pretrained model. Information can be found in K. Simonyan's Gist. It comes from the BMVC 2014 paper "Return of the Devil in the Details: Delving Deep into Convolutional Nets":

Return of the Devil in the Details: Delving Deep into Convolutional Nets
K. Chatfield, K. Simonyan, A. Vedaldi, A. Zisserman
British Machine Vision Conference, 2014 (arXiv:1405.3531)
