
BryanPlummer/pl-clc: Implementation for our paper "Phrase Localization and ...


Project name:

BryanPlummer/pl-clc

Project URL:

https://github.com/BryanPlummer/pl-clc

Programming language:

MATLAB 99.8%

Project introduction:

Phrase Localization and Visual Relationship Detection with Comprehensive Image-Language Cues

pl-clc contains the implementation for our paper, which includes several implementation improvements over the initial arXiv submission. If you find this code useful in your research, please consider citing:

@inproceedings{plummerPLCLC2017,
    author = {Bryan A. Plummer and Arun Mallya and Christopher M. Cervantes and Julia Hockenmaier and Svetlana Lazebnik},
    title = {Phrase Localization and Visual Relationship Detection with Comprehensive Image-Language Cues},
    booktitle = {ICCV},
    year = {2017}
}

Phrase Localization Evaluation Demo

This code was tested using MATLAB R2016a on a system running Ubuntu 14.04.

  1. Clone the pl-clc repository

    git clone --recursive https://github.com/BryanPlummer/pl-clc.git
  2. Follow the installation requirements for the external code, which includes:

    1. Faster RCNN
    2. Edge Boxes
    3. LIBSVM
    4. HGLMM Fisher Vectors

    On the system this code was tested on, only Caffe (in Faster RCNN) and LIBSVM required compilation to use the evaluation script.

  3. Optionally, download the Stanford Parser and place it in the external folder, naming the directory stanford-parser. Note that the precomputed data was generated with version 3.4.1 of the Stanford Parser.

  4. Download the precomputed data (8.3 GB): pl-clc models

  5. Get the Flickr30k Entities dataset and put it in the datasets folder. The code also assumes the images have been placed in datasets/Flickr30kEntities/Images.

  6. After unpacking the precomputed data, you can run our evaluation code (a minimal end-to-end session sketch follows this list):

    >> evalAllCuesFlickr30K

    This step took about 45 minutes using a single Tesla K40 GPU on a system with an Intel(R) Xeon(R) E5-2687W v2 CPU.
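
Below is a minimal MATLAB session sketch tying the steps above together. The cd, addpath, and directory check are illustrative assumptions rather than part of the repository; only evalAllCuesFlickr30K is an actual script shipped with the code and precomputed models.

    % Sketch of an end-to-end evaluation session (assumptions noted above).
    cd('pl-clc');                                  % repository root after cloning
    addpath(genpath(pwd));                         % assumption: put the repo and external code on the path
    imgDir = fullfile('datasets', 'Flickr30kEntities', 'Images');
    assert(exist(imgDir, 'dir') == 7, 'Flickr30k Entities images not found in the expected folder');
    evalAllCuesFlickr30K                           % evaluation script from step 6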

Training new models

There are example scripts in the trainScripts folder that were used to create all the precomputed data. Training these models from scratch requires about 100 GB of memory. This can be reduced by simply removing some of the parfor loops, but training the CCA model requires about 70 GB of memory on its own.
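
As a rough illustration of the memory note above, replacing a parfor with a plain for loop keeps only one iteration's data live at a time, at the cost of speed. The snippet below is a generic sketch with placeholder work items and computation, not an actual pl-clc training script.

    % Generic sketch: trading parallelism for lower peak memory (not pl-clc code).
    items = num2cell(1:8);            % placeholder work items
    results = cell(size(items));
    % parfor i = 1:numel(items)       % parallel version: higher peak memory
    for i = 1:numel(items)            % serial version: lower peak memory, slower
        results{i} = items{i}.^2;     % placeholder for the real per-item computation
    end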



