# P2PNet (ICCV 2021 Oral Presentation)

Project: TencentYoutuResearch/CrowdCounting-P2PNet · https://github.com/TencentYoutuResearch/CrowdCounting-P2PNet · Language: Python 100.0%

This repository contains the official PyTorch implementation of P2PNet, as described in *Rethinking Counting and Localization in Crowds: A Purely Point-Based Framework*. A brief introduction to P2PNet (in Chinese) can be found at 机器之心 (almosthuman). The code is tested with PyTorch 1.5.0 and may not run with other versions.

## Visualized demos for P2PNet

## The network

The overall architecture of P2PNet: built upon a VGG16 backbone, it first introduces an upsampling path to obtain a fine-grained feature map, then exploits two branches to simultaneously predict a set of point proposals and their confidence scores.

## Comparison with state-of-the-art methods

P2PNet achieves state-of-the-art performance on several challenging datasets with various densities.
- Comparison on the NWPU-Crowd dataset.
- Overall performance for both counting and localization.
- Comparison of localization performance in terms of F1-measure on NWPU.
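The two prediction heads described in the network section can be sketched in a simplified, framework-free form. This is a hedged illustration of the idea only, not the official implementation: the shapes, the number of reference points per pixel (`K`), and the plain linear maps standing in for 1×1 convolutions are all assumptions.

```python
import numpy as np

# Toy "feature map" from the upsampled backbone: H x W x C.
H, W, C = 4, 4, 8
rng = np.random.default_rng(0)
feat = rng.standard_normal((H, W, C))

# Two 1x1-conv-style heads (here: plain per-pixel linear maps).
# Each pixel carries K reference points; the regression head predicts
# (dx, dy) offsets for each, the classification head a confidence score.
K = 2
w_reg = rng.standard_normal((C, K * 2)) * 0.01
w_cls = rng.standard_normal((C, K)) * 0.01

offsets = feat @ w_reg                       # (H, W, K*2)
scores = 1 / (1 + np.exp(-(feat @ w_cls)))   # (H, W, K), sigmoid confidences

# Reference points: pixel centers, replicated K times.
ys, xs = np.meshgrid(np.arange(H) + 0.5, np.arange(W) + 0.5, indexing="ij")
refs = np.stack([xs, ys], axis=-1)                 # (H, W, 2)
refs = np.repeat(refs[:, :, None, :], K, axis=2)   # (H, W, K, 2)

# Point proposals = reference points + predicted offsets.
proposals = refs + offsets.reshape(H, W, K, 2)

print(proposals.shape, scores.shape)  # (4, 4, 2, 2) (4, 4, 2)
```

Each of the `H * W * K` proposals is an (x, y) location paired with a confidence score; the final prediction keeps only the confident ones.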
## Installation
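The installation commands did not survive this copy. A typical setup (the PyTorch/torchvision versions follow the "tested with PyTorch 1.5.0" note above; the presence of a `requirements.txt` is an assumption, so check the repository):

```bash
# Clone the repository
git clone https://github.com/TencentYoutuResearch/CrowdCounting-P2PNet.git
cd CrowdCounting-P2PNet

# The code is tested with PyTorch 1.5.0
pip install torch==1.5.0 torchvision==0.6.0

# Install remaining dependencies, if the repo ships a requirements file (assumption)
pip install -r requirements.txt
```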
## Organize the counting dataset

We use a list file to collect all the images and their ground-truth annotations in a counting dataset. When your dataset is organized as recommended below, the format of this list file is defined as:
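The example block from the original README is not reproduced here. As an illustration (the exact layout below is an assumption; check the list files shipped with the repository), such a list file typically pairs each image path with its annotation path, one pair per line:

```
train/scene01/img0001.jpg train/scene01/img0001.txt
train/scene01/img0002.jpg train/scene01/img0002.txt
```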
### Dataset structure

`DATA_ROOT` is your path containing the counting datasets.

### Annotation format

For the annotations of each image, we use a single txt file that contains one annotation per line. Note that indexing for pixel values starts at 0. The expected format of each line is:
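Assuming each line holds the 0-indexed x and y pixel coordinates of one annotated head point (the exact per-line layout is an assumption; check the dataset preparation scripts in the repository), such a file can be parsed as:

```python
import os
import tempfile

def load_points(path):
    """Parse a point-annotation txt file: one 'x y' pair per line, 0-indexed."""
    points = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            x, y = map(float, line.split()[:2])
            points.append((x, y))
    return points

# Toy annotation file with three annotated head points.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("10.0 20.0\n33.5 7.2\n0 0\n")
    tmp = f.name

pts = load_points(tmp)
os.remove(tmp)
print(len(pts), pts[0])  # 3 (10.0, 20.0)
```

The number of parsed points per image is the ground-truth count used for training and evaluation.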
## Training

The network can be trained using the training script shipped with the repository.
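During training, P2PNet supervises the proposals through a one-to-one matching between predicted points and ground-truth points (the paper uses the Hungarian algorithm, and the actual matching cost also involves the confidence scores). A minimal pure-Python sketch of the idea on a toy example, using brute force over permutations and plain Euclidean distance for clarity:

```python
from itertools import permutations
from math import dist

# Toy data: predicted point proposals and ground-truth head points.
proposals = [(1.0, 1.0), (5.0, 5.0), (9.0, 2.0)]
gt_points = [(5.2, 4.8), (0.8, 1.1)]

# Find the one-to-one assignment of GT points to proposals that minimizes
# total distance (brute force; real implementations use the Hungarian algorithm).
best_cost, best_match = float("inf"), None
for perm in permutations(range(len(proposals)), len(gt_points)):
    cost = sum(dist(gt_points[i], proposals[j]) for i, j in enumerate(perm))
    if cost < best_cost:
        best_cost, best_match = cost, perm

# best_match[i] is the proposal index assigned to ground-truth point i.
print(best_match)  # (1, 0)
```

Matched proposals are pushed toward their ground-truth points (and toward high confidence); unmatched proposals are pushed toward low confidence.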
By default, a periodic evaluation will be conducted on the validation set.

## Testing

A trained model on SHTechPartA (with an MAE of 51.96) is available under ./weights; run the test script in the repository to launch a visualization demo.
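At inference time, the predicted count is simply the number of point proposals whose confidence exceeds a threshold; a minimal sketch (the 0.5 cut-off below is an assumption, and the demo script's actual threshold may differ):

```python
# Confidence scores of the point proposals for one image (toy values).
scores = [0.92, 0.10, 0.77, 0.48, 0.66, 0.05]

THRESHOLD = 0.5  # assumed cut-off
kept = [i for i, s in enumerate(scores) if s > THRESHOLD]

print(len(kept), kept)  # 3 [0, 2, 4]
```

The kept proposals give both the count and the individual head locations, which is what the visualization demo draws on the image.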
## Acknowledgements
## Citing P2PNet

If you find P2PNet useful in your project, please consider citing us:

```
@inproceedings{song2021rethinking,
  title={Rethinking Counting and Localization in Crowds: A Purely Point-Based Framework},
  author={Song, Qingyu and Wang, Changan and Jiang, Zhengkai and Wang, Yabiao and Tai, Ying and Wang, Chengjie and Li, Jilin and Huang, Feiyue and Wu, Yang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}
```

## Related works from Tencent Youtu Lab