cmb-chula/pylon: Official implementation of Pyramid Localization Network (PYLON) ...


Name:

cmb-chula/pylon

URL:

https://github.com/cmb-chula/pylon

Language:

Python 99.1%

Introduction:

Official implementation of Pyramid Localization Network (PYLON)

Published as an iScience paper (read online):

@article{PREECHAKUL2022103933,
title = {Improved image classification explainability with high-accuracy heatmaps},
journal = {iScience},
volume = {25},
number = {3},
pages = {103933},
year = {2022},
issn = {2589-0042},
doi = {https://doi.org/10.1016/j.isci.2022.103933},
url = {https://www.sciencedirect.com/science/article/pii/S2589004222002036},
author = {Konpat Preechakul and Sira Sriswasdi and Boonserm Kijsirikul and Ekapol Chuangsuwanich},
keywords = {Artificial intelligence, Computer science, Signal processing}
}

CAM methods are used to produce heatmaps that explain a deep classifier's predictions. However, these heatmaps come in low resolution, which hinders the explainability of the classifier.

PYLON extends the deep classifier, allowing CAM methods to generate much higher-resolution heatmaps and thus much more accurate explanations.
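For intuition about why resolution matters: CAM computes a heatmap as a class-weighted sum over the classifier's final feature maps, which are spatially tiny (e.g. 7 × 7 for a 224 × 224 input) and must be upsampled heavily. A minimal NumPy sketch of plain CAM follows; the function name, shapes, and random inputs are illustrative only, not taken from the PYLON code:

```python
import numpy as np

def cam_heatmap(features, class_weights):
    """Class Activation Map: class-weighted sum of the final feature maps.

    features: (C, H, W) activations from the last conv layer
    class_weights: (C,) weights of the target class in the final linear layer
    """
    heatmap = np.tensordot(class_weights, features, axes=([0], [0]))  # (H, W)
    heatmap = np.maximum(heatmap, 0)        # keep only positive evidence
    if heatmap.max() > 0:
        heatmap /= heatmap.max()            # normalize to [0, 1]
    return heatmap

# With a standard classifier the spatial grid is coarse (e.g. 7 x 7),
# so the map must be upsampled ~32x to image resolution, blurring detail.
feats = np.random.rand(512, 7, 7)
w = np.random.rand(512)
print(cam_heatmap(feats, w).shape)  # (7, 7)
```

PYLON's contribution is to enlarge this spatial grid before CAM is applied, so far less upsampling is needed.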

High-resolution heatmaps:

[Figure: high-resolution heatmaps]

PYLON's architecture:

[Figure: PYLON architecture]

What's included

  • PyTorch implementation of PYLON (see pylon.py)
  • Additional results and heatmaps
  • Code to reproduce the main results

Additional results

NIH's Chest X-Ray 14

  1. Picked localization images (256 x 256) (~20)

  2. Picked localization images (512 x 512) (~20)

  3. All localization images (256 x 256) (~1000 images)

Reproducing results

Requirements

Tested with PyTorch 1.7.1.

conda create -n pylon python=3.7
conda activate pylon
conda install pytorch=1.7.1 torchvision cudatoolkit=11.0 -c pytorch

Install other related libraries:

pip install -r requirements.txt

Preparing datasets

NIH's Chest X-Ray 14

You need to download the Chest X-Ray 14 dataset by yourself from https://nihcc.app.box.com/v/ChestXray-NIHCC.

Extract all the images into a single directory, data/nih14/images, which should contain about 100k images.

VinDr-CXR

Download the DICOM version from Kaggle: https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/overview

You need to convert the DICOM files into PNG; a script is provided in scripts/convert_dcm_to_png.py. The conversion preserves the aspect ratio and limits the longer side (width or height) to at most 1024 pixels.
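The resizing rule above (keep the aspect ratio, cap the longer side at 1024) can be sketched as follows; this helper is illustrative and is not taken from scripts/convert_dcm_to_png.py:

```python
def target_size(width, height, max_side=1024):
    """Scale (width, height) so the longer side is at most max_side,
    preserving the aspect ratio. Images already small enough are unchanged."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    return round(width * scale), round(height * scale)

print(target_size(3000, 2000))  # (1024, 683)
print(target_size(800, 600))    # (800, 600) -- no upscaling
```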

Put all the PNG files into the directory data/vin/images. The final directory structure:

Pascal VOC2012

Download the dataset from http://host.robots.ox.ac.uk/pascal/VOC/voc2012/. (Only the train & val sets are needed.)

Extract it to data/voc2012. You should see the following directory structure:

data/voc2012
- VOCdevkit

Run

The main run files are train_nih_run.py (see train_nih.py for reference), train_vin_run.py (see train_vin.py), and train_voc_run.py (see train_voc.py). Each file describes the experiments to be run and is straightforward to edit; review and adjust them before running:

NIH's Chest X-Ray 14

python train_nih_run.py

VinDr-CXR

python train_vin_run.py

Pascal VOC2012

python train_voc_run.py

In order to compare against a wide range of CAM methods, run the following script:

python eval_cam.py

The results will be written to eval_loc/<cam_mode>/<model_name>.

See the stats

Stats will be available at eval_auc and eval_loc.

The figures will be available at figs/picked and figs/all.

Running in parallel

You can change the config in mlkitenv.json (note that JSON itself does not support comments; the # lines below are annotations only):

{
    # available GPUs 
    "cuda": [0],
    "num_workers": 8,
    # number of parallel jobs
    "global_lock": 1
}
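A minimal sketch of reading such a config (key names follow the example above; the comment lines must be stripped in the actual file, since json.loads rejects them):

```python
import json

# Hypothetical loader for mlkitenv.json, shown with an inline string
# instead of the real file so the sketch is self-contained.
config_text = """
{
    "cuda": [0],
    "num_workers": 8,
    "global_lock": 1
}
"""
config = json.loads(config_text)
gpus = config["cuda"]         # GPUs available to the runner
jobs = config["global_lock"]  # number of parallel jobs
print(gpus, config["num_workers"], jobs)  # [0] 8 1
```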


