DPC-Net: Deep Pose Correction for Visual Localization

Code for DPC-Net (Deep Pose Correction). DPC-Net learns SE(3) corrections to classical geometric and probabilistic visual localization pipelines (e.g., visual odometry).

Installation & Pre-Requisites

  1. Ensure that PyTorch is installed on your machine. We perform all training and testing on a GTX Titan X (Maxwell) with 12 GiB of memory.

  2. Install pyslam and liegroups. We use pyslam's TrajectoryMetrics class to store computed trajectories and to compute pose graph relaxations.

  3. Clone DPC-net:

git clone https://github.com/utiasSTARS/dpc-net

Testing with pre-trained model on KITTI data

  1. Download the pre-trained models and statistics for our sample estimator (based on libviso2):

ftp://128.100.201.179/2017-dpc-net

  2. Open test_dpc_net.py and edit the appropriate variables (mostly paths).

  3. Run test_dpc_net.py --seqs 00 --corr pose.

Note that this code does not include the pose graph relaxation; as a result, the statistics it outputs are based on a simple correction model in which intermediate poses are left untouched.
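The simple correction model described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the repository: the function name, the keyframe indexing, and the choice of left-multiplying the correction are all assumptions.

```python
import numpy as np

def apply_simple_corrections(poses_est, corrections, keyframe_ids):
    """Apply one 4x4 SE(3) correction per keyframe (hypothetical helper).

    Left-multiplies each keyframe pose by its correction; poses between
    keyframes are left untouched, as in the simple model described above.
    """
    # Copy so the input trajectory is not modified in place.
    poses_out = [T.copy() for T in poses_est]
    for T_corr, idx in zip(corrections, keyframe_ids):
        poses_out[idx] = T_corr @ poses_out[idx]
    return poses_out
```

With identity corrections the trajectory is returned unchanged; a correction at one keyframe shifts only that pose.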

Training

To train DPC-Net, you need two things:

  1. A list of frame-to-frame corrections to a localization pipeline (this is typically computed using some form of ground-truth).
  2. A set of images (stereo or mono, depending on whether the correction is SO(3) or SE(3)) from which the model can learn corrections.
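As a rough illustration of item 1, a frame-to-frame correction target can be computed by comparing the estimator's relative pose against the ground-truth relative pose. This is a hedged sketch with hypothetical function names, assuming corrections are defined so that T_corr composed with the estimate recovers the ground truth; the repository's own target construction may differ.

```python
import numpy as np

def se3_inv(T):
    """Closed-form inverse of a 4x4 SE(3) matrix [R | t]."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ t
    return T_inv

def correction_targets(rel_gt, rel_est):
    """One SE(3) correction per frame pair, defined by
    T_corr @ T_est = T_gt, i.e. T_corr = T_gt @ inv(T_est)."""
    return [T_gt @ se3_inv(T_est) for T_gt, T_est in zip(rel_gt, rel_est)]
```

By construction, composing each correction with the corresponding estimated relative pose reproduces the ground-truth relative pose.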

Using KITTI data

To use the KITTI odometry benchmark to train DPC-Net, you can use the scripts train_dpc_net.py and create_kitti_training_data.py as starting points. If you use our framework, you'll need to save your estimator's poses in a TrajectoryMetrics object.

Citation

If you use this code in your research, please cite:

@article{2018_Peretroukhin_DPC,
  author = {Valentin Peretroukhin and Jonathan Kelly},
  doi = {10.1109/LRA.2017.2778765},
  journal = {{IEEE} Robotics and Automation Letters},
  link = {https://arxiv.org/abs/1709.03128},
  title = {{DPC-Net}: Deep Pose Correction for Visual Localization},
  year = {2018}
}
