irapkaist/scancontext: Global LiDAR descriptor for place recognition and long-term localization


Open-source project name:

irapkaist/scancontext

Open-source project URL:

https://github.com/irapkaist/scancontext

Open-source programming language:

C++ 42.2%

Open-source project introduction:

Scan Context

NEWS (Oct, 2021): Scan Context++ is accepted for T-RO!

  • Our extended study, named Scan Context++, has been accepted for T-RO.
    • Scan Context++: Structural Place Recognition Robust to Rotation and Lateral Variations in Urban Environments
  • Additional evaluation code (e.g., lateral evaluations on the Oxford Radar RobotCar dataset) with the new metric (which we call recall distribution, based on KL divergence) will be added soon.

Note

  • Scan Context can be easily integrated with any LiDAR odometry algorithm or any LiDAR sensor; examples are given in the C++ implementation section below.
  • Scan Context also works for radar.
    • Integrated with yeti-radar-odometry for radar SLAM: navtech-radar-slam
      • P.S. Please see the fast_evaluator_radar directory for the radar place recognition evaluation (the radar Scan Context was introduced in the MulRan dataset paper).

NEWS (April, 2020): C++ implementation

  • C++ implementation released!
    • See the directory cpp/module/Scancontext
    • Features
      • Light-weight: a single header/source pair named Scancontext.h and Scancontext.cpp.
        • The module includes a KD tree; we use nanoflann, which is also a single-header library, and its header is included in our directory.
      • Easy to use: a user only needs to remember and call two API functions, makeAndSaveScancontextAndKeys and detectLoopClosureID (see the usage sketch after this list).
      • Fast: in our tests the loop detector runs at 10-15 Hz (for a 20 x 60 descriptor and 10 candidates).
    • Example: Real-time LiDAR SLAM
      • We integrated the C++ implementation into recent popular LiDAR odometry codebases (e.g., LeGO-LOAM and A-LOAM).
        • That is, LiDAR SLAM = LiDAR odometry (LeGO-LOAM) + loop detection (Scan Context) and loop closure (GTSAM).
      • For details, see cpp/example/lidar_slam or refer to these repositories: SC-LeGO-LOAM or SC-A-LOAM.
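
For concreteness, the two-function API can be used as in the sketch below. This is a minimal, illustrative sketch only: the manager class name SCManager and the use of pcl::PointXYZI keyframe clouds are assumptions based on typical PCL-based odometry front ends; only the two function names come from the feature list above.

// Hypothetical usage of the Scan Context C++ module as a loop detector.
// Assumed (not guaranteed by this README): the class is called SCManager and
// is declared in Scancontext.h; detectLoopClosureID returns {matched keyframe
// index or -1, coarse relative yaw in radians}.
#include <utility>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include "Scancontext.h"

void onNewKeyframe(SCManager &sc_manager,
                   pcl::PointCloud<pcl::PointXYZI> &downsampled_scan)
{
    // 1) Describe the current keyframe and store its descriptor and ring key.
    sc_manager.makeAndSaveScancontextAndKeys(downsampled_scan);

    // 2) Search previously stored keyframes for a loop-closure candidate.
    std::pair<int, float> loop = sc_manager.detectLoopClosureID();
    if (loop.first != -1) {
        // A loop was detected: add a loop-closure constraint between the
        // current keyframe and keyframe loop.first in the pose graph (e.g.,
        // a GTSAM BetweenFactor), using loop.second as the initial yaw guess
        // for ICP / scan matching.
    }
}

This follows the division of labor described above: the LiDAR odometry provides keyframes, Scan Context proposes loop pairs, and GTSAM closes the loop.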

  • Scan Context is a global descriptor for LiDAR point clouds, proposed in the papers cited below; the details are summarized in this video.
@article{gskim-2021-tro,
  author  = {Giseop Kim and Sunwook Choi and Ayoung Kim},
  title   = {Scan Context++: Structural Place Recognition Robust to Rotation and Lateral Variations in Urban Environments},
  journal = {IEEE Transactions on Robotics},
  year    = {2021},
  note    = {Accepted. To appear.},
}

@inproceedings{gkim-2018-iros,
  author    = {Kim, Giseop and Kim, Ayoung},
  title     = {Scan Context: Egocentric Spatial Descriptor for Place Recognition within {3D} Point Cloud Map},
  booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year      = {2018},
  month     = {Oct.},
  address   = {Madrid},
}
  • This point cloud descriptor is used for place retrieval problems such as place recognition and long-term localization.

What is Scan Context?

  • Scan Context is a global descriptor for LiDAR point clouds, designed especially for the sparse and noisy point clouds acquired in outdoor environments.
  • It encodes egocentric visible information, as illustrated below:

  • A user can vary the resolution of a Scan Context. Below is an example of Scan Contexts at various resolutions for the same point cloud.
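
To make the encoding and the resolution parameters concrete, here is a minimal sketch of a Scan Context-like construction: points are binned into range rings and azimuth sectors, and each bin keeps the maximum point height, as described in the IROS 2018 paper. The function name, the plain point struct, and the default 20 x 60 resolution (the size mentioned in the C++ notes above) are illustrative choices, not the repository's exact implementation.

#include <algorithm>
#include <cmath>
#include <vector>
#include <Eigen/Dense>

struct PointXYZ { float x, y, z; };   // minimal stand-in for a LiDAR point

// Build an Nr x Ns descriptor: rows are range rings, columns are azimuth
// sectors, and each bin stores the maximum height observed inside it.
// Empty bins keep the default value of 0. (Illustrative sketch.)
Eigen::MatrixXd makeScanContext(const std::vector<PointXYZ> &scan,
                                int num_rings = 20, int num_sectors = 60,
                                double max_range = 80.0)
{
    const double kPi = std::acos(-1.0);
    Eigen::MatrixXd desc = Eigen::MatrixXd::Zero(num_rings, num_sectors);
    for (const PointXYZ &p : scan) {
        const double range = std::hypot(p.x, p.y);
        if (range >= max_range) continue;                      // outside the descriptor
        const double azimuth = std::atan2(p.y, p.x) + kPi;     // [0, 2*pi)
        const int ring = std::min(num_rings - 1,
                                  static_cast<int>(range / max_range * num_rings));
        const int sector = std::min(num_sectors - 1,
                                    static_cast<int>(azimuth / (2.0 * kPi) * num_sectors));
        desc(ring, sector) = std::max(desc(ring, sector), static_cast<double>(p.z));
    }
    return desc;
}

Changing num_rings and num_sectors is what "varying the resolution" means above: a coarser grid gives a smaller descriptor, while a finer grid preserves more structure.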

How to use: example cases

  • This repository is organized into 3 example use cases.
  • Most of the code is written in MATLAB.
  • The matlab directory contains the main functions, including Scan Context generation and the distance function.
  • The example directory contains full example code for a few applications; we provide 3 examples in total.
  1. basics contains basic code, such as descriptor generation, and is a good starting point for understanding Scan Context.

  2. place recognition is an example directory for our IROS 2018 paper. The example uses KITTI sequence 00, and PlaceRecognizer.m is the main code. You can easily grasp the full pipeline of Scan Context-based place recognition by reading and following PlaceRecognizer.m. Our Scan Context-based place recognition system consists of two steps: description and search. The search step is composed of two hierarchical stages (1. a ring-key-based KD tree for fast candidate proposal; 2. a pairwise-comparison-based nearest search between the candidates and the query). We note that our coarse yaw-alignment-based pairwise distance handles reverse revisits well, unlike other methods (a sketch of this two-stage search is given after this example list). The pipeline is shown below.

  3. long-term localization is an example directory for our RA-L 2019 paper. To separate mapping from localization, there are separate train and test steps. The main training and test code is written in Python with Keras (only the data generation and performance evaluation code is written in MATLAB), and the Python code is provided as Jupyter notebooks. Note that some paths may not work directly in your environment, but the evaluation code (e.g., makeDataForPRcurveForSCIresult.m) will help you understand how this classification-based SCI localization system works. The figure below depicts our long-term localization pipeline.

    More details of our long-term localization pipeline can be found in the paper below, and we also recommend watching this video.
@article{gkim-2019-ral,
  author  = {G. {Kim} and B. {Park} and A. {Kim}},
  title   = {1-Day Learning, 1-Year Localization: Long-Term LiDAR Localization Using Scan Context Image},
  journal = {IEEE Robotics and Automation Letters},
  year    = {2019},
  volume  = {4},
  number  = {2},
  pages   = {1948--1955},
  month   = {April},
}
  4. The SLAM directory contains a practical use case of Scan Context in a SLAM pipeline. The details are maintained in the related repository PyICP SLAM, a full-Python LiDAR SLAM that uses Scan Context as its loop detector.
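
To illustrate the two-stage search described in the place recognition example (item 2 above), the sketch below shows the second stage: the query descriptor is compared to a KD-tree-proposed candidate at every column (sector) shift, and the smallest mean column-wise cosine distance over all shifts is taken. Shifting columns corresponds to rotating the sensor, which is why reverse revisits can still be matched. The function name and the exact handling of empty columns are illustrative, not the repository's implementation.

#include <algorithm>
#include <limits>
#include <Eigen/Dense>

// Stage 1 (not shown): a rotation-invariant ring key, e.g. the per-ring
// occupancy of the descriptor, is stored in a KD tree (nanoflann in the C++
// module) to propose the top-k candidate keyframes quickly.
//
// Stage 2: coarse-yaw-aligned pairwise distance between query and candidate.
double scanContextDistance(const Eigen::MatrixXd &query,
                           const Eigen::MatrixXd &candidate)
{
    const int num_sectors = static_cast<int>(query.cols());
    double best = std::numeric_limits<double>::max();
    for (int shift = 0; shift < num_sectors; ++shift) {     // try every yaw offset
        double sum = 0.0;
        int valid = 0;
        for (int c = 0; c < num_sectors; ++c) {
            const Eigen::VectorXd a = query.col(c);
            const Eigen::VectorXd b = candidate.col((c + shift) % num_sectors);
            const double na = a.norm(), nb = b.norm();
            if (na < 1e-9 || nb < 1e-9) continue;            // skip empty columns
            sum += 1.0 - a.dot(b) / (na * nb);               // column cosine distance
            ++valid;
        }
        if (valid > 0) best = std::min(best, sum / valid);
    }
    return best;   // smaller is more similar; threshold it to accept a loop
}

The best-scoring shift itself is also useful as a coarse yaw estimate between the two scans (e.g., to initialize ICP for the loop-closure constraint).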

Acknowledgment

This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport of Korea (19CTAP-C142170-02), and by the [High-Definition Map Based Precise Vehicle Localization Using Cameras and LIDARs] project funded by NAVER LABS Corporation.

Contact

If you have any questions, please contact us here.

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Copyright

  • All code on this page is copyrighted by KAIST and Naver Labs and published under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License. You must attribute the work in the manner specified by the author. You may not use the work for commercial purposes, and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license.


