Project: GrumpyZhou/visloc-apr
URL: https://github.com/GrumpyZhou/visloc-apr
Language: Jupyter Notebook 92.3%

# Absolute Camera Pose Regression for Visual Localization

This repository provides implementations of PoseNet [Kendall2015ICCV], PoseNet-Nobeta [Kendall2017CVPR], which trains PoseNet with a loss that learns the weighting parameter, and PoseLSTM [Walch2017ICCV]. To use our code, first clone the repository from the URL above.
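As background on the difference between the two PoseNet variants: the original loss weights the rotation term with a fixed hyperparameter β, while the learned-weighting loss of [Kendall2017CVPR] used by PoseNet-Nobeta balances the two terms with learned log-variances. A minimal sketch in plain Python (function and variable names are our own, not taken from the repository):

```python
import math

def posenet_loss(t_err, q_err, beta=500.0):
    """Original PoseNet loss [Kendall2015ICCV]: fixed weighting beta."""
    return t_err + beta * q_err

def nobeta_loss(t_err, q_err, s_x=0.0, s_q=-3.0):
    """Learned-weighting loss [Kendall2017CVPR]: s_x and s_q are
    learned log-variances that balance the translation and rotation
    errors automatically, so no beta has to be hand-tuned."""
    return t_err * math.exp(-s_x) + s_x + q_err * math.exp(-s_q) + s_q

# With both log-variances at 0 the loss reduces to the sum of the errors.
print(nobeta_loss(1.5, 0.2, s_x=0.0, s_q=0.0))  # → 1.7
```

In training, s_x and s_q would be optimized jointly with the network weights; the example values above are illustrative only.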
## Setup Running Environment

We tested the code on Linux Ubuntu 16.04.6. We recommend using Anaconda to manage packages; the provided setup lines automatically create a ready environment for our code.
Otherwise, one can download all required packages separately according to their official documentation.

Comments: the code has also been tested with Python 3.5 and PyTorch 0.4, but we have now upgraded to the latest versions.

## Prepare Datasets

Our code is flexible for evaluation on various localization datasets. We use the Cambridge Landmarks dataset as an example to show how to prepare a dataset.
### Other Datasets

If you want to train on other datasets, please make sure they have the same folder structure as the Cambridge Landmarks dataset.
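As a sketch of what that layout and label format look like, a small checker/parser in Python. The folder names and the one-path-plus-seven-numbers line format mirror the public Cambridge Landmarks release; treat both as assumptions and verify them against the dataset's own documentation:

```python
import os

# Assumed layout, mirroring the Cambridge Landmarks release:
# CambridgeLandmarks/
#   ShopFacade/
#     seq1/, seq2/, ...        # image sequences
#     dataset_train.txt        # pose labels for training images
#     dataset_test.txt         # pose labels for test images

def check_scene(scene_dir):
    """Verify that a scene folder carries the two pose label files."""
    for name in ("dataset_train.txt", "dataset_test.txt"):
        if not os.path.isfile(os.path.join(scene_dir, name)):
            raise FileNotFoundError(f"missing {name} in {scene_dir}")

def parse_label_line(line):
    """Parse one pose line: image path, position X Y Z, quaternion W P Q R.
    (Format assumed from the Cambridge Landmarks documentation.)"""
    parts = line.split()
    image, values = parts[0], [float(v) for v in parts[1:]]
    assert len(values) == 7, "expected 3 position + 4 quaternion values"
    return image, values[:3], values[3:]

img, xyz, wpqr = parse_label_line(
    "seq1/frame00001.png 10.0 -5.0 1.5 1.0 0.0 0.0 0.0")
```

A dataset that passes `check_scene` for every scene folder and parses cleanly line by line should be structurally compatible.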
Here, dataset_train.txt and dataset_test.txt are the pose label files. For more details about the pose label format, check the documentation of the Cambridge Landmarks dataset.

## Training

We recommend downloading the pretrained model for PoseNet initialization. The weights are pretrained on the Places dataset for place recognition and have been adapted for our PoseNet implementation. They can be downloaded by executing weights/download.sh.
Use abspose.py for either training or testing; for detailed training options, see the program's help output.

### Training Examples

Here we show an example of training a PoseNet-Nobeta model on the ShopFacade scene.
See more training examples in example.sh.

### Training Visualization (optional)

We use a Visdom server to visualize the training process. By default, the training loss and validation accuracy (translation and rotation) are plotted in one figure. One can adapt utils/common/visdom_templates.py to plot other statistics. Visualization can be turned on via the training options.
## Testing

### Trained Models

We provide some pretrained models here, where one can also see the output of the program. However, we did not spend much effort tuning training parameters to improve the localization accuracy, since that is not essential for us.
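The localization accuracy mentioned above is conventionally reported as a translation error in meters and a rotation error in degrees between the predicted and ground-truth poses. A minimal sketch of these standard metrics (this is not the repository's own evaluation code):

```python
import math

def translation_error(t_pred, t_gt):
    """Euclidean distance between predicted and ground-truth positions (meters)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t_pred, t_gt)))

def rotation_error(q_pred, q_gt):
    """Angle in degrees between two unit quaternions: 2*acos(|<q1, q2>|).
    The absolute value handles the q / -q double cover."""
    dot = abs(sum(a * b for a, b in zip(q_pred, q_gt)))
    dot = min(1.0, dot)  # guard against floating-point overshoot
    return math.degrees(2.0 * math.acos(dot))

print(translation_error([0, 0, 0], [3, 4, 0]))     # → 5.0
print(rotation_error([1, 0, 0, 0], [1, 0, 0, 0]))  # → 0.0
```

Papers in this line of work, including those cited above, typically report the median of these two errors over a scene's test images.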
## Citations