# DSAC* for Visual Camera Re-Localization (RGB or RGB-D)

Repository: [vislearn/dsacstar](https://github.com/vislearn/dsacstar) (primarily C++, 73.4%)
## Change Log
## Introduction

DSAC* is a learning-based visual re-localization method, published in TPAMI 2021. After being trained for a specific scene, DSAC* is able to estimate the camera rotation and translation from a single, new image of the same scene. DSAC* is versatile w.r.t. what data is available at training and test time. It can be trained from RGB images and ground truth poses alone, or additionally utilize depth maps (measured or rendered) or sparse scene reconstructions for training. At test time, it supports pose estimation from RGB as well as RGB-D inputs.

DSAC* is a combination of Scene Coordinate Regression with CNNs and Differentiable RANSAC (DSAC) for end-to-end training. This code extends and improves our previous re-localization pipeline, DSAC++, with support for RGB-D inputs, support for data augmentation, a leaner network architecture, reduced training and test time, as well as other improvements for increased accuracy. For more details, we kindly refer to the paper. You find a BibTeX reference of the paper at the end of this readme.

## Installation

DSAC* is based on PyTorch and includes a custom C++ extension which you have to compile and install (but it's easy). The main framework is implemented in Python, including data processing and setting parameters. The C++ extension encapsulates robust pose optimization and the respective gradient calculation for efficiency reasons.

DSAC* requires a small set of Python packages; we recommend installing the versions we tested with via the provided Conda environment.
The repository contains an `environment.yml` for use with Conda:

```
conda env create -f environment.yml
conda activate dsacstar
```

You compile and install the C++ extension by executing:

```
cd dsacstar
python setup.py install
```

Compilation requires access to OpenCV header files and libraries. If you are using Conda, the setup script will look for the OpenCV package in the current Conda environment. Otherwise (or if that fails), you have to set the OpenCV library directory and include directory yourself by editing the `setup.py` file.

If compilation succeeds, you can import the `dsacstar` extension from Python.

Note: The code does not support OpenCV 4.x at the moment, due to legacy function calls in the C++ extension.
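As a quick sanity check, the compiled extension should now be importable from Python. This is a minimal sketch; the module name `dsacstar` is assumed from the setup script.

```python
# Sanity check after `python setup.py install`: the compiled extension
# should be importable (module name assumed to be `dsacstar`).
import dsacstar

print("dsacstar extension loaded from:", dsacstar.__file__)
```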
## Data Structure

The dataset folder contains one sub-folder per scene. Each scene sub-folder is split into a training part and a test part. The training and test folders contain the following sub-folders:

* `rgb/`: image files
* `poses/`: camera pose files
* `calibration/`: camera calibration files
* `init/`: pre-computed ground truth scene coordinates (only needed for some training modes)
* `depth/`: depth maps (only needed for some training modes)
* `eye/`: pre-computed camera coordinates (only needed for RGB-D modes)
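For a new scene, the expected layout can be scaffolded up front. The following is a minimal sketch assuming a dataset root of `datasets/`, split folders named `training`/`test`, and the sub-folder names listed above; adjust the names to the actual repository layout.

```python
# Create the training/test folder skeleton for a new scene.
# The scene path is hypothetical; sub-folder names follow the list above.
from pathlib import Path

scene_root = Path("datasets/my_scene")  # hypothetical scene folder
for split in ("training", "test"):
    for sub in ("rgb", "poses", "calibration", "init", "depth", "eye"):
        # init/, depth/ and eye/ are only needed for some training/test modes.
        (scene_root / split / sub).mkdir(parents=True, exist_ok=True)
```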
Correspondences of files across the different sub-folders will be established by alphabetical ordering.

Details for image files: Any common image format supported by the data loader.

Details for pose files: Text files containing the camera pose h as a 4x4 matrix following the 7Scenes/12Scenes convention. The pose transforms camera coordinates e to scene coordinates y, i.e. y = he.

Details for calibration files: Text file. At the moment we only support the camera focal length (one value shared for the x- and y-direction, in px). The principal point is assumed to lie in the image center.

Details for init files: A 3xHxW tensor (standard PyTorch file format, written/read via `torch.save`/`torch.load`) containing ground truth scene coordinates.

Details for depth files: Any common image format supported by the data loader; each pixel stores a depth value.

Details for eye files: Same format, size and conventions as the init files, but containing camera coordinates instead of scene coordinates.
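To make these conventions concrete, the sketch below writes one dummy sample in the formats described above; the file names, focal length, and tensor shape are hypothetical and only illustrate the expected structure.

```python
# Illustrative only: write one dummy sample in the formats described above.
# All file names and values are hypothetical placeholders.
import numpy as np
import torch

# Pose file (goes into poses/): 4x4 matrix h that maps camera coordinates e
# to scene coordinates y, i.e. y = h e (7Scenes/12Scenes convention).
h = np.eye(4)
np.savetxt("frame-000000.pose.txt", h)

# Calibration file (goes into calibration/): a single focal length in pixels,
# shared by the x- and y-direction; the principal point is the image center.
with open("frame-000000.calibration.txt", "w") as f:
    f.write("525.0\n")  # hypothetical focal length

# Init file (goes into init/): a 3xHxW tensor of ground truth scene coordinates,
# stored in the standard PyTorch format via torch.save (dummy shape/values here).
scene_coordinates = torch.zeros(3, 60, 80)
torch.save(scene_coordinates, "frame-000000.dat")
```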
## Supported Datasets

Prior to using these datasets, please check their original licenses (see the website links at the beginning of each section).

### 7Scenes

7Scenes (MSR) is a small-scale indoor re-localization dataset. The authors provide training/test split information, a dense 3D scan of each scene, RGB and depth images as well as ground truth poses. We provide a Python setup script to download and prepare the dataset.

Note that the provided depth images are not registered to the RGB images, and using them directly will lead to inferior results. As an alternative, we provide rendered depth maps here. Just extract the archive inside the corresponding dataset folder. For RGB-D experiments we provide pre-computed camera coordinate files (see the eye files in the Data Structure section).

### 12Scenes

12Scenes (Stanford) is a small-scale indoor re-localization dataset. The authors provide training/test split information, a dense 3D scan of each scene, RGB and depth images as well as ground truth poses. We provide a Python setup script to download and prepare the dataset.

The provided depth images are registered to the RGB images and can be used directly. However, we also provide rendered depth maps here, which we used in our experiments. Just extract the archive inside the corresponding dataset folder. For RGB-D experiments we provide pre-computed camera coordinate files (see the eye files in the Data Structure section).

### Cambridge Landmarks

Cambridge Landmarks is an outdoor re-localization dataset. The dataset comes with a set of RGB images of five landmark buildings in the city of Cambridge (UK). The authors provide training/test split information, and a structure-from-motion (SfM) reconstruction containing a 3D point cloud of each building and reconstructed camera poses for all images. We provide a Python setup script to download and prepare the dataset.

Note: The Cambridge Landmarks dataset contains a sixth scene, Street, which we omitted in our experiments due to the poor quality of the SfM reconstruction.

## Training DSAC*

We train DSAC* in two stages: initializing scene coordinate regression, and end-to-end training. DSAC* supports several variants of camera re-localization, depending on what information about the scene is available at training and test time, e.g. a 3D reconstruction of the scene, or depth measurements for images.

Note: We provide pre-trained networks for 7Scenes, 12Scenes, and Cambridge, each trained for the three main scenarios investigated in the paper: RGB only (RGB), RGB + 3D model (RGBM) and RGB-D (RGBD). Download them here.

You may call all training scripts with the `-h` option to see a listing of all supported command line arguments. Each training script will create a log file documenting the training progress.

### Initialization

#### RGB only (mode 0)

If only RGB images and ground truth poses are available (minimal setup), initialize a network by calling:

```
python train_init.py <scene_name> <network_output_file> --mode 0
```

Mode 0 triggers the RGB-only mode, which requires neither pre-computed ground truth scene coordinates nor depth maps. You specify a scene via `<scene_name>`, which corresponds to the name of the scene's sub-folder in the dataset directory.

#### RGB + 3D Model (mode 1)

When a 3D model of the scene is available, it may be utilized during the initialization stage, which usually leads to improved accuracy. You may utilize the 3D model in two ways: either you use it together with the ground truth poses to render dense depth maps for each RGB image (see the depth files in the Data Structure section), or you use it to pre-compute ground truth scene coordinate files directly (see the init files in the Data Structure section).

In the first case, the training script will generate ground truth scene coordinates from the depth maps and ground truth poses (implemented in the dataset loading code):

```
python train_init.py <scene_name> <network_output_file> --mode 1
```

Alternatively, if you have pre-computed ground truth scene coordinate files, call:

```
python train_init.py <scene_name> <network_output_file> --mode 1 -sparse
```

#### RGB-D (mode 2)

When (measured) depth maps for each image are available, you call:

```
python train_init.py <scene_name> <network_output_file> --mode 2
```

This uses the measured depth maps (see the depth files in the Data Structure section).

Note: The 7Scenes depth maps are not registered to the RGB images, and hence are not directly usable for training. The 12Scenes depth maps are registered properly and may be used as is. However, in our experiments, we used rendered depth maps for both 7Scenes and 12Scenes to initialize scene coordinate regression.

### End-To-End Training

End-to-end training supports two modes: RGB (mode 1) and RGB-D (mode 2), depending on whether depth maps are available or not.

```
python train_e2e.py <scene_name> <network_input_file> <network_output_file> --mode <1 or 2>
```

Mode 2 (RGB-D) requires pre-computed camera coordinate files (see the Data Structure section above). We provide these files for 7Scenes/12Scenes; see the Supported Datasets section.
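Putting the two training stages together, a run could be scripted as follows; the scene name and network file names are placeholders, while the flags are exactly those documented above (RGB-only initialization followed by RGB end-to-end training).

```python
# Hypothetical two-stage training workflow (placeholder scene/file names).
import subprocess

scene = "my_scene"              # placeholder: name of the scene sub-folder
init_net = "my_scene_init.net"  # placeholder: output of the initialization stage
e2e_net = "my_scene_e2e.net"    # placeholder: output of end-to-end training

# Stage 1: initialize scene coordinate regression (RGB-only, mode 0).
subprocess.run(["python", "train_init.py", scene, init_net, "--mode", "0"], check=True)

# Stage 2: end-to-end training on top of the initialized network (RGB, mode 1).
subprocess.run(["python", "train_e2e.py", scene, init_net, e2e_net, "--mode", "1"], check=True)
```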
## Testing DSAC*

Testing supports two modes: RGB (mode 1) and RGB-D (mode 2), depending on whether depth maps are available or not. To evaluate on a scene, call:

```
python test.py <scene_name> <network_input_file> --mode <1 or 2>
```

This will estimate poses for the test set and compare them to the respective ground truth. You specify a scene via `<scene_name>`, which corresponds to the name of the scene's sub-folder in the dataset directory.
Mode 2 (RGB-D) requires pre-computed camera coordinate files (see the Data Structure section above). We provide these files for 7Scenes/12Scenes; see the Supported Datasets section. Note that these files have to be generated from the measured depth maps (while ensuring proper registration to the RGB images). You should not utilize rendered depth maps here, since rendering would use the ground truth camera pose, which means that ground truth test information would leak into your input data.

Call the test script with the `-h` option to see a listing of all supported command line arguments.
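For reference, comparing an estimated pose against ground truth can be done as in the following sketch; this is an illustrative implementation of standard rotation/translation errors and not necessarily identical to the evaluation inside the test script.

```python
# Illustrative pose error computation for 4x4 camera-to-scene poses (y = h e).
import numpy as np

def pose_error(pose_est: np.ndarray, pose_gt: np.ndarray):
    """Return (rotation error in degrees, translation error in scene units)."""
    # Angle of the relative rotation between estimate and ground truth.
    r_rel = pose_est[:3, :3] @ pose_gt[:3, :3].T
    cos_angle = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = float(np.degrees(np.arccos(cos_angle)))
    # The camera position is the translation part of the camera-to-scene pose.
    trans_err = float(np.linalg.norm(pose_est[:3, 3] - pose_gt[:3, 3]))
    return rot_err_deg, trans_err

# Example: identical poses yield (0.0, 0.0).
identity = np.eye(4)
print(pose_error(identity, identity))
```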
## Publications

Please cite the following paper if you use DSAC* or parts of this code in your own work.

This code builds on our previous camera re-localization pipelines, namely DSAC and DSAC++.