# Rethinking Visual Geo-localization for Large-Scale Applications

Open-source name: gmberton/CosPlace
Repository URL: https://github.com/gmberton/CosPlace
Language: Python 100.0%

This is the official repository for the CVPR 2022 paper Rethinking Visual Geo-localization for Large-Scale Applications. The paper presents a new dataset called San Francisco eXtra Large (SF-XL, go here to download it) and a highly scalable training method, called CosPlace, which reaches SOTA results with compact descriptors.
## Train

After downloading the SF-XL dataset, simply run the training script.
The script automatically splits SF-XL into CosPlace Groups and saves the resulting object to a folder for later reuse. To change the backbone or the output descriptor dimensionality, simply pass the corresponding arguments to the script.
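The grouping step above can be sketched roughly as follows. This is a minimal illustration, not the repo's implementation (which lives in `datasets/train_dataset.py`): `M` and `N` follow the paper's notation (cell side in meters and translation split), the values chosen here are placeholders, and the orientation split is omitted, matching the L=0 case used for training.

```python
# Minimal sketch of CosPlace-style group assignment (assumed constants;
# the real values and logic are in datasets/train_dataset.py).
M, N = 10, 5  # M: UTM cell side in meters, N: translation split

def class_id(utm_east: float, utm_north: float) -> tuple:
    # Each class corresponds to one M x M UTM cell.
    return (int(utm_east // M), int(utm_north // M))

def group_id(utm_east: float, utm_north: float) -> tuple:
    # Adjacent cells land in different groups, so classes within one
    # group are spatially far apart (at least (N-1)*M meters).
    ce, cn = class_id(utm_east, utm_north)
    return (ce % N, cn % N)

# With N = 5 there are N*N = 25 possible groups in total.
all_groups = {(i, j) for i in range(N) for j in range(N)}
```

Each training epoch then iterates over one group at a time, so the classifier only ever sees a small, well-separated subset of all classes.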
You can also speed up your training with Automatic Mixed Precision (note that none of the results/statistics in the paper used AMP).
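A device-agnostic AMP training step in PyTorch looks roughly like the sketch below. This is a generic pattern, not the repo's training loop; the tiny model and data are placeholders, and AMP is simply disabled when no GPU is available.

```python
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = (device == "cuda")

model = torch.nn.Linear(8, 2).to(device)   # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.01)
# GradScaler rescales the loss to avoid fp16 gradient underflow;
# with enabled=False it becomes a transparent no-op.
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

for step in range(3):
    x = torch.randn(4, 8, device=device)          # placeholder batch
    y = torch.randint(0, 2, (4,), device=device)
    opt.zero_grad()
    # Forward pass runs in mixed precision only when AMP is enabled.
    with torch.autocast(device_type=device, enabled=use_amp):
        loss = F.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()
```

On recent GPUs this typically cuts memory use and speeds up the forward/backward passes, at the cost of small numerical differences versus full fp32.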
## Dataset size and lightweight version

The SF-XL dataset is about 1 TB. Only a subset of the images is used for training, and that subset is 360 GB. If this is still too heavy for you (e.g. if you're using Colab) but you would still like to run CosPlace, we also created a small version of SF-XL, which is only 5 GB. Obviously, the small version leads to lower results and should be used only for debugging / exploration purposes. More information on the dataset and its lightweight version is in the README on the dataset download page (go here to find it).

## Reproducibility

Results from the paper are fully reproducible, and we followed deep learning best practices (averaging over multiple runs for the main results, validation / early stopping and hyperparameter search on the validation set). If you are a researcher comparing your work against ours, please make sure to follow these best practices and avoid picking the best model on the test set.

## Limitations

Given that we use only 8 out of 50 groups, the code in datasets/train_dataset.py only allows groups with L=0 to be used for training, to keep the code more readable. You can therefore use up to NxNx1 groups, i.e. up to 25 when N=5 as in the paper.

## Test

You can test a trained model with the evaluation script.
You can download plenty of trained models below.

## Model Zoo

The table below links to models with different backbones and descriptor dimensionalities, all trained on SF-XL. If you want to use these weights in your own code, make sure your model matches ours: CNN backbone -> L2 -> GeM -> FC -> L2.
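The descriptor head named above can be sketched as follows. This is an illustrative re-implementation under stated assumptions, not the repo's code: the GeM exponent `p = 3` is a common default, the FC output size is a placeholder, and you should check the repo's model definition before relying on weight compatibility.

```python
import torch
import torch.nn.functional as F

class GeM(torch.nn.Module):
    """Generalized Mean pooling over spatial dims: mean(x^p)^(1/p).

    p = 1 recovers average pooling; p -> inf approaches max pooling.
    """
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = torch.nn.Parameter(torch.tensor(p))  # learnable exponent
        self.eps = eps

    def forward(self, x):  # x: (B, C, H, W)
        # Clamp keeps the fractional power well-defined (backbone
        # features are assumed mostly non-negative, e.g. post-ReLU).
        x = x.clamp(min=self.eps).pow(self.p)
        return x.mean(dim=(-2, -1)).pow(1.0 / self.p)  # (B, C)

def describe(features, gem, fc):
    """Backbone features -> L2 -> GeM -> FC -> L2, as stated above."""
    x = F.normalize(features, dim=1)  # channel-wise L2 on the feature map
    x = gem(x)                        # (B, C)
    x = fc(x)                         # (B, descriptor_dim)
    return F.normalize(x, dim=1)      # final L2: unit-norm descriptors

# Usage with placeholder sizes (not the repo's actual dimensions):
gem = GeM()
fc = torch.nn.Linear(16, 8)
descriptors = describe(torch.rand(2, 16, 4, 4), gem, fc)
```

Because the final descriptors are L2-normalized, retrieval can use a plain inner product as cosine similarity.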
Or you can download all models at once at this link.

## Issues

If you have questions regarding our code or dataset, feel free to open an issue or send an email to [email protected].

## Acknowledgements

Parts of this repo are inspired by the following repositories:
## Cite

Here is the BibTeX to cite our paper: