Open-source project name: gante/mmWave-localization-learning
Repository URL: https://github.com/gante/mmWave-localization-learning
Language: Python 85.5%

# Beamformed Fingerprint Learning

An ML-based algorithm that enables energy-efficient, accurate positioning from mmWave transmissions, with and without tracking.

## Table of Contents

## Background

With 5G millimeter-wave wireless communications, the transmitted radiation reflects off most visible objects, creating rich multipath environments, as depicted in the simulation below. The radiation is thus significantly shaped by the obstacles it interacts with, carrying latent information about the relative positions of the transmitter, the obstacles, and the mobile receiver.

In this work, beamformed fingerprints are created through a pre-established codebook of beamforming patterns transmitted by a single base station. Making use of the aforementioned hidden information, deep learning techniques are employed to convert the received beamformed fingerprints (see examples below) into a mobile device's position. Average errors down to 3.30/1.78 meters (non-tracking/tracking) are obtained in realistic outdoor scenarios containing mostly non-line-of-sight (NLOS) positions. Moreover, the system was shown to be 47x and 85x more energy efficient than conventional A-GPS low-power implementations (for continuous and sporadic position fixes, respectively), making it a very competitive and promising alternative for outdoor positioning.

The image shown at the top (left) contains the simulated results for the average error per covered position. Given that the transmitter is the red triangle at the center of the image, and most of the solid yellow shapes are buildings, it is possible to confirm that being in an NLOS position is not a constraint for the proposed system: it is able to provide an estimate for every position that has mmWave signal.
This repository also contains tools to evaluate the model performance on a low-power embedded system (Nvidia Jetson TX2), which demonstrate the low energy requirements of this solution: less than 10 mJ per position estimate, if the position inference is done at the mobile device. The comparison results are also shown at the top (right).

For more information, refer to the Papers section of this README. If you find any issue, please contact me ([email protected]).

## Papers

### Citation

There are two main citations for this work. By default, consider using the following:
If you are concerned about the energy efficiency of positioning methods or ML-enabled mobile applications, please use:
### List of Papers

(From newest to oldest)
## Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

### Before Installing

To ensure a smooth process, please confirm you have the following requirements.

#### Hardware
#### Software
### Installation

Clone this repository, and then install it and its requirements. It should be something similar to this:
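A minimal sketch of the installation steps (the requirements file name and the editable-install step are assumptions about the repository layout):

```shell
# Clone the repository
git clone https://github.com/gante/mmWave-localization-learning.git
cd mmWave-localization-learning

# Install the Python dependencies (requirements file name is an assumption)
pip install -r requirements.txt

# Install the package itself in editable mode, if a setup script is present
pip install -e .
```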
## Dataset

The data was generated using the Wireless InSite ray-tracing simulator and a high-precision open-source 3D map of New York, made available by the New York City Department of Information Technology & Telecommunications. The simulation covers a 400-by-400-meter area, centered at the Kaufman Management Center. If you would like to have the 3D files for this or other sections of NYC, feel free to email me.

The dataset is available here -- if the link is broken or something is not working properly, please raise an issue in the repository. This file is the result of some additional processing on top of the Wireless InSite output files; check here if you'd like to know more details.

## Experiments

### Configuration

All experiment steps are controlled by the configuration file, a

### Tracking

The use of a tracking or a non-tracking dataset is entirely defined by the model architecture, defined in the

### Running an Experiment

Assuming you have set a configuration file in
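Since all experiment steps read their settings from a single configuration file, loading it could look like the minimal sketch below — the keys and values are illustrative assumptions, not the repository's actual schema:

```python
import yaml  # pip install pyyaml

# Illustrative YAML config; the model architecture entry is what would
# select between the tracking and non-tracking setups described above
config_text = """
model:
  type: tcn
  learning_rate: 0.001
data:
  sampling_frequency_hz: 20000000
"""

config = yaml.safe_load(config_text)
print(config["model"]["type"])
```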
The last step ends with the key metrics being printed to your terminal. If you want to visualize additional results, you can use the visualization tools provided here. E.g.:
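As an illustration of the kind of metric such visualization tools plot, the localization error distribution can be computed from true and predicted positions; the arrays below are synthetic stand-ins, not the repository's data:

```python
import numpy as np

# Synthetic stand-in data: 1000 positions in a 400 x 400 m area,
# with predictions perturbed by ~3 m of Gaussian noise
rng = np.random.default_rng(0)
true_pos = rng.uniform(0, 400, size=(1000, 2))
pred_pos = true_pos + rng.normal(0, 3, size=(1000, 2))

# Per-position Euclidean error, sorted to form an empirical CDF
errors = np.linalg.norm(pred_pos - true_pos, axis=1)
cdf_x = np.sort(errors)
cdf_y = np.arange(1, len(errors) + 1) / len(errors)

print(f"mean error: {errors.mean():.2f} m, "
      f"95th percentile: {np.percentile(errors, 95):.2f} m")
```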
For the settings defined in

Note - If different sampling frequencies are desired, the original, text-based data (

### Evaluate Performance on an Nvidia Jetson

If you wish to evaluate the performance (power consumption and throughput) of a model on an embedded system, this sub-section is for you. The tools in this sub-section were designed for this repository, but can easily be modified to work with any TF model on an Nvidia Jetson TX2 device. Assuming you have copied your experiment configuration file to
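The energy-per-estimate figure quoted above can be derived from the Jetson's power telemetry (the `tegrastats` utility bundled with JetPack) divided by the model's throughput. A minimal parsing sketch, assuming a TX2-style log line with a `VDD_IN current/average` field in milliwatts and an illustrative throughput figure:

```python
import re

def parse_power_mw(line, rail="VDD_IN"):
    """Extract the instantaneous power draw (mW) for a given rail from a
    tegrastats log line, assuming the 'RAIL current/average' format."""
    match = re.search(rf"{rail} (\d+)/(\d+)", line)
    if match is None:
        raise ValueError(f"rail {rail} not found in line")
    return int(match.group(1))

# Example tegrastats-style line (values are illustrative, not measured)
line = "RAM 3254/7846MB CPU [12%@1992] VDD_IN 3942/3940 VDD_SYS_GPU 304/303"
power_mw = parse_power_mw(line)

# Energy per fix = power / throughput; 500 estimates/s is an assumed figure
throughput_hz = 500
energy_mj = power_mw / throughput_hz  # mW / Hz = mJ
print(f"{power_mw} mW -> {energy_mj:.2f} mJ per estimate")
```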
The script will print throughput-related information to the screen (right part of the image), and the power-related data will be stored to

NOTE: The above was working as of commit 6220366, using CUDA 10 and TF 1.14. Meanwhile, I've lost access to the machine, and can no longer confirm that the above works with the current code base.

## License

This project is licensed under the MIT License - see the LICENSE.md file for details.

## Acknowledgments