lab11/vlc-localization: Indoor localization using LED lights and smartphones


Repository: lab11/vlc-localization

URL: https://github.com/lab11/vlc-localization

Language: Python 66.5%

VLC Localization

The goal of this project is to address the indoor localization problem. Our system achieves decimeter-scale accuracy using unmodified commercial smartphones as receivers and commercial LED bulbs (with some minor modifications) as transmitters.

In principle, the LED lights act like stars in the sky: depending on the angle and orientation of the visible constellation, the phone can recover its location. The key insight was finding a way to "label" the stars so the phone can identify them. CMOS imagers use a rolling shutter, capturing a frame line-by-line instead of all at once. By duty-cycling each LED light at a unique frequency, the smartphone camera can detect each frequency while the flicker remains imperceptible to room occupants.
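
The rolling-shutter trick can be sketched numerically: each image row is exposed at a slightly later time, so a blinking LED paints light/dark bands across the rows, and an FFT over the per-row intensity recovers the blink frequency. The row time and image height below are illustrative assumptions, not the paper's actual camera parameters.

```python
# Sketch: a rolling shutter turns an LED's blink frequency into spatial
# banding, which an FFT over the rows recovers. Parameters are assumed.
import numpy as np

ROWS = 1000                # image height in pixels (assumed)
ROW_TIME = 1 / 30 / ROWS   # seconds between consecutive row readouts (assumed)

def banding_pattern(led_freq_hz):
    """Intensity seen by each row for an LED square wave at led_freq_hz."""
    t = np.arange(ROWS) * ROW_TIME
    return (np.sin(2 * np.pi * led_freq_hz * t) > 0).astype(float)

def recover_frequency(rows):
    """Recover the blink frequency from the per-row intensity trace."""
    spectrum = np.abs(np.fft.rfft(rows - rows.mean()))
    peak = np.argmax(spectrum)
    return peak / (ROWS * ROW_TIME)   # FFT bin index -> Hz

print(round(recover_frequency(banding_pattern(2010.0))))  # -> 2010
```

With a ~33 µs row time, kilohertz-range blink rates are easily resolvable in a single frame even though they are far too fast for the human eye.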

For more detail on the theory and technique, please see our paper from MobiCom 2014.

System Overview

Building an end-to-end localization demo requires pulling together a few pieces:

  • LED Light Sources
  • Instructions for building your own LED light sources or modifying existing commercial sources can be found in lights/
  • Image Capture
  • You will need a camera with reasonably high resolution and the ability to manually fix ISO and exposure values. The exposure value is particularly important: anything slower than 1/8000 sec will not work.
  • We have built applications for Windows Phone 8 and iOS that will capture and upload images (see apps/).
  • Android does not support an exposure control API, which means our app will not work on Android phones.
  • Image Processing
  • processing/processors/XXX.py
  • This step scans a captured image, identifies each transmitter, and outputs a set of labeled coordinates (tx_freq :: (px_x, px_y)). These coordinates are in the pixel coordinate system of the captured image.
  • We have a few competing approaches for image processing. In particular, opencv_fft uses OpenCV to identify and isolate individual transmitters and then runs an FFT over each sub-region to extract its frequency. The opencv approach begins the same way, but then uses edge detection to identify dark/light transitions.
  • Localization
  • processing/aoa.py, processing/aoa_full.py
  • This step requires out-of-band knowledge of the transmitter locations. It takes a set of transmitter labels and coordinates in the transmitter coordinate system (e.g. meters) and the coordinates from the image processing step and solves for the image capture device's location in the transmitter coordinate system.
  • The aoa_full.py script ties together image processing and Angle of Arrival (AoA) calculation.
  • The aoa.py script takes in known transmitter positions and the coordinates of transmitter projections in the imager coordinate system. It returns the imager (phone) position and orientation in the transmitter's coordinate system.
  • Cloud Service
  • cloud_service/
  • We have a very basic cloudlet app that will accept image uploads, process the image, and report localization results to gatd.
  • This tool selects the processing to use (e.g. import processors.opencv_fft) and uses meta-data from the uploaded image to determine the room the picture was taken in and the type of phone the picture was taken with.
  • Visualization
  • We have some initial visualization ideas in the web/ folder. They rely on our cloud service and gatd.
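
The hand-off between the image-processing and localization steps above can be sketched as follows: image processing yields (frequency, pixel-coordinate) pairs, and the out-of-band transmitter map turns these into the labeled correspondences an AoA solver needs. The function names and tolerances here are hypothetical illustrations, not the repo's actual API.

```python
# Hypothetical glue between image processing and localization: match each
# detected frequency to the nearest known transmitter, producing
# (room-coordinate, pixel-coordinate) pairs for an AoA-style solver.

# Known transmitter map: blink frequency (Hz) -> position in meters
# (out-of-band knowledge; values are made up for illustration).
TX_POSITIONS = {1000: (0.0, 0.0, 2.5), 2000: (1.2, 0.0, 2.5), 3000: (0.0, 1.2, 2.5)}

def label_detections(detections, tx_positions, tolerance_hz=50):
    """Match each detected (freq, (px, py)) to the nearest known transmitter."""
    pairs = []
    for freq, pixel in detections:
        match = min(tx_positions, key=lambda f: abs(f - freq))
        if abs(match - freq) <= tolerance_hz:
            pairs.append((tx_positions[match], pixel))
    return pairs

detections = [(1010, (120, 340)), (2985, (800, 310)), (7777, (50, 50))]
print(label_detections(detections, TX_POSITIONS))
```

A tolerance on the frequency match lets the pipeline discard spurious detections (like the 7777 Hz entry above) rather than mislabeling them.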

Debugging

Most of the processing is instrumented using a "pretty_logger". A function decorated with @logger.op will print its entry/exit, increase a level of indentation for clearer output, save a copy of its output to a result file (cloud service only), and time its duration.
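
A minimal sketch of what an @logger.op-style decorator might look like (entry/exit printing, nested indentation, timing) is shown below. This is a guess at the shape, not the repo's actual pretty_logger implementation, and it omits the result-file saving used by the cloud service.

```python
# Hypothetical sketch of an entry/exit logging decorator with indentation
# and timing, in the spirit of @logger.op.
import functools
import time

_indent = 0

def op(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        global _indent
        print("  " * _indent + f"-> {func.__name__}")
        _indent += 1
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            _indent -= 1
            print("  " * _indent + f"<- {func.__name__} ({time.time() - start:.3f}s)")
    return wrapper

@op
def outer():
    inner()

@op
def inner():
    pass

outer()
```

Nested calls indent one level deeper, so the console output mirrors the call tree of the processing pipeline.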

The processing scripts respect the DEBUG environment variable: DEBUG=1 prints light information, DEBUG=2 prints lots of information, and DEBUG=3 additionally writes all of the intermediate images generated during image processing to /tmp/luxp-*. DEBUG=3 runs noticeably slower.
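
The convention might be implemented with a small level-gated helper like the one below (assumed semantics, not the repo's actual code):

```python
# Sketch of level-gated debug output driven by the DEBUG environment
# variable: unset -> quiet, higher values -> progressively more output.
import os

def dbg(level, message):
    """Print message only when $DEBUG is at or above the given level."""
    if int(os.environ.get("DEBUG", "0")) >= level:
        print(message)

dbg(1, "light information")            # silent unless DEBUG >= 1
dbg(3, "dumping intermediate images")  # silent unless DEBUG >= 3
```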



