- Beijing University of Posts and Telecommunications
Recent advances in wearables have significantly changed the way humans interact with their surrounding environment. To some extent, wearables have extended and augmented human capabilities. For example, with Google Glass, people can take a picture simply by winking twice, which frees their hands from the cumbersome image-taking process and enables application scenarios that were not possible before. In this paper, we investigate vision-based techniques for building a wearable positioning system. Specifically, we propose a Human-centric Positioning System (HoPS) that uses traffic signposts together with context information for real-time positioning. Toward that goal, we make three primary contributions: (1) we make several important observations that guide the design of HoPS; for example, we find that approximately 40 percent of traffic signposts monopolize a cell tower, and that there are at most six signposts within the coverage of a single cell tower; (2) we investigate the factors affecting object detection success rate, and find that it correlates with image quality and resolution; and (3) we design and implement HoPS as well as an advanced version, named HoPS-WiFi, that exploits additional context information from the Wi-Fi network. Experimental results demonstrate the effectiveness of HoPS, and especially HoPS-WiFi, which can correctly estimate the relevant location within 1.3 seconds.
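The candidate-narrowing idea behind the observations above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: all names (`TOWER_TO_SIGNPOSTS`, `AP_TO_SIGNPOSTS`, `candidate_signposts`) are hypothetical. It assumes a precomputed lookup from the serving cell tower to the (at most six) signposts within its coverage, which shrinks the search space before any vision processing; visible Wi-Fi APs then narrow it further, as in HoPS-WiFi.

```python
# Illustrative mapping: serving cell tower ID -> signpost IDs within its coverage.
TOWER_TO_SIGNPOSTS = {
    "tower-A": ["sp-1"],                  # ~40% of signposts monopolize a tower
    "tower-B": ["sp-2", "sp-3", "sp-4"],  # at most six signposts per tower
}

# Illustrative mapping: Wi-Fi AP (BSSID) -> signposts observed near it.
AP_TO_SIGNPOSTS = {
    "aa:bb:cc:00:11:22": ["sp-3"],
}

def candidate_signposts(cell_id, visible_aps=()):
    """Return signpost IDs consistent with the current radio context."""
    candidates = set(TOWER_TO_SIGNPOSTS.get(cell_id, []))
    for ap in visible_aps:  # HoPS-WiFi refinement: intersect with AP neighborhoods
        nearby = set(AP_TO_SIGNPOSTS.get(ap, []))
        if nearby:
            candidates &= nearby
    return sorted(candidates)

print(candidate_signposts("tower-B"))                         # ['sp-2', 'sp-3', 'sp-4']
print(candidate_signposts("tower-B", ["aa:bb:cc:00:11:22"]))  # ['sp-3']
```

Object detection then only needs to distinguish among these few candidates, which is why the cell-tower observation matters for real-time performance.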
|Number of pages|15|
|Journal|IEEE Transactions on Mobile Computing|
|Publication status|Published - 2017|
|MoE publication type|A1 Journal article-refereed|
- Context awareness, Human factors, Image processing, Mobile computing, Object detection, Position control, Traffic control, Wearable computing, Wireless fidelity, Vision-based positioning system, feasibility study, human-centric