Indoor Pedestrian Positioning Method Based on Ultra-Wideband with a Graph Convolutional Network and Visual Fusion.

Huizhen Mu, Chao Yu, Shuna Jiang, Yujing Luo, Kun Zhao, Wen Chen
Author Information
  1. Huizhen Mu: Engineering Center of SHMEC for Space Information and GNSS, East China Normal University, Shanghai 200241, China.
  2. Chao Yu: Engineering Center of SHMEC for Space Information and GNSS, East China Normal University, Shanghai 200241, China.
  3. Shuna Jiang: Engineering Center of SHMEC for Space Information and GNSS, East China Normal University, Shanghai 200241, China.
  4. Yujing Luo: Engineering Center of SHMEC for Space Information and GNSS, East China Normal University, Shanghai 200241, China.
  5. Kun Zhao: Engineering Center of SHMEC for Space Information and GNSS, East China Normal University, Shanghai 200241, China.
  6. Wen Chen: Engineering Center of SHMEC for Space Information and GNSS, East China Normal University, Shanghai 200241, China.

Abstract

To address the low accuracy of indoor positioning caused by factors such as signal interference and visual distortion, this paper proposes a method that integrates ultra-wideband (UWB) technology with visual positioning. In the UWB positioning module, the strong feature-extraction ability of a graph convolutional network (GCN) is exploited to aggregate the features of adjacent positioning points and improve positioning accuracy. In the visual positioning module, residuals learned by a bidirectional gated recurrent unit (Bi-GRU) network are compensated into the solution of the mathematical visual positioning model to improve the continuity of the positioning results. Finally, the two sets of positioning coordinates are fused with a particle filter (PF) to obtain the final position estimate and further improve accuracy. Experimental results show that the proposed GCN-based UWB method achieves a positioning error below 0.72 m when used alone, a 55% improvement over the Chan-Taylor algorithm. The proposed visual positioning method based on Bi-GRU residual fitting achieves a positioning error of 0.42 m, a 71% improvement over the Zhang Zhengyou visual positioning algorithm. In the fusion experiment, 80% of the positioning errors fall within 0.24 m and the maximum error is 0.66 m; compared with UWB-only and vision-only positioning, this represents improvements of 56% and 52%, respectively, effectively enhancing indoor pedestrian positioning accuracy.
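To make the fusion step concrete, the following is a minimal sketch of a bootstrap particle filter that fuses a 2-D UWB track with a 2-D visual track. It is an illustration, not the authors' implementation: the function name, the random-walk motion model, and the noise parameters (loosely taken from the reported 0.72 m UWB and 0.42 m visual errors) are all assumptions.

import numpy as np

def fuse_uwb_visual_pf(uwb_xy, vis_xy, n_particles=500,
                       sigma_uwb=0.72, sigma_vis=0.42, sigma_motion=0.10,
                       rng=None):
    """Fuse per-step UWB and visual (x, y) estimates with a bootstrap
    particle filter; returns one fused (x, y) per time step.
    Hypothetical sketch, not the paper's exact filter."""
    rng = np.random.default_rng() if rng is None else rng
    uwb_xy = np.asarray(uwb_xy, dtype=float)
    vis_xy = np.asarray(vis_xy, dtype=float)

    # Initialise the particle cloud around the first UWB fix.
    particles = uwb_xy[0] + rng.normal(0.0, sigma_uwb, size=(n_particles, 2))
    weights = np.full(n_particles, 1.0 / n_particles)
    fused = []

    for z_uwb, z_vis in zip(uwb_xy, vis_xy):
        # Predict: random-walk motion model (no odometry assumed).
        particles += rng.normal(0.0, sigma_motion, size=particles.shape)

        # Update: weight each particle by the product of two Gaussian
        # likelihoods, one per sensor, so both measurements pull the cloud.
        d_uwb = np.sum((particles - z_uwb) ** 2, axis=1)
        d_vis = np.sum((particles - z_vis) ** 2, axis=1)
        weights *= np.exp(-0.5 * d_uwb / sigma_uwb ** 2)
        weights *= np.exp(-0.5 * d_vis / sigma_vis ** 2)
        weights += 1e-300                      # guard against all-zero weights
        weights /= weights.sum()

        # Estimate: weighted mean of the particle cloud.
        fused.append(weights @ particles)

        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)

    return np.array(fused)

Weighting by the product of the two measurement likelihoods means the filter favors whichever sensor is locally more consistent with the predicted motion, which is one plausible way such a fusion can outperform either positioning stream alone.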

Grants

  1. 61771197/National Natural Science Foundation of China
  2. 22DZ2229004/Scientific Foundation of the Science and Technology Commission of Shanghai Municipality
