dc.description.abstract | Telepresence aims to give users the feeling of being present at a remote place and the ability to interact with the remote environment. In recent years, telepresence has often been achieved using robots as agents for humans, and integration with virtual reality (VR) technology can offer users an immersive audiovisual experience. Integrating VR with a telepresence robot projects users into the robot's first-person perspective and enables remote control of its actions. However, the design and mechanisms of the robot usually limit users' possible interactions with the remote environment. On the other hand, with the development of Internet of Things (IoT) technology, embedded devices provide various sensing and tasking resources through the digital realm. However, users usually access IoT resources through non-intuitive application interfaces on computers and smartphones. To address these issues in telepresence systems and the IoT, this study aims to integrate IoT and telepresence technologies, improving interactivity with remote environments in an intuitive manner.
Specifically, this research designs and implements a telepresence robot system that employs VR as the human-machine interface, where IoT sensing and tasking resources are georeferenced and shown at corresponding positions on VR displays. The methodology of this study encompasses: (1) integrating a telepresence robot, a 360° panoramic camera, and a VR device in terms of video transmission and robot control; (2) utilizing a Simultaneous Localization and Mapping (SLAM) algorithm with panoramic imagery for robot localization; (3) aligning the extracted 3D SLAM model with a Building Information Modeling (BIM) model annotated with IoT device locations in order to register the coordinate systems of the robot, the VR displays, and the IoT resources; (4) leveraging the Open Geospatial Consortium (OGC) SensorThings API international open standard for interoperable connections to IoT sensing and tasking resources, which are presented within the VR environment for intuitive interaction. The system has a video delay of around 800 ms while running at a resolution of 1920×960 across a wide-area network (WAN). When projecting IoT resources onto VR displays, the positioning accuracy is affected by lens distortion errors. Since the distortions are proportional to the distance from the principal point, the largest distortions occurred near the edges of the image and were within 10 cm, which did not cause significant issues when viewing in VR. Overall, the proposed solution improves the telepresence experience in terms of interactivity with the remote environment in an immersive and intuitive manner. | en_US |
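As context for methodology step (4), the OGC SensorThings API exposes IoT resources through OData-style REST queries over entities such as Datastreams and Observations. A minimal sketch of building such a query URL for the most recent observations of a Datastream, assuming a hypothetical service root and Datastream id (neither is taken from the thesis):

```python
# Sketch: construct an OGC SensorThings API v1.1 query URL that fetches
# the latest observations of a Datastream, newest first.
# "https://example.org/sta" and Datastream id 42 are hypothetical placeholders.

def latest_observations_url(root: str, datastream_id: int, top: int = 1) -> str:
    """Return a SensorThings URL selecting the `top` most recent observations."""
    return (
        f"{root}/v1.1/Datastreams({datastream_id})/Observations"
        f"?$top={top}&$orderby=phenomenonTime%20desc"
    )

print(latest_observations_url("https://example.org/sta", 42))
```

A VR client could issue this request per georeferenced device and render the returned `result` values at the device's registered position in the display.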