It is difficult for the visually impaired to walk independently in an unfamiliar and complex indoor public space. How to obtain environmental information so that visually impaired people can reach their destination without the assistance of others is therefore an important research topic. This paper combines image processing technology with deep learning to develop an indoor navigation assistance system for the blind that allows the visually impaired to walk independently in unfamiliar environments. The system developed in this paper consists of three parts: (1) sign detection: a YOLOv3 model with depth images detects common indoor signs and ground hazard indicators and calculates their distance and location; (2) text detection and recognition: a PSENet model combined with an OCR text recognition API detects and recognizes the text on signboards; (3) direction and sign information pairing: a YOLOv3 model detects the relevant regions and matches direction-indicating arrows with their associated information. By combining these three parts, the system identifies environmental information from images of the signboards ahead and conveys it to the visually impaired user by voice. The purpose of this system is to provide visually impaired people with information about the space in front of them in unfamiliar indoor public spaces and help them reach their destination. The experimental results show that the average accuracy of sign detection reaches 93% and the accuracy of matching directions with sign information reaches 86%, demonstrating that the system has a certain degree of usability.
Keywords: deep learning, navigation system for the blind, indoor navigation, detection system, visually impaired
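The abstract does not give implementation details. As a minimal illustrative sketch of two of the steps it describes (estimating a detected sign's distance from the aligned depth image, and pairing direction arrows with recognised text), the Python code below rests on assumptions that are not stated in the paper: the median depth inside the detected bounding box is taken as the sign's distance, arrows are matched to the nearest recognised text region by centre-to-centre distance, and the camera's horizontal field of view (hfov_deg) is a placeholder value. All function names and parameters here are hypothetical.

```python
import numpy as np


def estimate_distance_and_bearing(depth_map, box, hfov_deg=60.0):
    """Estimate a detected sign's distance (metres) and horizontal bearing (degrees)
    from a depth image aligned to the RGB frame.

    depth_map : HxW array of depth values in metres
    box       : (x1, y1, x2, y2) bounding box from the sign detector
    hfov_deg  : assumed horizontal field of view of the camera (placeholder)
    """
    x1, y1, x2, y2 = [int(v) for v in box]
    patch = depth_map[y1:y2, x1:x2]
    valid = patch[patch > 0]                      # drop missing/zero depth pixels
    distance = float(np.median(valid)) if valid.size else float("nan")

    # Bearing: horizontal offset of the box centre from the image centre, scaled by FOV.
    img_w = depth_map.shape[1]
    cx = 0.5 * (x1 + x2)
    bearing = (cx - img_w / 2.0) / (img_w / 2.0) * (hfov_deg / 2.0)
    return distance, bearing


def pair_arrows_with_text(arrow_boxes, text_items):
    """Greedily pair each detected arrow with the nearest recognised text region
    (centre-to-centre distance); text_items are dicts with 'box' and 'text' keys."""
    def centre(b):
        x1, y1, x2, y2 = b
        return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

    pairs = []
    for arrow_box in arrow_boxes:
        if not text_items:
            break
        nearest = min(text_items,
                      key=lambda t: np.linalg.norm(centre(arrow_box) - centre(t["box"])))
        pairs.append({"arrow_box": arrow_box, "text": nearest["text"]})
    return pairs
```

For example, a caller could feed each YOLOv3 sign detection into estimate_distance_and_bearing together with the depth frame, then pass the detected arrow boxes and OCR results to pair_arrows_with_text before synthesising the voice message; the actual pairing rule used in the paper may differ from this nearest-centre heuristic.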