Abstract (English)
Compared with people with other disabilities, the visually impaired face greater difficulty in taking care of themselves. The development of guidance aids for the blind has nonetheless been slow: the white cane has been in use for more than a hundred years, yet no newer aid has completely replaced it. Guide dogs are another option, but they are difficult to deploy widely. As a result, more and more research on aid systems based on computer vision has been published.
In this paper, we use the Kinect as a depth sensor to build a guidance aid system for the blind. The system has two features. (1) It reconstructs walking-path information based on the normal vectors obtained from the Kinect, cleaned with erosion and dilation; from this it retrieves the walkable length of the road and the height of any obstacle, helping the visually impaired recognize the walking path (see the sketch below). (2) It assists the visually impaired in finding daily necessities. The system first trains a recognition model with a convolutional neural network, then applies the model to a series of segmented sub-images and derives the item's location from the statistical results, allowing the visually impaired to quickly find the item.
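The thesis text does not include source code, but the floor-extraction step can be illustrated with a minimal sketch. The Python/OpenCV example below assumes the Kinect depth frame has already been converted into a per-pixel unit-normal map (`normals`); the angle threshold, kernel size, and iteration counts are illustrative choices, not the values used in the actual system.

```python
import numpy as np
import cv2

def extract_floor_mask(normals, up=(0.0, 1.0, 0.0), angle_thresh_deg=10.0):
    """Label near-upward-facing pixels as floor, then clean the mask.

    normals: HxWx3 array of unit surface normals obtained from the
    Kinect depth frame (the normal-estimation step is not shown).
    """
    up = np.asarray(up, dtype=np.float32)
    # Cosine of the angle between each pixel's normal and world "up";
    # floor pixels have normals nearly parallel to the up direction.
    cos_angle = normals @ up
    mask = (cos_angle > np.cos(np.deg2rad(angle_thresh_deg))).astype(np.uint8) * 255

    # Erosion removes isolated false positives caused by depth noise;
    # dilation then restores the floor boundary and fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.erode(mask, kernel, iterations=2)
    mask = cv2.dilate(mask, kernel, iterations=2)
    return mask  # walkable length and obstacle height are measured from this mask
```

Erosion-before-dilation (a morphological opening) is the natural choice here: Kinect depth errors tend to produce small, scattered misclassified pixels, which erosion eliminates before dilation restores the true floor region.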
The floor-extraction algorithm used in the system achieves an accuracy of about 98.8%, which means it can correct errors in the depth information read from the Kinect and efficiently extract the regions that belong to the floor. For the object-detection tests, we prepared ten thousand pictures for each of three conditions, and the target object was missed in less than 5% of them, which shows that our method can reliably identify the target object.
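As a rough illustration of the item-finding pipeline evaluated above, the sketch below classifies segmented sub-images with a trained CNN and aggregates the per-window results into a single location. The Keras-style `model.predict` interface and all window parameters are assumptions for illustration, not the system's actual configuration.

```python
import numpy as np

def locate_item(frame, model, target_class, win=64, stride=32, min_votes=3):
    """Classify sliding-window crops with a trained CNN, then vote.

    model is assumed to expose a Keras-style predict() returning
    per-class probabilities for a batch of crops (hypothetical here).
    """
    h, w = frame.shape[:2]
    hits = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            crop = frame[y:y + win, x:x + win]
            probs = model.predict(crop[np.newaxis, ...])[0]
            if int(np.argmax(probs)) == target_class:
                hits.append((x + win // 2, y + win // 2))
    if len(hits) < min_votes:
        return None  # too few windows agreed; report "not found"
    # Statistical aggregation: the item location is taken as the
    # mean of the centers of the agreeing windows.
    cx, cy = np.mean(hits, axis=0).astype(int)
    return int(cx), int(cy)
```

Requiring a minimum number of agreeing windows before reporting a location is one simple way to keep isolated misclassifications from producing false "found" results.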