References
[1] R. Bourne et al., "Trends in prevalence of blindness and distance and near vision impairment over 30 years: an analysis for the Global Burden of Disease Study," Lancet Global Health, vol. 9, no. 2, pp. e130–e143, Feb. 2021.
[2] Department of Statistics, Ministry of Health and Welfare, Taiwan. [Online]. Available: https://dep.mohw.gov.tw/DOS/cp-2976-13815-113.html. [Accessed: Jun. 2020].
[3] A. Riazi et al., "Outdoor difficulties experienced by a group of visually impaired Iranian people," Journal of Current Ophthalmology, vol. 28, pp. 85–90, Jun. 2016.
[4] G. P. Soong, J. E. Lovie-Kitchin, and B. Brown, "Does mobility performance of visually impaired adults improve immediately after orientation and mobility training?," Optometry and Vision Science, vol. 78, no. 9, pp. 657–666, Sep. 2001.
[5] R. Pyun, Y. Kim, P. Wespe, R. Gassert, and S. Schneller, "Advanced augmented white cane with obstacle height and distance feedback," IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 1–6, Jun. 2013.
[6] L. M. Tomkins et al., "Behavioral and physiological predictors of guide dog success," Journal of Veterinary Behavior: Clinical Applications and Research, vol. 6, pp. 178–187, 2011.
[7] J. Bai, D. Liu, G. Su, and Z. Fu, "A cloud and vision-based navigation system used for blind people," International Conference on Artificial Intelligence, Automation and Control Technologies (AIACT), Wuhan, China, 2017, pp. 127–162.
[8] L. Whitmarsh, "The benefits of guide dog ownership," Visual Impairment Research, vol. 7, no. 1, pp. 27–42, 2005.
[9] 邱文欣, "Deep learning-based monocular distance estimation and outdoor walking control for a robot," M.S. thesis, Dept. of Electrical Engineering, National Central University, Taoyuan, Taiwan, 2019.
[10] 汪孟璇, "A deep learning-based guidance system for the blind using road information recognition," M.S. thesis, Dept. of Electrical Engineering, National Central University, Taoyuan, Taiwan, 2020.
[11] 沈鴻儒, "Deep learning-based road obstacle detection and walking assistance techniques for the blind," M.S. thesis, Dept. of Electrical Engineering, National Central University, Taoyuan, Taiwan, 2020.
[12] 城筱筑, "AI-based road condition analysis, obstacle recognition, and distance measurement for the visually impaired," M.S. thesis, Dept. of Electrical Engineering, National Central University, Taoyuan, Taiwan, 2021.
[13] J. M. Sáez, F. Escolano, and M. A. Lozano, "Aerial obstacle detection with 3-D mobile devices," IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 1, pp. 74–80, Jan. 2015.
[14] W.-J. Chang, L.-B. Chen, M.-C. Chen, J.-P. Su, C.-Y. Sie, and C.-H. Yang, "Design and implementation of an intelligent assistive system for visually impaired people to aerial obstacles avoidance and fall detection," IEEE Sensors Journal, vol. 20, no. 17, pp. 10199–10210, Sep. 2020.
[15] W. M. Elmannai and K. M. Elleithy, "A highly accurate and reliable data fusion framework for guiding the visually impaired," IEEE Access, vol. 6, pp. 33029-33054, 2018.
[16] D. Croce, L. Giarre, F. Pascucci, I. Tinnirello, G. E. Galioto, D. Garlisi, and A. Lo Valvo, "An indoor and outdoor navigation system for visually impaired people," IEEE Access, vol. 7, pp. 170406-170418, 2019.
[17] A. Aladrén, G. López-Nicolás, L. Puig, and J. J. Guerrero, "Navigation assistance for the visually impaired using RGB-D sensor with range expansion," IEEE Systems Journal, vol. 10, no. 3, pp. 922–932, Sep. 2016.
[18] A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello, "ENet: A deep neural network architecture for real-time semantic segmentation," arXiv preprint arXiv:1606.02147, 2016.
[19] R. P. Poudel, S. Liwicki, and R. Cipolla, "Fast-SCNN: Fast semantic segmentation network," arXiv preprint arXiv:1902.04502, 2019.
[20] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "MobileNets: Efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017.
[21] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[22] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 779–788.
[23] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," arXiv preprint arXiv:1612.08242, 2016.
[24] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, Jun. 2017.
[25] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 580–587.
[26] R. Girshick, "Fast R-CNN," in IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440–1448.
[27] J. R. Uijlings, K. E. van de Sande, T. Gevers, and A. W. Smeulders, "Selective search for object recognition," International Journal of Computer Vision, vol. 104, no. 2, pp. 154–171, 2013.
[28] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934, 2020.
[29] G. Jocher et al., "YOLOv5: v5.0," 2020. [Online]. Available: https://github.com/ultralytics/yolov5
[30] "NVIDIA Jetson AGX Xavier," [Online]. Available: https://www.nvidia.com/zh-tw/autonomous-machines/embedded-systems/jetson-agx-xavier/. [Accessed: Jun. 2021].
[31] "Stereolabs ZED," [Online]. Available: https://www.stereolabs.com/zed/. [Accessed: Jun. 2021].
[32] "Stereolabs ZED 2," [Online]. Available: https://www.stereolabs.com/zed-2/. [Accessed: Jun. 2021].
[33] "enerpad AC42K universal AC power bank," [Online]. Available: https://www.enerpad.com.tw/product/10715e6a-83ad-4195-a586-fe2aba024e42. [Accessed: Jun. 2021].
[34] "Desire Power V8 14.8 V 5200 mAh 35C–70C 4S lithium battery," [Online]. Available: https://shopee.tw/-%E6%90%9E%E5%95%A5%E9%A3%9B%E6%A9%9F-Desire-Power-V8-14.8V-5200mAh-35C-70C-4S%E9%8B%B0%E9%9B%BB%E6%B1%A0XT60-BSMI%E8%AA%8D%E8%AD%89-i.17393576.2037112384. [Accessed: Jun. 2021].
[35] "HAGiBiS USB aluminum-alloy external sound card," [Online]. Available: https://24h.pchome.com.tw/prod/DCAC1S-A900AOOTG. [Accessed: Jun. 2021].
[36] "SIM7600CE 4G HAT user manual," [Online]. Available: https://www.waveshare.net/w/upload/7/73/SIM7600CE-4G-HAT-Manual-CN_.pdf. [Accessed: Jun. 2021].
[37] "Invignal traffic signal guidance and pilot technology," [Online]. Available: http://www.invignal.com/. [Accessed: Jun. 2021].
[38] 王文俊, "Understanding Fuzzy Theory and Applications," 4th ed., Chuan Hwa Book Co., 2017.
[39] "Position Tracking Overview," [Online]. Available: https://www.stereolabs.com/docs/positional-tracking/. [Accessed: Jun. 2021].
[40] "Coordinate Frames," [Online]. Available: https://www.stereolabs.com/docs/positional-tracking/coordinate-frames/. [Accessed: Jun. 2021].