References
[1] R. Bourne et al., "Trends in prevalence of blindness and distance and near vision impairment over 30 years: an analysis for the Global Burden of Disease Study," The Lancet Global Health, vol. 9, no. 2, pp. e130-e143, Feb. 2021.
[2] Department of Statistics, Ministry of Health and Welfare (Taiwan), "Number of persons with disabilities (by year)" (in Chinese), 2022. [Online]. Available: https://statdb.dgbas.gov.tw/pxweb/Dialog/viewplus.asp?ma=SW0109A1A&ti=%A8%AD%A4%DF%BB%D9%C3%AA%A4H%BC%C6-%A6~&path=../PXfile/SocialWelfare/&lang=9&strList=L.
[3] 賴怡靜, "Outdoor navigation robot with deep-learning-based distance estimation and automatic obstacle avoidance" (in Chinese), M.S. thesis, Dept. of Electrical Engineering, National Central University, 2018.
[4] 邱文欣, "Deep-learning-based monocular distance estimation and outdoor walking control for a robot" (in Chinese), M.S. thesis, Dept. of Electrical Engineering, National Central University, 2019.
[5] 汪孟璇, "A deep-learning-based road information recognition system for guiding the blind" (in Chinese), M.S. thesis, Dept. of Electrical Engineering, National Central University, 2020.
[6] 沈鴻儒, "Deep-learning-based road obstacle detection and walking assistance technology for the blind" (in Chinese), M.S. thesis, Dept. of Electrical Engineering, National Central University, 2020.
[7] 城筱筑, "AI-based road condition analysis, obstacle recognition, and distance measurement for the visually impaired" (in Chinese), M.S. thesis, Dept. of Electrical Engineering, National Central University, 2021.
[8] 謝易軒, "AI-based obstacle avoidance in travel, convenience store recognition, and guidance for the visually impaired" (in Chinese), M.S. thesis, Dept. of Electrical Engineering, National Central University, 2021.
[9] C. Godard, O. Mac Aodha, and G. J. Brostow, "Unsupervised monocular depth estimation with left-right consistency," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 270-279.
[10] A. Paszke et al., "ENet: A deep neural network architecture for real-time semantic segmentation," 2016. [Online]. Available: https://doi.org/10.48550/arXiv.1606.02147.
[11] "Stereolabs ZED." [Online]. Available: https://www.stereolabs.com/zed/.
[12] "Nvidia Jetson AGX Xavier." [Online]. Available: https://www.nvidia.com/zh-tw/autonomous-machines/embedded-systems/jetson-agx-xavier/.
[13] R. P. Poudel, S. Liwicki, and R. Cipolla, "Fast-SCNN: Fast semantic segmentation network," 2019. [Online]. Available: https://doi.org/10.48550/arXiv.1902.04502.
[14] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," 2018. [Online]. Available: https://doi.org/10.48550/arXiv.1804.02767.
[15] "roLabelImg," 2017. [Online]. Available: https://github.com/cgvict/roLabelImg.
[16] "Stereolabs ZED 2." [Online]. Available: https://www.stereolabs.com/zed-2/.
[17] G. Jocher et al., "YOLOv5 (ultralytics)," 2022. [Online]. Available: https://doi.org/10.5281/zenodo.3908559.
[18] S. Al-Khalifa and M. Al-Razgan, "Ebsar: Indoor guidance for the visually impaired," Computers & Electrical Engineering, vol. 54, pp. 26-39, Aug. 2016.
[19] T. Priya, K. S. Sravya, and S. Umamaheswari, "Machine-Learning-Based device for visually impaired person," in Artificial Intelligence and Evolutionary Computations in Engineering Systems, Singapore, 2020, pp. 79-88.
[20] A. Rodrigues et al., "Getting smartphones to Talkback: Understanding the smartphone adoption process of blind users," in Proceedings of the ACM SIGACCESS Conference on Computers & Accessibility, Lisbon, Portugal, 2015.
[21] M. Avila et al., "Remote assistance for blind users in daily life: A survey about Be My Eyes," in Proceedings of the 9th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 2016.
[22] X. Nguyen et al., "Artificial vision: The effectiveness of the OrCam in patients with advanced inherited retinal dystrophies," Acta Ophthalmologica, vol. 100, no. 4, pp. 986-993, Jun. 2022.
[23] K. Matusiak, P. Skulimowski, and P. Strumiłło, "Object recognition in a mobile phone application for visually impaired users," in Proceedings of the 6th International Conference on Human System Interactions, Jun. 2013, pp. 479-484.
[24] L. Ţepelea, I. Gavriluţ, and A. Gacsádi, "Smartphone application to assist visually impaired people," The 14th International Conference on Engineering of Modern Electric Systems, Jun. 2017, pp. 228-231.
[25] S. M. Felix, S. Kumar, and A. Veeramuthu, "A smart personal AI assistant for visually impaired people," The 2nd International Conference on Trends in Electronics and Informatics, May 2018, pp. 1245-1250.
[26] D. Croce et al., "An indoor and outdoor navigation system for visually impaired people," IEEE Access, vol. 7, pp. 170406-170418, 2019.
[27] D. Ahmetovic et al., "ReCog: Supporting blind people in recognizing personal objects," in Proceedings of the CHI Conference on Human Factors in Computing Systems, 2020, pp. 1-12.
[28] G. Senarathne et al., "BlindAid: Android-based mobile application guide for visually challenged people," in Proceedings of the 12th Annual Information Technology, Electronics and Mobile Communication Conference, Oct. 2021, pp. 39-45.
[29] W. Liu et al., "SSD: Single shot MultiBox detector," in European Conference on Computer Vision, 2016, pp. 21-37.
[30] M. M. Islam et al., "Developing walking assistants for visually impaired people: A review," IEEE Sensors Journal, vol. 19, no. 8, pp. 2814-2828, 2019.
[31] N. Martiniello et al., "Exploring the use of smartphones and tablets among people with visual impairments: Are mainstream devices replacing the use of traditional visual aids?," Assistive Technology, vol. 34, no. 1, pp. 34-45, Jan. 2022.
[32] "Samsung Galaxy S9," 2018. [Online]. Available: https://www.samsung.com/tw/support/mobile-devices/what-are-the-new-samsung-galaxy-s9-and-s9-plus-specs/.
[33] "Google Pixel 6 Pro," 2021. [Online]. Available: https://store.google.com/tw/product/pixel_6_pro?hl=zh-TW.
[34] "realme 5 Pro," 2019. [Online]. Available: https://www.realme.com/tw/realme-5-pro.
[35] "Sony Xperia 1 III smartphone," 2021. [Online]. Available: https://store.sony.com.tw/product/show/ff8080817c9632be017cb62ab0b2191e.
[36] "NanoReview." [Online]. Available: https://nanoreview.net/en.
[37] "Smartphone Processors Ranking," 2022. [Online]. Available: https://nanoreview.net/en/soc-list/rating.
[38] T.-Y. Lin et al., "Microsoft COCO: Common objects in context," European Conference on Computer Vision, 2014, pp. 740-755.
[39] S. Shao et al., "Objects365: A large-scale, high-quality dataset for object detection," in International Conference on Computer Vision, 2019, pp. 8429-8438.
[40] M. Everingham et al., "The PASCAL Visual Object Classes (VOC) challenge," International Journal of Computer Vision, vol. 88, no. 2, pp. 303-338, 2010.
[41] "LabelImg." [Online]. Available: https://github.com/heartexlabs/labelImg.
[42] "JSON Data format." [Online]. Available: https://cocodataset.org/#format-data.
[43] "Background Subtraction." [Online]. Available: https://docs.opencv.org/4.x/d1/dc5/tutorial_background_subtraction.html.
[44] L. Roeder, "Netron: Visualizer for neural network, deep learning, and machine learning models," 2017. [Online]. Available: https://doi.org/10.5281/zenodo.5854962.
[45] A. Bochkovskiy, C.-Y. Wang, and H.-Y. Liao, "YOLOv4: Optimal speed and accuracy of object detection," 2020. [Online]. Available: https://doi.org/10.48550/arXiv.2004.10934.
[46] M. Tan, R. Pang, and Q. V. Le, "EfficientDet: Scalable and efficient object detection," Conference on Computer Vision and Pattern Recognition, Jun. 2020, pp. 10778-10787.
[47] "Camera-samples." [Online]. Available: https://github.com/android/camera-samples.
[48] "Yolov5s_android," 2021. [Online]. Available: https://github.com/lp6m/yolov5s_android.
[49] "Tensorflow." [Online]. Available: https://github.com/tensorflow.
[50] A. Bewley et al., "Simple online and realtime tracking," in International Conference on Image Processing, Sep. 2016, pp. 3464-3468.
[51] L. Biewald, "Experiment tracking with Weights and Biases," 2020. [Online]. Available: https://www.wandb.com/.