References
[1] "Department of Statistics, Ministry of Health and Welfare," [Online]. Available: https://dep.mohw.gov.tw/DOS/cp-2976-13815-113.html. [Accessed: June, 2020].
[2] 邱文欣, "Deep-learning-based monocular distance estimation and outdoor walking control for a robot," M.S. thesis, Dept. of Electrical Engineering, National Central University, Taoyuan, Taiwan, 2019.
[3] 廖浤鈞, "A deep-learning-based associative tracking network," M.S. thesis, Dept. of Computer Science and Information Engineering, National Central University, Taoyuan, Taiwan, 2020.
[4] 沈鴻儒, "Deep-learning-based road obstacle detection and walking assistance for the blind," M.S. thesis, Dept. of Electrical Engineering, National Central University, Taoyuan, Taiwan, 2020.
[5] A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello, "ENet: A deep neural network architecture for real-time semantic segmentation," arXiv preprint arXiv:1606.02147, 2016.
[6] C. Godard, O. Mac Aodha, and G. J. Brostow, "Unsupervised monocular depth estimation with left-right consistency," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, 2017, pp. 270-279.
[7] H. C. Wang, R. K. Katzschmann, S. Teng, B. Araki, L. Giarré, and D. Rus, "Enabling independent navigation for visually impaired people through a wearable vision-based feedback system," IEEE International Conference on Robotics and Automation (ICRA), Singapore, 2017, pp. 6533-6540.
[8] H. Badino, U. Franke, and D. Pfeiffer, "The stixel world - a compact medium level representation of the 3D-world," 31st DAGM Symposium on Pattern Recognition, Jena, Germany, 2009, pp. 51-60.
[9] Y. Zhang, Y. Zhao, T. Wei, and J. Chen, "Dynamic path planning algorithm for wearable visual navigation system based on the improved A*," IEEE International Conference on Imaging Systems and Techniques (IST), Beijing, China, 2017, pp. 1-6.
[10] K. Yang, L. M. Bergasa, E. Romera, R. Cheng, T. Chen, and K. Wang, "Unifying terrain awareness through real-time semantic segmentation," IEEE Intelligent Vehicles Symposium (IV), Suzhou, China, 2018, pp. 1033-1038.
[11] J. Bai, Z. Liu, Y. Lin, Y. Li, S. Lian, and D. Liu, "Wearable travel aid for environment perception and navigation of visually impaired people," Electronics, vol. 8, no. 6, p. 697, 2019.
[12] J. Bai, S. Lian, Z. Liu, K. Wang, and D. Liu, "Smart guiding glasses for visually impaired people in indoor environment," IEEE Transactions on Consumer Electronics, vol. 63, no. 3, pp. 258-266, 2017.
[13] J. Bai, S. Lian, Z. Liu, K. Wang, and D. Liu, "Virtual-blind-road following-based wearable navigation device for blind people," IEEE Transactions on Consumer Electronics, vol. 64, no. 1, pp. 136-143, 2018.
[14] C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang, "BiSeNet: Bilateral segmentation network for real-time semantic segmentation," European Conference on Computer Vision (ECCV), Munich, Germany, 2018, pp. 325-341.
[15] E. Romera, J. M. Álvarez, L. M. Bergasa, and R. Arroyo, "ERFNet: Efficient residual factorized convnet for real-time semantic segmentation," IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 1, pp. 263-272, 2018.
[16] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, Nevada, 2016, pp. 770-778.
[17] J. M. Álvarez and L. Petersson, "DecomposeMe: Simplifying convnets for end-to-end learning," arXiv preprint arXiv:1606.05426, 2016.
[18] H. Zhao, X. Qi, X. Shen, J. Shi, and J. Jia, "ICNet for real-time semantic segmentation on high-resolution images," European Conference on Computer Vision (ECCV), Munich, Germany, 2018, pp. 405-420.
[19] R. P. Poudel, S. Liwicki, and R. Cipolla, "Fast-SCNN: Fast semantic segmentation network," arXiv preprint arXiv:1902.04502, 2019.
[20] H. Li, P. Xiong, H. Fan, and J. Sun, "DFANet: Deep feature aggregation for real-time semantic segmentation," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, California, 2019, pp. 9522-9531.
[21] J. Coughlan and H. Shen, "A fast algorithm for finding crosswalks using figure-ground segmentation," 2nd Workshop on Applications of Computer Vision in conjunction with ECCV, Graz, Austria, 2006.
[22] J. Choi, B. T. Ahn, and I. S. Kweon, "Crosswalk and traffic light detection via integral framework," The 19th Korea-Japan Joint Workshop on Frontiers of Computer Vision, Incheon, Korea, 2013, pp. 309-312.
[23] S. Mascetti, L. Picinali, A. Gerino, D. Ahmetovic, and C. Bernareggi, "Sonification of guidance data during road crossing for people with visual impairments or blindness," International Journal of Human-Computer Studies, vol. 85, pp. 16-26, 2016.
[24] D. Ahmetovic, C. Bernareggi, A. Gerino, and S. Mascetti, "ZebraRecognizer: Efficient and precise localization of pedestrian crossings," 22nd International Conference on Pattern Recognition (ICPR), Stockholm, Sweden, 2014, pp. 2566-2571.
[25] Y. Zhai, G. Cui, Q. Gu, and L. Kong, "Crosswalk detection based on MSER and ERANSAC," IEEE International Conference on Intelligent Transportation Systems, Las Palmas, Spain, 2015, pp. 2770-2775.
[26] J. Matas, O. Chum, M. Urban, and T. Pajdla, "Robust wide baseline stereo from maximally stable extremal regions," 13th British Machine Vision Conference, Cardiff, UK, 2002, pp. 384-393.
[27] M. Fischler and R. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981.
[28] R. Cheng, K. Wang, K. Yang, N. Long, W. Hu, H. Chen, J. Bai, and D. Liu, "Crosswalk navigation for people with visual impairments on a wearable device," Journal of Electronic Imaging, vol. 26, no. 5, Art. no. 053025, 2017.
[29] V. Tümen and B. Ergen, "Intersections and crosswalk detection using deep learning and image processing techniques," Physica A: Statistical Mechanics and its Applications, vol. 543, 2019.
[30] "Jetson AGX Xavier," [Online]. Available: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/. [Accessed: June, 2020].
[31] "Intel Dual Band Wireless-AC 8265 NGW," [Online]. Available: https://ark.intel.com/content/www/tw/zh/ark/products/94150/intel-dual-band-wireless-ac-8265.html. [Accessed: June, 2020].
[32] "Samsung Galaxy S9 smartphone," [Online]. Available: https://www.samsung.com/tw/support/mobile-devices/what-are-the-new-samsung-galaxy-s9-and-s9-plus-specs/. [Accessed: June, 2020].
[33] "Logitech C930e webcam," [Online]. Available: https://www.logitech.com/zh-tw/product/c930e-webcam. [Accessed: June, 2020].
[34] "ZED," [Online]. Available: https://www.stereolabs.com/zed/. [Accessed: June, 2020].
[35] "Ublox-NEO-M8N," [Online]. Available: https://www.u-blox.com/en/product/neo-m8-series#tab-documentation-resources. [Accessed: June, 2020].
[36] "AQMD6010BLS brushless DC motor driver," [Online]. Available: http://www.akelc.com/BLDCMotor/show_78.html. [Accessed: June, 2020].
[37] "80BL110S50 brushless DC motor," [Online]. Available: https://detail.1688.com/offer/1272236959.html. [Accessed: June, 2020].
[38] "enerpad AC42K universal AC power bank," [Online]. Available: https://www.enerpad.com.tw/product/10715e6a-83ad-4195-a586-fe2aba024e42. [Accessed: June, 2020].
[39] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "MobileNets: Efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017.
[40] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, Utah, 2018, pp. 4510-4520.
[41] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid scene parsing network," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, 2017, pp. 2881-2890.
[42] 賴怡靜, "An outdoor navigation robot with deep-learning-based distance estimation and automatic obstacle avoidance," M.S. thesis, Dept. of Electrical Engineering, National Central University, Taoyuan, Taiwan, 2018.
[43] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-9, no. 1, pp. 62-66, 1979.
[44] P. V. C. Hough, "Method and means for recognizing complex patterns," U.S. Patent 3,069,654, 1962.
[45] E. W. Weisstein, "Least squares fitting-Perpendicular offsets," [Online]. Available: https://mathworld.wolfram.com/LeastSquaresFittingPerpendicularOffsets.html. [Accessed: June, 2020].
[46] "Haversine formula," [Online]. Available: https://en.wikipedia.org/wiki/Haversine_formula. [Accessed: June, 2020].
[47] "Spherical trigonometry," [Online]. Available: https://en.wikipedia.org/wiki/Spherical_trigonometry. [Accessed: June, 2020].