Master's/Doctoral Thesis 109522096 Detailed Record




Name  Yen-Ju Chen (陳妍如)    Department  Computer Science and Information Engineering
Thesis Title  Detecting Driver Intention by Taillight Signals via Sequential Learning
Related Theses
★ Dynamic Overlay Construction for Mobile Target Detection in Wireless Sensor Networks
★ A Simple Detour Strategy for Vehicle Navigation
★ Improving Localization Using Transmitter Voltage
★ Constructing a Virtual Backbone in Vehicular Networks Using Vehicle Classification
★ Why Topology-based Broadcast Algorithms Do Not Work Well in Heterogeneous Wireless Networks?
★ Efficient Wireless Sensor Networks for Mobile Targets
★ A Distributed Articulation-Point-Based Topology Control Method for Wireless Ad Hoc Networks
★ A Review of Existing Web Frameworks
★ A Distributed Algorithm for Partitioning Sensor Networks into Greedy Blocks
★ Range-free Distance Measurement in Wireless Networks
★ Inferring Floor Plan from Trajectories
★ An Indoor Collaborative Pedestrian Dead Reckoning System
★ Dynamic Content Adjustment In Mobile Ad Hoc Networks
★ An Image-based Localization System
★ A Distributed Data Compression and Collection Algorithm for Large-scale Wireless Sensor Networks
★ Collision Analysis in Vehicular WiFi Networks
  1. This electronic thesis is approved for immediate open access.
  2. The open-access electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese)  With the evolution and development of deep learning technology, autonomous driving systems are advancing rapidly. For an autonomous vehicle, it is crucial to capture the driving intentions of other vehicles on the road; the result can serve as a new feature for trajectory prediction, so that a safe driving trajectory can be planned for the autonomous vehicle. This study proposes a system that identifies the driving intention of other vehicles from the video stream of their taillight signals. To achieve this goal, both the positions of the taillights (i.e., spatial features) and the changes in taillight status over time (i.e., temporal features) need to be correctly extracted and recognized. In our model, a longer sequence of 32 frames is used as input to capture the complete change of the taillights. In addition, a transfer-learned classical convolutional neural network and a lightweight WaveNet are adopted to extract the spatial and temporal features of the input sequence, respectively. Experimental results show that our system outperforms state-of-the-art methods in taillight recognition.
Abstract (English)  With the evolution and development of deep learning technologies, we have observed the rapid advancement of autonomous driving systems. For an autonomous vehicle, it is crucial to capture the driving intentions of other vehicles on the road, which can then be used to plan a safe driving route. This study proposes a system to identify the driving intention of other vehicles from the video stream of their taillight signals. To achieve this goal, both the positions of taillights (i.e., spatial features) and the changes in taillight status over time (i.e., temporal features) need to be properly extracted and recognized. In our system, a longer sequence of 32 frames is used as input to capture the complete change of taillights. In addition, a transfer-learned classical convolutional neural network and a lightweight WaveNet are adopted to extract the spatial and temporal features of the input sequence, respectively.
The experimental results indicate that our system outperforms state-of-the-art approaches in taillight recognition.
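
As a rough, non-authoritative illustration of the pipeline the abstract describes, the sketch below (TensorFlow/Keras) applies a transfer-learned per-frame CNN backbone and then a lightweight WaveNet-style stack of dilated causal 1-D convolutions over a 32-frame clip. The MobileNetV2 backbone, layer widths, frame size, and four-class output are illustrative assumptions, not the configuration used in the thesis.

import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN, H, W, C = 32, 224, 224, 3   # 32-frame input clip; frame size is assumed
NUM_CLASSES = 4                      # assumed label set, e.g. brake / left / right / none

# Spatial features: a pretrained CNN backbone applied to every frame (transfer learning).
backbone = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(H, W, C))
backbone.trainable = False           # freeze the backbone; only the new layers are trained

frames = layers.Input(shape=(SEQ_LEN, H, W, C))
x = layers.TimeDistributed(backbone)(frames)          # -> (batch, 32, feature_dim)

# Temporal features: lightweight WaveNet-style dilated causal 1-D convolutions.
for dilation in (1, 2, 4, 8, 16):
    residual = x
    x = layers.Conv1D(128, kernel_size=2, dilation_rate=dilation,
                      padding="causal", activation="relu")(x)
    x = layers.Conv1D(128, kernel_size=1, padding="same")(x)
    if residual.shape[-1] == x.shape[-1]:
        x = layers.Add()([x, residual])               # residual connection when widths match

x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(frames, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

With kernel size 2 and dilation rates doubling up to 16, the receptive field of the temporal stack spans all 32 frames, which is what allows slow turn-signal blinking patterns to be captured without recurrent layers.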
Keywords (Chinese) ★ taillight recognition (車尾燈辨識)
★ long sequence (長時間序)
Keywords (English) ★ taillight recognition
★ long sequence
Table of Contents
1 Introduction
2 Related Work
2.1 Taillight Detection
2.1.1 Traditional Color Analysis
2.1.2 Machine Learning-based Detection
2.2 Taillight Recognition
2.2.1 Two-stage Taillight Recognition
2.2.2 One-stage Taillight Recognition
3 Preliminary
3.1 Convolutional Neural Networks
3.2 Recurrent Neural Networks
3.3 WaveNet
3.4 Data Augmentation Techniques
4 Design
4.1 Motivation
4.2 System Architecture
4.2.1 Video Preprocessing
4.2.2 Turn Light Recognition
4.2.3 Brake Light Recognition
4.2.4 Driver Intention Determination
5 Performance
5.1 Rear Signal Dataset
5.2 Environmental Settings
5.3 Evaluation Metrics
5.4 Experimental Results
6 Conclusions
References
Advisor  Min-Te Sun (孫敏德)    Approval Date  2022-09-28
