Graduate Thesis 104521044: Detailed Record




Name: Chi-En Tsai (蔡奇恩)    Department: Electrical Engineering
Thesis title: 即時的人臉偵測與人臉辨識之門禁系統
(Real-time face detection and recognition for access control system application)
Related theses
★ Low-memory hardware design for real-time SIFT feature extraction
★ An autonomous vehicle with real-time automatic following
★ Lossless compression algorithm and implementation for multi-lead ECG signals
★ Offline customizable voice wake-word and speaker system with embedded implementation
★ Wafer map defect classification and embedded system implementation
★ Algorithm design techniques for compensating the finite precision of multiplierless digital filters
★ Design and implementation of a programmable Viterbi decoder
★ Low-cost vector rotator silicon IP design based on extended elementary-angle CORDIC
★ Analysis and architecture design of a JPEG2000 still-image coding system
★ Low-power turbo code decoder for communication systems
★ Platform-based design for multimedia communication
★ Design and implementation of a digital watermarking system for MPEG encoders
★ Algorithm development for video error concealment with data-reuse considerations
★ A low-power MPEG Layer III decoder architecture design
★ Platform-based design of an AAC decoder with high-quality inverse quantization
★ Maximum a posteriori algorithm development and turbo decoder VLSI design for third-generation mobile communication
Full text: not available for browsing in the system (permanently restricted)
Abstract (Chinese): With the promotion of smart cities and smart homes, people are paying ever more attention to quality of life and expect technology to change the way we live. In recent years, with advances in GPUs and the arrival of the big-data era, deep learning has brought revolutionary progress to many fields, and computer vision in particular is now led by deep learning. Technology comes from humanity, and technology makes our lives more convenient. Access control systems of many forms surround us, from keys and access cards to biometric identification. Biometric identification relies on characteristics unique to each person, so no key of any kind needs to be carried. Because the face requires no contact and no extra action, face recognition is the most convenient of all biometric methods, yet it is also the most complex.
This thesis proposes an access control system that uses deep networks to perform face detection and face recognition on users dynamically, so that no one needs to stop for recognition. Face detection uses the convolutional neural network architecture SSD (Single Shot MultiBox Detector) as its main method, and face recognition uses VGGFace. Through dataset collection, data augmentation, image pre- and post-processing, and experimental design, we train more robust face detection and face recognition subsystems. Combining the two subsystems, the system uses consecutive frames to decide whether a person is a laboratory member; consecutive frames avoid the serious consequences of a single misdetected frame. Experiments on 1280x960 color video reach about 30 fps under GPU acceleration.
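The abstract does not specify how the recognition subsystem matches a detected face to an enrolled laboratory member. A common scheme with deep face models such as VGGFace, sketched below as an assumption rather than the thesis's documented method, is to compare embedding vectors against a gallery of member embeddings by cosine similarity and reject matches below a threshold. The `identify` function, the gallery layout, and the 0.6 threshold are all illustrative.

```python
import numpy as np

def identify(embedding, gallery, threshold=0.6):
    """Match a face embedding against a gallery of member embeddings.

    gallery: dict mapping member name -> reference embedding vector.
    Returns the best-matching member name, or None when the best cosine
    similarity falls below the threshold (treated as a non-member).
    """
    embedding = embedding / np.linalg.norm(embedding)
    best_name, best_sim = None, -1.0
    for name, ref in gallery.items():
        # Cosine similarity between unit-normalised vectors.
        sim = float(np.dot(embedding, ref / np.linalg.norm(ref)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None
```

In a real deployment the gallery vectors would be embeddings extracted once per member from enrollment photos, and the threshold would be tuned on a validation set to trade false accepts against false rejects.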
Abstract (English): With the promotion of smart cities and smart homes, people are paying more attention to the quality of life and look forward to technology changing the way they live. The age of big data and improvements in GPU acceleration have brought deep learning revolutionary progress in many fields, especially computer vision. Technology derives from humanity, and technology makes our lives more convenient. Access control systems around us take many forms, such as keys, access cards, and biometric identification. Biometric identification distinguishes people by the characteristics unique to each person, so users no longer need to carry any form of key. Face identification is the most convenient biometric method, because users do not need to touch anything or make any extra movement; however, it is also the most complex.
We propose an access control system that performs face detection and face recognition dynamically, so that users do not need to stop for recognition. We use SSD (Single Shot MultiBox Detector) as the main model for face detection and VGGFace as the main model for face recognition. Through dataset collection, data augmentation, image pre-processing and post-processing, and experimental design, we train robust face detection and face recognition subsystems. The system takes consecutive frames as input to decide whether the person is a laboratory member; using consecutive frames prevents a single false positive from producing a wrong result. We test on 1280x960 color video and achieve about 30 fps under GPU acceleration.
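The abstract's consecutive-frame decision, where a single misrecognised frame cannot open the door on its own, can be sketched as a sliding-window majority vote over per-frame recognition results. The `FrameVoter` class below, including its window size and vote threshold, is an illustrative assumption rather than the thesis's actual parameters.

```python
from collections import Counter, deque

class FrameVoter:
    """Aggregate per-frame recognition results over a sliding window.

    An identity is accepted only when it wins at least `min_votes` of the
    last `window` frames, so isolated false positives are suppressed.
    """
    def __init__(self, window=15, min_votes=10):
        self.results = deque(maxlen=window)  # oldest frame drops out automatically
        self.min_votes = min_votes

    def update(self, frame_result):
        """frame_result: a member name, or None for unknown / no face.

        Returns the accepted member name, or None while no identity has
        accumulated enough votes in the current window.
        """
        self.results.append(frame_result)
        name, votes = Counter(self.results).most_common(1)[0]
        if name is not None and votes >= self.min_votes:
            return name  # stable identification: grant access
        return None
```

At 30 fps a 15-frame window corresponds to roughly half a second of video, which keeps the gate responsive while still requiring agreement across many frames.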
Keywords (Chinese) ★ 深度學習 (deep learning)
★ 人臉偵測 (face detection)
★ 人臉辨識 (face recognition)
Keywords (English) ★ deep learning
★ face detection
★ face recognition
Table of Contents
Chapter 1 Introduction
1-1 Motivation
1-2 Related Work of Face Detection
1-3 Related Work of Face Recognition
1-4 Method
1-5 Thesis Organization
Chapter 2 Convolutional Neural Network - SSD
2-1 Introduction
2-2 Convolutional Neural Network
2-2-1 Local Receptive Fields
2-2-2 Shared Weights
2-2-3 Pooling Layer
2-2-4 Fully Connected Layer
2-2-5 Activation Function
2-3 SSD
2-3-1 Model
2-3-2 Training
2-3-3 Prediction
Chapter 3 Access Control System
3-1 Face Detection
3-1-1 Data Gathering and Pre-processing
3-1-2 Training Method and Parameters
3-1-3 Post-processing
3-1-4 Experimental Result
3-2 Face Recognition
3-2-1 VGG Face Model and Dataset
3-2-2 Data Gathering and Pre-processing
3-2-3 Training Method and Parameters
3-2-4 Experimental Result
3-3 System Architecture and Result
Chapter 4 Conclusion
Chapter 5 Reference
Advisor: Tsung-Han Tsai (蔡宗漢)    Date of approval: 2018-05-08
