Thesis 106552007: Detailed Record




Name: Rong-Sheng Liu (劉榮勝)   Department: Department of Computer Science and Information Engineering (In-service Master Program)
Thesis title: Ultra-Accurate Detection of the Existence of Cosmetic Contact Lens for Iris Images based on Deep Learning
(基於深度學習方法之高精確度瞳孔放大片偵測演算法)
Related theses:
★ Multi-type headache classification with extreme learning machines based on iris color space
★ Iris image quality assessment via weighted multi-score fusion
★ A deep-learning-based industrial smart machine vision system: text localization and recognition as an example
★ A real-time blood pressure estimation algorithm based on deep learning
★ A deep-learning-based industrial smart machine vision system: solder joint quality inspection as an example
★ A conditional iris image generation framework based on the pix2pix deep learning model
★ Implementing an eye-tracker system with kernelized correlation filter object tracking
★ Verification and calibration of a laser Doppler blood-flow prototype
★ AILIS: An Adaptive and Iterative Learning Method for Accurate Iris Segmentation
★ Generating purpose-specific images with generative adversarial networks: iris images as an example
★ A fast iris segmentation algorithm based on Faster R-CNN
★ Classifying diabetic retinopathy symptoms using deep learning, support vector machines, and teaching-learning-based optimization
★ Iris mask estimation using convolutional neural networks
★ Collaborative Drama-based EFL Learning with Mobile Technology Support in Familiar Context
★ A web service for automatically training deep learning networks
  1. The author has consented to immediate open access to this electronic thesis.
  2. The open-access full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese): In recent years, cosmetic contact lenses have become everyday products for many people, and a daily necessity for fashion-conscious men and women. To meet growing demand, manufacturers offer more choices of color, style, and texture, enriching product variety. Because these lenses alter the iris texture, they pose a challenge to iris recognition.

A deep learning approach requires collecting a large amount of data to train the network model and to extract complex rules from the data. In addition, before training, the images must be preprocessed, e.g., image segmentation, image-format conversion, and image-processing-based augmentation, to achieve highly accurate and robust results. This thesis collects samples of 9 brands and 18 styles of cosmetic contact lenses sold in Taiwan, from 101 participants both with and without lenses; 30,390 images in total were used in the experiments. Models trained with deep learning reach a test accuracy above 99%.
Abstract (English): In recent years, cosmetic contact lenses (CCLs) have become a daily necessity for many people, especially those who care about beauty and fashion. To meet growing demand, manufacturers provide more choices of color, style, and texture, enriching product variety. CCLs also pose a challenge for iris recognition because they change the appearance of the iris texture.

However, a deep learning method requires collecting a large amount of data to train the network model and extracting rules from the data. In addition, before training, it is beneficial to preprocess the images for data augmentation, e.g., cropping, scaling, and rotating, to achieve higher accuracy and robustness. This thesis collects CCL samples of 9 brands and 18 styles sold in Taiwan. We invited 101 participants and collected eye images with and without CCLs; a total of 30,390 images were used in the experiments. In the end, we achieve an accuracy higher than 99% using deep-learning-based models.
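The abstract mentions augmenting the eye images by cropping, scaling, and rotating before training. The thesis code is not published, so the following is only a minimal sketch of that kind of preprocessing; the function name `augment` and the specific crop ratio and scale factor are hypothetical, not taken from the thesis.

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Generate simple augmented variants of a grayscale eye image:
    a center crop, a 2x nearest-neighbour upscale, and 90-degree rotations."""
    h, w = image.shape
    variants = []
    # Center crop to 80% of each dimension (crop ratio is an assumption)
    ch, cw = int(h * 0.8), int(w * 0.8)
    top, left = (h - ch) // 2, (w - cw) // 2
    variants.append(image[top:top + ch, left:left + cw])
    # Nearest-neighbour 2x upscale via pixel repetition
    variants.append(np.repeat(np.repeat(image, 2, axis=0), 2, axis=1))
    # Rotations by 90, 180, and 270 degrees
    for k in (1, 2, 3):
        variants.append(np.rot90(image, k))
    return variants

# Example: one 100x120 image yields 5 augmented variants
img = np.zeros((100, 120), dtype=np.uint8)
out = augment(img)
print(len(out))  # 5
```

In practice a training pipeline would also include the format conversion and segmentation steps the abstract mentions, and would typically use a library such as torchvision or OpenCV for interpolation-based scaling and arbitrary-angle rotation.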
Keywords (Chinese):
★ Cosmetic contact lens (瞳孔放大片)
★ Deep learning (深度學習)
★ Iris segmentation (虹膜分割)
★ Iris recognition (虹膜識別)
Keywords (English):
★ Cosmetic contact lens
★ Deep learning
★ Iris segmentation
★ Iris recognition
Table of contents:
Chinese Abstract i
English Abstract ii
Acknowledgments iii
Table of Contents iv
List of Figures vi
List of Tables viii
1. Introduction 1
1-1 Preface 1
1-2 Research objectives 2
1-3 Thesis organization 3
2. Literature Review 4
2-1 Deep learning image classification networks 4
2-1-1 AlexNet 4
2-1-2 GoogLeNet 5
2-1-3 VGGNet 10
2-1-4 ResNet 12
2-1-5 DenseNet 13
2-1-6 SqueezeNet 14
2-1-7 MobileNet and MobileNetV2 16
2-2 Introduction to iris recognition 20
3. Method 22
3-1 Method architecture 22
3-2 Data preprocessing 23
3-3 Deep learning networks 24
4. Experiments 26
4-1 Experimental equipment 26
4-2 Data collection 26
4-3 Experimental results 28
4-3-1 Detection results on original images 28
4-3-2 Detection results on preprocessed images 32
4-3-3 Detection results on unseen images 36
4-4 Discussion 36
5. Conclusions and Future Work 38
5-1 Conclusions 38
5-2 Future work 39
6. References 40
References: [1] Y. LeCun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998, doi: 10.1109/5.726791.
[2] Krizhevsky, A., Sutskever, I., & Hinton, G.E. (2012). ImageNet Classification with Deep Convolutional Neural Networks. NIPS.
[3] C. Szegedy et al., "Going deeper with convolutions," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 2015, pp. 1-9, doi: 10.1109/CVPR.2015.7298594.
[4] S. Ioffe and C. Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," CoRR abs/1502.03167, 2015.
[5] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens and Z. Wojna, "Rethinking the Inception Architecture for Computer Vision," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 2818-2826, doi: 10.1109/CVPR.2016.308.
[6] C. Szegedy, S. Ioffe and V. Vanhoucke, "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning," CoRR abs/1602.07261, 2016.
[7] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," CoRR abs/1409.1556, 2015.
[8] VGGNet. Retrieved June 14, 2020, from https://www.itread01.com/content/1568289844.html
[9] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770-778.
[10] Huang, G., Liu, Z., & Weinberger, K.Q. (2017). Densely Connected Convolutional Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2261-2269.
[11] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size," arXiv:1602.07360, 2016.
[12] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto and H. Adam, "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," arXiv:1704.04861, 2017.
[13] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen, "MobileNetV2: Inverted Residuals and Linear Bottlenecks," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 4510-4520, doi:10.1109/CVPR.2018.00474.
[14] MobileNetV2. Retrieved June 14, 2020, from https://medium.com/@chih.sheng.huang821/%E6%B7%B1%E5%BA%A6%E5%AD%B8%E7%BF%92-mobilenet-depthwise-separable-convolution-f1ed016b3467
[15] J. Daugman, “How Iris Recognition Works,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, JAN 2004.
[16] J. Daugman, “Probing the Uniqueness and Randomness of IrisCodes: Results from 200 Billion Iris Pair Comparisons”, Proceedings of the IEEE, Vol 94, Issue 11, pp. 1927-1935, IEEE, November 2006.
[17] J. Daugman, “New Methods in Iris Recognition”, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), Vol 37, Issue 5, pp. 1167-1175, January 2007.
[18] H. Hofbauer, F. A.-Fernandez, J. Bigun, and A. Uhl, “Experimental Analysis Regarding the Influence of Iris Segmentation on the Recognition Rate,” The Institution of Engineering and Technology Biometrics, vol. 5, no. 3, pp. 200-211, AUG 2016.
[19] Po-Jen Huang, "A Fast Iris Segmentation Algorithm based on Faster R-CNN", https://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/ccd=VOW3dO/record?r1=1&h1=2
[20] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, JAN 2016.
[21] Y.-H. Li and P.-J. Huang, “An Accurate and Efficient User Authentication Mechanism on Smart Glasses based on Iris Recognition,” Mobile Information Systems, vol. 2017, Article ID 1281020, pp. 1-14, JUL 2017.
[22] MobileNet. Retrieved June 14, 2020, from https://zhuanlan.zhihu.com/p/54425450
Advisor: Yung-Hui Li (栗永徽)   Review date: 2020-08-11
