Graduate Thesis 105522060: Detailed Record




Name: Po-Jen Huang (黃柏仁)   Department: Computer Science and Information Engineering
Thesis Title: A Fast Iris Segmentation Algorithm based on Faster R-CNN
(一種基於Faster R-CNN的快速虹膜切割演算法)
Related Theses
★ Multi-type headache classification using extreme learning machines based on iris color space
★ Iris image quality assessment via weighted multi-score fusion
★ A deep learning-based intelligent machine vision system for industry: a case study of text localization and recognition
★ A real-time blood pressure estimation algorithm based on deep learning
★ A deep learning-based intelligent machine vision system for industry: a case study of solder joint quality inspection
★ A conditional iris image generation framework based on the pix2pix deep learning model
★ Implementing an eye tracker system with kernelized correlation filter object tracking
★ Verification and calibration of a laser Doppler blood-flow prototype
★ Generating purpose-specific images with generative adversarial networks: iris images as an example
★ Classifying diabetic retinopathy symptoms using deep learning, support vector machines, and teaching-learning-based optimization
★ Iris mask estimation using convolutional neural networks
★ Collaborative Drama-based EFL Learning with Mobile Technology Support in Familiar Context
★ A web service for automatic training of deep learning networks
★ A high-accuracy cosmetic contact lens detection algorithm based on deep learning
★ A CNN-based model for distinguishing real from fake faces
★ Deep learning foundation models and self-supervised learning
  1. Access permission for this electronic thesis: open access approved immediately.
  2. Electronic full texts that have reached their open-access date are licensed to users only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the content without authorization.

Abstract (Chinese): Iris segmentation is a key step in the overall iris recognition pipeline. Most state-of-the-art iris segmentation algorithms are built on image edge information; however, an ordinary edge-based detector produces a large number of noisy points on images containing specular reflections or other obstructions, which interferes with locating the inner and outer iris boundaries. This thesis proposes a hybrid iris segmentation algorithm that combines edge information with a learning-based method. A well-designed Faster R-CNN with only six layers is used to locate and classify the eye in the image. Within the bounding box found by Faster R-CNN, a Gaussian mixture model locates the pupil region, and the inner iris boundary circle is then fitted from five key inner-boundary points. An improved MIGREP algorithm together with a boundary-point selection algorithm is used to find points on the outer iris boundary, and the outer boundary circle is fitted from these points. Experimental results show that the proposed algorithm achieves 95.49% accuracy on the challenging CASIA-Iris-Thousand database.
Abstract (English): Iris segmentation is a critical step in the overall iris recognition procedure. Most state-of-the-art iris segmentation algorithms are based on edge information. However, the large number of noisy edge points produced by an ordinary edge-based detector on images with specular reflections or other obstructions can mislead the localization of the pupillary and limbus boundaries. In this paper, we present a hybrid iris segmentation method that combines learning-based and edge-based algorithms. A well-designed Faster R-CNN with only six layers is built to locate and classify the eye. Within the bounding box found by Faster R-CNN, the pupillary region is located using a Gaussian mixture model, and the circular pupillary boundary is then fitted from five key boundary points. An enhanced version of MIGREP and a boundary-point selection algorithm are used to find boundary points on the limbus, and the circular limbus boundary is fitted from these points. Experimental results show that the proposed iris segmentation method achieves 95.49% accuracy on the challenging CASIA-Iris-Thousand database.
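As a concrete illustration of the middle of this pipeline, the sketch below shows, under stated assumptions, how the pupil region inside a detected eye crop could be isolated with a two-component Gaussian mixture model over pixel intensities, and how a circle could be fitted to a handful of boundary points by linear least squares. The function names, the two-component choice, and the use of NumPy and scikit-learn are illustrative assumptions; this is not the thesis implementation of the six-layer Faster R-CNN or of the enhanced MIGREP step.

```python
# A minimal sketch, not the authors' code: (1) isolate the dark pupil region
# inside a detected eye crop with a two-component Gaussian mixture model over
# pixel intensities, and (2) fit a circle to boundary points by least squares.
# Function names and the choice of scikit-learn are assumptions for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture


def locate_pupil_mask(eye_gray):
    """Return a boolean mask marking likely pupil pixels in a grayscale eye crop.

    A 2-component GMM is fitted to the intensity distribution; the component
    with the lower mean intensity is assumed to be the dark pupil.
    """
    intensities = eye_gray.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
    pupil_component = int(np.argmin(gmm.means_.ravel()))
    labels = gmm.predict(intensities).reshape(eye_gray.shape)
    return labels == pupil_component


def fit_circle(points):
    """Least-squares circle fit; `points` is an (N, 2) array of (x, y), N >= 3.

    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) in the
    least-squares sense and returns (cx, cy, r).
    """
    points = np.asarray(points, dtype=np.float64)
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(points))])
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy, float(np.sqrt(c + cx ** 2 + cy ** 2))
```

In use, one would crop the eye box returned by the detector, call locate_pupil_mask on the grayscale crop, sample a few points along the mask edge (the thesis fits the inner boundary from five key points), and pass them to fit_circle; the outer boundary would then come from the MIGREP-style boundary-point search described above.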
Keywords (Chinese) ★ biometrics
★ iris recognition
Keywords (English) ★ iris segmentation
★ Faster R-CNN
Table of Contents
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1  Background
  1.1 Research Background and Motivation
  1.2 Research Objectives
Chapter 2  Literature Review
  2.1 Object Detection
  2.2 Iris Segmentation
Chapter 3  Methodology
  3.1 Eye Detection in Images
  3.2 Gaussian Mixture Model
  3.3 Estimating the Inner Iris Boundary
  3.4 Estimating the Outer Iris Boundary
Chapter 4  Experimental Results and Analysis
  4.1 Datasets
  4.2 Training the Detector Model
  4.3 Segmentation Performance Evaluation
Chapter 5  Conclusion
References
References
[1] J. Daugman, “How Iris Recognition Works,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, JAN 2004.
[2] H. Proenca and L. A. Alexandre, “Iris Recognition: Analysis of the Error Rates Regarding the Accuracy of the Segmentation Stage,” Image and Vision Computing, vol. 28, no. 1, pp. 202-206, JAN 2010.
[3] H. Hofbauer, F. Alonso-Fernandez, J. Bigun, and A. Uhl, “Experimental Analysis Regarding the Influence of Iris Segmentation on the Recognition Rate,” IET Biometrics, vol. 5, no. 3, pp. 200-211, AUG 2016.
[4] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, JUN 2017.
[5] J. A. Bilmes, “A Gentle Tutorial of the EM Algorithm and Its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models,” Technical Report ICSI-TR-97-021, International Computer Science Institute, Berkeley, CA, APR 1998.
[6] M. A. T. Figueiredo and A. K. Jain, “Unsupervised Learning of Finite Mixture Models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 3, pp. 381-396, MAR 2002.
[7] Y.-H. Li and P.-J. Huang, “An Accurate and Efficient User Authentication Mechanism on Smart Glasses based on Iris Recognition,” Mobile Information Systems, vol. 2017, Article ID 1281020, pp. 1-14, JUL 2017.
[8] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation,” 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 580-587, JUN 2014.
[9] M. Everingham, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes (VOC) Challenge,” International Journal of Computer Vision, vol. 88, no. 2, pp. 303-338, JUN 2010.
[10] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan, “Object Detection with Discriminatively Trained Part-based Models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1627-1645, SEP 2010.
[11] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Advances in Neural Information Processing Systems 25, pp. 1097-1105, DEC 2012.
[12] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1904-1916, SEP 2015.
[13] R. Girshick, “Fast R-CNN,” 2015 IEEE International Conference on Computer Vision, pp. 1440-1448, DEC 2015.
[14] J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, and A. W. M. Smeulders, “Selective Search for Object Recognition,” International Journal of Computer Vision, vol. 104, no. 2, pp. 154-171, SEP 2013.
[15] C. L. Zitnick and P. Dollar, “Edge Boxes: Locating Object Proposals from Edges,” European Conference on Computer Vision, pp. 391-405, SEP 2014.
[16] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov, “Scalable Object Detection using Deep Neural Networks,” 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2155-2162, JUN 2014.
[17] C. Szegedy, S. Reed, D. Erhan, D. Anguelov, and S. Ioffe, “Scalable, High-Quality Object Detection,” arXiv 1412.1441v3, DEC 2015.
[18] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 779-788, JUN 2016.
[19] J. Daugman, “High Confidence Visual Recognition of Persons by A Test of Statistical Independence,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, NOV 1993.
[20] R. Wildes, “Iris Recognition: An Emerging Biometric Technology,” Proceedings of the IEEE, vol. 85, no. 9, pp. 1348-1363, SEP 1997.
[21] T. Tan, Z. He, and Z. Sun, “Efficient and Robust Segmentation of Noisy Iris Images for Non-Cooperative Iris Recognition,” Image and Vision Computing, vol. 28, no. 2, pp. 223-230, FEB 2010.
[22] Y. Alvarez-Betancourt and M. Garcia-Silvente, “A Fast Iris Location based on Aggregating Gradient Approximation using QMA-OWA Operator,” International Conference on Fuzzy Systems, pp. 1-8, JUL 2010.
[23] J. I. Pelaez and J. M. Dona, “A Majority Model in Group Decision Making using QMA-OWA Operators,” International Journal of Intelligent Systems, vol. 21, no. 2, pp. 193-208, FEB 2006.
[24] H. Ghodrati, M. J. Dehghani, M. S. Helfroush, and K. Kazemi, “Localization of Noncircular Iris Boundaries using Morphology and Arched Hough Transform,” 2010 2nd International Conference on Image Processing Theory, Tools and Applications, pp. 458-463, JUL 2010.
[25] J. Canny, “A Computational Approach to Edge Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679-698, NOV 1986.
[26] X.-C. Wang and X.-M. Xiao, “An Iris Segmentation Method based on Difference Operator of Radial Directions,” 2010 6th International Conference on Natural Computation, pp. 135-138, AUG 2010.
[27] J. Liu, X. Fu, and H. Wang, “Iris Image Segmentation based on K-means Cluster,” 2010 IEEE International Conference on Intelligent Computing and Intelligent Systems, pp. 194-198, OCT 2010.
[28] F. Yan, Y. Tian, H. Wu, Y. Zhou, L. Cao, and C. Zhou, “Iris Segmentation using Watershed and Region Merging,” 2014 9th IEEE Conference on Industrial Electronics and Applications, pp. 835-840, JUN 2014.
[29] J. B. T. M. Roerdink and A. Meijster, “The Watershed Transform: Definitions, Algorithms and Parallelization Strategies,” Fundamenta Informaticae, vol. 41, no. 1-2, pp. 187-228, APR 2000.
[30] A. F. Abate, M. Frucci, C. Galdi, and D. Riccio, “BIRD: Watershed based Iris Detection for Mobile Devices,” Pattern Recognition Letters, vol. 57, pp. 41-49, MAY 2015.
[31] M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active Contour Models,” International Journal of Computer Vision, vol. 1, no. 4, pp. 321-331, JAN 1988.
[32] A. A. Jarjes, K. Wang, and G. J. Mohammed, “Iris Localization: Detecting Accurate Pupil Contour and Localizing Limbus Boundary,” 2010 2nd International Asia Conference on Informatics in Control, Automation and Robotics, pp. 349-352, MAR 2010.
[33] G. J. Mohammed, B.-R. Hong, and A. A. Jarjes, “Accurate Pupil Features Extraction based on New Projection Function,” Computing and Informatics, vol. 29, no. 4, pp. 663-680, APR 2009.
[34] C. A. C. M. Bastos, I. R. Tsang, and G. D. C. Cavalcanti, “A Combined Pulling & Pushing and Active Contour Method for Pupil Segmentation,” 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 850-853, MAR 2010.
[35] Z. He, T. Tan, and Z. Sun, “Iris Localization via Pulling and Pushing,” 18th International Conference on Pattern Recognition, pp. 366-369, AUG 2006.
[36] V. N. Boddeti, B. V. K. V. Kumar, and K. Ramkumar, “Improved Iris Segmentation based on Local Texture Statistics,” 2011 Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers, pp. 2147-2151, NOV 2011.
[37] T. F. Chan and L. A. Vese, “Active Contours without Edges,” IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266-277, FEB 2001.
[38] E. Krichen, “Lef3a: Pupil Segmentation using Viterbi Search Algorithm,” 2012 5th IAPR International Conference on Biometrics, pp. 323-329, APR 2012.
[39] A. J. Viterbi, “Error Bounds for Convolutional Codes and An Asymptotically Optimum Decoding Algorithm,” IEEE Transactions on Information Theory, vol. 13, no. 2, pp. 260-269, APR 1967.
[40] R. Tang and S. Weng, “Improving Iris Segmentation Performance via Borders Recognition,” 2011 4th International Conference on Intelligent Computation Technology and Automation, pp. 580-583, MAR 2011.
[41] H. Li, Z. Sun, and T. Tan, “Robust Iris Segmentation based on Learned Boundary Detectors,” 2012 5th IAPR International Conference on Biometrics, pp. 317-322, APR 2012.
[42] J. Friedman, T. Hastie, and R. Tibshirani, “Additive Logistic Regression: A Statistical View of Boosting,” The Annals of Statistics, vol. 28, no. 2, pp. 337-407, APR 2000.
[43] D. Benboudjema, N. Othman, B. Dorizzi, and W. Pieczynski, “Challenging Eye Segmentation using Triplet Markov Spatial Models,” 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1927-1931, MAY 2013.
[44] W. Pieczynski, D. Benboudjema, and P. Lanchantin, “Statistical Image Segmentation using Triplet Markov Fields,” SPIE 4885 Image and Signal Processing for Remote Sensing VIII, SEP 2002.
[45] M. Happold, “Structured Forest Edge Detectors for Improved Eyelid and Iris Segmentation,” 2015 International Conference of the Biometrics Special Interest Group, pp. 28-33, SEP 2015.
[46] P. Dollar and C. L. Zitnick, “Structured Forests for Fast Edge Detection,” 2013 IEEE International Conference on Computer Vision, pp. 1841-1848, DEC 2013.
[47] K. W. Bowyer, K. P. Hollingsworth, and P. J. Flynn, “A Survey of Iris Biometrics Research: 2008-2010,” Handbook of Iris Recognition, pp. 15-54, Springer, London, JAN 2013.
[48] M. R. Rajput and G. S. Sable, “IRIS Biometrics Survey 2010-2015,” 2016 IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology, pp. 2028-2033, MAY 2016.
[49] M. D. Zeiler and R. Fergus, “Visualizing and Understanding Convolutional Networks,” European Conference on Computer Vision, pp. 818-833, SEP 2014.
[50] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” International Conference on Learning Representations, APR 2015.
[51] V. Nair and G. E. Hinton, “Rectified Linear Units Improve Restricted Boltzmann Machines,” Proceedings of the 27th International Conference on Machine Learning, pp. 807-814, JUN 2010.
[52] S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” Proceedings of the 32nd International Conference on Machine Learning, vol. 37, pp. 448-456, JUL 2015.
[53] J. Long, E. Shelhamer, and T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” 2015 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440, JUN 2015.
[54] CASIA Iris Image Database, http://biometrics.idealtest.org/.
Advisor: Yung-Hui Li (栗永徽)   Approval Date: 2018-08-16