Master's/Doctoral Thesis 102525012: Detailed Record




Author: 連翊展 (I-Chan Lien)    Department: Graduate Institute of Software Engineering
Thesis Title: AILIS: An Adaptive and Iterative Learning Method for Accurate Iris Segmentation
Related Theses:
★ Multi-type headache classification using an extreme learning machine based on iris color space
★ Iris image quality assessment via weighted multi-score fusion
★ A deep-learning-based intelligent machine vision system for industrial use: text localization and recognition
★ A real-time blood pressure estimation algorithm based on deep learning
★ A deep-learning-based intelligent machine vision system for industrial use: solder joint quality inspection
★ A conditional iris image generation framework based on the pix2pix deep learning model
★ An eye-tracker system implemented with kernelized correlation filter object tracking
★ Validation and calibration of a laser Doppler blood flow prototype
★ Generating purpose-specific images with generative adversarial networks: iris images as a case study
★ A fast iris segmentation algorithm based on Faster R-CNN
★ Classifying diabetic retinopathy symptoms using deep learning, support vector machines, and teaching-learning-based optimization
★ Iris mask estimation using convolutional neural networks
★ Collaborative Drama-based EFL Learning with Mobile Technology Support in Familiar Context
★ A web service for automatic training of deep learning networks
★ A high-accuracy cosmetic contact lens detection algorithm based on deep learning
★ A CNN-based model for real/fake face recognition
  1. Access to this electronic thesis: the author has consented to immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese): Segmentation is one of the most critical stages in an iris recognition system: the quality of the segmentation largely determines the final recognition rate. Past research has produced many segmentation algorithms, such as neural networks and the Hough transform, but no algorithm for evaluating segmentation quality, so there has been no objective indicator of whether a segmentation is correct. We therefore developed a method named KIRD, which produces a quantitative score of segmentation quality and can correctly assess a segmentation without human intervention. Building on KIRD, we developed a segmentation algorithm named AILIS, a highly adaptive algorithm that learns across iterations. In each iteration, AILIS automatically learns from the previous round's results and refines its machine-learning model, thereby producing better segmentations. Experimental results show that AILIS successfully generates high-quality segmentations for 99.39% of the eye images in the ICE iris database (gray-scale images) and achieves a 94.60% success rate on the UBIRIS iris database (color images); subsequent large-scale iris recognition experiments further validate the effectiveness and adaptability of AILIS.
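As a rough, non-authoritative illustration of the KIRD idea summarized above (cluster the image, then integrate the radial difference across a boundary hypothesis), the Python sketch below scores a circular boundary candidate. The function name kird_score, the sampling offsets, and the use of K-means labels alone are assumptions made here for brevity; the thesis's KIRD also includes a PCA pre-processing step (see Section 3.1 in the outline below).

```python
# Illustrative sketch only, not the thesis's actual KIRD formulation.
import numpy as np
from sklearn.cluster import KMeans

def kird_score(gray, center, radius, n_clusters=3, n_angles=180):
    """Score a circular boundary hypothesis on a gray-scale eye image.

    The image is clustered by intensity with K-means; the score is the
    fraction of sampled angles at which the cluster label just inside the
    boundary differs from the label just outside it. A sharp, well-placed
    boundary yields a score close to 1.
    """
    h, w = gray.shape
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        gray.reshape(-1, 1)).reshape(h, w)

    cx, cy = center
    diffs = []
    for t in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        # Sample the cluster label slightly inside and slightly outside
        # the hypothesized boundary along the same radial direction.
        xi = int(round(cx + 0.9 * radius * np.cos(t)))
        yi = int(round(cy + 0.9 * radius * np.sin(t)))
        xo = int(round(cx + 1.1 * radius * np.cos(t)))
        yo = int(round(cy + 1.1 * radius * np.sin(t)))
        if 0 <= xi < w and 0 <= yi < h and 0 <= xo < w and 0 <= yo < h:
            diffs.append(float(labels[yi, xi] != labels[yo, xo]))
    return float(np.mean(diffs)) if diffs else 0.0
```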
Abstract (English): Iris segmentation is one of the most important pre-processing stages in an iris recognition system. The quality of the segmentation results dictates iris recognition performance. In the past, both learning-based methods (for example, neural networks) and non-learning-based methods (for example, the Hough transform) have been proposed for this task. However, no objective, quantitative figure of merit has existed for assessing iris segmentation quality (that is, for judging whether a segmentation hypothesis is accurate); most existing works evaluated their segmentation quality by human inspection. In this work, we propose KIRD, a mechanism that fairly judges the correctness of iris segmentation hypotheses. On the foundation of KIRD, we propose AILIS, an adaptive and iterative learning method for iris segmentation. AILIS learns from past experience and automatically builds machine-learning models for segmenting both gray-scale and color iris images. Experimental results show that, without any prior training, AILIS successfully performs iris segmentation on ICE (gray-scale images) and UBIRIS (color images) with accuracy rates of 99.39% and 94.60%, respectively. Large-scale iris recognition experiments based on AILIS segmentation hypotheses also validate its effectiveness compared with a state-of-the-art algorithm.
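The iterative learning loop described in the abstract can be summarized, under stated assumptions, as: segment each remaining image with the current model, accept only hypotheses that KIRD judges trustworthy, retrain on the accepted results, and repeat. The sketch below is a minimal Python rendering of that loop; segment, kird_score, and train_model are hypothetical placeholders standing in for the components the thesis defines, not its actual API.

```python
# Illustrative sketch of an AILIS-like iterative learning loop.
def iterative_segmentation(images, segment, kird_score, train_model,
                           threshold=0.9, max_rounds=5):
    model = None        # no prior training: the first round uses a model-free attempt
    accepted = {}       # image index -> segmentation hypothesis judged good by KIRD
    for _ in range(max_rounds):
        for i, img in enumerate(images):
            if i in accepted:
                continue
            hypothesis = segment(img, model)           # current best attempt
            if kird_score(img, hypothesis) >= threshold:
                accepted[i] = hypothesis               # keep only trusted results
        if len(accepted) == len(images):
            break
        # Retrain on the segmentations KIRD accepted so far, so the next
        # round adapts to the remaining, harder images.
        model = train_model([(images[i], s) for i, s in accepted.items()])
    return accepted
```

The design point illustrated is that no prior training set is required: the model bootstraps itself from whatever segmentations pass the quality check in earlier rounds, mirroring the "without any prior training" claim in the abstract.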
Keywords (Chinese): ★ machine learning
★ iris recognition
★ iris segmentation
Keywords (English): ★ machine learning
★ iris segmentation
★ iris recognition
Thesis Outline: Abstract
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
Chapter 2 Related Work
Chapter 3 Proposed Method
3.1 K-means Integration of Radial Difference (KIRD)
3.1.1 Pre-processing I (K-Means)
3.1.2 Pre-processing II (Principal Component Analysis)
3.1.3 Compute the Integration of Radial Difference
3.1.4 KIRD for Inner Boundary (KIRDI)
3.1.5 KIRD for Outer Boundary (KIRDO)
3.2 AILIS
3.2.1 The Learning Perspective toward Iris Segmentation
3.2.2 The Iterative Learning Process
3.2.3 Learning for the Pupil Region
3.2.4 Estimating the Iris Boundaries
Chapter 4 Experimental Description
4.1 Database
4.2 KIRD
4.3 AILIS
Chapter 5 Discussion and Conclusion
References
Advisor: 栗永徽    Date of Approval: 2016-01-18
