Master's/Doctoral Thesis 108522093 — Detailed Record




Name: Ya-Yi Tsai (蔡亞嶧)    Department: Computer Science and Information Engineering
Thesis title: Applying Deep Learning Neural Networks to Multi-Modal Biometric Recognition Based on Multi-Spectral Palm Images
Related theses
★ Target feature extraction and recognition across different spectra based on machine learning ★ Automatic radar target recognition using long short-term memory networks
  1. This electronic thesis is authorized for immediate open access.
  2. Open-access electronic full texts are authorized for academic research purposes only: personal, non-profit retrieval, reading, and printing.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

摘要(中) [Abstract, Chinese] In an era in which people place ever greater value on personal privacy, many parties are devising new authentication methods to ensure that a user's identity can be verified without being stolen. Traditionally, many methods identify a person through the uniqueness of a single biometric trait, such as the palm print, fingerprint, or iris; the accuracy of identification based on a single biometric still leaves room for improvement. At the same time, deep learning networks have in recent years been applied across many research fields, driven by advances in high-performance computing. This thesis therefore studies the application of deep learning neural networks to multi-modal biometric recognition of multi-spectral palm images, with the aim of improving identification accuracy.
The experiments are based on the CASIA multi-spectral palm image dataset. Each palm image in each spectrum contains multi-modal biometric features such as the palm print, hand shape, knuckle patterns, and fingerprints. These multi-modal features are fed into a deep learning network so that every biometric feature in the palm image contributes to recognition accuracy. To improve the accuracy of the trained models, we first trained and tested network models without pre-training; when the results fell short of expectations, we introduced pre-trained models, after which accuracy rose substantially. We then used several data augmentation methods to verify how well each pre-trained model adapts to different sources of variation: the experiments applied common changes in the imaging environment, such as random rotation, translation, and brightness shifts. This not only improved model accuracy but also strengthened the models' robustness to image variation. The experimental results confirm that identity recognition from multi-modal, multi-spectral palm features has great development potential when an appropriate pre-trained deep learning model is combined with suitable data augmentation.
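The three augmentation operations named in the abstract (random translation, random rotation, and random brightness change) can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: the shift range, angle range, brightness factors, and image size are all hypothetical, and SciPy's `ndimage` stands in for whatever augmentation tooling the author used.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def augment(image, max_shift=10, max_angle=15, brightness_range=(0.8, 1.2)):
    """Apply one random translation, rotation, and brightness change
    to a single-channel image with values in [0, 255]."""
    # Random translation: shift up to +/- max_shift pixels on each axis.
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    out = ndimage.shift(image, (dy, dx), mode="nearest")
    # Random rotation: small angle, same output shape, edges filled
    # with the nearest pixel values.
    angle = rng.uniform(-max_angle, max_angle)
    out = ndimage.rotate(out, angle, reshape=False, mode="nearest")
    # Random brightness: multiplicative scaling, clipped to the valid range.
    scale = rng.uniform(*brightness_range)
    return np.clip(out * scale, 0, 255)

palm = rng.uniform(0, 255, size=(128, 128))    # stand-in for a palm image
augmented = [augment(palm) for _ in range(4)]  # four perturbed copies
```

In practice such transforms are applied on the fly during training, so each epoch sees a differently perturbed copy of every image.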
摘要(英) [Abstract, English] In an age when people are increasingly concerned about personal privacy, all parties are working to devise new methods of identity authentication that ensure a user's identity can be verified and not stolen. This thesis therefore investigates the application of deep learning neural networks to multi-modal biometric recognition of multi-spectral palm images in order to improve identification accuracy. We use the CASIA multi-spectral palm image dataset as the basis of our experiments. Each palm image in each spectrum contains multi-modal biometric features such as the palm print, hand shape, knuckle patterns, and fingerprints. These multi-modal features are input into the deep learning network, and every biometric feature of the palm image is applied to improve recognition accuracy.
To improve the accuracy of the trained models effectively, we first trained and tested network models without pre-training and found the results poorer than expected; we then introduced pre-trained models as a way to improve recognition accuracy, and the accuracy obtained after training increased significantly. In the experiments, data augmentation was performed using common changes in the imaging environment, such as random rotation, translation, and brightness shifts; this not only improved model accuracy but also improved the models' ability to adapt to image variation. The experimental results validate the potential of identity recognition from multi-spectral, multi-modal palm features when appropriate pre-trained deep learning models and suitable data augmentation are used.
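The transfer-learning strategy the abstract describes (reusing a pre-trained feature extractor unchanged and training only a new classification head) can be illustrated with a toy sketch. The fixed random projection below merely stands in for the ImageNet-pre-trained networks (VGG16, ResNet50, etc.) actually used in the thesis; the dimensions, class count, and training loop are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a frozen, pre-trained feature extractor: a fixed random
# projection from raw inputs to a feature vector. In the thesis this role
# is played by networks such as ResNet50 pre-trained on ImageNet.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen weights, ReLU

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Tiny synthetic dataset: 3 identities, 10 samples each, 64 raw features.
n_classes = 3
X = rng.normal(size=(30, 64))
y = np.repeat(np.arange(n_classes), 10)
F = extract_features(X)          # computed once: the extractor never updates

# Only the new classification head is trained, with plain gradient
# descent on the softmax cross-entropy loss.
W_head = np.zeros((16, n_classes))
onehot = np.eye(n_classes)[y]
for _ in range(500):
    probs = softmax(F @ W_head)
    grad = F.T @ (probs - onehot) / len(y)
    W_head -= 0.1 * grad

accuracy = (softmax(F @ W_head).argmax(axis=1) == y).mean()
```

Fine-tuning in the Keras sense additionally unfreezes some or all of the extractor's layers after the head has converged; the frozen-extractor form above is just the simplest variant of the idea.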
Keywords ★ Neural Networks
★ Deep Learning
★ Palm Recognition
★ Pre-training
★ Data Augmentation
Table of contents
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1-1 Research Background and Motivation
1-2 Research Objectives
Chapter 2  Related Work
2-1 Traditional Palmprint Recognition
2-2 The CASIA Multi-Spectral Palm Dataset
2-3 Transfer Learning and Pre-trained Neural Network Models
2-3-1 Transfer Learning
2-3-2 Pre-trained Neural Network Models
2-4 Deep Learning Neural Networks
2-4-1 VGG16
2-4-2 Xception
2-4-3 ResNet50
2-4-4 ResNet50v2
2-4-5 DenseNet121
2-5 Data Augmentation
2-6 Thesis Organization
Chapter 3  Methodology
3-1 Dataset
3-2 Pre-trained Neural Network Architectures
3-3 Data Augmentation
3-3-1 Random Translation
3-3-2 Random Rotation
3-3-3 Random Brightness
3-3-4 Combining the Three Methods
3-3-5 Randomly Applying the Three Methods
3-3-6 Further Expanding the Training Set
3-4 Experimental Environment
Chapter 4  Experimental Results and Discussion
4-1 Results of Models without Pre-training on the Original Dataset
4-2 Results on the Original Training Set
4-3 Training Sets after Data Augmentation
4-3-1 Random Translation
4-3-2 Random Rotation
4-3-3 Random Brightness
4-3-4 Combining the Three Methods
4-3-5 Randomly Applying the Three Methods
4-3-6 Results with 12,000 Training Samples
4-3-7 Results with 24,000 Training Samples
4-4 Expanding the Test Set to 2,400 Samples
4-5 The Limits of Data Augmentation
4-6 Comparison with Other Methods
Chapter 5  Conclusions and Future Work
References
References
[1] ImageNet: http://image-net.org/
[2] Khan, Zohaib, et al. "Multispectral palmprint encoding and recognition." arXiv preprint arXiv:1402.2941 (2014).
[3] Biometrics Ideal Test http://biometrics.idealtest.org/index.jsp
[4] CASIA Multi-Spectral Palmprint Database http://biometrics.idealtest.org/dbDetailForUser.do?id=6
[5] Weiss, Karl, Taghi M. Khoshgoftaar, and DingDing Wang. "A survey of transfer learning." Journal of Big Data 3.1 (2016): 1-40.
[6] Yosinski, Jason, et al. "How transferable are features in deep neural networks?." arXiv preprint arXiv:1411.1792 (2014).
[7] He, Kaiming, Ross Girshick, and Piotr Dollár. "Rethinking imagenet pre-training." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.
[8] Kolesnikov, Alexander, et al. "Big Transfer (BiT): General visual representation learning." arXiv preprint arXiv:1912.11370 (2019).
[9] LeCun, Yann, et al. "Gradient-based learning applied to document recognition." Proceedings of the IEEE 86.11 (1998): 2278-2324.
[10] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in neural information processing systems 25 (2012): 1097-1105.
[11] Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014).
[12] Szegedy, Christian, et al. "Going deeper with convolutions." Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.
[13] Sifre, Laurent, and Stéphane Mallat. "Rigid-motion scattering for image classification." arXiv preprint arXiv:1403.1687 (2014).
[14] Howard, Andrew G., et al. "Mobilenets: Efficient convolutional neural networks for mobile vision applications." arXiv preprint arXiv:1704.04861 (2017).
[15] Chollet, François. "Xception: Deep learning with depthwise separable convolutions." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
[16] He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
[17] He, Kaiming, et al. "Identity mappings in deep residual networks." European conference on computer vision. Springer, Cham, 2016.
[18] Huang, Gao, et al. "Densely connected convolutional networks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
[19] Wong, Sebastien C., et al. "Understanding data augmentation for classification: when to warp?." 2016 international conference on digital image computing: techniques and applications (DICTA). IEEE, 2016.
[20] Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." The journal of machine learning research 15.1 (2014): 1929-1958.
[21] Kingma, Diederik P., and Jimmy Ba. "Adam: A method for stochastic optimization." arXiv preprint arXiv:1412.6980 (2014).
[22] Janocha, Katarzyna, and Wojciech Marian Czarnecki. "On loss functions for deep neural networks in classification." arXiv preprint arXiv:1702.05659 (2017).
[23] Gong, Weiyong, et al. "Palmprint recognition based on convolutional neural network-Alexnet." 2019 Federated Conference on Computer Science and Information Systems (FedCSIS). IEEE, 2019.
[24] Dong, Xueqiu, Liye Mei, and Junhua Zhang. "Palmprint Recognition Based on Deep Convolutional Neural Networks." International Conference on Computer Science and Intelligent Communication. 2018.

[25] Sun, Zhenan, et al. "Ordinal palmprint representation for personal identification." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2005.
[26] Wu, Xiangqian, Kuanquan Wang, and David Zhang. "Palmprint texture analysis using derivative of Gaussian filters." 2006 International Conference on Computational Intelligence and Security. Vol. 1. IEEE, 2006.
[27] Kong, Adams W.-K., and David Zhang. "Competitive coding scheme for palmprint verification." Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004). Vol. 1. IEEE, 2004.
[28] Raghavendra, Ramachandra, and Christoph Busch. "Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition." Pattern recognition 47.6 (2014): 2205-2221.
[29] Thamri, Essia, Kamel Aloui, and Mohamed Saber Naceur. "New approach to extract palmprint lines." 2018 International Conference on Advanced Systems and Electric Technologies (IC_ASET). IEEE, 2018.
Advisors: Kuo-Chin Fan, Chih-Lung Lin (范國清、林志隆)    Date of approval: 2021-08-03

For questions about this thesis, please contact the Promotion Services Division, National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.