Master's/Doctoral Thesis 109522073 — Detail Record




Name: Chia-Yuan Cheng (鄭嘉元)    Department: Computer Science and Information Engineering
Thesis title: A Lightweight 2D Classifier for Human Palms Based on 3D Cloud Points
Related theses:
★ Applying deep learning to gait recognition combined with automatic person detection
  1. Access permission for this electronic thesis: immediate open access approved.
  2. For theses whose open-access date has been reached, the full text is licensed to users only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): As technology advances in recent years, daily life has become increasingly convenient, and storing data on computers and even in the cloud is now commonplace; protecting that data has therefore become a critical issue. Many systems rely on biometric identity traits for authentication, valued for properties such as uniqueness, convenience, reliability, and resistance to forgery. This study focuses on identity recognition from 3D hand data, which includes fingerprints, palm prints, knuckle patterns, and the overall shape of the hand. The data themselves are stored as point clouds, and projection is used to combine the advantages of 3D and 2D classifiers for identity recognition.
Given the complexity of 3D data, and in order to give the model both the high accuracy of a 3D classifier and the low system complexity of a 2D classifier, we project the 3D data into a 2D representation and feed the projected data into a specially designed classifier. This approach brings several benefits, including access to 2D data augmentation methods (Mixup, crop, etc.) and the wide variety of available 2D classifiers. Experiments show that the proposed projection method combined with the proposed MobileNet-LPB model significantly outperforms 3D-PointNet and 2D-MobileNetv2.
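The abstract's core idea is to project each 3D point cloud into a 2D image so that ordinary 2D classifiers and augmentations can be applied. A minimal sketch of one such projection (an orthographic depth map; the thesis's actual projection method is not specified here and may differ) could look like:

```python
import numpy as np

def project_to_depth_image(points, resolution=64):
    """Orthographically project an (N, 3) point cloud onto the XY plane,
    storing each point's depth (z) in a 2D grid. Hypothetical sketch;
    the thesis's own projection may use a different plane or mapping."""
    pts = np.asarray(points, dtype=np.float64)
    # Normalize x, y into [0, 1] so the hand fills the image.
    mins = pts[:, :2].min(axis=0)
    span = pts[:, :2].max(axis=0) - mins
    span = np.where(span > 0, span, 1.0)       # avoid division by zero
    uv = (pts[:, :2] - mins) / span
    # Map normalized coordinates to pixel indices.
    ij = np.minimum((uv * resolution).astype(int), resolution - 1)
    image = np.zeros((resolution, resolution))
    # Keep the largest depth per pixel, like a simple z-buffer.
    for (i, j), z in zip(ij, pts[:, 2]):
        image[j, i] = max(image[j, i], z)
    return image
```

The resulting 2D array can then be treated as a grayscale image and passed to any 2D CNN.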
Abstract (English): As technology progresses, human life becomes more convenient, and uploading personal data to the cloud is now a prevalent behavior. Protecting these data has therefore become an important issue, in which human recognition systems play an essential role. Many methods identify people by individual characteristics, which should be unique, convenient, reliable, and hard to forge. In this paper, we focus on classifying 3D palm data consisting of the fingerprint, palm print, knuckle pattern, and overall shape of the hand. Considering the complexity of 3D data, we project the 3D data to 2D to gain the many benefits of 2D classification, such as various 2D augmentation methods (MixUp, random crop, etc.) and the diversity of 2D classifiers (VGG, ResNet, and others). Finally, the proposed MobileNet-LPB with the proposed projection method shows a significant performance gap over 2D-MobileNetv2 and 3D-PointNet on the benchmark.
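One of the 2D augmentation methods named in the abstract is MixUp [22], which blends two training samples and their labels by a Beta-distributed weight. A self-contained sketch (function name and `alpha` default are illustrative, not taken from the thesis):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """MixUp augmentation: a convex combination of two samples
    and their (one-hot) labels, weighted by lam ~ Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_mixed = lam * x1 + (1.0 - lam) * x2
    y_mixed = lam * y1 + (1.0 - lam) * y2
    return x_mixed, y_mixed
```

Because the projected hand data are ordinary 2D images, such augmentations apply directly, which is one of the stated advantages over working in 3D.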
Keywords (Chinese)
★ Identity authentication
★ 3D point cloud
★ Lightweight neural network
★ Affine transformation
★ Data augmentation
Keywords (English)
★ Human Recognition
★ 3D Point Cloud
★ Lightweight Neural Network
★ Affine Transformation
★ Data Augmentation
Table of Contents
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Tables
Chapter 1: Introduction
1-1 Research Motivation
1-2 Research Objectives and Methods
1-3 Thesis Organization
Chapter 2: Related Work
2-1 PointNet
2-2 2D Classifiers
2-3 Lightweight Convolutional Neural Networks
2-3-1 MobileNets
2-3-2 MobileNetv2
2-4 Squeeze-and-Excitation Networks
Chapter 3: Research Content and Architecture
3-1 Properties of Point Clouds
3-2 Projection Method
3-3 Data Augmentation
3-4 Neural Network Architecture
3-4-1 MobileNet-LPB
3-4-2 Lightweight Performance Booster
3-4-3 Loss Function
3-4-4 Optimizer
Chapter 4: Experimental Results and Discussion
4-1 Hardware, Software, and Research Environment
4-2 Database Description
4-3 Experiment Description
4-4 Network Parameters
4-5 Theoretical Speed
4-6 Data Augmentation Analysis
Chapter 5: Conclusion
5-1 Conclusion
5-2 Future Work
References
References
[1] PolyU Palmprint Database. [Online]. Available: http://www.comp.polyu.edu.hk/~biometrics
[2] CASIA Palmprint Database. [Online]. Available: http://biometrics.idealtest.org/
[3] College of Engineering Pune (An Autonomous Institute of Government of Maharashtra), COEP Palm Print Database. [Online]. Available: https://www.coep.org.in/resources/coeppalmprintdatabase
[4] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. J. Guibas, “Volumetric and multi-view CNNs for object classification on 3D data,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 5648–5656.
[5] S. Shi, X. Wang, and H. Li, “PointRCNN: 3D object proposal generation and detection from point cloud,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2019, pp. 770–779.
[6] C. Xu, B. Wu, Z. Wang, W. Zhan, P. Vajda, K. Keutzer, and M. Tomizuka, “SqueezeSegV3: Spatially-adaptive convolution for efficient point-cloud segmentation,” in Proc. Eur. Conf. Comput. Vis. (ECCV), 2020.
[7] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “PointNet: Deep learning on point sets for 3D classification and segmentation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 652–660.
[8] Y. Chen, V. T. Hu, E. Gavves, T. Mensink, P. Mettes, P. Yang, and C. G. M. Snoek, “PointMixup: Augmentation for point clouds,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Lecture Notes in Computer Science, 2020, pp. 330–345.
[9] S. Kim, S. Lee, D. Hwang, J. Lee, S. J. Hwang, and H. J. Kim, “Point cloud augmentation with weighted local transformations,” in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2021, pp. 548–557.
[10] D. Lee et al., “Regularization strategy for point cloud via rigidly mixed sample,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2021, pp. 15900–15909.
[11] S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang, and H. Li, “PV-RCNN: Point-voxel feature set abstraction for 3D object detection,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2020.
[12] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller, “Multi-view convolutional neural networks for 3D shape recognition,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2015, pp. 945–953.
[13] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. J. Guibas, “Volumetric and multi-view CNNs for object classification on 3D data,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 5648–5656.
[14] L. Li, S. Zhu, H. Fu, P. Tan, and C.-L. Tai, “End-to-end learning local multi-view descriptors for 3D point clouds,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2020, pp. 1919–1928.
[15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2324, Nov. 1998.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. 25th Int. Conf. Neural Inf. Process. Syst., 2012, pp. 1097–1105.
[17] A. G. Howard et al., “MobileNets: Efficient convolutional neural networks for mobile vision applications,” Apr. 2017. [Online]. Available: https://arxiv.org/abs/1704.04861
[18] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “MobileNetV2: Inverted residuals and linear bottlenecks,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2018, pp. 4510–4520.
[19] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2009, pp. 248–255.
[20] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Technical Report, Univ. of Toronto, 2009.
[21] T.-Y. Lin et al., “Microsoft COCO: Common objects in context,” in Proc. 13th Eur. Conf. Comput. Vis. (ECCV), 2014, pp. 740–755.
[22] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, “mixup: Beyond empirical risk minimization,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2018.
[23] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 770–778.
[24] M. Lin, Q. Chen, and S. Yan, “Network in network,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2014.
[25] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2015.
[26] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017.
[27] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2015.
[28] V. Kanhangad, A. Kumar, and D. Zhang, “Contactless and pose invariant biometric identification using hand surface,” IEEE Trans. Image Process., vol. 20, no. 5, pp. 1415–1424, May 2011.
[29] Minolta Vivid 910 Non-contact 3D Digitizer, 2008. [Online]. Available: http://www.konicaminolta.com/instruments/products/3d/non-contact/vivid910/index.html
[30] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, “ShuffleNet V2: Practical guidelines for efficient CNN architecture design,” in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 116–131.
[31] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” arXiv preprint arXiv:1709.01507, 2017.
Advisors: Kuo-Chin Fan, Chih-Lung Lin    Approval date: 2022-07-25
