Thesis/Dissertation 105582004: Detailed Record




Author: Tsung-Pin Chen (陳宗斌)    Department: Computer Science and Information Engineering
Thesis Title: Research on Target Feature Extraction and Recognition Methods for Different Spectrum Based on Machine Learning
(基於機器學習進行不同頻譜之目標特徵擷取與識別方法研究)
Related Theses
★ Multimodal Biometrics on Multispectral Palm Images Using Deep Learning Neural Networks ★ Radar Automatic Target Recognition Using Long Short-Term Memory Networks
  1. The author has agreed to make this electronic thesis openly available to the public immediately.
  2. Open-access electronic full texts are licensed to users for academic research only: personal, non-profit retrieval, reading, and printing.
  3. Please observe the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) In recent years, with the rapid growth of hardware computing power, falling deployment costs, and the rise of big data, applications of machine learning and deep learning have become increasingly widespread. Their most common uses are recognition and prediction. In nature, the electromagnetic spectrum of an object refers to the characteristic frequency distribution of the electromagnetic waves that the object emits or absorbs. From low to high frequency, the electromagnetic spectrum comprises radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. Within this broad spectral range, many target recognition and classification topics deserve study; we explore two of them as recognition applications based on machine learning and deep learning methods, respectively.
The first research topic is palmprint recognition. Palmprints are usually captured with visible or infrared light. Although much related work already exists, contactless palmprint recognition and multispectral palmprint recognition are relatively new topics. Collecting palmprint images with contactless equipment offers several advantages, such as user-friendliness, hygiene, and anti-counterfeiting, while multispectral palmprints yield different features because different spectral bands are captured; these two directions are therefore especially attractive to researchers. This dissertation proposes a multi-level fusion method based on multispectral palmprint images for personal recognition. First, no prior knowledge about the multispectral images is required, and the parameters used can be set automatically. Second, because the palmprint images are captured in contactless scenarios, the proposed method improves user-friendliness, security, and hygiene. Third, it can restore the geometric deformation of contactless palmprint images and automatically align and crop the region of interest on the palmprint images. Fourth, a hierarchical fusion scheme is introduced, including data-level and feature-level fusion. The data-level fusion uses the discrete wavelet transform to decompose each region-of-interest image into four bands and the inverse discrete wavelet transform to fuse the four band images at the data level; a coefficient merging scheme is also proposed to merge the four coefficient matrices decomposed by the discrete wavelet transform from the four band images. Fifth, texture-based features are extracted from the fused region-of-interest images with Gabor filters. Sixth, multiple features are obtained through multiresolution analysis, which applies multiple multiresolution filters to extract features from the fused region-of-interest images. Finally, the high-dimensional feature matrices of each fused region-of-interest image are reshaped into a one-dimensional feature vector that serves as the input to a support vector machine classifier; the support vector machine fuses the multiple features at the feature level and also acts as the classifier.
The second research topic is ship recognition. Feature information about ships can usually be gathered from the echoes of radar microwaves. Many features can currently be used to recognize radar targets, including high-resolution range profiles, synthetic aperture radar microwave images, and inverse synthetic aperture radar microwave images. Among them, the high-resolution range profile is convenient and easy to use because its data volume is relatively small, so radar automatic target recognition based on high-resolution range profiles has long drawn the attention of experts and scholars in the field. Existing research contains many conventional pattern recognition methods, while deep learning methods are comparatively rare. The main contributions of this study are the collection and construction of a real-life high-resolution range profile ship dataset and a focus on deep learning methods for ship target recognition and classification, including convolutional neural networks, long short-term memory networks, bidirectional long short-term memory networks, and the proposed model that combines a two-channel convolutional neural network with a bidirectional long short-term memory network. In conventional radar high-resolution range profile target recognition, prior knowledge of the radar is indispensable. Deep learning methods have been applied to high-resolution range profiles only in recent years, mostly convolutional neural networks and their variants; recurrent neural networks, and combinations of recurrent and convolutional networks, are used relatively rarely. The continuous pulses emitted by the radar strike the ship target, and the high-resolution range profile of the received echo appears to carry the geometric characteristics of the ship's structure. When radar pulses reach the ship, different positions on the ship have different structures, so each range cell of the reflected echo differs, and adjacent structures should also exhibit continuous relational characteristics. This inspired the author to propose a model that concatenates the features extracted by a two-channel convolutional neural network with a bidirectional long short-term memory network. Different filters are used in the two-channel convolutional neural network to extract deeper features, which are fed into the subsequent bidirectional long short-term memory network. The bidirectional long short-term memory model can effectively retain critical information and capture bidirectional temporal dependence, so the bidirectional spatial relationship between adjacent range cells can serve as a discriminative recognition feature. Experimental results show that the method is robust and effective for ship recognition.
Abstract (English) In recent years, with the rapid increase in hardware computing power, the decrease in deployment costs, and the rise of big data, applications of machine learning and deep learning have become increasingly popular. The most common uses of these technologies are recognition and prediction. In the natural world, the electromagnetic spectrum of an object refers to the characteristic frequency distribution of the electromagnetic waves emitted or absorbed by that object. From low to high frequency, the electromagnetic spectrum comprises radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. Within this wide spectral range there are many topics worth investigating in target recognition and classification; we explore two of them as recognition applications based on machine learning and deep learning methods, respectively.
The first topic is palmprint recognition. Palmprints are usually captured with visible or infrared light. Although many related studies exist, contactless palmprint recognition and multispectral palmprint recognition are relatively new topics. Collecting palmprint images with contactless equipment offers several advantages, such as user-friendliness, hygiene, and anti-counterfeiting, and multispectral palmprints yield different features because different spectral bands are captured. These two directions are therefore especially interesting to researchers. This dissertation proposes a reliable and robust biometric method based on multispectral palmprint images for personal recognition. Firstly, no prior knowledge about the multispectral images is necessary, and the parameters used can be set automatically. Secondly, the palmprint images are captured in contactless scenarios without any docking device, so the proposed approach improves user-friendliness, security, and sanitation. Thirdly, it can restore the geometric deformations of contactless palmprint images and automatically align and crop the region of interest (ROI) on the palmprint images. Fourthly, a hierarchical fusion scheme, including data-level and feature-level fusion, is introduced. The data-level fusion uses the discrete wavelet transform (DWT) to decompose each ROI image into four bands and the inverse discrete wavelet transform (IDWT) to fuse the four band images at the data level; a coefficient merging scheme is also derived to merge the four coefficient matrices decomposed by the DWT from the four band images. Fifthly, texture-based features are extracted from the fused ROI images using Gabor filters. Sixthly, multiple features are obtained through multiresolution analysis (MRA), which applies multiple multiresolution filters (MRFs) to extract features from the fused ROI images.
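To make the data-level fusion step concrete, the sketch below implements a single-level 2-D Haar DWT/IDWT in NumPy together with a hypothetical coefficient merging rule (average the approximation subbands across spectral bands and keep the strongest detail coefficients). The Haar basis and this particular merging rule are illustrative assumptions, not necessarily the dissertation's exact scheme.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT: returns the (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # row-wise low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # row-wise high-pass
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,  # LL (approximation)
            (a[:, 0::2] - a[:, 1::2]) / 2.0,  # LH
            (d[:, 0::2] + d[:, 1::2]) / 2.0,  # HL
            (d[:, 0::2] - d[:, 1::2]) / 2.0)  # HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def fuse_bands(band_images):
    """Data-level fusion of several spectral-band ROI images: decompose each
    band with the DWT, merge the coefficient matrices, and reconstruct one
    fused ROI with the IDWT."""
    subbands = [haar_dwt2(b) for b in band_images]
    LL = np.mean([s[0] for s in subbands], axis=0)  # average approximations
    details = []
    for k in (1, 2, 3):  # LH, HL, HH: keep the strongest coefficient per pixel
        stack = np.stack([s[k] for s in subbands])
        idx = np.argmax(np.abs(stack), axis=0)
        details.append(np.take_along_axis(stack, idx[None], axis=0)[0])
    return haar_idwt2(LL, *details)
```

Fusing four identical band images reproduces the original image exactly, which is a quick sanity check that the DWT/IDWT pair achieves perfect reconstruction.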
Finally, the high-dimensional feature matrices of each fused ROI image are reshaped and concatenated into a one-dimensional feature vector, which is used as the input to a support vector machine (SVM) classifier. The SVM fuses the multiple features at the feature level and also serves as the classifier.
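The reshape-and-concatenate step and the SVM's dual role can be sketched as follows; the toy feature maps, their sizes, and the RBF kernel settings are invented for the example (and scikit-learn is assumed available), so this is an illustration of the idea rather than the dissertation's actual configuration.

```python
import numpy as np
from sklearn.svm import SVC

def to_feature_vector(feature_matrices):
    """Feature-level fusion input: flatten each high-dimensional feature
    matrix and concatenate everything into one 1-D vector for the SVM."""
    return np.concatenate([m.ravel() for m in feature_matrices])

# Synthetic stand-ins for the Gabor/MRA feature maps of two individuals.
rng = np.random.default_rng(1)
X, y = [], []
for label, offset in [(0, 0.0), (1, 1.0)]:
    for _ in range(10):
        maps = [rng.normal(offset, 0.1, size=(4, 4)) for _ in range(3)]
        X.append(to_feature_vector(maps))
        y.append(label)

# The SVM both fuses the concatenated features and acts as the classifier.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
```

Because all feature maps end up in one input vector, the SVM weighs every feature jointly, which is what makes the fusion happen at the feature level rather than at the score level.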
The second topic is ship recognition. Feature information about ships can usually be gathered from the echoes of radar microwaves. Many kinds of features can be used to recognize radar targets, such as high-resolution range profiles (HRRPs), synthetic aperture radar (SAR) microwave images, and inverse synthetic aperture radar (ISAR) microwave images. Among them, the HRRP is convenient to use because its data volume is relatively small, so radar automatic target recognition (RATR) based on HRRPs has long received extensive attention from experts and scholars engaged in RATR research. Existing research offers many conventional pattern recognition methods, while relatively few deep learning methods have been applied in this field.
The main contributions of this research are to collect and construct a real-life HRRP ship dataset and to focus on deep learning methods for recognizing ship targets, including the convolutional neural network (CNN), long short-term memory (LSTM), bidirectional long short-term memory (BiLSTM), and the proposed model combining a two-channel CNN with a BiLSTM. Radar HRRPs describe the radar characteristics of a target; that is, the characteristics of the target reflected by the microwaves emitted by the radar are implicit in them. In conventional radar HRRP target recognition methods, prior knowledge of the radar is necessary for target recognition. Deep learning methods have been applied to HRRPs only in recent years, and most of them are CNNs and their variants; recurrent neural networks (RNNs), and combinations of RNNs and CNNs, are used relatively rarely. The continuous pulses emitted by the radar hit the ship target, and the HRRPs of the received echoes appear to provide the geometric characteristics of the ship target's structure. When radar pulses are transmitted to the ship, different positions on the ship have different structures, so each range cell of the echo reflected in the HRRP differs, and adjacent structures should also exhibit continuous relational characteristics. This inspired the authors to propose a model that concatenates the features extracted by a two-channel CNN with a BiLSTM. Various filters are used in the two-channel CNN to extract deep features, which are fed into the following BiLSTM. The BiLSTM model can effectively capture long-distance dependence because it can be trained to retain critical information and achieve bidirectional temporal dependence. Therefore, the bidirectional spatial relationship between adjacent range cells can be exploited to obtain excellent recognition performance. The experimental results revealed that the proposed method is robust and effective for ship recognition.
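A minimal PyTorch sketch of the two-channel CNN + BiLSTM idea described above; the two kernel widths (3 and 7), channel counts, class count, and HRRP length are illustrative assumptions rather than the dissertation's exact architecture.

```python
import torch
import torch.nn as nn

class TwoChannelCNNBiLSTM(nn.Module):
    """Two 1-D conv branches with different kernel sizes extract deep
    features from an HRRP; their outputs are concatenated channel-wise
    and fed to a BiLSTM that models the relations between adjacent
    range cells in both directions."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.branch_small = nn.Sequential(  # narrow receptive field
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2))
        self.branch_large = nn.Sequential(  # wide receptive field
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2))
        self.bilstm = nn.LSTM(input_size=32, hidden_size=32,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(64, n_classes)  # 64 = 2 directions x 32 hidden

    def forward(self, x):                    # x: (batch, 1, n_range_cells)
        f = torch.cat([self.branch_small(x), self.branch_large(x)], dim=1)
        f = f.permute(0, 2, 1)               # -> (batch, time, features)
        out, _ = self.bilstm(f)
        return self.fc(out[:, -1, :])        # last time step -> class logits
```

For a batch of two 128-cell HRRPs, `model(torch.randn(2, 1, 128))` yields a logit tensor of shape `(2, 4)`, one score per ship class.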
Keywords (Chinese) ★ multispectral palmprint recognition
★ wavelet transform
★ Gabor filter
★ support vector machine
★ radar automatic target recognition
★ high-resolution range profile
Keywords (English) ★ multispectral palmprint recognition
★ wavelet transform
★ Gabor filter
★ support vector machine (SVM)
★ radar automatic target recognition (RATR)
★ high-resolution range profile (HRRP)
Table of Contents Abstract (Chinese) I
Abstract V
Acknowledgements IX
Table of Contents XI
List of Figures XIII
List of Tables XVI
Chapter 1: Introduction 1
1.1 Background 1
1.2 Motivation 2
1.3 Organization of the Dissertation 6
Chapter 2: Palmprint Recognition 8
2.1 Related Works for Palmprint Recognition 10
2.2 The Proposed Method for Palmprint Recognition 12
2.2.1 Deformed Palm Image Restoration 14
2.2.2 ROI Alignment and Cropping 17
2.2.3 Data Level Fusion with Wavelet Transform 18
2.2.3.1 Discrete Wavelet Transform 19
2.2.3.2 Coefficient Merging Scheme 20
2.2.3.3 Inverse Discrete Wavelet Transform 21
2.2.4 Texture-based Feature Extraction 24
2.2.5 Multiresolution Feature Matrix Extraction 25
2.2.6 Recognition by Using SVM 32
Chapter 3: Ship Recognition 35
3.1 Related Works for Ship Recognition 37
3.2 Collection and Construction of Dataset 42
3.3 Preprocessing 47
3.3.1 Non-coherent Integration 47
3.3.2 Eliminate Noisy Range Cells 48
3.3.3 Data Format Transformation 48
3.3.3.1 One-dimensional HRRP Data Format 48
3.3.3.2 Two-dimensional Gray-scale HRRP Data Format 49
3.3.3.3 Two-dimensional Binary-map HRRP Data Format 50
3.4 Theory of Relevant Neural Networks 51
3.4.1 Convolutional Neural Network 51
3.4.1.1 Convolutional Layer 52
3.4.1.2 Pooling Layer 53
3.4.1.3 Fully-connected Layer 53
3.4.1.4 Output Layer 54
3.4.2 Bidirectional Long Short-Term Memory Network 54
3.4.3 Activation Function 57
3.4.4 Loss Function 58
3.4.5 Optimizer 59
3.4.6 Batch Normalization 61
3.5 The Proposed Method for Ship Recognition 63
3.5.1 The Proposed 3Conv+2FullyConnected CNN Model 63
3.5.2 The Proposed Two-channel CNN+BiLSTM Model 65
Chapter 4: Experiments 70
4.1 Experiments for Palmprint Recognition 70
4.1.1 Palmprint Database 70
4.1.2 Experiments 71
4.1.2.1 Optimized SVM Parameters 72
4.1.2.2 Impact of Different Number of Training Samples for Each Individual 77
4.1.2.3 Influence of the Number of Individuals 79
4.1.2.4 The Proposed Approach vs. Gabor Filter Only 81
4.2 Experiments for Ship Recognition 84
4.2.1 HRRP Ship Database 84
4.2.2 Experiments 85
4.2.2.1 Experiments of the Proposed CNN 85
4.2.2.2 Experiments of the Proposed 2CNN+BiLSTM 93
4.2.3 Comparison of State-of-the-art Approaches 104
Chapter 5: Conclusions and Future Works 106
5.1 Conclusions 106
5.2 Future Works 109
References 111
Publication List 119
Appendix A 121
Advisors: Kuo-Chin Fan, Chih-Lung Lin (范國清、林志隆)    Date of Approval: 2021-06-22
