dc.description.abstract | In recent years, rapid growth in hardware computing power, falling hardware costs, and the rise of big data have made machine learning and deep learning increasingly widespread, with recognition and prediction among their most common uses. In the natural world, the electromagnetic spectrum of an object refers to the characteristic frequency distribution of the electromagnetic waves it emits or absorbs. From low to high frequency, the spectrum comprises radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. Across this wide spectral range there are many target recognition and classification problems worth studying. This work explores two of them, one based on machine learning and the other on deep learning.
The first is palmprint recognition. Palmprints are usually captured under visible or infrared light. Although many related studies exist, contactless palmprint recognition and multispectral palmprint recognition are relatively new topics. Collecting palmprint images with contactless equipment offers several advantages, such as user-friendliness, hygiene, and anti-counterfeiting, while multispectral palmprints yield different features depending on the spectral band captured; these two directions are therefore of particular interest to researchers. This work proposes a reliable and robust biometric method based on multispectral palmprint images for personal recognition. First, no prior knowledge about the multispectral images is required, and the parameters used can be set automatically. Second, the palmprint images are captured in a contactless scenario without any docking device, which improves user-friendliness, security, and sanitation. Third, the method can restore the geometric deformations of contactless palmprint images and automatically align and crop the region of interest (ROI). Fourth, a hierarchical fusion scheme comprising data-level and feature-level fusion is introduced. At the data level, the discrete wavelet transform (DWT) decomposes each band's ROI image into four sub-bands, a derived coefficient-merging scheme combines the four coefficient matrices decomposed by the DWT from the four band images, and the inverse discrete wavelet transform (IDWT) reconstructs a single fused ROI image. Fifth, texture-based features are extracted from the fused ROI images using Gabor filters. Sixth, multiresolution analysis (MRA) applies multiple multiresolution filters (MRFs) to extract multiple features from the fused ROI images.
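The data-level fusion step can be sketched as follows. This is a minimal illustration using a hand-rolled one-level Haar DWT in NumPy; the wavelet choice and the coefficient-merging rule (plain averaging here) are stand-in assumptions, not the paper's exact scheme.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns the four sub-bands (LL, LH, HL, HH)."""
    lo = (img[:, ::2] + img[:, 1::2]) / np.sqrt(2)   # row lowpass
    hi = (img[:, ::2] - img[:, 1::2]) / np.sqrt(2)   # row highpass
    LL = (lo[::2] + lo[1::2]) / np.sqrt(2)
    LH = (lo[::2] - lo[1::2]) / np.sqrt(2)
    HL = (hi[::2] + hi[1::2]) / np.sqrt(2)
    HH = (hi[::2] - hi[1::2]) / np.sqrt(2)
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2: reconstructs the image from the four sub-bands."""
    lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
    hi = np.empty_like(lo)
    lo[::2], lo[1::2] = (LL + LH) / np.sqrt(2), (LL - LH) / np.sqrt(2)
    hi[::2], hi[1::2] = (HL + HH) / np.sqrt(2), (HL - HH) / np.sqrt(2)
    img = np.empty((lo.shape[0], lo.shape[1] * 2))
    img[:, ::2], img[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return img

def fuse_bands(band_rois):
    """Data-level fusion: decompose each spectral band's ROI with the DWT,
    merge the corresponding coefficient matrices across bands (averaging is
    a placeholder for the paper's merging scheme), then reconstruct one
    fused ROI with the IDWT."""
    coeffs = [haar_dwt2(roi) for roi in band_rois]
    merged = [np.mean([c[k] for c in coeffs], axis=0) for k in range(4)]
    return haar_idwt2(*merged)
```

Because the DWT is linear, merging coefficients and inverting yields a fused image that blends the structures of all input bands in every sub-band.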
Finally, the high-dimensional feature matrices of each fused ROI image are reshaped and concatenated into a one-dimensional feature vector, which serves as the input to a support vector machine (SVM). The SVM both fuses the multiple features at the feature level and acts as the final classifier.
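The reshape-and-concatenate step and the SVM stage can be sketched as below. The toy data (three 4x4 feature matrices per sample, two identities) and the RBF kernel are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

def to_feature_vector(feature_maps):
    """Feature-level fusion by concatenation: flatten each 2-D feature
    matrix and join them into a single 1-D vector."""
    return np.concatenate([m.ravel() for m in feature_maps])

# Hypothetical toy data: 20 samples, each with three 4x4 feature matrices.
rng = np.random.default_rng(1)
X = np.stack([to_feature_vector(rng.normal(size=(3, 4, 4))) for _ in range(20)])
y = np.arange(20) % 2                      # two hypothetical identities
clf = SVC(kernel="rbf").fit(X, y)          # SVM as the final classifier
print(clf.predict(X[:2]))                  # predicted identities for 2 samples
```

In practice each feature matrix would come from a Gabor or multiresolution filter response rather than random data.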
The second is ship recognition. Ship features are usually collected through radar microwave echoes. Many kinds of features can be used to recognize radar targets, such as high-resolution range profiles (HRRP), synthetic aperture radar (SAR) microwave images, and inverse synthetic aperture radar (ISAR) microwave images. Among them, HRRP is particularly convenient because its data volume is relatively small, so radar automatic target recognition (RATR) based on HRRP has long received extensive attention from RATR researchers. Existing research offers many conventional pattern recognition methods, while deep learning methods remain relatively rare in this field.
The main contributions of this research are to collect and construct a real-life HRRP ship dataset and to apply deep learning methods to ship target recognition, including a convolutional neural network (CNN), long short-term memory (LSTM), bidirectional long short-term memory (BiLSTM), and the proposed model combining a two-channel CNN with a BiLSTM. A radar HRRP describes the radar characteristics of a target: the characteristics of the target reflecting the microwaves emitted by the radar are implicit in it. Conventional HRRP target recognition methods require prior knowledge of the radar. Deep learning has been applied to HRRPs only in recent years, mostly through CNNs and their variants; recurrent neural networks (RNN) and CNN-RNN combinations are still relatively rare. The continuous pulses emitted by the radar strike the ship target, and the received HRRPs of the reflected waves capture the geometric characteristics of the ship's structure: different positions on the ship have different structures, so each range cell of the echo differs, and adjacent structures should also exhibit continuous relational characteristics. This inspired the authors to propose a model that concatenates the features extracted by a two-channel CNN and feeds them into a BiLSTM. The two-channel CNN uses various filters to extract deep features, which are passed to the following BiLSTM. The BiLSTM can effectively capture long-distance dependencies, because it can be trained to retain critical information and model bidirectional temporal dependence; the two-way spatial relationship between adjacent range cells can therefore be exploited to obtain excellent recognition performance. The experimental results reveal that the proposed method is robust and effective for ship recognition. | en_US |
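The two-channel CNN + BiLSTM architecture described above can be sketched in PyTorch. The kernel sizes, channel counts, hidden size, and class count here are illustrative assumptions, not the paper's exact configuration; the point is the structure: two parallel 1-D convolutional branches with different receptive fields, concatenated and fed to a BiLSTM.

```python
import torch
import torch.nn as nn

class TwoChannelCNNBiLSTM(nn.Module):
    """Sketch of the proposed model: two parallel 1-D CNN branches with
    different kernel sizes extract deep features from an HRRP sequence;
    their outputs are concatenated and fed to a BiLSTM, whose final state
    is classified by a linear layer. Layer sizes are assumptions."""
    def __init__(self, n_classes=4, hidden=32):
        super().__init__()
        self.branch_a = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU())
        self.branch_b = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU())
        self.bilstm = nn.LSTM(input_size=32, hidden_size=hidden,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, 1, range_cells)
        feats = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        feats = feats.transpose(1, 2)      # (batch, range_cells, channels)
        out, _ = self.bilstm(feats)        # bidirectional pass over cells
        return self.fc(out[:, -1])         # logits from the last time step

model = TwoChannelCNNBiLSTM()
logits = model(torch.randn(2, 1, 64))      # two HRRPs of 64 range cells
print(logits.shape)                        # torch.Size([2, 4])
```

The bidirectional LSTM processes the range cells in both directions, which is how the two-way spatial relationship between adjacent cells is captured.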