dc.description.abstract | Music is part of everyday life, and familiar melodies can be heard everywhere. Sometimes we hum a melody that resembles an unknown but familiar tune in order to identify the song that contains it. Query by singing/humming (QbSH) systems have been developed for exactly this purpose. Based on where the features are extracted, we propose two QbSH systems, called Dai-ChouNet27 and QBSHNet03. Dai-ChouNet27, designed with reference to the architecture of DaiNet34, which outperforms other models on the environmental sound recognition task, is an almost fully convolutional neural network whose last two layers are fully connected. The first layer of Dai-ChouNet27 uses a large kernel to filter out noise in the raw waveforms. The subsequent convolutional layers extract high-level features directly from the raw waveforms. Finally, the two fully connected layers classify the features to produce the retrieval results.
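To make this kind of architecture concrete, the following is a minimal PyTorch sketch of a raw-waveform classifier with a large-kernel first convolution, a stack of smaller convolutional blocks, and two fully connected layers on top. The class name, kernel sizes, layer counts, sampling rate, and number of classes are illustrative assumptions, not the exact Dai-ChouNet27 configuration.

```python
import torch
import torch.nn as nn


class RawWaveformCNN(nn.Module):
    """Illustrative raw-waveform classifier: a large-kernel first conv layer
    acting as a learned noise filter, smaller 1-D conv blocks for high-level
    feature extraction, and two fully connected layers for classification."""

    def __init__(self, num_songs: int = 48):
        super().__init__()
        # Large first kernel over the raw waveform (kernel size is assumed).
        self.front = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=80, stride=4),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # A few smaller conv blocks standing in for the deeper feature stack.
        blocks, channels = [], 32
        for out_channels in (64, 128, 256):
            blocks += [
                nn.Conv1d(channels, out_channels, kernel_size=3, padding=1),
                nn.BatchNorm1d(out_channels),
                nn.ReLU(),
                nn.MaxPool1d(4),
            ]
            channels = out_channels
        self.features = nn.Sequential(*blocks)
        # Two fully connected layers classify the pooled features.
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.classifier = nn.Sequential(
            nn.Linear(channels, 128),
            nn.ReLU(),
            nn.Linear(128, num_songs),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, samples) raw audio, e.g. 8 kHz mono clips.
        x = self.front(x)
        x = self.features(x)
        x = self.pool(x).squeeze(-1)
        return self.classifier(x)


if __name__ == "__main__":
    model = RawWaveformCNN(num_songs=48)
    clip = torch.randn(2, 1, 8000 * 8)   # two 8-second clips at 8 kHz (assumed)
    print(model(clip).shape)             # torch.Size([2, 48])
```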
QBSHNet03 is a QbSH system that combines the Shazam algorithm with a convolutional neural network (CNN). In QBSHNet03, the time-domain waveforms are first filtered by a ConvRBM to suppress noise. Features, including frequencies and time differences, are then extracted by the Shazam algorithm from spectrograms computed with the short-time Fourier transform (STFT). After feature extraction, several convolutional layers and two fully connected layers classify the features to obtain the results.
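The sketch below illustrates the Shazam-style fingerprinting step only: spectrogram peaks are located and nearby peaks are paired into (frequency, frequency, time-difference) features. The window length, peak-picking neighbourhood, fan-out, and pairing window are illustrative assumptions; the ConvRBM pre-filtering and the CNN classifier that follow in QBSHNet03 are omitted.

```python
import numpy as np
from scipy.signal import stft
from scipy.ndimage import maximum_filter


def shazam_style_features(waveform, fs=8000, fan_out=5):
    """Illustrative Shazam-style feature extraction: find spectrogram peaks
    and pair nearby peaks into (f1, f2, time-difference) triples."""
    # Spectrogram via STFT (window length and hop are assumed values).
    _, _, Z = stft(waveform, fs=fs, nperseg=512, noverlap=256)
    S = np.abs(Z)
    # Local maxima of the magnitude spectrogram form the "constellation map".
    peaks = (S == maximum_filter(S, size=(15, 15))) & (S > S.mean())
    freq_bins, time_bins = np.nonzero(peaks)
    order = np.argsort(time_bins)
    freq_bins, time_bins = freq_bins[order], time_bins[order]
    # Pair each anchor peak with the next few peaks in time.
    features = []
    for i in range(len(time_bins)):
        for j in range(i + 1, min(i + 1 + fan_out, len(time_bins))):
            dt = time_bins[j] - time_bins[i]
            if 0 < dt <= 50:                      # max pairing distance (assumed)
                features.append((freq_bins[i], freq_bins[j], dt))
    return np.array(features, dtype=np.int64)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clip = rng.standard_normal(8000 * 8)          # stand-in 8-second clip
    print(shazam_style_features(clip).shape)      # (num_pairs, 3)
```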
Three datasets are used to train and test QBSHNet03, Dai-ChouNet27, and DaiNet34: the MIR-QbSH dataset, a dataset of Taiwan's common children's songs, and a dataset of classic English songs. On the MIR-QbSH dataset, Dai-ChouNet27 performs much better than QBSHNet03 and DaiNet34. Its training accuracy and MRR reach 99% and 0.99, respectively, while its testing accuracy, MRR, precision, and recall reach 84%, 0.88, 0.78, and 0.74, respectively. These results suggest that, for the QbSH task, features extracted directly from raw waveforms are more suitable than features extracted from spectrograms. Comparing results across different clip lengths and SNR levels on the three datasets shows that Dai-ChouNet27 achieves outstanding performance when the datasets are large enough. When Dai-ChouNet27 is trained and tested with a suitable clip length and an SNR level at which it still performs well, the training and testing accuracy and MRR reach 84% and 0.87, respectively, and the testing precision and recall reach 0.7. | en_US |