References
[1]A. K. Nandi and E. E. Azzouz, “Algorithms for automatic modulation recognition of communication signals,” IEEE Transactions on Communications, vol. 46, no. 4, pp. 431-436, Apr 1998.
[2]林聰岷, “Classification of phase-shift keying modulation signals using higher-order statistics,” M.S. thesis, Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan, 2012.
[3]C. M. Spooner, “On the utility of sixth-order cyclic cumulants for RF signal classification,” Conference Record of Thirty-Fifth Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 890-897, 2001.
[4]R. M. Al-Makhlasawy, M. M. A. Elnaby, H. A. Al-Khobby, E. M. El-Rabaie and F. E. A. El-samie, “Automatic modulation recognition in OFDM systems using cepstral analysis and support vector machines,” J. Telecomm. Syst. Manag., vol. 1, no. 3, pp. 1-7, 2012.
[5]K. Triantafyllakis, M. Surligas, G. Vardakis and S. Papadakis, “Phasma: An automatic modulation classification system based on Random Forest,” 2017 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), pp. 1-3, 2017.
[6]J.C. Lin and H.Y. Hsu, “Timing-delay and frequency-offset estimations for initial synchronisation on time-varying Rayleigh fading channels,” IET Commun., vol. 7, iss. 6, pp. 562-576, 2013.
[7]J.C. Lin, “A frequency offset estimation technique based on frequency error characterization for OFDM communication on time-varying multipath fading channels,” IEEE Trans. Vehic. Technol., vol. 56, no. 3, pp. 1209-1222, May 2007.
[8]K.P. Chou, J.-C. Lin and H. V. Poor, “Disintegrated channel estimation in filter-and-forward relay networks,” IEEE Trans. Commun., vol. 64, no. 7, pp. 2835-2847, Jul. 2016.
[9]J.C. Lin, “Least-squares channel estimation for mobile OFDM communication on time-varying frequency-selective fading channels,” IEEE Trans. Vehic. Technol., vol. 57, no. 6, pp. 3538-3550, Nov. 2008.
[10]J.C. Lin, “Least-squares channel estimation assisted by self-interference cancellation for mobile PRP-OFDM applications,” IET Commun., vol. 3, iss. 12, pp.1907-1918, Dec. 2009.
[11]T. J. O’Shea, J. Corgan and T. C. Clancy, “Convolutional radio modulation recognition networks,” International Conference on Engineering Applications of Neural Networks, vol. 629, pp. 213-226, 2016.
[12]W. Yongshi, G. Jie, L. Hao, L. Li, W. Zhigang and W. Houjun, “CNN-based modulation classification in the complicated communication channel,” 2017 13th IEEE International Conference on Electronic Measurement & Instruments (ICEMI), pp. 512-516, 2017.
[13]J. Zhang, Y. Li and J. Yin, “Modulation classification method for frequency modulation signals based on the time–frequency distribution and CNN,” IET Radar, Sonar & Navigation, vol. 12, pp. 244-249, 2017.
[14]M. Kulin, T. Kazaz, I. Moerman and E. De Poorter, “End-to-end learning from spectrum data: A deep learning approach for wireless signal identification in spectrum monitoring applications,” IEEE Access, vol. 6, pp. 18484-18501, 2018.
[15]T. J. O’Shea, N. West, M. Vondal and T. C. Clancy, “Semi-supervised radio signal identification,” 2017 19th International Conference on Advanced Communication Technology (ICACT), pp. 33-38, 2017.
[16]D. Hong, Z. Zhang and X. Xu, “Automatic modulation classification using recurrent neural networks,” 2017 3rd IEEE International Conference on Computer and Communications (ICCC), pp. 695-700, 2017.
[17]S. Rajendran, W. Meert, D. Giustiniano, V. Lenders and S. Pollin, “Distributed deep learning models for wireless signal classification with low-cost spectrum sensors,” arXiv:1707.08908, 2017.
[18]N. E. West and T. O’Shea, “Deep architectures for modulation recognition,” 2017 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), pp. 1-6, 2017.
[19]X. Liu, D. Yang and A. E. Gamal, “Deep neural network architectures for modulation classification,” arXiv:1712.00443, 2017.
[20]Y. S. Abu-Mostafa, M. Magdon-Ismail and H.-T. Lin. (2016, Dec 26). Machine Learning Foundations. [Online]. Available: https://www.csie.ntu.edu.tw/~htlin/mooc/doc/10_present.pdf
[21]V. Nair and G.E. Hinton, "Rectified Linear Units Improve Restricted Boltzmann Machines," Proceedings of the 27th international conference on machine learning (ICML-10), pp. 807-814, 2010.
[22]Y. LeCun, C. Cortes and C. J. C. Burges. (1998). The MNIST Database. [Online]. Available: http://yann.lecun.com/exdb/mnist
[23]A. Krizhevsky, V. Nair and G. Hinton. (2009, Nov 23). The CIFAR-10 dataset. [Online]. Available: https://www.cs.toronto.edu/~kriz/cifar.html
[24]Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[25]H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng, “Unsupervised learning of hierarchical representations with convolutional deep belief networks,” Communications of the ACM, vol. 54, no. 10, pp. 95-103, 2011.
[26]F. F. Li, J. Johnson and S. Yeung. (2017, May 4). Recurrent Neural Networks. [Online]. Available: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture10.pdf
[27]R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Ng and C. Potts, “Recursive deep models for semantic compositionality over a sentiment treebank,” Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631-1642, 2013.
[28]R. Socher, K. Clark and A. See. (2018, Feb 2). Natural Language Processing with Deep Learning. [Online]. Available: http://cse.iitkgp.ac.in/~sudeshna/courses/DL18/nlp1-9April2018.pdf
[29]D. Jurafsky and J. H. Martin. (2017, Aug 28). Speech and Language Processing. [Online]. Available: https://web.stanford.edu/~jurafsky/slp3/ed3book.pdf
[30]S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[31]C. Olah. (2015, Aug 27). Understanding LSTM Networks. [Online]. Available: http://colah.github.io/posts/2015-08-Understanding-LSTMs
[32]T. O’Shea and J. Shea. (2017, Dec 13). Open Radio Machine Learning Datasets for Open Science. [Online]. Available: https://www.deepsig.io/datasets
[33]T. J. O’Shea and N. West, “Radio machine learning dataset generation with GNU Radio,” Proceedings of the GNU Radio Conference, vol. 1, no. 1, 2016.
[34]K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556, 2014.
[35]N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.
[36]D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv:1412.6980, 2014.
[37]S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345-1359, 2010.