References
[1] R. Hackett, “LinkedIn Data Breach: 117 Million Emails and Passwords Leaked | Fortune,” 2016. [Online]. Available: http://fortune.com/2016/05/18/linkedin-data-breach-email-password/. [Accessed: 21-Aug-2018].
[2] L. Matthews, “File With 1.4 Billion Hacked And Leaked Passwords Found On The Dark Web,” 2017. [Online]. Available: https://www.forbes.com/sites/leemathews/2017/12/11/billion-hacked-passwords-dark-web/#35c5613421f2. [Accessed: 21-Aug-2018].
[3] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
[4] A. Ioannidou, E. Chatzilari, S. Nikolopoulos, and I. Kompatsiaris, “Deep learning advances in computer vision with 3d data: A survey,” ACM Comput. Surv., vol. 50, no. 2, p. 20, 2017.
[5] T. Young, D. Hazarika, S. Poria, and E. Cambria, “Recent trends in deep learning based natural language processing,” arXiv preprint arXiv:1708.02709, 2017.
[6] Y. Zhang, M. Pezeshki, P. Brakel, S. Zhang, C. Laurent, Y. Bengio, and A. Courville, “Towards end-to-end speech recognition with deep convolutional neural networks,” arXiv preprint arXiv:1701.02720, 2017.
[7] R. Miotto, F. Wang, S. Wang, X. Jiang, and J. T. Dudley, “Deep learning for healthcare: review, opportunities and challenges,” Brief. Bioinform., 2017.
[8] J. C. B. Gamboa, “Deep learning for time-series analysis,” arXiv preprint arXiv:1701.01887, 2017.
[9] K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath, “Deep Reinforcement Learning: A Brief Survey,” IEEE Signal Process. Mag., vol. 34, no. 6, pp. 26–38, Nov. 2017.
[10] K. Valev, A. Schumann, L. Sommer, and J. Beyerer, “A systematic evaluation of recent deep learning architectures for fine-grained vehicle classification,” in Pattern Recognition and Tracking XXIX, 2018, vol. 10649, p. 1064902.
[11] J. Han, D. Zhang, G. Cheng, N. Liu, and D. Xu, “Advanced Deep-Learning Techniques for Salient and Category-Specific Object Detection: A Survey,” IEEE Signal Process. Mag., vol. 35, no. 1, pp. 84–100, Jan. 2018.
[12] A. King, S. M. Bhandarkar, and B. M. Hopkinson, “A Comparison of Deep Learning Methods for Semantic Segmentation of Coral Reef Survey Images,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 1394–1402.
[13] A. Brunetti, D. Buongiorno, G. F. Trotta, and V. Bevilacqua, “Computer vision and deep learning techniques for pedestrian detection and tracking: A survey,” Neurocomputing, vol. 300, pp. 17–33, 2018.
[14] C. Wang, H. Yang, and C. Meinel, “Image Captioning with Deep Bidirectional LSTMs and Multi-Task Learning,” ACM Trans. Multimed. Comput. Commun. Appl., vol. 14, no. 2s, p. 40, 2018.
[15] Q. Wu, D. Teney, P. Wang, C. Shen, A. Dick, and A. van den Hengel, “Visual question answering: A survey of methods and datasets,” Comput. Vis. Image Underst., vol. 163, pp. 21–40, 2017.
[16] J. Lemley, S. Bazrafkan, and P. Corcoran, “Smart Augmentation Learning an Optimal Data Augmentation Strategy,” IEEE Access, vol. 5, pp. 5858–5869, 2017.
[17] A. Antoniou, A. Storkey, and H. Edwards, “Data Augmentation Generative Adversarial Networks,” 2018.
[18] J. Li, G. Liu, H. W. F. Yeung, J. Yin, Y. Y. Chung, and X. Chen, “A novel stacked denoising autoencoder with swarm intelligence optimization for stock index prediction,” in 2017 International Joint Conference on Neural Networks (IJCNN), 2017, pp. 1956–1961.
[19] W. Bae, J. Yoo, and J. C. Ye, “Beyond Deep Residual Learning for Image Restoration: Persistent Homology-Guided Manifold Simplification,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017, pp. 1141–1149.
[20] J. Cho, S. Yun, K. Lee, and J. Y. Choi, “PaletteNet: Image Recolorization with Given Color Palette,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017, pp. 1058–1066.
[21] M. Sharma, S. Chaudhury, and B. Lall, “Deep learning based frameworks for image super-resolution and noise-resilient super-resolution,” in 2017 International Joint Conference on Neural Networks (IJCNN), 2017, pp. 744–751.
[22] Y. Liao, S. Fu, X. Lu, C. Zhang, and Z. Tang, “Deep-learning-based object-level contour detection with CCG and CRF optimization,” in 2017 IEEE International Conference on Multimedia and Expo (ICME), 2017, pp. 859–864.
[23] S. M. A. Sharif, N. Mohammed, S. Momen, and N. Mansoor, “Classification of Bangla Compound Characters Using a HOG-CNN Hybrid Model,” in Proceedings of the International Conference on Computing and Communication Systems, 2018, pp. 403–411.
[24] T. Connie, M. Al-Shabi, W. P. Cheah, and M. Goh, “Facial Expression Recognition Using a Hybrid CNN-SIFT Aggregator,” in International Workshop on Multi-disciplinary Trends in Artificial Intelligence, 2017, pp. 139–149.
[25] T. Elsken, J.-H. Metzen, and F. Hutter, “Simple and efficient architecture search for Convolutional Neural Networks,” arXiv preprint arXiv:1711.04528, 2017.
[26] A. Al-Hyari and S. Areibi, “Design space exploration of Convolutional Neural Networks based on Evolutionary Algorithms,” J. Comput. Vis. Imaging Syst., vol. 3, no. 1, 2017.
[27] M. Suganuma, S. Shirakawa, and T. Nagao, “A genetic programming approach to designing convolutional neural network architectures,” in Proceedings of the Genetic and Evolutionary Computation Conference, 2017, pp. 497–504.
[28] H. Cai, T. Chen, W. Zhang, Y. Yu, and J. Wang, “Efficient architecture search by network transformation,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2018.
[29] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 7132–7141.
[30] Q. Lu, C. Liu, Z. Jiang, A. Men, and B. Yang, “G-CNN: Object Detection Via Grid Convolutional Neural Network,” IEEE Access, vol. PP, no. 99, p. 1, 2017.
[31] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. S. Torr, and T. M. Hospedales, “Learning to compare: Relation network for few-shot learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[32] D. Chung, K. Tahboub, and E. J. Delp, “A two stream siamese convolutional neural network for person re-identification,” in The IEEE international conference on computer vision (ICCV), 2017.
[33] Z. Bukovčiková, D. Sopiak, M. Oravec, and J. Pavlovičová, “Face verification using convolutional neural networks with Siamese architecture,” in 2017 International Symposium ELMAR, 2017, pp. 205–208.
[34] S. Dey, A. Dutta, J. I. Toledo, S. K. Ghosh, J. Llados, and U. Pal, “SigNet: Convolutional Siamese Network for Writer Independent Offline Signature Verification,” arXiv preprint arXiv:1707.02131, Jul. 2017.
[35] C. Lin and A. Kumar, “Multi-Siamese networks to accurately match contactless to contact-based fingerprint images,” in 2017 IEEE International Joint Conference on Biometrics (IJCB), 2017, pp. 277–285.
[36] D. J. Rao, S. Mittal, and S. Ritika, “Siamese Neural Networks for One-shot detection of Railway Track Switches,” arXiv preprint arXiv:1712.08036, Dec. 2017.
[37] Y. Huang, S. Liu, J. Hu, and W. Deng, “Metric-Promoted Siamese Network for Gender Classification,” in 2017 12th IEEE International Conference on Automatic Face Gesture Recognition (FG 2017), 2017, pp. 961–966.
[38] E.-J. Ong, S. Husain, and M. Bober, “Siamese Network of Deep Fisher-Vector Descriptors for Image Retrieval,” arXiv preprint arXiv:1702.00338, Feb. 2017.
[39] C. M. Lee, S.-Y. Yoon, X. Wang, M. Mulholland, I. Choi, and K. Evanini, “Off-Topic Spoken Response Detection Using Siamese Convolutional Neural Networks,” Proc. Interspeech 2017, pp. 1427–1431, 2017.
[40] L. V. Utkin and M. A. Ryabinin, “A Siamese Deep Forest,” arXiv preprint arXiv:1704.08715, Apr. 2017.
[41] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
[42] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700–4708.
[43] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5987–5995.
[44] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2818–2826.
[45] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1251–1258.
[46] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size,” in ICLR’17 conference proceedings, 2017, pp. 207–212.
[47] A. G. Howard et al., “MobileNets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017.
[48] X. Zhang, X. Zhou, M. Lin, and J. Sun, “ShuffleNet: An extremely efficient convolutional neural network for mobile devices,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6848–6856.
[49] A. Gholami et al., “SqueezeNext: Hardware-aware neural network design,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 1638–1647.
[50] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510–4520.
[51] G. Huang, S. Liu, L. van der Maaten, and K. Q. Weinberger, “CondenseNet: An efficient DenseNet using learned group convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2752–2761.
[52] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, “ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design,” in Computer Vision – ECCV 2018, Springer, 2018, pp. 122–138.
[53] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le, “Regularized evolution for image classifier architecture search,” arXiv preprint arXiv:1802.01548, 2018.
[54] M. Tan et al., “MnasNet: Platform-aware neural architecture search for mobile,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 2820–2828.
[55] P. Ramachandran, B. Zoph, and Q. V. Le, “Searching for activation functions,” arXiv preprint arXiv:1710.05941, 2017.
[56] A. Howard et al., “Searching for MobileNetV3,” arXiv preprint arXiv:1905.02244, 2019.
[57] M. Tan and Q. V. Le, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,” arXiv preprint arXiv:1905.11946, 2019.
[58] C.-J. Hsieh, “Scalable Machine Learning,” 2016. [Online]. Available: http://www.stat.ucdavis.edu/~chohsieh/teaching/ECS289G_Fall2016/lecture12.pdf.
[59] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, pp. 436–444, May 2015.
[60] H. Kuwajima, “Backpropagation in Convolutional Neural Network,” 2014. [Online]. Available: https://www.slideshare.net/kuwajima/cnnbp.
[61] J. Deng, J. Guo, N. Xue, and S. Zafeiriou, “Arcface: Additive angular margin loss for deep face recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 4690–4699.
[62] S. Amidi, “CS 230 ― Deep Learning,” 2018. [Online]. Available: https://stanford.edu/~shervine/teaching/cs-230/.
[63] R. Ravimohan, “Deep Learning - Image Processing and Speech Recognition (CMU 2016),” 2016. [Online]. Available: http://raksharavimohan.com/cnn.html.
[64] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, “Graph Attention Networks,” 2018. [Online]. Available: http://petar-v.com/GAT/.
[65] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
[66] F.-F. Li, A. Karpathy, and J. Johnson, “CS231n: Convolutional Neural Networks for Visual Recognition - Stanford University,” 2016. [Online]. Available: http://cs231n.stanford.edu/slides/2016/.
[67] S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” in International Conference on Machine Learning, 2015, pp. 448–456.
[68] Y. Wu and K. He, “Group normalization,” in European Conference on Computer Vision, 2018, vol. 11217 LNCS, pp. 3–19.
[69] J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” arXiv preprint arXiv:1607.06450, 2016.
[70] D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv preprint arXiv:1607.08022, 2016.
[71] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” J. Mach. Learn. Res., vol. 15, pp. 1929–1958, 2014.
[72] G. Ghiasi, T.-Y. Lin, and Q. V Le, “DropBlock: A regularization method for convolutional networks,” in Advances in Neural Information Processing Systems, 2018, pp. 10750–10760.
[73] Y. Chen et al., “Drop an octave: Reducing spatial redundancy in convolutional neural networks with octave convolution,” arXiv preprint arXiv:1904.05049, 2019.
[74] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99.
[75] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779–788.
[76] W. Liu et al., “SSD: Single shot multibox detector,” in European conference on computer vision, 2016, pp. 21–37.
[77] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, “Joint face detection and alignment using multitask cascaded convolutional networks,” IEEE Signal Process. Lett., vol. 23, no. 10, pp. 1499–1503, 2016.
[78] S. Woo, J. Park, J.-Y. Lee, and I. So Kweon, “CBAM: Convolutional block attention module,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 3–19.
[79] Z. Huang, S. Liang, M. Liang, and H. Yang, “DIANet: Dense-and-Implicit Attention Network,” arXiv preprint arXiv:1905.10671, 2019.
[80] X. Pan, P. Luo, J. Shi, and X. Tang, “Two at once: Enhancing learning and generalization capacities via ibn-net,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 464–479.
[81] H. Zhong, X. Liu, Y. He, Y. Ma, and K. Kitani, “Shift-based Primitives for Efficient Convolutional Neural Networks,” arXiv preprint arXiv:1809.08458, 2018.