Thesis 107522627: Detailed Record




Name: Fattah Azzuhry Rahadian (哈帝恩)    Graduate Department: Computer Science and Information Engineering
Thesis Title: Compact and Low-Cost CNN for Face Verification
(Chinese: 用於人臉驗證的緊湊且低成本的卷積神經網路)
Related Theses:
★ Single and Multi-Label Environmental Sound Recognition with Gaussian Process
★ Embedded System Implementation of Beamforming and Audio Preprocessing
★ Application and Design of Speech Synthesis and Voice Conversion
★ A Semantics-Based Public Opinion Analysis System
★ Design and Application of a High-Quality Dictation System
★ Calcaneal Fracture Recognition and Detection in CT Images Using Deep Learning and Speeded-Up Robust Features
★ A Personalized Collaborative Filtering Clothing Recommendation System Based on a Style Vector Space
★ Applying RetinaNet to Face Detection
★ Trend Prediction for Financial Products
★ A Study on Integrating Deep Learning Methods to Predict Age and Aging-Related Genes
★ A Study on End-to-End Mandarin Speech Synthesis
★ Application and Improvement of ORB-SLAM2 on the ARM Architecture
★ Deep Learning Based Trend Prediction for Exchange-Traded Funds
★ Exploring the Correlation Between Financial News and Financial Trends
★ Emotional Speech Analysis Based on Convolutional Neural Networks
★ Using Deep Learning Methods to Predict Alzheimer's Disease Deterioration and Survival After Stroke Surgery
Files: Full text is not available through the system (access level: never to be released).
Abstract (Chinese): 近年來,人臉驗證已廣泛用於保護網際網路上的各種交易行為。人臉驗證目前最先進的技術為卷積神經網路(CNN)。然而,雖然CNN有極好的效果,將其部署於行動裝置與嵌入式設備上仍具有挑戰性,因為這些設備僅有受限的可用計算資源。在本論文中,我們提出了一種輕量級CNN,並使用多種方法進行人臉驗證。首先,我們提出ShuffleNet V2的修改版本ShuffleHalf,並將其作為FaceNet演算法的骨幹網路。其次,使用Reuse Later以及Reuse ShuffleBlock方法來重用模型中的特徵映射圖。Reuse Later通過將特徵直接與全連接層相連來重用可能未使用的特徵;Reuse ShuffleBlock則重用ShuffleNet V2基本構建塊(ShuffleBlock)中第一個1x1卷積層輸出的特徵映射圖。由於1x1卷積運算在計算上很昂貴,此方法可降低模型中1x1卷積的比率。第三,隨著通道數量的增加而增大卷積核,以較低的計算複雜度獲得相同的感受野大小。第四,使用深度卷積運算替換部分ShuffleBlock。第五,將其他現有演算法與所提出的方法相結合,以檢驗它們是否能提升所提方法的性能-效率權衡。
在五個人臉驗證測試數據集上的實驗結果表明,ShuffleHalf比其他所有基準方法都具有更高的準確度,並且只需要目前最先進演算法MobileFaceNet的48% FLOPs。通過Reuse ShuffleBlock重用特徵,ShuffleHalf的準確度得到進一步提升,同時計算複雜度降低到僅為MobileFaceNet的42% FLOPs。此外,改變卷積核大小與使用depthwise repetition都可進一步降低計算複雜度,使其僅為MobileFaceNet的38% FLOPs,且效果依然優於MobileFaceNet。與部分現有方法的組合並未提升模型的準確度或性能-效率權衡;然而,加入shortcut連接與使用Swish激活函數可以提高模型的準確度,而不會顯著增加計算複雜度。
Abstract (English): In recent years, face verification has been widely used to secure various transactions on the internet. The current state of the art in face verification is the convolutional neural network (CNN). Despite its performance, deploying a CNN on mobile and embedded devices is still challenging because the computational resources available on these devices are constrained. In this thesis, we propose a lightweight CNN for face verification using several methods. First, a modified version of ShuffleNet V2, called ShuffleHalf, is used as the backbone network for the FaceNet algorithm. Second, feature maps in the model are reused with two proposed methods, Reuse Later and Reuse ShuffleBlock. Reuse Later reuses potentially unused features by connecting them directly to the fully connected layer, while Reuse ShuffleBlock reuses the feature maps produced by the first 1x1 convolution in the basic building block of ShuffleNet V2 (the ShuffleBlock). The latter reduces the proportion of 1x1 convolutions in the model, since 1x1 convolutions are computationally expensive. Third, the kernel size is increased as the number of channels increases, which yields the same receptive field size at lower computational complexity. Fourth, depthwise convolutions are used to replace some ShuffleBlocks. Fifth, existing state-of-the-art techniques are combined with the proposed method to test whether they improve its performance-efficiency tradeoff.
Experimental results on five face verification test datasets show that ShuffleHalf achieves higher accuracy than all other baselines while requiring only 48% of the FLOPs of the previous state-of-the-art model, MobileFaceNet. ShuffleHalf's accuracy is further improved by feature reuse with Reuse ShuffleBlock, which also reduces the computational complexity to 42% of MobileFaceNet's FLOPs. Changing the kernel size and applying depthwise repetition decrease the computational complexity further, to 38% of MobileFaceNet's FLOPs, while still outperforming MobileFaceNet. Combining the model with some existing methods improves neither its accuracy nor its performance-efficiency tradeoff. However, adding shortcut connections and using the Swish activation function improve accuracy without a noticeable increase in computational complexity.
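The record gives no implementation details, so the following is only a minimal PyTorch sketch, under stated assumptions, of the reuse idea described in the abstract: a stride-1 ShuffleNet V2 unit whose first 1x1 (pointwise) convolution output is concatenated into the block output instead of being discarded, allowing the second pointwise convolution to be narrower. The class name ReuseShuffleBlock, the half-width intermediate channel count, and the exact wiring are illustrative assumptions, not the thesis's actual architecture.

```python
# Hypothetical sketch of a ShuffleNet V2 unit with "Reuse ShuffleBlock"-style
# feature reuse; layer widths are assumptions for illustration only.
import torch
import torch.nn as nn


def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    """Standard ShuffleNet channel shuffle."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


class ReuseShuffleBlock(nn.Module):
    """Stride-1 unit: half the channels pass through untouched; the other half
    go through 1x1 -> depthwise 3x3 -> 1x1. The first 1x1 conv output is kept
    and concatenated into the block output (reused) so the second 1x1 conv can
    be narrower, reducing the share of expensive pointwise convolutions."""

    def __init__(self, channels: int):
        super().__init__()
        branch = channels // 2          # channels entering the conv branch
        mid = branch // 2               # narrower pointwise output (assumed)
        self.pw1 = nn.Sequential(
            nn.Conv2d(branch, mid, 1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
        self.dw = nn.Sequential(        # depthwise conv: cheap spatial mixing
            nn.Conv2d(mid, mid, 3, padding=1, groups=mid, bias=False),
            nn.BatchNorm2d(mid))
        self.pw2 = nn.Sequential(
            nn.Conv2d(mid, branch - mid, 1, bias=False),
            nn.BatchNorm2d(branch - mid), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity, branch = x.chunk(2, dim=1)
        reused = self.pw1(branch)                        # features to be reused
        out = self.pw2(self.dw(reused))
        out = torch.cat([identity, reused, out], dim=1)  # reuse instead of discard
        return channel_shuffle(out, groups=2)


if __name__ == "__main__":
    block = ReuseShuffleBlock(channels=64)
    y = block(torch.randn(1, 64, 56, 56))
    print(y.shape)  # torch.Size([1, 64, 56, 56]); channel count preserved
```

In this sketch the concatenation keeps the channel count constant (32 identity + 16 reused + 16 new = 64 for a 64-channel block), which is one plausible way to obtain the lower pointwise-convolution cost that the abstract attributes to Reuse ShuffleBlock.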
Keywords (Chinese):
★ 人臉驗證
★ 輕量級
★ 卷積神經網路
★ 複雜度
Keywords (English):
★ face verification
★ lightweight
★ convolutional neural network
★ complexity
Table of Contents:
Abstract (Chinese)
Abstract (English)
Acknowledgments
Contents
List of Figures
List of Tables
1. Introduction
1.1 Background
1.2 Problem Formulation
1.3 Research Objectives
1.4 Research Originality
2. Literature Review
3. Theoretical Basis
3.1 Computer Vision
3.2 Face Recognition
3.3 Artificial Neural Network
3.3.1 Perceptron
3.3.2 Activation function
3.3.3 Structure
3.3.4 Training method: forward propagation and backpropagation
3.3.5 Parameter initialization
3.3.6 Loss function
3.3.7 Optimizer
3.4 Convolutional Neural Network
3.4.1 Lightweight CNN (manual and automatic)
3.5 Normalization
3.6 Regularization
3.6.1 Dropout
3.6.2 DropBlock
3.7 Data Augmentation
3.8 Skip Connections
3.9 Squeeze-Excitation Module
3.10 Octave Convolution
3.11 Siamese Network
3.12 Object Detection
3.13 Model Evaluation
4. Research Methodology
4.1 Literature Study
4.2 Tools and Materials
4.2.1 Tools
4.2.2 Materials
4.3 Research Procedure
4.3.1 Research activities
4.3.2 General description of the model
4.3.3 Preprocessing
4.3.4 Architecture design
4.4 Additional Proposed Methods
4.4.1 Feature reuse
4.4.2 Kernet
4.4.3 Depthwise repetition
4.4.4 Other methods
4.5 Model Evaluation
5. Results and Discussion
5.1 Experiment Results for Baselines and ShuffleHalf
5.2 Experiment Results for Feature Reuse on ShuffleHalf
5.3 Experiment Results for Kernet, Depthwise Repetition, and Swish
5.4 Experiment Results for Other Methods
6. Conclusion
Bibliography
Advisor: Jia-Ching Wang (王家慶)    Approval Date: 2019-07-29
