NCU Institutional Repository, National Central University: Item 987654321/81221 (theses and dissertations, past exams, journal articles, and research projects)


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/81221


    Title: Compact and Low-Cost CNN for Face Verification
    Authors: 哈帝恩;Rahadian, Fattah Azzuhry
    Contributors: Department of Computer Science and Information Engineering
    Keywords: face verification;lightweight;convolutional neural network;complexity
    Date: 2019-07-29
    Issue Date: 2019-09-03 15:39:45 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, face verification has been widely used to secure various transactions on the internet. The current state of the art in face verification is the convolutional neural network (CNN). Despite the performance of CNNs, deploying them on mobile and embedded devices remains challenging because the computational resources available on these devices are constrained. In this paper, we propose a lightweight CNN for face verification using several methods. First, a modified version of ShuffleNet V2 called ShuffleHalf is used as the backbone network for the FaceNet algorithm. Second, the feature maps in the model are reused via two proposed methods, Reuse Later and Reuse ShuffleBlock. Reuse Later reuses potentially unused features by connecting them directly to the fully connected layer. Meanwhile, Reuse ShuffleBlock reuses the feature maps output by the first 1x1 convolution in the basic building block of ShuffleNet V2 (the ShuffleBlock). This method reduces the proportion of 1x1 convolutions in the model, because the 1x1 convolution operation is computationally expensive. Third, the kernel size is increased as the number of channels increases, to obtain the same receptive field size with less computational complexity. Fourth, depthwise convolution operations are used to replace some ShuffleBlocks. Fifth, other existing algorithms are combined with the proposed method to see whether they can improve its performance-efficiency tradeoff.
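    The cost argument behind Reuse ShuffleBlock can be sketched with simple multiply-accumulate (MAC) counting. This is a back-of-the-envelope illustration, not the thesis's implementation: the feature-map size and channel count below are assumptions chosen for illustration (116 is a real ShuffleNet V2 branch width, but the thesis may use different widths).

    ```python
    # Sketch: per-layer MAC counts for the three convolutions inside one
    # ShuffleNet V2 basic block (1x1 conv -> 3x3 depthwise -> 1x1 conv),
    # showing why the 1x1 (pointwise) convolutions dominate the cost and
    # roughly what reusing the first 1x1 output (instead of computing the
    # second 1x1 conv) would save. Sizes are illustrative assumptions.

    def conv_macs(h, w, c_in, c_out, k, depthwise=False):
        """MACs for a k x k convolution on an h x w feature map."""
        if depthwise:
            return h * w * c_in * k * k          # one filter per input channel
        return h * w * c_in * c_out * k * k      # dense across channels

    h = w = 14          # feature-map size (assumed)
    c = 116             # branch channel count (assumed)

    pw1 = conv_macs(h, w, c, c, 1)                  # first 1x1 conv
    dw  = conv_macs(h, w, c, c, 3, depthwise=True)  # 3x3 depthwise conv
    pw2 = conv_macs(h, w, c, c, 1)                  # second 1x1 conv
    total = pw1 + dw + pw2

    print(f"pointwise share of block MACs: {(pw1 + pw2) / total:.1%}")
    print(f"saved by reusing pw1 output instead of computing pw2: {pw2 / total:.1%}")
    ```

    Under these assumptions the two pointwise layers account for well over 90% of the block's MACs, which is why targeting them, rather than the cheap depthwise layer, pays off.
    
    
    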
    Experimental results on five face verification test datasets show that ShuffleHalf achieves better accuracy than all other baselines while requiring only 48% of the FLOPs of the previous state-of-the-art algorithm, MobileFaceNet. The accuracy of ShuffleHalf is further improved by reusing features with Reuse ShuffleBlock, which also reduces the computational complexity to only 42% of MobileFaceNet's FLOPs. Meanwhile, both changing the kernel size and using depthwise repetition further decrease the computational complexity, to only 38% of MobileFaceNet's FLOPs, while still outperforming MobileFaceNet. Combining the model with some existing methods improves neither its accuracy nor its performance-efficiency tradeoff. However, adding shortcut connections and using the Swish activation function improve the accuracy of the model without any noticeable increase in computational complexity.
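    The receptive-field argument for larger kernels (the third method) can likewise be illustrated with rough arithmetic. The sketch below uses assumed sizes, not figures from the thesis: it checks that one 5x5 depthwise convolution covers the same receptive field as two stacked 3x3 ones, so trading two stacked blocks for one larger-kernel block drops one pair of expensive 1x1 convolutions.

    ```python
    # Sketch: same receptive field at lower cost by replacing two stacked
    # blocks (3x3 depthwise each) with one block using a 5x5 depthwise
    # kernel. The 1x1 convs do not enlarge the receptive field, so only the
    # depthwise kernels matter for coverage. Sizes are assumptions.

    def receptive_field(kernel_sizes):
        """Receptive field of a stack of stride-1 convolutions."""
        rf = 1
        for k in kernel_sizes:
            rf += k - 1
        return rf

    def block_macs(h, w, c, dw_kernel):
        """MACs of one block branch: 1x1 conv -> depthwise conv -> 1x1 conv."""
        pointwise = h * w * c * c               # one dense 1x1 conv
        depthwise = h * w * c * dw_kernel ** 2  # per-channel k x k conv
        return 2 * pointwise + depthwise

    h = w = 14   # feature-map size (assumed)
    c = 116      # channel count (assumed)

    assert receptive_field([3, 3]) == receptive_field([5])  # both cover 5x5

    two_blocks = 2 * block_macs(h, w, c, 3)
    one_block = block_macs(h, w, c, 5)
    print(f"one 5x5 block costs {one_block / two_blocks:.0%} of two 3x3 blocks")
    ```

    The larger depthwise kernel is slightly more expensive than two small ones on its own, but eliminating the second pair of 1x1 convolutions roughly halves the total, which matches the abstract's claim that larger kernels buy the same receptive field at lower complexity.
    
    
    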
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html (0Kb, HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.


    Copyright © National Central University Library. DSpace Software Copyright © 2002-2004 MIT & Hewlett-Packard / Enhanced by NTU Library IR team.