Thesis 107522627: Complete Metadata Record

DC field: value [language]
dc.contributor: Department of Computer Science and Information Engineering [zh_TW]
dc.creator: 哈帝恩 [zh_TW]
dc.creator: Fattah Azzuhry Rahadian [en_US]
dc.date.accessioned: 2019-07-29T07:39:07Z
dc.date.available: 2019-07-29T07:39:07Z
dc.date.issued: 2019
dc.identifier.uri: http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=107522627
dc.contributor.department: Department of Computer Science and Information Engineering [zh_TW]
dc.description: National Central University [en_US]
dc.description.abstract: In recent years, face verification has been widely used to secure various transactions on the internet. The current state of the art in face verification is the convolutional neural network (CNN). Despite its accuracy, deploying a CNN on mobile and embedded devices remains challenging because the computational resources available on these devices are constrained. In this thesis, we propose a lightweight CNN for face verification built with several methods. First, a modified version of ShuffleNet V2 called ShuffleHalf is used as the backbone network for the FaceNet algorithm. Second, feature maps in the model are reused with two proposed methods, Reuse Later and Reuse ShuffleBlock. Reuse Later reuses potentially unused features by connecting them directly to the fully connected layer, while Reuse ShuffleBlock reuses the feature maps output by the first 1x1 convolution in the basic building block of ShuffleNet V2 (the ShuffleBlock); because 1x1 convolutions are computationally expensive, this reduces their share of the model's computation. Third, the kernel size is increased as the number of channels increases, obtaining the same receptive-field size at lower computational complexity. Fourth, depthwise convolutions are used to replace some ShuffleBlocks. Fifth, existing state-of-the-art techniques are combined with the proposed method to see whether they improve its performance-efficiency tradeoff. Experimental results on five face-verification test datasets show that ShuffleHalf achieves better accuracy than all other baselines with only 48% of the FLOPs of the previous state of the art, MobileFaceNet. Reusing features with Reuse ShuffleBlock further improves ShuffleHalf's accuracy while reducing computational complexity to only 42% of MobileFaceNet's FLOPs. Changing the kernel size and using depthwise repetition decrease the cost further, to only 38% of MobileFaceNet's FLOPs, while still outperforming MobileFaceNet. Combining the model with some other existing methods improves neither its accuracy nor its performance-efficiency tradeoff; however, adding shortcut connections and using the Swish activation function improve accuracy without any noticeable increase in computational complexity. [en_US]
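Since the backbone is built from ShuffleNet V2 blocks, a minimal sketch of the channel-shuffle operation that gives the architecture its name may help. This is pure Python over channel indices, an illustration of the standard reshape-transpose-flatten shuffle rather than code from the thesis:

```python
def channel_shuffle(channels, groups):
    """ShuffleNet-style channel shuffle on a flat list of channel indices:
    reshape to (groups, channels_per_group), transpose, then flatten,
    so channels from different groups become interleaved."""
    per_group = len(channels) // groups
    return [channels[g * per_group + i]
            for i in range(per_group)
            for g in range(groups)]

# Six channels in two groups: [0,1,2] and [3,4,5] become interleaved.
print(channel_shuffle(list(range(6)), 2))  # [0, 3, 1, 4, 2, 5]
```

After the shuffle, the next grouped or split convolution sees channels from every group, which is what lets ShuffleNet-style blocks keep cross-channel information flowing despite their cheap, partitioned convolutions.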
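The abstract's claims that 1x1 convolutions dominate the cost, and that depthwise convolutions are far cheaper, follow from the standard FLOP-count formulas for convolution layers. A rough back-of-the-envelope sketch, with an illustrative layer size that is an assumption rather than a figure from the thesis:

```python
# Multiply-accumulate counts for convolutions on an H x W feature map.

def conv_flops(h, w, c_in, c_out, k):
    """Standard k x k convolution: every output pixel mixes all input channels."""
    return h * w * c_out * c_in * k * k

def depthwise_flops(h, w, c, k):
    """Depthwise k x k convolution: each channel is filtered independently."""
    return h * w * c * k * k

# Illustrative mid-network layer: 14x14 spatial map, 256 channels.
h = w = 14
c = 256

pointwise = conv_flops(h, w, c, c, 1)    # 1x1 "pointwise" convolution
depthwise = depthwise_flops(h, w, c, 3)  # 3x3 depthwise convolution

# The 1x1 convolution costs c / k^2 = 256 / 9, roughly 28x, more than the
# 3x3 depthwise one here, which motivates both reusing 1x1 outputs
# (Reuse ShuffleBlock) and replacing blocks with depthwise convolutions.
print(pointwise, depthwise, pointwise // depthwise)
```

This is only an order-of-magnitude argument; the thesis reports the resulting whole-model budgets as percentages of MobileFaceNet's FLOPs.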
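The third method, enlarging kernels in deeper, higher-channel stages to hold the receptive field while spending fewer FLOPs, can be checked with the usual receptive-field recurrence. The stage layout below is an illustrative assumption, not the thesis architecture:

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of a stack of (kernel, stride) convs:
    each layer adds (k - 1) taps scaled by the cumulative stride so far."""
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump
        jump *= s
    return r

# Illustrative backbone: three stride-2 stages, then one extra convolution.
base = [(3, 2), (3, 2), (3, 2)]

rf_with_3x3 = receptive_field(base + [(3, 1)])
rf_with_5x5 = receptive_field(base + [(5, 1)])

# After three stride-2 stages the cumulative stride is 8, so each extra kernel
# tap widens the receptive field by 8 input pixels: the 5x5 gains twice as
# much as the 3x3, while its FLOPs stay modest because deep feature maps are
# spatially small (per the cost formulas, FLOPs scale with H x W).
print(rf_with_3x3, rf_with_5x5)  # 31 47
```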
dc.subject: face verification [en_US]
dc.subject: lightweight [en_US]
dc.subject: convolutional neural network [en_US]
dc.subject: complexity [en_US]
dc.title: Compact and Low-Cost CNN for Face Verification [en_US]
dc.language.iso: zh-TW
dc.type: thesis [en_US]
dc.publisher: National Central University [en_US]
