Master's/Doctoral Thesis 111522092: Full Metadata Record

DC Field | Value | Language
dc.contributor | 資訊工程學系 | zh_TW
dc.creator | 梁字清 | zh_TW
dc.creator | Zi-Qing Liang | en_US
dc.date.accessioned | 2024-07-02T07:39:07Z | -
dc.date.available | 2024-07-02T07:39:07Z | -
dc.date.issued | 2024 | -
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=111522092 | -
dc.contributor.department | 資訊工程學系 | zh_TW
dc.description | 國立中央大學 | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | 現今的模型為了得到更好的準確率會將網路設計得更加龐大,模型的運算量也呈指數增長,在這個情況下要應用於邊緣計算相當有難度。而Binary Neural Networks (BNNs)二進制神經網路是將卷積核(Filter)權重和激勵值量化至1位元(Bit)的模型,這種模型非常適合ARM、FPGA等小晶片或其他邊緣計算裝置;為了設計一個對邊緣計算裝置更友善的模型,如何降低模型浮點數運算量起著重要的作用。Batch normalization (BN)是二進制神經網路的重要工具,然而在卷積層被量化至1位元(Bit)的情況下,BN層的浮點數計算成本變得較為高昂。本論文透過移除模型的BN層來降低浮點數運算量,並加入Scaled Weight Standardization Convolution (WS-Conv)方法來避免移除BN層後準確率大幅降低的問題,再透過一系列的優化方式提升模型的性能。具體來說,我們的模型在沒有BN層的情況下,計算成本及準確度仍保持競爭力;加入一系列訓練方法後,模型在CIFAR-100上的準確率高於Baseline 0.6%,而總運算量只有Baseline的46%,其中在BOPs不變的情況下FLOPs降低至接近0,使其更適合FPGA等嵌入式平台。 | zh_TW
dc.description.abstract | In order to achieve better accuracy, modern models have become increasingly large, and their computational load has grown exponentially, which makes them difficult to apply to edge computing. Binary Neural Networks (BNNs) are models that quantize the filter weights and activations to 1 bit, making them highly suitable for small chips such as ARM and FPGA and for other edge computing devices. To design a model that is friendlier to edge computing devices, reducing the number of floating-point operations (FLOPs) is crucial. Batch normalization (BN) is an essential tool for binary neural networks; however, when the convolution layers are quantized to 1 bit, the floating-point cost of the BN layers becomes significant. This thesis reduces the floating-point operations by removing the BN layers from the model, introduces the Scaled Weight Standardization Convolution (WS-Conv) method to avoid the large accuracy drop caused by the absence of BN layers, and further enhances the model through a series of optimizations. Specifically, our model maintains competitive computational cost and accuracy even without BN layers. By incorporating a series of training methods, the model's accuracy on CIFAR-100 is 0.6% higher than the baseline, while the total computational load is only 46% of the baseline; with the BOPs unchanged, the FLOPs are reduced to nearly zero, making the model more suitable for embedded platforms such as FPGA. | en_US
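The abstract couples two ideas: binarizing the convolution weights and activations to 1 bit, and replacing the BN layers with Scaled Weight Standardization applied to the convolution weights (WS-Conv). The sketch below is a minimal PyTorch illustration of how such a layer could be put together; it is not the thesis code, and the class and parameter names (SignSTE, BinaryWSConv2d, gain, eps) are assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SignSTE(torch.autograd.Function):
    """Binarize to {-1, +1} in the forward pass; use a straight-through
    estimator (pass gradients only where |x| <= 1) in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)


class BinaryWSConv2d(nn.Conv2d):
    """Convolution with Scaled Weight Standardization and 1-bit weights and
    activations, intended to be used without a following BatchNorm layer."""

    def __init__(self, *args, eps=1e-5, **kwargs):
        super().__init__(*args, **kwargs)
        # Learnable per-output-channel gain, as in Scaled Weight Standardization.
        self.gain = nn.Parameter(torch.ones(self.out_channels, 1, 1, 1))
        self.eps = eps

    def standardized_weight(self):
        # Standardize each filter to zero mean and unit variance over its fan-in,
        # then rescale with the learnable gain.
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        var = w.var(dim=(1, 2, 3), keepdim=True)
        fan_in = w[0].numel()
        w_hat = (w - mean) / torch.sqrt(var * fan_in + self.eps)
        return self.gain * w_hat

    def forward(self, x):
        # Binarize both the activations and the standardized weights to 1 bit,
        # then run a standard convolution on the binarized tensors.
        xb = SignSTE.apply(x)
        wb = SignSTE.apply(self.standardized_weight())
        return F.conv2d(xb, wb, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


if __name__ == "__main__":
    # Toy usage: one 3x3 binary WS-Conv layer on a random CIFAR-sized feature map.
    conv = BinaryWSConv2d(64, 64, kernel_size=3, padding=1, bias=False)
    y = conv(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```

Standardizing each filter to zero mean and unit variance with a learnable per-channel gain is what allows training to remain stable without BN, while binarizing the standardized weights and activations keeps the inference cost dominated by 1-bit operations (BOPs) rather than FLOPs, consistent with the abstract's claim that FLOPs drop to nearly zero with BOPs unchanged.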
dc.subject | 人工智慧 | zh_TW
dc.subject | 模型辨識 | zh_TW
dc.subject | 邊緣計算 | zh_TW
dc.subject | 深度學習 | zh_TW
dc.subject | 二進制神經網路 | zh_TW
dc.subject | 影像辨識 | zh_TW
dc.subject | 模型壓縮 | zh_TW
dc.subject | 網路量化 | zh_TW
dc.subject | Artificial Intelligence | en_US
dc.subject | Model Recognition | en_US
dc.subject | Edge Computing | en_US
dc.subject | Deep Learning | en_US
dc.subject | Binary Neural Networks | en_US
dc.subject | Image Recognition | en_US
dc.subject | Model Compression | en_US
dc.subject | Network Quantization | en_US
dc.title | 利用權重標準分流二進位神經網路做邊緣計算之影像辨識 | zh_TW
dc.title | Weight Standardization Fractional Binary Neural Network (WSFracBNN) for Image Recognition in Edge Computing | en_US
dc.language.iso | zh-TW | zh-TW
dc.type | 博碩士論文 | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
