

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95406


    Title: ERCNet: Enhancing ReActNet with a Compact ECA Branch
    Authors: Chen, Yen-Ting (陳彥廷)
    Contributors: Department of Computer Science and Information Engineering (In-service Master Program)
    Keywords: Binary neural network; Efficient channel attention; Classification; Object detection
    Date: 2024-06-12
    Upload time: 2024-10-09 16:46:53 (UTC+8)
    Publisher: National Central University
    Abstract: Since Courbariaux pioneered binary neural networks in 2016, dramatically reducing the parameter count and computation cost of convolutional neural networks, subsequent research has steadily narrowed the capability gap with real-valued networks. Among these efforts, ReActNet stands out among binary models.
    This thesis redesigns ReActNet's basic building blocks. First, we remove all 1x1 binary convolution layers in the common block to reduce weight size and computation. In the down-sampling block, one of the 1x1 binary convolution branches is replaced with Efficient Channel Attention (ECA) to enrich the representation capacity. In addition, a BatchNorm layer is added after the branches are merged to improve the data distribution. Finally, the residual shortcut connection is moved to after the RPReLU to preserve the integrity of the shortcut information. We call the resulting network ERCNet.
    Experiments show that ERCNet's Top-1 accuracy on CIFAR-100 is 2.39% higher than the original ReActNet's, while its memory footprint and computation are reduced by about 10% and 8%, respectively. For object detection, we plug ERCNet into the YOLOv8 backbone. On the KITTI dataset, our ERCNet outperforms the real-valued YOLOv8, reaching 94.8% mAP50 and surpassing YOLOv8-L and -N by 1.9% and 11.2%, respectively.
    Finally, the experimental results demonstrate that on certain datasets a binarized neural network can outperform a real-valued one while retaining lower memory and computation costs, making it well suited for future deployment on lightweight devices for such dataset-specific applications.
    Since Courbariaux pioneered the Binary Neural Network (BNN) in 2016 to dramatically decrease the storage and computation cost of CNNs for lightweight applications, researchers have made continued efforts to drive down the cost while minimizing the loss of representation capacity and the accuracy gap to real-valued counterparts. Among them, ReActNet, achieving 62.16% Top-1 accuracy on CIFAR-100, sets a new horizon in this competitive landscape. In this thesis, we strive to further polish its performance at an even lower overall cost.
    We redesign the General Building block of ReActNet (GBR) to elevate accuracy on the CIFAR-100 image classification dataset, the PASCAL VOC 07+12 object detection dataset, and the KITTI vision benchmark suite, at a lower memory footprint and lower computation cost. The GBR comprises a single Down-sampling Block (DB) and several Common Blocks (CBs). First, we eliminate all the 1x1 Binary Convolution (BConv) layers of the CBs to reduce the weight parameters and the network size. Second, the duplicate 1x1 BConv of the DB is replaced by Efficient Channel Attention (ECA) to enrich the representation capacity. Third, a Batch Normalization (BN) unit is added right after the concatenator of the DB to render the data distribution more suitable for optimization. Finally, the shortcut connection is relocated to after the RPReLU activation unit so as to balance the information preserved by the shortcut path against the information transformed by the residual path. Our experiments show that the enhanced network (ERCNet) delivers 2.39% higher Top-1 accuracy on CIFAR-100 than the original ReActNet, at around 10% lower memory and 8% fewer computation FLOPs. It attains 81.8% mAP50 under the YOLOv8 framework on the PASCAL VOC 07+12 dataset, surpassing ReActNet by 0.8%. Furthermore, it is extremely encouraging that on the KITTI dataset our ERCNet wins a landslide victory over all models of the official YOLOv8 backbone, presenting 94.8% mAP50, which transcends YOLOv8-L and -N by 1.9% and 11.2%, respectively. On the other hand, we also find that our ERCNet performs slightly below the default YOLOv8 backbone on PASCAL VOC 07+12.
    Our experiments indicate that ERCNet outperforms real-valued CNNs on some particular datasets such as KITTI, at a lower memory and computation cost. As such, ERCNet makes BNNs further suitable for dataset-specific applications on lightweight devices.
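    The ECA mechanism that replaces the 1x1 BConv branch can be illustrated with a minimal NumPy sketch. This is not the thesis implementation: the 1-D convolution weights below are uniform placeholders (in practice they are learned), and only the forward pass is shown — global average pooling, a cheap 1-D convolution across channels, and a sigmoid gate that re-weights each channel.

    ```python
    import numpy as np

    def eca_kernel_size(channels, gamma=2, b=1):
        # Adaptive kernel size from the ECA paper: k = |log2(C)/gamma + b/gamma|, forced odd
        t = int(abs((np.log2(channels) + b) / gamma))
        return t if t % 2 else t + 1

    def eca(x, weights=None):
        """Efficient Channel Attention over a feature map x of shape (C, H, W)."""
        c = x.shape[0]
        k = eca_kernel_size(c)
        if weights is None:
            weights = np.full(k, 1.0 / k)        # placeholder 1-D conv weights (learned in practice)
        y = x.mean(axis=(1, 2))                  # global average pooling -> one descriptor per channel
        y = np.convolve(np.pad(y, k // 2, mode='edge'), weights, mode='valid')
        attn = 1.0 / (1.0 + np.exp(-y))          # sigmoid gate in (0, 1) per channel
        return x * attn[:, None, None]           # channel-wise re-weighting of the input
    ```

    Unlike a 1x1 binary convolution with C*C weights, this branch costs only k weights per block (k = 3 for 64 channels), which is consistent with the abstract's claim of richer representation at lower memory and compute.
    
    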
    Appears in Collections: [Department of Computer Science and Information Engineering, In-service Master Program] Theses & Dissertations


