Please use this permanent URL to cite or link to this document:
http://ir.lib.ncu.edu.tw/handle/987654321/95281
Title: | A CNN-XGB classifier with low hardware resource requirement (低硬體資源需求的CNN-XGB分類器設計)
Author: | Chen, Gin-Wei (陳勁為)
Contributor: | Institute of Software Engineering (軟體工程研究所)
Keywords: | deep convolutional network; ensemble learning model; multi-decision-tree hardware accelerator; hardware implementation; CNN-XGB; XGBoost; FPGA
Date: | 2024-07-23
Upload time: | 2024-10-09 16:37:23 (UTC+8)
Publisher: | National Central University (國立中央大學)
Abstract: | The CNN-XGB architecture combines the feature extraction capabilities of Convolutional Neural Networks (CNN) with the classification power of XGBoost, and many studies have shown that CNN-XGB outperforms CNN or XGBoost used alone. However, deep CNNs increase computation time. To address this, some researchers have pruned layers from the tail end of the CNN and let XGBoost take over their function, but they found that this can degrade the model's performance. This study proposes a CNN-XGB architecture with low hardware resource requirements. Unlike other studies, we remove even more CNN layers and use image feature algorithms such as Local Binary Patterns (LBP) and the Histogram of Oriented Gradients (HOG) to assist the CNN, providing additional feature data to the XGBoost classifier so that the performance of the CNN-XGB classifier does not drop too much despite the deeply pruned CNN. In our experimental design, we gradually reduce the number of CNN layers and observe the changes in efficiency and performance. We also developed an automated program that quickly deploys the XGBoost model from software to hardware. The experimental results confirm that although pruning the CNN causes a 1-5% drop in the CNN-XGB recognition rate, computation time and storage resources can be reduced by 10-25% and 40-80%, respectively. In the multimodal CNN-XGB experiments, some results show that with multimodal enhancement the performance of CNN-XGB can recover to the level of the unpruned model while retaining the efficiency gains of the reduced resource usage. The experiments on the hardware implementation of XGBoost verify that the XGBoost model can be successfully deployed on hardware: although its recognition rate drops by 1-6%, its computation speed is 24 to 32 times faster than the software implementation. In the future, we aim to complete the hardware design of the CNN part and connect it to the XGBoost hardware design developed in this study, so that the proposed low-resource CNN-XGB classifier can be fully implemented in hardware and contribute to the relevant fields. |
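The abstract describes concatenating features from a deeply pruned CNN with hand-crafted LBP and HOG descriptors before the XGBoost classifier. The fragment below is only a minimal sketch of that multimodal idea under stated assumptions, not the thesis implementation: the `pruned_cnn_features` placeholder, the LBP/HOG parameters, and the toy data are all assumptions, since this record page gives no architectural details.

```python
# Minimal sketch of the multimodal CNN-XGB idea: pruned-CNN features are
# concatenated with hand-crafted LBP and HOG features and fed to XGBoost.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from xgboost import XGBClassifier

def handcrafted_features(img):
    """LBP histogram + HOG vector for one grayscale image in [0, 1]."""
    img_u8 = (img * 255).astype(np.uint8)
    lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(img, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    return np.concatenate([lbp_hist, hog_vec])

def pruned_cnn_features(img):
    # Placeholder for the deeply pruned CNN described in the abstract; a real
    # implementation would return the activations of the last retained layer.
    return img.reshape(-1)[:64]

def multimodal_features(images):
    return np.stack([np.concatenate([pruned_cnn_features(im),
                                     handcrafted_features(im)])
                     for im in images])

# Toy usage with random 32x32 "images" and binary labels (illustrative only).
rng = np.random.default_rng(0)
X_img = rng.random((200, 32, 32))
y = rng.integers(0, 2, size=200)
clf = XGBClassifier(n_estimators=50, max_depth=4, eval_metric="logloss")
clf.fit(multimodal_features(X_img), y)
```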
Appears in Collections: | [Institute of Software Engineering] Master's and Doctoral Theses
Files in This Item:

File | Description | Size | Format | Views
index.html | | 0Kb | HTML | 24
All items in NCUIR are protected by the original copyright.