Master's/Doctoral Thesis 109521001: Complete Metadata Record

DC Field  Value  Language
dc.contributor 電機工程學系 zh_TW
dc.creator 陳子捷 zh_TW
dc.creator TZU-CHIEH CHEN en_US
dc.date.accessioned 2023-08-09T07:39:07Z
dc.date.available 2023-08-09T07:39:07Z
dc.date.issued 2023
dc.identifier.uri http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=109521001
dc.contributor.department 電機工程學系 zh_TW
dc.description 國立中央大學 zh_TW
dc.description National Central University en_US
dc.description.abstract 人類的動作辨識可以使用各種不同的數據當作輸入,主要有RGB圖像或是骨架數據等。傳統方法使用RGB圖像當作輸入,並採用了卷積神經網路(CNN)或循環神經網路(RNN)模型。然而,由於圖片中存在各種噪音,導致這些方法的準確性很低。為了提高辨識的準確性,一些研究探討了使用骨架數據做為替代的輸入。儘管如此,CNN或RNN模型無法充分利用骨架數據中存在的空間和時間關係,因此限制了模型的有效性。近年來,圖卷積網路(GCN)因為其在社會網路分析以及推薦系統等任務中的廣泛適用性而獲得了極大的關注。GCN特別適用於處理非歐幾里得的數據,像是人體骨骼關節。與RGB圖像不同的是,它不會受到環境因素的影響。然而,由於GCN的計算複雜性以及數據稀疏性,當使用在CPU或是GPU平台上時,往往會導致高延遲以及低功率效率。為了面對這些挑戰,設計專門的硬體加速器起到了至關重要的作用。時空圖卷積網路(ST-GCN)是一個廣泛用於人類動作辨識的模型。在本文中,我們為ST-GCN提出了一個高度平行化運算且靈活的架構。我們的架構包含了通用型的處理元件(PE),它們可以被歸類為組合引擎和聚合引擎來計算GCN層。這些PE也可以在處理TCN層時相互連接。根據我們提出的方法,此加速器還具有良好的可擴展性。我們在ASIC和FPGA平台上都實現了該硬體設計。與其他一樣為ST-GCN實現硬體設計的論文相比,我們提出的方法實現了高達39.5%的延遲降低以及高達2.23倍的功率效率提高。 zh_TW
dc.description.abstract Human action recognition can leverage various data inputs, including RGB images and skeleton data. Traditional approaches utilize RGB images as input and employ convolutional neural network (CNN) or recurrent neural network (RNN) models. However, these methods suffer from low accuracy due to the presence of various background noises in the images. In an attempt to enhance accuracy, some studies have explored the use of skeleton data as an alternative input. Nevertheless, CNN or RNN models are unable to fully exploit the spatial and temporal relationships inherent in the skeleton data, limiting their effectiveness. In recent years, Graph Convolutional Networks (GCNs) have gained significant attention due to their wide applicability in tasks such as social network analysis and recommendation systems. GCNs are particularly suitable for processing non-Euclidean data, such as human skeleton joints, which, unlike RGB images, are unaffected by environmental factors. However, the computational complexity and data sparsity of GCNs often result in high latency and low power efficiency when deployed on CPU or GPU platforms. To address these challenges, dedicated hardware accelerators play a crucial role. In this paper, we propose a highly parallelized and flexible architecture for Spatial-Temporal Graph Convolutional Networks (ST-GCN), a widely used model in human action recognition. Our architecture incorporates general Processing Elements (PEs) that can be grouped into combination engines and aggregation engines to compute GCN layers. These PEs can also be interconnected while processing TCN layers. Our proposed method also makes the accelerator highly scalable. We implemented our design on both ASIC and FPGA platforms. Compared to other works that also implement hardware designs for ST-GCN, our proposed method achieves up to a 39.5% reduction in latency and up to a 2.23x improvement in power efficiency. en_US
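The abstract describes each GCN layer as being computed by two kinds of engines: aggregation engines (gathering features over the skeleton graph's adjacency structure) and combination engines (a dense weight transform). As a rough software-level illustration of that two-step split, and not the thesis's actual hardware design, a minimal dense NumPy sketch (all names here, such as `gcn_layer` and `A_hat`, are illustrative assumptions) might look like:

```python
import numpy as np

def gcn_layer(A_hat, X, W):
    # Aggregation step: each joint sums features from its graph
    # neighbors via the normalized adjacency matrix (sparse in the
    # real workload, which is why dedicated hardware helps).
    aggregated = A_hat @ X            # (N, F_in)
    # Combination step: a dense weight transform shared by all joints.
    combined = aggregated @ W         # (N, F_out)
    return np.maximum(combined, 0.0)  # ReLU nonlinearity

# Toy 3-joint chain "skeleton": joints 0-1-2, with self-loops added.
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt   # symmetric normalization

X = np.ones((3, 4))   # 4 input features per joint
W = np.eye(4, 2)      # project 4 input features down to 2
Y = gcn_layer(A_hat, X, W)
print(Y.shape)        # (3, 2): one 2-dim output feature vector per joint
```

In an ST-GCN model this spatial step is followed by a temporal convolution (the TCN layer mentioned in the abstract) over each joint's feature sequence across frames, which is where the paper's PEs are reconfigured and interconnected differently.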
dc.subject 圖卷積網路 zh_TW
dc.subject 硬體加速器 zh_TW
dc.subject 可重構架構 zh_TW
dc.subject 特殊應用積體電路 zh_TW
dc.subject 現場可程式化邏輯閘陣列 zh_TW
dc.subject graph convolutional neural network en_US
dc.subject hardware accelerator en_US
dc.subject reconfigurable architecture en_US
dc.subject ASIC en_US
dc.subject FPGA en_US
dc.title 用於動作辨識中的時空圖卷積網路之可重構硬體架構設計 zh_TW
dc.title A Reconfigurable Hardware Architecture for Spatial Temporal Graph Convolutional Network in action recognition en_US
dc.language.iso zh-TW zh-TW
dc.type 博碩士論文 zh_TW
dc.type thesis en_US
dc.publisher National Central University en_US
