Master's/Doctoral Thesis 106521040: Complete Metadata Record

DC Field | Value | Language
dc.contributor | Department of Electrical Engineering | zh_TW
dc.creator | 鄧凱云 | zh_TW
dc.creator | Kai-Yun Deng | en_US
dc.date.accessioned | 2020-08-24T07:39:07Z
dc.date.available | 2020-08-24T07:39:07Z
dc.date.issued | 2020
dc.identifier.uri | http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=106521040
dc.contributor.department | Department of Electrical Engineering | zh_TW
dc.description | National Central University | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | Deep neural networks (DNNs) have been widely used in artificial-intelligence applications. A typical DNN accelerator contains static random access memories (SRAMs) for temporarily buffering data. In this thesis, we propose an efficient built-in self-repair scheme to improve the yield of the SRAMs in DNN accelerators. In the first part of the thesis, we propose a swapping technique that improves memory yield under a bounded reduction in inference accuracy; this swapping technique can be combined with existing built-in redundancy analysis (BIRA) algorithms. We implement two BIRA schemes that integrate the swapping mechanism: a local-repair-most (LRM) scheme and an exhaustive BIRA scheme. Simulation results show that, for a 256-kilobyte memory with faults injected under a Poisson distribution with mean 0.2~1.0 (1.0~3.0), the modified LRM and exhaustive BIRA algorithms improve the repair rate by about 3.4% (30.7%) and 3.5% (27.3%), respectively, while sacrificing at most 0.10% (0.73%) and 0.12% (0.95%) of the inference accuracy of the MobileNet and ResNet-50 models. In the second part of the thesis, we provide an automatic evaluation and verification platform for the proposed BIRA schemes. In the platform, a BIRA compiler generates register-transfer-level (RTL) designs of the proposed BIRA schemes; an evaluation tool predicts the repair rate and inference accuracy for a specified DNN model and the accelerator's SRAMs; and the verification part generates Verilog testbenches for verifying the BIRA RTL designs. | zh_TW
dc.description.abstract | Deep neural networks (DNNs) have been widely used for artificial intelligence applications. An accelerator in a DNN system typically has static random access memories (SRAMs) for data buffering. In this thesis, we propose an efficient built-in self-repair scheme for enhancing the yield of SRAMs in the accelerators of DNN systems. In the first part of this thesis, a swapping mechanism is proposed to increase the yield under a constraint on inference-accuracy reduction. The swapping mechanism can be integrated into existing built-in redundancy analysis (BIRA) algorithms. A local-repair-most (LRM) and an exhaustive BIRA algorithm are modified to include the swapping mechanism. Simulation results show that the modified LRM and exhaustive BIRA schemes gain about 3.4% (30.7%) and 3.5% (27.3%) in repair rate while sacrificing at most 0.10% (0.73%) and 0.12% (0.95%) of inference accuracy for MobileNet and ResNet-50, respectively, under fault injection with a Poisson-distribution mean of 0.2~1.0 (1.0~3.0) for a 256-Kbyte memory with a 2D redundancy configuration. In the second part of this thesis, we present an automation, evaluation, and verification platform for the proposed BIRA schemes. In the platform, a BIRA compiler is designed to generate the RTL of the proposed BIRA schemes. An evaluation tool estimates the repair rate and inference accuracy for the SRAMs in a given accelerator executing a given DNN model. Finally, the platform can generate Verilog testbenches for verifying the RTL designs of the BIRAs. | en_US
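The redundancy-analysis and repair-rate evaluation described in the abstract can be illustrated with a minimal sketch. This is not the thesis's LRM or exhaustive BIRA, and it omits the proposed swapping mechanism; it is a generic greedy "repair-most" allocation of spare rows and columns, plus a Monte-Carlo repair-rate estimate under Poisson fault injection. All parameters (a 64x64 block, two spare rows and two spare columns) are illustrative assumptions, not the thesis's configuration:

```python
import math
import random

def poisson(lam, rng):
    """Sample a Poisson-distributed fault count (Knuth's method; fine for small means)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def repair_most(faults, spare_rows, spare_cols):
    """Greedy 2D redundancy analysis: repeatedly spend a spare row or spare
    column on the line covering the most remaining faults.
    Returns True if every faulty cell is covered (memory repairable)."""
    remaining = set(faults)
    while remaining:
        if spare_rows == 0 and spare_cols == 0:
            return False  # faults left but no spares: unrepairable
        row_cnt, col_cnt = {}, {}
        for r, c in remaining:
            row_cnt[r] = row_cnt.get(r, 0) + 1
            col_cnt[c] = col_cnt.get(c, 0) + 1
        best_r = max(row_cnt, key=row_cnt.get)
        best_c = max(col_cnt, key=col_cnt.get)
        # Spend whichever spare covers more faults (tie: prefer a row).
        if spare_rows and (spare_cols == 0 or row_cnt[best_r] >= col_cnt[best_c]):
            remaining = {f for f in remaining if f[0] != best_r}
            spare_rows -= 1
        else:
            remaining = {f for f in remaining if f[1] != best_c}
            spare_cols -= 1
    return True

def repair_rate(mean, trials=2000, rows=64, cols=64, spare_rows=2, spare_cols=2):
    """Monte-Carlo estimate of the fraction of memory instances that are
    repairable when each instance receives Poisson(mean) random faults."""
    rng = random.Random(0)
    repaired = 0
    for _ in range(trials):
        n = poisson(mean, rng)
        faults = {(rng.randrange(rows), rng.randrange(cols)) for _ in range(n)}
        repaired += repair_most(faults, spare_rows, spare_cols)
    return repaired / trials

print(f"estimated repair rate, Poisson mean 1.0: {repair_rate(1.0):.3f}")
print(f"estimated repair rate, Poisson mean 3.0: {repair_rate(3.0):.3f}")
```

A production BIRA additionally performs must-repair analysis (a row or column whose fault count exceeds the opposite-dimension spares must take a spare) and, in this thesis, trades a bounded amount of DNN inference accuracy for otherwise-unrepairable cells via swapping.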
dc.subject | built-in self-repair technique | zh_TW
dc.subject | repair rate | zh_TW
dc.subject | built-in redundancy analysis technique | zh_TW
dc.subject | built-in self-repair | en_US
dc.subject | repair rate | en_US
dc.subject | built-in redundancy analysis | en_US
dc.title | Built-In Self-Repair Scheme for SRAMs in Deep Neural Network Accelerators | zh_TW
dc.language.iso | zh-TW | zh-TW
dc.title | Built-In Self-Repair Scheme for SRAMs in Deep Neural Network Accelerators | en_US
dc.type | Master's/doctoral thesis | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
