Thesis 104521025: Complete Metadata Record

DC field | Value | Language
dc.contributor | 電機工程學系 | zh_TW
dc.creator | 謝宗甫 | zh_TW
dc.creator | Tsung-Fu Hsieh | en_US
dc.date.accessioned | 2018-12-27T07:39:07Z |
dc.date.available | 2018-12-27T07:39:07Z |
dc.date.issued | 2018 |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=104521025 |
dc.contributor.department | 電機工程學系 | zh_TW
dc.description | 國立中央大學 | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | 深度神經網路(DNN)被視為一個十分有應用價值的人工智慧技術。DNN系統通常需要以動態隨機記憶體(DRAM)來儲存數據。然而DRAM是一種十分耗電的元件,因此需要有針對DNN系統中用於降低DRAM功耗的技術。本論文提出了一種混合投票機制與錯誤更正碼(Voting and error-correction code, VECC)的資料保護技術,通過延長DRAM的刷新週期來降低功耗。VECC以投票的方法保護在DNN中數值趨近於零的權重資料,並以錯誤糾正碼保護剩餘資料。以此種混合式的保護機制來糾正因DRAM刷新週期延長而出現的資料失效(retention fault)。為了實現VECC技術於DNN系統中,本論文提出了一個軟硬體結合的自我測試技術(Software-Hardware-Cooperated built-in self-test, SHC-BIST),用以蒐集在不同DRAM刷新週期下的資料錯誤資訊。此外也提出了相應的解碼以及重組硬體設計。模擬結果顯示,在四個著名的DNN模型中,VECC可以節省至少93.7%的DRAM刷新功耗,且精準度損耗(accuracy loss)小於0.5%,而額外所需付出的錯誤檢驗碼位元數均小於原始資料的1%。 | zh_TW
dc.description.abstract | A deep neural network (DNN) is considered a practical and effective artificial intelligence technique. A DNN system typically needs a dynamic random access memory (DRAM) to store data. However, DRAM is a power-hungry component, so effective techniques for reducing the power consumption of the DRAM in a DNN system are needed. In this thesis, a hybrid voting and error-correction code (VECC) technique is proposed to reduce the refresh power of DRAMs in DNN systems by extending the DRAM refresh period. The VECC technique exploits a characteristic of the weights of a DNN model to reduce the cost of check bits: most weights of a DNN model are close to zero. Therefore, the VECC technique extends the refresh period of DRAMs by using a voting mechanism to protect weights close to zero from retention faults and using an error-correction code (ECC) to protect the remaining weights. To realize the VECC technique, a software-hardware-cooperated built-in self-test (SHC-BIST) scheme is proposed to test the DRAM cells for data retention faults under different refresh periods. Also, a decoding and remapping unit is proposed to decode and remap the encoded weights. Simulation results show that the proposed VECC technique achieves up to 93.7% refresh power saving for four typical DNN models, with an inference accuracy loss of less than 0.5% and a check bit overhead of less than 1%. | en_US
dc.subject | 深度神經網路 | zh_TW
dc.subject | 動態隨機記憶體 | zh_TW
dc.subject | 刷新功耗 | zh_TW
dc.subject | 資料壓縮 | zh_TW
dc.subject | 自我測試 | zh_TW
dc.subject | Deep Neural Network | en_US
dc.subject | DRAM | en_US
dc.subject | refresh power | en_US
dc.subject | data compression | en_US
dc.subject | BIST | en_US
dc.title | 應用於深度神經網路系統內動態隨機存取記憶體之錯誤糾正碼式刷新功耗降低技術 | zh_TW
dc.language.iso | zh-TW | zh-TW
dc.title | ECC-Based Refresh Power Reduction Technique for DRAMs of Deep Neural Network Systems | en_US
dc.type | 博碩士論文 | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
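The hybrid protection scheme the abstracts describe (majority voting for near-zero weights, ECC for the rest) can be illustrated with a minimal sketch. Everything below is illustrative only, not the thesis design: replication plus bitwise majority voting stands in for the voting mechanism, and a Hamming(7,4) single-error-correcting code stands in for the ECC; the thesis's actual codes, word widths, and thresholds are not specified here.

```python
# Illustrative sketch of the two protection mechanisms named in the
# abstract. Near-zero weights: replicate the bits and recover by
# bitwise majority vote. Other weights: Hamming(7,4) SEC code.

def majority_vote(copies):
    """Recover a bit string by bitwise majority over replicated copies,
    masking a retention fault in a minority of the copies."""
    n = len(copies[0])
    return ''.join(
        '1' if sum(c[i] == '1' for c in copies) > len(copies) // 2 else '0'
        for i in range(n)
    )

def hamming74_encode(nibble):
    """Encode a 4-bit string into a 7-bit Hamming(7,4) codeword
    laid out as [p1, p2, d1, p3, d2, d3, d4]."""
    d = [int(b) for b in nibble]
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(codeword):
    """Correct a single bit error (e.g. one retention fault) and
    return the 4 data bits as a string."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the faulty bit
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the faulty bit back
    return ''.join(str(b) for b in (c[2], c[4], c[5], c[6]))
```

For example, `majority_vote(['1010', '1010', '0010'])` recovers `'1010'` even though one copy is corrupted, and flipping any single bit of `hamming74_encode('1011')` is corrected by `hamming74_decode`. The trade-off the abstract points at is visible even in this toy: voting costs whole extra copies but needs no decoder logic, while the ECC costs only 3 check bits per 4 data bits but needs syndrome computation.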
