Thesis 107521030: full metadata record

DC Field: Value (Language)
dc.contributor: 電機工程學系 (zh_TW)
dc.creator: 蔡永聿 (zh_TW)
dc.creator: Yung-Yu Tsai (en_US)
dc.date.accessioned: 2021-08-27T07:39:07Z
dc.date.available: 2021-08-27T07:39:07Z
dc.date.issued: 2021
dc.identifier.uri: http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=107521030
dc.contributor.department: 電機工程學系 (zh_TW)
dc.description: 國立中央大學 (zh_TW)
dc.description: National Central University (en_US)
dc.description.abstract: 深度神經網路(DNN)已被廣泛使用於人工智慧的應用上,其中有些應用是對於安全性敏感的應用,可靠性是設計安全性敏感電子產品的重要指標。儘管DNN被認為具有固有的錯誤容忍能力,然而帶有運算簡化技術的DNN加速器與硬體錯誤,可能會大幅減低DNN的錯誤容忍能力。在此篇論文中,我們提出用於評估深度神經網路與加速器之錯誤容忍度模擬器,能以軟體與硬體的角度探討DNN錯誤容忍度的能力。模擬器架構基於TensorFlow與Keras,使用張量(tensor)運算整合量化(quantization)及植入錯誤(fault injection)的函式庫,可以適用於各式的DNN層。因此,模擬器可以協助使用者在加速器設計規劃階段分析錯誤容忍能力,並最佳化DNN模型與加速器。我們評估了各式DNN模型之錯誤容忍能力,DNN模型層數為4至50層,其中量化設定為8位元或16位元的定點數,並且將準確率損失維持在1%以下。在加速器的錯誤容忍度評估中,被測試之緩衝記憶體大小區間是19.2KB到904KB,以及運算單元陣列大小8×8到32×32。分析結果顯示加速器不同元件間的錯誤容忍度能力差異巨大,容忍度高的元件可以多承受數個數量級的錯誤,然而脆弱的元件則是會因為少數關鍵錯誤而大幅影響準確率。根據模擬結果我們歸納出幾個關鍵錯誤的成因,使錯誤修復機制可以針對脆弱點設計。我們在Xilinx ZCU-102 FPGA上實作了8×8脈動陣列(systolic array)的加速器推論LeNet-5模型,模擬器與FPGA之間的平均誤差在6.3%以下。 (zh_TW)
dc.description.abstract: Deep neural networks (DNNs) have been widely used for artificial intelligence applications, some of which are safety-critical. Reliability is a key metric when designing electronic systems for safety-critical applications. Although DNNs have an inherent fault-tolerance capability, the computation-reduction techniques used in DNN accelerators, combined with hardware faults, can drastically reduce that capability. In this thesis, we propose a simulator for evaluating the fault-tolerance capability of DNN models and accelerators; it can evaluate the fault tolerance of DNNs at both the software and hardware levels. The proposed simulator is built on the TensorFlow and Keras frameworks. We implement tensor-operation-based quantization and fault-injection libraries that scale to different types of DNN layers. Designers can use the simulator to analyze fault-tolerance capability during the design phase, so that the reliability of DNN models and accelerators can be optimized. We analyze the fault-tolerance capability of a wide range of DNNs with 4 to 50 layers. The data is quantized to 8-bit or 16-bit fixed point with an accuracy drop under 1%. Accelerators with on-chip memory from 19.3KB to 904KB and PE-array sizes from 8×8 to 32×32 are simulated. Analysis results show that the fault-tolerance capability differs greatly between the components of a DNN accelerator: robust components can tolerate several orders of magnitude more faults, while a few critical faults in weaker components can drastically degrade the inference accuracy. We identify several causes of critical faults on which fault-mitigation resources can focus. We also implement an accelerator with an 8×8 systolic array on a Xilinx ZCU-102 FPGA running the LeNet-5 model; the average error between the FPGA and simulator results is within 6.3%. (en_US)
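The abstract describes tensor-operation-based quantization and fault-injection libraries. As a rough illustration of the underlying idea only (this is not the thesis's actual library; every function name and parameter below is a hypothetical placeholder), a minimal NumPy sketch of fixed-point quantization followed by single-bit-flip injection could look like this:

```python
import numpy as np

def quantize_fixed_point(x, word_len=8, frac_len=6):
    """Quantize a float tensor to signed fixed point (word_len bits total,
    frac_len fractional bits), returning the integer representation."""
    scale = 2 ** frac_len
    qmin, qmax = -(2 ** (word_len - 1)), 2 ** (word_len - 1) - 1
    return np.clip(np.round(x * scale), qmin, qmax).astype(np.int32)

def inject_bit_flips(q, word_len=8, fault_rate=1e-3, rng=None):
    """Flip one random bit in a random subset of fixed-point words,
    modelling single-bit faults in an accelerator buffer."""
    rng = rng or np.random.default_rng(0)
    q = q.copy()
    mask = rng.random(q.shape) < fault_rate         # which words are faulty
    bits = rng.integers(0, word_len, size=q.shape)  # which bit position flips
    # Work on the unsigned two's-complement view, flip, then convert back.
    u = q & ((1 << word_len) - 1)
    u = np.where(mask, u ^ (1 << bits), u)
    return np.where(u >= 1 << (word_len - 1), u - (1 << word_len), u)

# Tiny demo on a random 4x4 "weight" tensor.
weights = np.random.default_rng(1).normal(0.0, 0.5, size=(4, 4))
q = quantize_fixed_point(weights)
faulty = inject_bit_flips(q, fault_rate=0.25)
print(np.count_nonzero(faulty != q), "words corrupted")
```

A tensor-level formulation like this applies uniformly to any layer's weights or activations, which is presumably what makes the approach scalable across DNN layer types; the real simulator additionally maps faults to specific accelerator structures (buffers, PE array) rather than flipping bits uniformly at random.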
dc.subject: 深度神經網路 (zh_TW)
dc.subject: 錯誤容忍度 (zh_TW)
dc.subject: 硬體加速器 (zh_TW)
dc.subject: Deep Neural Network (en_US)
dc.subject: Fault-Tolerance Capability (en_US)
dc.subject: Hardware Accelerator (en_US)
dc.title: 用於評估深度神經網路與加速器之錯誤容忍度模擬器 (zh_TW)
dc.language.iso: zh-TW (zh-TW)
dc.title: A Simulator for Evaluating the Fault-Tolerance Capability of Deep Neural Networks and Accelerators (en_US)
dc.type: 博碩士論文 (zh_TW)
dc.type: thesis (en_US)
dc.publisher: National Central University (en_US)
