DC Field | Value | Language |
dc.contributor | 電機工程學系 | zh_TW |
dc.creator | 蔡永聿 | zh_TW |
dc.creator | Yung-Yu Tsai | en_US |
dc.date.accessioned | 2021-08-27T07:39:07Z | |
dc.date.available | 2021-08-27T07:39:07Z | |
dc.date.issued | 2021 | |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=107521030 | |
dc.contributor.department | 電機工程學系 | zh_TW |
dc.description | 國立中央大學 | zh_TW |
dc.description | National Central University | en_US |
dc.description.abstract | 深度神經網路(DNN)已被廣泛使用於人工智慧的應用上,其中有些應用是對於安全性敏感的應用,可靠性是設計安全性敏感電子產品的重要指標。儘管DNN被認為具有固有的錯誤容忍能力,然而帶有運算簡化技術的DNN加速器與硬體錯誤,可能會大幅減低DNN的錯誤容忍能力。在此篇論文中,我們提出用於評估深度神經網路與加速器之錯誤容忍度模擬器,能以軟體與硬體的角度探討DNN錯誤容忍度的能力。模擬器架構基於TensorFlow與Keras,使用張量(tensor)運算整合量化(quantization)及植入錯誤(fault injection)的函式庫,可以適用於各式的DNN層。因此,模擬器可以協助使用者在加速器設計規劃階段分析錯誤容忍能力,並最佳化DNN模型與加速器。我們評估了各式DNN模型之錯誤容忍能力,DNN模型層數為4至50層,其中量化設定為8位元或16位元的定點數,並且將準確率損失維持在1%以下。在加速器的錯誤容忍度評估中,被測試之緩衝記憶體大小區間是19.2KB到904KB,以及運算單元陣列大小8×8到32×32。分析結果顯示加速器不同元件間的錯誤容忍度能力差異巨大,容忍度高的元件可以多承受數個數量級的錯誤,然而脆弱的元件則是會因為少數關鍵錯誤而大幅影響準確率。根據模擬結果我們歸納出幾個關鍵錯誤的成因,使錯誤修復機制可以針對脆弱點設計。我們在Xilinx ZCU-102 FPGA上實作了8×8脈動陣列(systolic array)的加速器推論LeNet-5模型,模擬器與FPGA之間的平均誤差在6.3%以下。 | zh_TW |
dc.description.abstract | Deep neural networks (DNNs) have been widely used for artificial intelligence applications, some of which are safety-critical. Reliability is a key metric for designing electronic systems for safety-critical applications. Although DNNs have an inherent fault-tolerance capability, the computation-reduction techniques used in DNN accelerators, combined with hardware faults, can drastically reduce that capability. In this thesis, we propose a simulator for evaluating the fault-tolerance capability of DNN models and accelerators; it evaluates DNN fault tolerance at both the software and hardware levels. The proposed simulator is built on the TensorFlow and Keras frameworks. We implement tensor-operation-based libraries for quantization and fault injection that scale to different types of DNN layers. Designers can use the simulator to analyze fault-tolerance capability in the design phase, so that the reliability of DNN models and accelerators can be optimized. We analyze the fault-tolerance capability of a wide range of DNNs with 4 to 50 layers. The data are quantized to 8-bit or 16-bit fixed-point with an accuracy drop under 1%. Accelerators with on-chip memory from 19.3KB to 904KB and PE-array sizes from 8×8 to 32×32 are simulated. Analysis results show that the fault-tolerance capability differs greatly among the components of a DNN accelerator: stronger components can tolerate several orders of magnitude more faults, while a few critical faults in weaker components can drastically degrade inference accuracy. We identify several causes of critical faults on which fault-mitigation resources can focus. We also implement an accelerator with an 8×8 systolic array on a Xilinx ZCU-102 FPGA running the LeNet-5 model; the average error between the FPGA and simulator results is within 6.3%. | en_US |
dc.subject | 深度神經網路 | zh_TW |
dc.subject | 錯誤容忍度 | zh_TW |
dc.subject | 硬體加速器 | zh_TW |
dc.subject | Deep Neural Network | en_US |
dc.subject | Fault-Tolerance Capability | en_US |
dc.subject | Hardware Accelerator | en_US |
dc.title | 用於評估深度神經網路與加速器之錯誤容忍度模擬器 | zh_TW |
dc.language.iso | zh-TW | zh-TW |
dc.title | A Simulator for Evaluating the Fault-Tolerance Capability of Deep Neural Networks and Accelerators | en_US |
dc.type | 博碩士論文 | zh_TW |
dc.type | thesis | en_US |
dc.publisher | National Central University | en_US |
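The abstract describes tensor-operation-based quantization and bit-level fault injection into a quantized DNN. A minimal sketch of that idea, assuming a hypothetical 8-bit signed fixed-point format with 6 fractional bits; the function names and parameters below are illustrative, not the thesis's actual API:

```python
import numpy as np

def quantize(w, frac_bits=6, word_bits=8):
    # Quantize float weights to signed fixed-point integers (hypothetical scheme).
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return np.clip(np.round(w * scale), lo, hi).astype(np.int32)

def dequantize(q, frac_bits=6):
    return q.astype(np.float64) / (1 << frac_bits)

def inject_bit_flip(q, index, bit, word_bits=8):
    # Flip one bit of one quantized value, modeling a single transient fault
    # in a weight buffer. Values are treated as word_bits two's complement.
    faulty = q.copy()
    v = int(faulty[index]) & ((1 << word_bits) - 1)  # view as unsigned word
    v ^= 1 << bit                                    # flip the target bit
    if v >= 1 << (word_bits - 1):                    # re-wrap the sign
        v -= 1 << word_bits
    faulty[index] = v
    return faulty

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=16)
q = quantize(w)
q_faulty = inject_bit_flip(q, index=3, bit=7)  # flip the sign bit of weight 3
# A sign-bit flip shifts the 8-bit word by 128 codes, i.e. 128/2**6 = 2.0
# after dequantization, regardless of the original weight value.
err = np.abs(dequantize(q_faulty) - dequantize(q)).max()
print(err)  # → 2.0
```

Flips in high-order bits perturb a weight by a fixed large magnitude, which is consistent with the abstract's observation that a few critical faults can dominate the accuracy loss, while low-order-bit flips stay within the quantization noise floor.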