

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86770


    Title: A Simulator for Evaluating the Fault-Tolerance Capability of Deep Neural Networks and Accelerators
    Author: Tsai, Yung-Yu (蔡永聿)
    Contributor: Department of Electrical Engineering
    Keywords: Deep Neural Network; Fault-Tolerance Capability; Hardware Accelerator
    Date: 2021-08-27
    Uploaded: 2021-12-07 13:11:48 (UTC+8)
    Publisher: National Central University
    Abstract: Deep neural networks (DNNs) have been widely used for artificial intelligence applications, some of which are safety-critical. Reliability is a key metric for designing electronic systems for safety-critical applications. Although DNNs have inherent fault-tolerance capability, the computation-reduction techniques used in DNN accelerators, together with hardware faults, can drastically reduce that capability. In this thesis, we propose a simulator for evaluating the fault-tolerance capability of DNN models and accelerators at both the software and hardware levels. The simulator is built on the TensorFlow and Keras frameworks; we implement tensor-operation-based libraries for quantization and fault injection that scale to different types of DNN layers. Designers can use the simulator to analyze fault-tolerance capability during the design phase and thereby optimize the reliability of DNN models and accelerators. We analyze the fault-tolerance capability of a wide range of DNNs with 4 to 50 layers, quantized to 8-bit or 16-bit fixed-point with an accuracy drop under 1%. Accelerators with on-chip memory from 19.3 KB to 904 KB and PE-array sizes from 8×8 to 32×32 are simulated.
    Analysis results show that the fault-tolerance capability differs greatly between the parts of a DNN accelerator: stronger parts can tolerate several orders of magnitude more faults, while a few critical faults in weaker parts can drastically degrade inference accuracy. We identify several causes of critical faults on which fault-mitigation resources can focus. We also implement an accelerator with an 8×8 systolic array on a Xilinx ZCU-102 FPGA running the LeNet-5 model; the average error between the FPGA and simulator results is within 6.3%.
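The abstract names two library primitives, quantization and fault injection. The sketch below is an illustrative, dependency-free Python rendering of those two ideas, not the thesis's actual TensorFlow/Keras library: a value is rounded onto a signed fixed-point grid, and a fault is injected by flipping one bit of the stored two's-complement word. The function names and the 8-bit/4-fraction-bit format are assumptions for the example only.

```python
# Illustrative sketch (not the thesis's library): fixed-point quantization
# and single-bit fault injection on the stored word.

def quantize_fixed(x, word_bits=8, frac_bits=4):
    """Round x to a signed fixed-point grid with word_bits total bits,
    frac_bits of them fractional, saturating at the representable range."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    code = max(lo, min(hi, round(x * scale)))
    return code / scale

def flip_bit(x, bit, word_bits=8, frac_bits=4):
    """Flip one bit of the two's-complement word storing quantized x,
    then reinterpret the corrupted word as a fixed-point value."""
    scale = 1 << frac_bits
    code = round(x * scale) & ((1 << word_bits) - 1)  # stored word
    code ^= 1 << bit                                  # inject the fault
    if code >= 1 << (word_bits - 1):                  # restore sign
        code -= 1 << word_bits
    return code / scale

w = quantize_fixed(0.3)    # 0.3125 on the 8-bit, 4-fraction-bit grid
low = flip_bit(w, bit=0)   # LSB flip: small perturbation (0.25)
high = flip_bit(w, bit=6)  # high-order flip: large error (4.3125)
```

A flip of bit *b* moves the stored value by 2^(b − frac_bits), which is why a handful of faults in the most significant bits of weights or buffers can dominate the accuracy loss — consistent with the critical-fault observation above.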
    Appears in Collections: [Graduate Institute of Electrical Engineering] Master's and Doctoral Theses

    Files in This Item:

    File | Description | Size | Format | Views
    index.html |  | 0 KB | HTML | 89 | View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.
