NCU Institutional Repository (中大機構典藏): Item 987654321/86906
RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86906


    Title: 一個有效的邊緣智慧運算加速器設計: 一種適用於深度可分卷積的可重組式架構;An Efficient Accelerator Design for Edge AI: A Reconfigurable Structure for Depthwise Separable Convolution
    Authors: 江鴻儀;Chiang, Hung-Yi
    Contributors: 電機工程學系;Department of Electrical Engineering
    Keywords: 人工智慧加速器;可重組架構;輕量化網路;AI accelerator;Reconfigurable Structure;MobileNets
    Date: 2021-10-26
    Issue Date: 2021-12-07 13:24:52 (UTC+8)
    Publisher: 國立中央大學;National Central University
    Abstract: 卷積神經網絡(convolution neural network)已廣泛應用於電腦視覺任務(computer vision tasks)的領域,然而標準的神經網絡需要大量的運算和參數,這對嵌入式設備而言是個挑戰。因此前人提出了一種新穎的神經網路架構MobileNets,MobileNets採用深度可分離卷積(depthwise separable convolution)代替標準卷積,使其運算量和參數大幅減少且精度損失有限。而MobileNets中主要有兩種不同的計算方法pointwise和depthwise,如果用傳統的加速器來計算這兩種不同的運算,會因為運算參數和方式的不同而造成硬體利用率低下。除此之外,常見降低神經網路計算負擔的方法還有量化(quantization),其透過減少位寬(bit width)或採用不同位寬來降低計算負荷,但如果用相同精度的硬體來計算不同位寬的資料,則無法有效的節省運算時間。基於MobileNets和量化網路,本文提出了一種可以有效計算量化MobileNets的新型計算架構,以達到加速運算和節省面積的效果。;Convolutional neural networks (CNNs) have been widely applied in computer vision tasks. However, standard neural networks require a large number of operations and parameters, which is a challenge for embedded devices. MobileNets, a novel CNN architecture that adopts depthwise separable convolution in place of standard convolution, substantially reduces operations and parameters with only a limited loss in accuracy. MobileNets relies mainly on two different computation patterns, pointwise and depthwise convolution. If a conventional accelerator performs both of these operations, hardware utilization suffers because their operation parameters and dataflows differ. In addition, quantization is a common technique for reducing the computational load of neural networks by limiting the bit width (or using mixed bit widths); however, hardware of a single fixed precision cannot effectively save computation time when processing data of different bit widths. Based on MobileNets and quantized networks, this thesis proposes a novel reconfigurable architecture that efficiently computes quantized MobileNets, accelerating computation while saving area.
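The cost reduction the abstract attributes to depthwise separable convolution can be illustrated with a simple cost model. This is a sketch only; the layer sizes below are illustrative and not taken from the thesis:

```python
def conv_cost(k, c_in, c_out, h, w):
    """Parameters and multiply-accumulates (MACs) of a standard k x k convolution."""
    params = k * k * c_in * c_out
    macs = params * h * w  # one full kernel application per output pixel
    return params, macs

def ds_conv_cost(k, c_in, c_out, h, w):
    """Cost of a depthwise separable convolution:
    one k x k depthwise filter per input channel, then a 1x1 pointwise conv."""
    dw_params = k * k * c_in   # depthwise stage
    pw_params = c_in * c_out   # pointwise (1x1) stage
    params = dw_params + pw_params
    macs = params * h * w
    return params, macs

# Illustrative layer: 3x3 kernel, 32 -> 64 channels, 112x112 feature map
std_params, std_macs = conv_cost(3, 32, 64, 112, 112)
ds_params, ds_macs = ds_conv_cost(3, 32, 64, 112, 112)

# The reduction ratio is 1/c_out + 1/k^2, roughly 8x fewer operations here
print(std_params, ds_params, ds_params / std_params)
```

For a 3x3 kernel the ratio works out to about 1/c_out + 1/9, i.e. close to a ninefold saving for wide layers, which is the source of the "substantially reduced operations and parameters" claim.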
    Appears in Collections:[Graduate Institute of Electrical Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html | Size: 0Kb | Format: HTML | Views: 105


    All items in NCUIR are protected by copyright, with all rights reserved.

