NCU Institutional Repository - Item 987654321/86799


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86799


    Title: On Supporting Large Neural Networks Model Implementation in Programmable Data Plane
    Authors: Maulana, Muhamad Rizka
    Contributors: Department of Computer Science and Information Engineering (資訊工程學系)
    Keywords: Programmable Data Plane; P4 Language; Neural Networks; Traffic Classification
    Date: 2021-09-06
    Uploaded: 2021-12-07 13:13:58 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: Neural network algorithms are known for their high accuracy and are used heavily to solve problems in many fields. Given their proven capability across a variety of tasks, embedding neural networks in the data plane is an appealing and promising option, made possible by the emergence of the P4 language for programming the data plane. However, current data-plane technology still imposes many constraints on implementing complex functions: most data planes have limited memory and a limited set of operations. In addition, neural networks are computationally expensive, and a complex architecture is generally required to achieve high accuracy; such an architecture, in turn, degrades the data plane's primary function of packet forwarding. Minimizing the performance cost of implementing neural networks in the data plane is therefore critical. This thesis proposes a technique called NNSplit, which addresses the performance issue by splitting a neural network's layers across several data planes, distributing the computational burden among them. To support layer splitting, a new protocol called S?REN is also proposed; the S?REN header carries the activation values and bridges the neural network layers across all switches.
In our implementation, we consider a multi-class traffic-classification use case. Experiments using Mininet and BMv2 show that NNSplit reduces memory usage by almost 50% and increases throughput compared to the non-splitting scenario, at the cost of a small additional delay of 14%. Adding the S?REN header to packets has only a small impact on processing time, about 213 μs. These results suggest that our method can support large neural network models in the data plane with a small performance cost.
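The core idea described in the abstract can be illustrated with a small sketch. This is not the thesis code (which targets P4 on BMv2); it is a hypothetical Python model of the splitting principle: a feed-forward network's layers are partitioned across "switches", and each switch hands its activation vector to the next hop, standing in for what the S?REN header would carry between P4 data planes. The `Switch` class and function names are illustrative assumptions, not names from the thesis.

```python
def relu(vec):
    # Element-wise ReLU activation.
    return [max(0.0, x) for x in vec]

def dense(vec, weights, biases):
    # One fully connected layer: `weights` is a list of rows,
    # one row (plus one bias) per output neuron.
    return [sum(w * x for w, x in zip(row, vec)) + b
            for row, b in zip(weights, biases)]

class Switch:
    """Models one data plane hosting a slice of the network's layers."""
    def __init__(self, layers):
        self.layers = layers  # list of (weights, biases) pairs

    def process(self, activation):
        # Apply this switch's layer slice; the returned vector is what
        # a S?REN-style header would carry to the next switch.
        for weights, biases in self.layers:
            activation = relu(dense(activation, weights, biases))
        return activation

def forward_through_path(switches, features):
    # Chain the packet's feature vector through each switch's slice in order.
    activation = features
    for sw in switches:
        activation = sw.process(activation)
    return activation
```

Because each switch simply resumes the forward pass from the carried activation vector, the split pipeline computes exactly the same output as one switch holding all layers; only the per-switch memory and computation shrink, which matches the memory-reduction claim in the abstract.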
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering (資訊工程研究所)] Doctoral and Master's Theses

    Files in This Item:

    File        Description    Size    Format    Views
    index.html                 0Kb     HTML      89


    All items in NCUIR are protected by copyright, with all rights reserved.

