Thesis/Dissertation 108522602: Full Metadata Record

DC Field | Value | Language
DC.contributor | 資訊工程學系 (Department of Computer Science and Information Engineering) | zh_TW
DC.creator | 莫拉那 | zh_TW
DC.creator | Muhamad Rizka Maulana | en_US
dc.date.accessioned | 2021-09-06T07:39:07Z
dc.date.available | 2021-09-06T07:39:07Z
dc.date.issued | 2021
dc.identifier.uri | http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=108522602
dc.contributor.department | 資訊工程學系 (Department of Computer Science and Information Engineering) | zh_TW
DC.description | 國立中央大學 | zh_TW
DC.description | National Central University | en_US
dc.description.abstract | 神經網路演算法以高準確性聞名,被大量運用於解決各個領域的諸多問題。憑藉著在各種應用中的優良表現,將神經網路應用於交換器中的作法可以帶來許多好處,同時也是一個具有前瞻性的概念。隨著可編程資料平面語言P4的出現,將神經網路部署於交換器中變成了可能。然而,當前的P4交換器在實現複雜功能方面仍然存在許多限制,如記憶體大小及有限的指令數量。此外,眾所周知神經網路的計算成本較高,通常需要複雜的神經網路架構才能實現良好的準確度;但在架構複雜的情況下,將會影響P4交換器轉發封包的效率。因此,在P4交換器中實現神經網路演算法的同時,如何將其帶來的效能損失最大幅度減少,是一件至關重要的事。本論文提出NNSplit的技術,用來解決將神經網路中的隱藏層部署於多個P4交換器所帶來的效能損失問題。為了支援此做法,本論文同時提出稱為SØREN的網路協定,透過SØREN協定讓P4交換器在轉送封包的過程中,同時傳遞神經網路運算所需的激活值。本論文使用Mininet與BMv2實作,將此技術應用於分類多種不同的流量。根據實驗結果,NNSplit可減少將近50%的記憶體使用量並提高整體的吞吐量,而僅增加14%的延遲。此外,在封包中加入SØREN協定對整體的處理時間影響不大,僅213微秒。總體而言,本論文所提出的方法可以讓大型的神經網路模型應用於P4交換器中,且只帶來些微的效能損耗。 | zh_TW
dc.description.abstract | Neural network algorithms are known for their high accuracy and are heavily used to solve problems in many fields. Given their proven capability across a wide range of tasks, embedding neural network algorithms in the data plane is an appealing and promising option, made possible by the emergence of the P4 language for programming the data plane. However, current data plane technology still imposes many constraints on implementing complex functions: most data planes have limited memory and a limited set of operations. In addition, neural networks are widely known to be computationally expensive; a complex architecture is generally required to achieve high accuracy, yet such complexity degrades the data plane's primary function of forwarding packets. Minimizing the performance cost of implementing neural network algorithms in the data plane is therefore critical. This thesis proposes a technique called NNSplit that addresses this performance issue by splitting the neural network's layers across several data planes, distributing the computational burden among them. To support layer splitting, a new protocol called SØREN is also proposed: the SØREN header carries the activation values, bridging the neural network layers across switches. Our implementation considers multi-class traffic classification as a use case. Experiments using Mininet and BMv2 show that NNSplit reduces memory usage by almost 50% and increases throughput compared to the non-splitting scenario, at the cost of a small (14%) additional delay. Moreover, adding the SØREN header to a packet increases processing time by only 213 µs. These results suggest that our method can support a large neural network model in the data plane at a small performance cost. | en_US
DC.subject | Programmable Data Plane | zh_TW
DC.subject | P4 Language | zh_TW
DC.subject | Neural Networks | zh_TW
DC.subject | Traffic Classification | zh_TW
DC.subject | Programmable Data Plane | en_US
DC.subject | P4 Language | en_US
DC.subject | Neural Networks | en_US
DC.subject | Traffic Classification | en_US
DC.title | On Supporting Large Neural Networks Model Implementation in Programmable Data Plane | en_US
dc.language.iso | en_US | en_US
DC.type | 博碩士論文 | zh_TW
DC.type | thesis | en_US
DC.publisher | National Central University | en_US
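The core idea from the abstract, partitioning a neural network's hidden layers across several switches, with each switch forwarding its activation values to the next (the role the SØREN header plays in a packet), can be sketched in plain Python. Everything below (layer shapes, the stage partition, all function names) is an illustrative assumption, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# A small illustrative MLP: input 8 -> hidden 16 -> hidden 16 -> output 4.
weights = [rng.standard_normal((8, 16)),
           rng.standard_normal((16, 16)),
           rng.standard_normal((16, 4))]

def forward_monolithic(x):
    """Run the whole network on a single device (the non-splitting scenario)."""
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]

def forward_split(x, stages):
    """Run the same network stage by stage, NNSplit-style.

    `stages` partitions the layer list across hypothetical switches; the
    activation vector handed from one stage to the next is what a
    SØREN-like header would carry inside the packet.
    """
    for i, stage in enumerate(stages):
        for j, w in enumerate(stage):
            is_output = (i == len(stages) - 1) and (j == len(stage) - 1)
            x = x @ w if is_output else relu(x @ w)
        # here `x` would be serialized into the header and forwarded
    return x

x = rng.standard_normal(8)
stages = [weights[:1], weights[1:]]  # layer 1 on switch A, layers 2-3 on switch B
assert np.allclose(forward_monolithic(x), forward_split(x, stages))
```

The sketch only shows why splitting preserves the model's output: each switch needs just its own layers' weights (hence the memory saving the abstract reports), while correctness depends solely on the activations being relayed intact between stages.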
