    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/89926


    Title: A Communication and Computation Reduction Approach Based on Pruning and Clustering for Federated Learning
    Author: Chen, Tsung-Chi (陳宗琪)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: Cost Reduction; Federated Learning; Machine Learning; Neural Network Pruning; Weight Clustering
    Date: 2022-08-10
    Date Uploaded: 2022-10-04 12:04:59 (UTC+8)
    Publisher: National Central University
    Abstract: Due to the rapid development of deep learning, software services built on this technology are growing quickly. However, the data used for model training often cannot be shared because of privacy concerns. To address this problem, federated learning (FL), a machine learning framework, has been proposed: clients' own devices participate in training, using their local data to help train the model, and only the parameters of the trained model are sent back to the server. Although this approach protects data privacy, it increases the communication cost of exchanging model parameters and the computational cost of model training on the device. Since these additional costs may reduce users' willingness to participate, reducing them within this framework is an important issue.
    To reduce both the communication and the computational cost, this thesis proposes FedHPC (Federated Learning Framework with Hybrid of Pruning and Clustering), which combines the advantages of neural network pruning and weight clustering: the model gradually shrinks during training, and its parameters are represented with a smaller set of shared values. After compression by these two techniques, the model's storage footprint is drastically reduced, which in turn lowers the communication cost of federated training. This work evaluates FedHPC on a CNN model with the ISCXVPN2016 dataset and on VGG11 with CIFAR-10. The experimental results show that the proposed framework effectively reduces the communication cost, shrinking a single model transfer during training by up to 95.0%; in addition, the model shrinkage from neural network pruning reduces per-round computation time by up to 61.8% on average.
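    To make the two compression steps concrete, here is a minimal sketch of magnitude-based pruning followed by one-dimensional k-means weight clustering applied to a client's model update. This illustrates the general techniques only, not the thesis's actual FedHPC implementation; the function names, the 90% sparsity level, and the 16-value codebook size are hypothetical choices.

```python
# Illustrative sketch only: magnitude pruning followed by weight clustering,
# the two compression steps that FedHPC-style frameworks combine.
# All names and hyperparameters below are hypothetical.
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights; sparsity=0.9 keeps ~10%."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def cluster_weights(weights: np.ndarray, n_clusters: int = 16, iters: int = 20):
    """Quantize the nonzero weights to n_clusters shared values with 1-D
    k-means, so each weight can be sent as a small index into a codebook."""
    nonzero = weights[weights != 0]
    # Initialize centroids evenly across the weight range.
    centroids = np.linspace(nonzero.min(), nonzero.max(), n_clusters)
    for _ in range(iters):
        # Assign each weight to its nearest centroid, then recompute means.
        idx = np.abs(nonzero[:, None] - centroids[None, :]).argmin(axis=1)
        for k in range(n_clusters):
            members = nonzero[idx == k]
            if members.size:
                centroids[k] = members.mean()
    # Snap every surviving weight to its centroid; pruned zeros stay zero.
    quantized = weights.copy()
    mask = weights != 0
    quantized[mask] = centroids[
        np.abs(weights[mask][:, None] - centroids[None, :]).argmin(axis=1)
    ]
    return quantized, centroids

# A client update compressed before upload: 90% sparsity, 16 shared values.
update = np.random.randn(10_000).astype(np.float32)
pruned = prune_by_magnitude(update, sparsity=0.9)
quantized, codebook = cluster_weights(pruned, n_clusters=16)
print(f"nonzero weights: {np.count_nonzero(quantized)}, "
      f"distinct values: {np.unique(quantized[quantized != 0]).size}")
```

    With a 16-entry codebook, each surviving weight can be uploaded as a 4-bit index plus the small shared codebook rather than a 32-bit float, which is how clustering cuts per-transfer size; pruning removes weights outright, which also shortens computation.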
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in This Item:

    index.html (0 KB, HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.
