NCU Institutional Repository — theses, past exam papers, journal articles, and research projects: Item 987654321/89926


    Please use this permanent URL to cite or link to this document: http://ir.lib.ncu.edu.tw/handle/987654321/89926


    Title: 聯邦學習中基於剪枝與分群之減輕通訊及運算成本方法 (A Communication and Computation Reduction Approach Based on Pruning and Clustering for Federated Learning)
    Author: Chen, Tsung-Chi (陳宗琪)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Cost Reduction; Federated Learning; Machine Learning; Neural Network Pruning; Weight Clustering
    Date: 2022-08-10
    Uploaded: 2022-10-04 12:04:59 (UTC+8)
    Publisher: National Central University
    Abstract: Owing to the rapid development of deep learning, software services based on this technology are growing quickly; however, privacy concerns prevent the data used for model training from being shared. To address this problem, federated learning (FL), a machine learning framework, was proposed: each client's own device participates in training, using its local data to train the model, and only the parameters of the trained model are sent back to the server. Although this approach protects data privacy, it increases the communication cost of exchanging model parameters and the computational cost of model training on the device. Since these additional costs may reduce users' willingness to participate, reducing them within this framework is an important issue.
    We propose a new, efficient FL framework called FedHPC (Federated Learning Framework with Hybrid of Pruning and Clustering) that reduces both communication and computational costs by combining the advantages of neural network pruning and weight clustering: the model gradually shrinks during training, and its parameters are represented with a smaller set of distinct values. After compression by these two techniques, the model's storage size is drastically reduced, which in turn lowers communication traffic during federated training. This work evaluates FedHPC on popular neural network models (a CNN and VGG11) with publicly available datasets (ISCXVPN2016 and CIFAR-10). The experimental results show that the proposed framework reduces the size of a single model transfer during training by up to 95.0%, and that shrinking the model through network pruning reduces per-round computation time by up to 61.8% on average.
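The pruning-plus-clustering compression described in the abstract can be illustrated with a small sketch. This is an illustrative approximation, not the thesis's actual FedHPC implementation: magnitude-based pruning zeroes the smallest-magnitude fraction of the weights, then a simple one-dimensional k-means clusters the surviving weights so each is replaced by its cluster centroid, leaving only a short codebook of centroids plus per-weight indices to transmit. All function and parameter names here are hypothetical.

```python
import numpy as np

def prune_and_cluster(w, prune_ratio=0.5, n_clusters=8, n_iter=20):
    """Illustrative sketch: magnitude-prune a weight tensor, then
    cluster the surviving weights with a simple 1-D k-means."""
    flat = w.flatten()  # copy; the original tensor is untouched

    # Pruning: zero out the smallest-magnitude fraction of weights.
    k = int(len(flat) * prune_ratio)
    if k > 0:
        thresh = np.sort(np.abs(flat))[k - 1]
        flat = np.where(np.abs(flat) <= thresh, 0.0, flat)

    # Weight clustering: k-means over the surviving (nonzero) weights.
    nz = flat[flat != 0]
    centroids = np.linspace(nz.min(), nz.max(), n_clusters)  # spread init
    for _ in range(n_iter):
        # Assign each weight to its nearest centroid, then recenter.
        assign = np.argmin(np.abs(nz[:, None] - centroids[None, :]), axis=1)
        for c in range(n_clusters):
            if np.any(assign == c):
                centroids[c] = nz[assign == c].mean()

    # Replace each surviving weight by its centroid value; only the
    # centroid codebook and per-weight indices would need transmitting.
    assign = np.argmin(np.abs(nz[:, None] - centroids[None, :]), axis=1)
    flat[flat != 0] = centroids[assign]
    return flat.reshape(w.shape), centroids
```

Sending a small integer index per surviving weight plus a handful of float centroids, rather than a full float per weight, is what shrinks the per-round payload; the pruned zeros need only a sparse mask.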
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Theses & Dissertations

    Files in This Item:

    File         Description    Size    Format    Views
    index.html                  0 KB    HTML      18


    All items in NCUIR are protected by copyright.


    Copyright © National Central University / National Central University Library. | DSpace Software Copyright © 2002-2004 MIT & Hewlett-Packard / Enhanced by NTU Library IR team | Privacy Policy