With the rapid development of deep learning, software services built on this technology are growing quickly, but privacy concerns prevent the data used for model training from being shared. To address this problem, federated learning (FL), a machine learning framework, has been proposed: it lets clients' own devices participate in training, using local data to help train the model, and only the parameters of the trained model are sent back to the server. Although this approach protects data privacy, it imposes additional communication cost for exchanging model parameters and computational cost for model training on the client devices. Since these extra costs may reduce users' willingness to participate, reducing them within this framework is an important issue.

To reduce both communication and computational costs, this thesis proposes FedHPC (Federated Learning Framework with Hybrid of Pruning and Clustering), a mechanism that combines the respective advantages of neural network pruning and weight clustering, so that the model gradually shrinks during training and its parameters are represented with a smaller set of values. After both compression techniques are applied, the model's storage size is drastically reduced, which in turn lowers the communication cost of federated training. FedHPC is evaluated on a CNN model with the ISCXVPN2016 dataset and on a VGG11 model with the CIFAR-10 dataset. The experimental results show that the proposed framework effectively reduces the communication cost, shrinking the model transmitted in a single round by up to 95.0%, and the smaller model obtained through pruning reduces the per-round computation time by up to 61.4% on average.
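To make the two compression steps named in the abstract concrete, the following is a minimal sketch, not the thesis's actual FedHPC implementation: it applies magnitude pruning and then a simple 1-D k-means weight clustering to a single weight matrix, using only NumPy. All function names, the sparsity level, and the number of clusters are illustrative assumptions.

```python
# Hypothetical illustration of pruning + weight clustering (not FedHPC itself).
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights; `sparsity` is the fraction removed."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def cluster_weights(weights, n_clusters=16, n_iters=20):
    """Replace each nonzero weight by its nearest of n_clusters shared centroids
    (simple 1-D k-means), so a layer can be stored as small indices plus a codebook."""
    nonzero = weights[weights != 0]
    if nonzero.size == 0:
        return weights, np.zeros(n_clusters)
    # Initialize centroids evenly across the nonzero weight range.
    centroids = np.linspace(nonzero.min(), nonzero.max(), n_clusters)
    for _ in range(n_iters):
        assign = np.argmin(np.abs(nonzero[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            members = nonzero[assign == k]
            if members.size:
                centroids[k] = members.mean()
    quantized = weights.copy()
    idx = np.argmin(np.abs(nonzero[:, None] - centroids[None, :]), axis=1)
    quantized[weights != 0] = centroids[idx]
    return quantized, centroids

# Usage: compress one layer before it would be uploaded to the server.
rng = np.random.default_rng(0)
layer = rng.normal(size=(256, 128)).astype(np.float32)
pruned, mask = magnitude_prune(layer, sparsity=0.8)
compressed, codebook = cluster_weights(pruned, n_clusters=16)
print("unique nonzero values after clustering:", np.unique(compressed[compressed != 0]).size)
```

After these two steps, the layer can be transmitted as a sparse index structure plus a small codebook of shared values rather than full-precision floats, which is the general mechanism by which the abstract's communication savings arise.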