DC Field | Value | Language |
dc.contributor | 統計研究所 | zh_TW |
dc.creator | 沈珈萱 | zh_TW |
dc.creator | Jia-Xuan Shen | en_US |
dc.date.accessioned | 2024-07-09T07:39:07Z | |
dc.date.available | 2024-07-09T07:39:07Z | |
dc.date.issued | 2024 | |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=111225014 | |
dc.contributor.department | 統計研究所 | zh_TW |
dc.description | 國立中央大學 | zh_TW |
dc.description | National Central University | en_US |
dc.description.abstract | 聯邦學習 (Federated Learning) 是一種基於多個本地客戶端的本地資料集的去中心化機器學習演算法,無需明確交換資料樣本。一般來說,它涉及在本地資料樣本上訓練本地模型,並透過產生所有客戶端共享的雲端模型在這些本地客戶端之間交換參數。聯邦學習最初的目的是在異質資料集上進行訓練。主成分分析 (Principal Component Analysis) 是高維度資料常用的降維工具,是利用少數且顯著的特徵來描述原始資料的一個統計方法。在本文中,我們將展示如何透過隱私差分 (Differential Privacy) 的方法共享本地資料,以優化主成分分析並同時確保隱私。此外,我們也給出數值研究和一些理論分析。 | zh_TW |
dc.description.abstract | Federated learning is a decentralized machine learning algorithm that trains on local datasets held by multiple client devices without explicitly exchanging data samples. In general, each client trains a model on its local data samples, and a global server exchanges parameters among these clients. Federated learning was originally aimed at training on heterogeneous datasets. Principal Component Analysis (PCA) is a commonly used dimension-reduction tool for high-dimensional data. In this article, we demonstrate a PCA method in federated learning: local data are shared through a differential privacy approach so that privacy is ensured. Moreover, numerical studies and theoretical analyses are provided. | en_US |
dc.subject | 高維度資料 | zh_TW |
dc.subject | 聯邦學習 | zh_TW |
dc.subject | 隱私差分 | zh_TW |
dc.subject | 主成分分析 | zh_TW |
dc.title | 隱私差分應用於聯邦式水平分割主成分分析 | zh_TW |
dc.language.iso | zh-TW | zh-TW |
dc.title | Federated horizontally partitioned principal component analysis with differential privacy | en_US |
dc.type | 博碩士論文 | zh_TW |
dc.type | thesis | en_US |
dc.publisher | National Central University | en_US |
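The abstracts above describe federated PCA on horizontally partitioned data with differential privacy. The thesis itself is not reproduced here, so the following is only a minimal sketch of one standard way such a scheme can work, assuming the Gaussian mechanism applied to per-client sample covariance matrices (the function names, the unit-row-norm preprocessing, and the noise calibration are illustrative assumptions, not the author's actual method):

```python
import numpy as np

def local_noisy_covariance(X, epsilon, delta, rng):
    """Client step: normalize rows to unit norm, form the local sample
    covariance X^T X / n, and perturb it with symmetric Gaussian noise.
    With unit-norm rows, one row changes X^T X / n by at most ~1/n in
    Frobenius norm, which calibrates the noise scale (Gaussian mechanism)."""
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    n, d = X.shape
    cov = X.T @ X / n
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / (epsilon * n)
    noise = rng.normal(0.0, sigma, size=(d, d))
    noise = (noise + noise.T) / 2.0  # keep the perturbed matrix symmetric
    return cov + noise, n

def federated_dp_pca(datasets, k, epsilon=1.0, delta=1e-5, seed=0):
    """Server step: average the noisy local covariances, weighted by client
    sample sizes, then return the top-k eigenvectors as principal directions.
    Only noisy covariance matrices leave the clients, not raw samples."""
    rng = np.random.default_rng(seed)
    covs, ns = zip(*(local_noisy_covariance(X, epsilon, delta, rng)
                     for X in datasets))
    total = sum(ns)
    global_cov = sum((n / total) * C for C, n in zip(covs, ns))
    vals, vecs = np.linalg.eigh(global_cov)   # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order]                     # d x k matrix of directions

# Hypothetical usage: three clients holding horizontal splits of 5-dim data.
rng = np.random.default_rng(1)
datasets = [rng.normal(size=(200, 5)) for _ in range(3)]
components = federated_dp_pca(datasets, k=2)
```

Horizontal partitioning here means every client holds full feature vectors for a disjoint subset of samples, which is why the local covariances share one coordinate system and can simply be averaged.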