博碩士論文 110522604 完整後設資料紀錄

DC 欄位 語言
DC.contributor資訊工程學系zh_TW
DC.creator潘國勝zh_TW
DC.creatorPHAN QUOC THANGen_US
dc.date.accessioned2024-01-26T07:39:07Z
dc.date.available2024-01-26T07:39:07Z
dc.date.issued2024
dc.identifier.urihttp://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=110522604
dc.contributor.department資訊工程學系zh_TW
DC.description國立中央大學zh_TW
DC.descriptionNational Central Universityen_US
dc.description.abstract在聯合學習(FL)中,參與者的模型更新可能對隱私造成破壞性威脅:攻擊者只要巧妙地充分利用共享的更新,即可將參與者的訓練隱私資料重建至像素等級。差分隱私(DP)作為資料匿名化的標準,正是為因應這種新興威脅而提出;在這種經差分隱私強化的隱私保護聯合學習(PPFL)設定中,傳輸的資訊會經過淨化(即依一定係數剪裁並以噪音擾動),以保護相關方的隱私。儘管 DP 最初是為集中式學習與表格資料而設計,近來它在處理多媒體資料(尤其是影像)的 FL 中獲得越來越多關注。基於梯度的重構攻擊通常以峰值信噪比(PSNR)、結構相似性指數(SSIM)與學習感知影像補丁相似度(LPIPS)等感知相似性指標作為主要評估方法,以暗示感知相似性與隱私洩露之間的相關性。LPIPS 等感知度量建立在深度神經網路(如 AlexNet 與 VGG)之上,旨在模仿人類感知,使度量能夠捕捉兩張影像之間細微的感知相似性與差異,並克服 PSNR 與 SSIM 等傳統度量無法超越影像像素值的限制。然而,由於感知度量建立在人類感知之上,重構攻擊過程中造成的難以察覺的細微差異與損壞是否會影響這些度量,目前仍不得而知。因此,作者認為這可能是一個需要填補的空白。總而言之,據作者所知,在評估使用影像資料的聯合學習框架之隱私洩漏時,對感知指標進行全面分析,以及隱私保護技術 DP 在保護此類設定免受基於梯度的重構攻擊方面的效果如何,目前仍未見相關研究。為此,本論文旨在研究:1. 重構攻擊文獻所採用的感知度量在現實聯合學習框架中的可靠性;2. 一種新型隱私評估方法的可行性,該方法可揭示 SOTA 重構攻擊評估方法中廣泛使用的感知度量 LPIPS 與 PPFL 中分類任務準確性之間的關係;3. 差分隱私保護技術對上述 SOTA 基於梯度的重構攻擊的有效性。zh_TW
dc.description.abstractIn Federated Learning (FL), a participant's model update can potentially be a devastating threat to privacy: by cleverly making full use of the shared updates, an attacker can reconstruct the participant's private training data at the pixel level. Differential Privacy (DP), the norm in data anonymization, was proposed to deal with this emergent threat; in such a DP-fied Privacy-Preserving FL (PPFL) setup, the transmitted information is sanitized (i.e., clipped by a factor and perturbed by noise) to protect the privacy of the parties involved. Though it was originally intended for centralized learning and tabular data, DP has recently gained more and more attention in FL with multimedia data, especially images. Gradient-based reconstruction attacks typically utilize perceptual similarity metrics such as the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM), and the Learned Perceptual Image Patch Similarity (LPIPS) as their main evaluation method, implying a correlation between perceptual similarity and privacy leakage. Perceptual metrics such as LPIPS were invented to mimic human perception: built on deep neural networks (such as AlexNet and VGG), they are designed to capture the subtle perceptual similarities and differences between two images and to overcome the inability of traditional metrics like PSNR and SSIM to look beyond raw pixel values. However, since perceptual metrics are built upon human perception, it is unknown whether the imperceptible nuances and corruptions introduced by the reconstruction attack process could influence them. Therefore, the author sees this as a gap that needs to be filled. 
To summarize, to the author's best knowledge, a comprehensive analysis of perceptual metrics for evaluating privacy leakage in a Federated Learning framework with image data, and of how effectively the privacy-preserving technique DP protects such a setting against gradient-based reconstruction attacks, is still missing. To that end, this dissertation studies: 1. The reliability of the perceptual metrics employed in the reconstruction-attack literature within a realistic Federated Learning framework; 2. The feasibility of a novel privacy evaluation method that can reveal the relationship between LPIPS, the perceptual metric widely used in the SOTA reconstruction attack's evaluation method, and the accuracy of a classification task in PPFL; 3. The effectiveness of differential privacy against the aforementioned SOTA gradient-based reconstruction attack.en_US
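The DP sanitization the abstract describes (clipping an update by a factor, then perturbing it with noise) and the PSNR metric it mentions can be sketched as follows. This is a minimal illustration assuming a NumPy vector representation of a flattened model update; the function names `sanitize_update` and `psnr` and all parameter defaults are the editor's assumptions, not the thesis's actual implementation.

```python
import numpy as np

def sanitize_update(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """DP sanitization of a model update: clip its L2 norm to at most
    `clip_norm`, then add Gaussian noise scaled by
    noise_multiplier * clip_norm (the Gaussian mechanism)."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)   # now ||clipped|| <= clip_norm
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

def psnr(x, y, max_val=1.0):
    """Peak Signal-to-Noise Ratio between two images with values in [0, max_val];
    higher means the reconstruction is closer to the original."""
    mse = np.mean((x - y) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```

A reconstruction-attack evaluation in this spirit would compare the victim's image against the image recovered from the (sanitized or raw) gradients via `psnr`, alongside SSIM and LPIPS.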
DC.subject知覺度量zh_TW
DC.subject分類任務準確性zh_TW
DC.subject聯合學習zh_TW
DC.subject保護隱私的聯合學習zh_TW
DC.subject隱私外洩評估zh_TW
DC.subjectPerceptual metricsen_US
DC.subjectClassification task accuracyen_US
DC.subjectFederated Learningen_US
DC.subjectPrivacy-preserving Federated Learningen_US
DC.subjectPrivacy leakage evaluationen_US
DC.title基於梯度的重構攻擊在隱私權保護聯合學習中的評估方法初探zh_TW
dc.language.isozh-TWzh-TW
DC.titleA Preliminary Study on Evaluation Methods of Gradient-based Reconstruction Attacks in Privacy-Preserving Federated Learningen_US
DC.type博碩士論文zh_TW
DC.typethesisen_US
DC.publisherNational Central Universityen_US
