NCU Institutional Repository: Item 987654321/93558


Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93558


Title: A Preliminary Study on Evaluation Methods of Gradient-based Reconstruction Attacks in Privacy-Preserving Federated Learning
Author: THANG, PHAN QUOC (潘國勝)
Contributors: Department of Computer Science and Information Engineering
Keywords: Perceptual metrics; Classification task accuracy; Federated Learning; Privacy-preserving Federated Learning; Privacy leakage evaluation
Date: 2024-01-26
Uploaded: 2024-09-19 17:13:49 (UTC+8)
Publisher: National Central University
Abstract: In Federated Learning (FL), a participant's model updates can pose a devastating threat to privacy: by cleverly exploiting the shared updates, an attacker can reconstruct the participant's private training data down to the pixel level. Differential Privacy (DP), the norm in data anonymization, was proposed to deal with this emergent threat; in such a DP-enhanced privacy-preserving FL (PPFL) setup, the transmitted information is sanitized (i.e., clipped by a factor and perturbed by noise) to protect the privacy of the parties involved. Although DP was originally intended for centralized learning and tabular data, it has recently gained more and more attention in FL with multimedia data, especially images.
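
For illustration, the following is a minimal sketch of the kind of gradient-matching reconstruction attack referred to above (in the spirit of the "Deep Leakage from Gradients" line of work): the attacker optimizes a dummy input until its gradient matches the gradient a client shared. The toy model, image size, iteration count, and the assumption that the label has already been inferred are illustrative choices, not the attack configuration evaluated in the thesis.

```python
# Minimal sketch of a gradient-matching reconstruction attack (DLG-style).
# The toy linear model, 32x32 RGB input, and known label are illustrative
# assumptions only, not the configuration studied in the thesis.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
criterion = nn.CrossEntropyLoss()

# The victim computes a gradient on one private example and shares it.
x_true = torch.rand(1, 3, 32, 32)
y_true = torch.tensor([3])
shared_grads = [g.detach() for g in torch.autograd.grad(
    criterion(model(x_true), y_true), model.parameters())]

# The attacker optimizes dummy data so its gradient matches the shared one
# (the label is assumed to have been inferred, as later attacks commonly do).
x_dummy = torch.rand(1, 3, 32, 32, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy])

for _ in range(100):
    def closure():
        optimizer.zero_grad()
        loss = criterion(model(x_dummy), y_true)
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        grad_diff = sum(((dg - sg) ** 2).sum() for dg, sg in zip(dummy_grads, shared_grads))
        grad_diff.backward()
        return grad_diff
    optimizer.step(closure)

# x_dummy now approximates x_true; how faithfully it does so is exactly what
# the perceptual metrics discussed below are asked to quantify.
```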

Gradient-based reconstruction attacks typically use perceptual similarity metrics such as Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS) as their main evaluation method, implying a correlation between perceptual similarity and privacy leakage. Perceptual metrics such as LPIPS were invented to mimic human perception: built on deep neural networks (such as AlexNet and VGG), they are designed to capture the subtle perceptual similarities and differences between two images and to overcome the inability of traditional metrics like PSNR and SSIM to look beyond raw pixel values.
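
As a concrete illustration of how these scores are typically obtained, the sketch below compares a reconstruction against its ground truth with PSNR, SSIM, and LPIPS. It assumes the scikit-image and lpips packages are available; the random images and noise level are placeholders rather than outputs of an actual attack.

```python
# Illustrative computation of PSNR, SSIM, and LPIPS between a ground-truth
# image and its reconstruction. Assumes scikit-image and the `lpips` package;
# the random images below are placeholders for real reconstructions.
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

ground_truth = np.random.rand(32, 32, 3).astype(np.float32)
reconstruction = np.clip(ground_truth + 0.05 * np.random.randn(32, 32, 3), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(ground_truth, reconstruction, data_range=1.0)
ssim = structural_similarity(ground_truth, reconstruction, channel_axis=-1, data_range=1.0)

# LPIPS expects NCHW tensors scaled to [-1, 1]; lower values mean "more similar".
lpips_net = lpips.LPIPS(net='alex')  # AlexNet backbone, as in the original LPIPS paper
to_tensor = lambda img: torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0) * 2 - 1
lpips_score = lpips_net(to_tensor(ground_truth), to_tensor(reconstruction)).item()

print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}, LPIPS = {lpips_score:.4f}")
```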
However, since perceptual metrics are built upon human perception, it is unknown whether the imperceptible nuances and corruptions introduced by the reconstruction attack process could influence these metrics. The author therefore sees this as a potential gap that needs to be filled.
To summarize, to the author's best knowledge, a comprehensive analysis of perceptual metrics for evaluating privacy leakage in a Federated Learning framework with image data, and of how effectively the privacy-preserving technique DP protects such a setting against gradient-based reconstruction attacks, has not yet been reported.

To that end, this dissertation studies: 1. the reliability of the perceptual metrics employed in the reconstruction-attack literature within a realistic Federated Learning framework; 2. the feasibility of a novel privacy evaluation method that can reveal the relationship between LPIPS, the perceptual metric widely used to evaluate SOTA reconstruction attacks, and the accuracy of a classification task in PPFL; and 3. the effectiveness of differential privacy against the aforementioned SOTA gradient-based reconstruction attack.
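
To make the sanitization step mentioned in the abstract concrete, the following is a minimal sketch of the clip-and-perturb mechanism that DP adds to a client update. The clipping bound, noise multiplier, and update size are assumed values for illustration, not the parameters used in the thesis's experiments.

```python
# Minimal sketch of DP sanitization of a client update: clip its L2 norm to a
# bound C, then add Gaussian noise scaled by a noise multiplier. The values
# below are illustrative assumptions, not the thesis's configuration.
import torch

def sanitize_update(update: torch.Tensor,
                    clip_norm: float = 1.0,
                    noise_multiplier: float = 1.1) -> torch.Tensor:
    """Clip the update to L2 norm `clip_norm`, then perturb with Gaussian noise."""
    scale = torch.clamp(clip_norm / (update.norm(p=2) + 1e-12), max=1.0)
    clipped = update * scale
    noise = torch.randn_like(clipped) * noise_multiplier * clip_norm
    return clipped + noise

raw_update = torch.randn(10_000)          # placeholder flattened model update
private_update = sanitize_update(raw_update)
```

In a full PPFL pipeline, the noise scale together with the number of clients and training rounds determines the privacy guarantee, and the gradient-matching attack sketched earlier is then mounted against these sanitized updates rather than the raw ones.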
Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Theses and Dissertations
