NCU Institutional Repository (providing theses and dissertations, past exam papers, journal articles, and research projects): Item 987654321/95825


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95825


    Title: Effective Strategies of Adversarial Signal Embedding for Resisting Deepfake Images
    Authors: Chang, Yu-An (張友安)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: deepfake; visual perception model; GAN; adversarial perturbation; deep learning
    Date: 2024-08-19
    Issue Date: 2024-10-09 17:18:51 (UTC+8)
    Publisher: National Central University
    Abstract: The technology for creating deepfakes with generative models is advancing rapidly and becoming increasingly accessible. Applications include synthesizing images of a person to match specific requirements, such as a particular expression or appearance, or converting images into different styles. These applications also raise serious concerns: most generative-model outputs contain human faces, whose sources may touch on sensitive subjects or involve the unauthorized use of individuals' images, so preventing such misuse is an important issue.
    One countermeasure against facial generative models is to embed small, imperceptible perturbations into images in order to disrupt the subsequent operation of those models. Existing methods do corrupt the content of the generative model's output, but the embedded perturbation often introduces noticeable distortion into the protected image, limiting practical use. This study combines the Just Noticeable Difference (JND) model of visual perception with several adversarial image-generation algorithms to produce perturbation-embedded images that stay closer to the original, and explores different implementations to confirm effective disruption of the generative model's output. To validate the robustness of the perturbations, counter-perturbation attacks are also tested so that the adversarial perturbation strategies can be compared. Experimental results show that, compared with existing methods that bound the maximum per-pixel change, the proposed JND-based approach better preserves image quality while still ensuring effective disruption of the target generative model.
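    The core idea in the abstract — replacing a uniform maximum-pixel-change bound with a per-pixel JND bound — can be sketched as follows. This is a minimal illustration only, not the thesis implementation: it uses a simplified luminance-only JND term (in the style of classic background-luminance masking models) as a hypothetical stand-in for the full visual perception model, and a precomputed gradient sign in place of a real adversarial attack on a generator.

    ```python
    import numpy as np

    def luminance_jnd(img):
        # Simplified luminance-masking JND threshold per pixel (toy model):
        # darker and brighter backgrounds tolerate larger changes than mid-gray.
        # The exact constants here are illustrative, not from the thesis.
        low = 17.0 * (1.0 - np.sqrt(img / 127.0)) + 3.0   # dark-region branch
        high = 3.0 / 128.0 * (img - 127.0) + 3.0          # bright-region branch
        return np.where(img < 127, low, high)

    def embed_perturbation(img, grad_sign, eps_uniform=8.0, use_jnd=True):
        # Scale the signed adversarial step per pixel by its JND threshold,
        # instead of the uniform L-infinity bound used by existing methods.
        bound = luminance_jnd(img) if use_jnd else np.full_like(img, eps_uniform)
        adv = img + bound * grad_sign
        return np.clip(adv, 0.0, 255.0)  # keep valid 8-bit intensity range
    ```

    In a full pipeline, `grad_sign` would come from an attack such as FGSM/PGD computed against the target generative model; the point of the sketch is only that the perturbation budget varies per pixel with perceptual tolerance, which is why the JND-based images remain visually closer to the original.
    
    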
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    index.html (0 Kb, HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.
