Electronic Thesis 109522052: Complete Metadata Record

DC Field                     Value                                                                        Language
dc.contributor               資訊工程學系 (Department of Computer Science and Information Engineering)   zh_TW
dc.creator                   王冠中                                                                       zh_TW
dc.creator                   Kuan-Chung Wang                                                              en_US
dc.date.accessioned          2023-01-11T07:39:07Z
dc.date.available            2023-01-11T07:39:07Z
dc.date.issued               2023
dc.identifier.uri            http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=109522052
dc.contributor.department    資訊工程學系 (Department of Computer Science and Information Engineering)   zh_TW
dc.description               國立中央大學                                                                 zh_TW
dc.description               National Central University                                                  en_US
dc.description.abstract      深度偽造技術的出現對於數位視訊真實性帶來很大的威脅,近期許多研究針對深度偽造內容是否存在於視訊中發表相關的偵測與辨識方法,另也有研究學者提出在公開的影像中嵌入所謂對抗性浮水印,試圖使深偽模型所生成的竄改影像內容偏離預期結果,避免產生有效的竄改內容。現有的浮水印方法多於像素域中加入這種對抗性訊號,然而為了避免過強的浮水印訊號損及原影像畫質,無法在像素值施予較大幅度的改變。本研究提出於影像頻率域中嵌入對抗性浮水印,將影像轉換至亮度及色度空間後計算離散餘弦轉換(Discrete Cosine Transform, DCT),透過Watson感知模型計算在不被人眼察覺下,確保DCT係數的修改低於可能的最大改變量,並依此決定浮水印在訓練階段時的修改步長。實驗結果顯示,所加入的高強度浮水印訊號確實能使深偽模型所生成的影像更容易發生嚴重失真,同時藉由計算影像畫質衡量來證實這樣的方法與像素值嵌入方法相比可有效降低對於原影像畫質的破壞。    zh_TW
dc.description.abstract      The emergence of Deepfakes poses a serious threat to the authenticity of digital videos. Recently, many studies have proposed methods for detecting and identifying the presence of Deepfake content in videos. Other researchers have adopted the approach of digital watermarking, embedding adversarial signals in public images so that the tampered results generated by Deepfake models deviate from their intended goals, thereby preventing the production of effective falsified content. Most existing watermarking methods embed such adversarial signals in the pixel domain; however, to keep overly strong watermark signals from degrading the quality of the original image, making large changes to the pixel values is not feasible. In this research, we propose embedding the adversarial watermark signals in the frequency domain of images. After converting the image from RGB to YUV channels, the DCT (Discrete Cosine Transform) is applied to each channel. Watson's perceptual model is employed to determine the maximum allowable change of each DCT coefficient so that the modification will not be noticed by the human eye, and the resulting perceptual mask determines the modification step size of the watermark in the training stage. The experimental results show that embedding such stronger watermark signals introduces more severe distortions in the images generated by Deepfake models, while image quality measurements confirm that, compared with pixel-domain embedding, this approach causes less damage to the quality of the original image.    en_US
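The embedding procedure the abstract describes (conversion to a luminance/chrominance space, block-wise DCT, and a perceptual bound that caps how much each coefficient may change) can be sketched as follows. This is a minimal illustration, not the thesis implementation: the function names are hypothetical, only the luminance channel is shown, and a caller-supplied uniform `slack` stands in for the per-coefficient thresholds that Watson's perceptual model would actually compute.

```python
import numpy as np
from scipy.fft import dctn, idctn


def rgb_to_y(img):
    # BT.601 luma weights; the thesis works in a luminance/chrominance space
    return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]


def embed_adversarial(y, delta, slack):
    """Clamp a DCT-domain adversarial perturbation `delta` to a
    per-coefficient `slack` (an 8x8 array), block by block, and return
    the watermarked luminance channel.

    In the thesis the slack would come from Watson's perceptual model;
    here it is a placeholder supplied by the caller.
    """
    h, w = y.shape
    out = y.copy()
    for i in range(0, h - h % 8, 8):
        for j in range(0, w - w % 8, 8):
            coeffs = dctn(y[i:i + 8, j:j + 8], norm='ortho')
            # Keep each coefficient change below its perceptual bound
            step = np.clip(delta[i:i + 8, j:j + 8], -slack, slack)
            out[i:i + 8, j:j + 8] = idctn(coeffs + step, norm='ortho')
    return out
```

During adversarial training, `delta` would be the gradient-based update aimed at disrupting the Deepfake model, re-clamped to the perceptual slack at every step so the embedded signal stays invisible.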
dc.subject                   深度偽造                                                                     zh_TW
dc.subject                   對抗性浮水印                                                                 zh_TW
dc.subject                   深度學習                                                                     zh_TW
dc.subject                   Deepfakes                                                                    en_US
dc.subject                   adversarial watermark                                                        en_US
dc.subject                   deep learning                                                                en_US
dc.title                     基於視覺感知模型之深度偽造對抗性擾動                                         zh_TW
dc.title                     Adversarial Perturbation against Deepfakes based on Visual Perceptual Model  en_US
dc.language.iso              zh-TW                                                                        zh-TW
dc.type                      博碩士論文                                                                   zh_TW
dc.type                      thesis                                                                       en_US
dc.publisher                 National Central University                                                  en_US
