NCU Institutional Repository (中大機構典藏): Item 987654321/92496


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/92496


    Title: Adversarial Images Detection Based on Autoencoder with Double Check Methods
    Authors: 李思瑤; Banditsingha, Pakkapat
    Contributors: 資訊工程學系 (Department of Computer Science and Information Engineering)
    Keywords: Adversarial Detection; Autoencoder; Deep Learning; Kullback-Leibler Divergence
    Date: 2023-07-13
    Upload time: 2023-10-04 16:03:04 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: Adversarial examples are a well-known class of attacks on deep neural networks (DNNs), especially image classification models. An adversarial example attacks a target model by subtly manipulating the input before it is fed to the classifier: small, carefully crafted perturbations are added that human eyes cannot notice but the classification model can. Many detection methods have been proposed to defend against such attacks. One popular technique trains an autoencoder on non-attacked input data, forcing the model to remove the perturbation from adversarial examples. This method reconstructs the input well, but in some cases the autoencoder reconstructs the adversarial perturbation as well. In previous work [9], the authors proposed an adversarial detector that trains a vanilla autoencoder on normal images and then trains an outlier-detection network on a combined vector consisting of the mean squared error between the input and its reconstruction and the classification model's prediction probabilities; a large anomaly value indicates that the input is an adversarial example. However, that work cannot guarantee that the autoencoder removes the perturbation. To address this issue, I propose a detection method called adversarial-detection-based autoencoder loss racing. It detects adversarial examples correctly by constructing an autoencoder loss function in which two models race to predict the input data; the better model is selected to compute the loss between the input data and the true data, and this term is combined with the model-based reconstruction loss. The results show that with the proposed loss, the reconstructed output of an adversarial sample is correctly classified as the original image. For the classification models used in the proposed loss function, I use the base model and a VGG16 model to make the prediction accurate.
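    This page carries no code or formulas, but the abstract describes two concrete mechanisms: the detector of [9], which feeds the reconstruction MSE together with the classifier's prediction probabilities to an outlier-detection network, and the proposed "loss racing" objective, in which two classifiers compete and the better one's loss is combined with the reconstruction loss. The PyTorch sketch below is one plausible reading of both; the function names, the minimum-based selection between the two classifiers, and the alpha weighting are assumptions, not the thesis's actual implementation. The keyword list also mentions Kullback-Leibler divergence, but its exact role is not specified on this page, so it is not sketched here.

    import torch
    import torch.nn.functional as F

    def detection_features(x, autoencoder, classifier):
        """Feature vector in the spirit of [9]: per-sample reconstruction
        MSE concatenated with the classifier's prediction probabilities,
        intended as the input to an outlier-detection network."""
        x_recon = autoencoder(x)
        mse = ((x - x_recon) ** 2).flatten(1).mean(dim=1, keepdim=True)  # (N, 1)
        probs = F.softmax(classifier(x), dim=1)                          # (N, C)
        return torch.cat([mse, probs], dim=1)                            # (N, 1+C)

    def racing_loss(x, x_recon, y_true, model_a, model_b, alpha=0.5):
        """Hypothetical 'autoencoder loss racing' objective: two
        classification models (e.g. a base model and VGG16) race to
        predict the reconstructed input, and the winner's loss is
        combined with the reconstruction loss."""
        recon_loss = F.mse_loss(x_recon, x)               # reconstruction term
        ce_a = F.cross_entropy(model_a(x_recon), y_true)  # model A's race entry
        ce_b = F.cross_entropy(model_b(x_recon), y_true)  # model B's race entry
        cls_loss = torch.minimum(ce_a, ce_b)              # the winning model's loss
        return alpha * cls_loss + (1.0 - alpha) * recon_loss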
    Appears in Collections: [資訊工程研究所] Theses & Dissertations

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      36


    All items in NCUIR are protected by copyright, with all rights reserved.

