NCU Institutional Repository: Item 987654321/93075


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93075


    Title: 基於自動編碼器和雙重檢查方法的對抗性圖像檢測; ADVERSARIAL IMAGES DETECTION BASED ON AUTOENCODER WITH DOUBLE CHECK METHODS
    Authors: 李思瑤; Banditsingha, Pakkapat
    Contributors: Department of Computer Science and Information Engineering (資訊工程學系)
    Keywords: Adversarial Detection; Autoencoder; Deep Learning; Kullback-Leibler Divergence
    Date: 2023-07-13
    Upload time: 2024-09-19 16:41:02 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: Adversarial examples are a well-known class of attacks on deep neural networks (DNNs), especially image classification models. An adversarial example attacks a target model by subtly manipulating the input data before it is fed into the classifier: the manipulation introduces small, carefully crafted perturbations that cannot be noticed by the human eye but do affect the classification model's output. Many adversarial detection methods have been proposed to defend against such attacks. One popular technique trains an autoencoder on non-attacked input data so that it removes the perturbations from adversarial examples. This method reconstructs the input well, but in some cases the autoencoder reconstructs the adversarial perturbation as well. In previous work [9], an adversarial detector was proposed that trains a vanilla autoencoder on normal images and then trains an outlier-detection network on a combined vector consisting of the mean squared error between the input and its reconstruction and the classification model's prediction probabilities; if the anomaly value is large, the input is judged to be an adversarial example. However, that work cannot guarantee that the autoencoder actually removes the perturbation. To address this issue, I propose an adversarial detection method called adversarial-detection-based autoencoder loss racing. It detects adversarial examples by constructing an autoencoder loss function in which two models race to predict the input data; the better model is chosen to compute the loss between the input data and the true data, and this term is combined with the model-based reconstruction loss. The results show that with the proposed loss, the reconstructed output of an adversarial sample is classified correctly as the original image. For the classification models used in the proposed loss function, I use a base model and a VGG16 model to make the predictions accurate.
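    The abstract above does not include code, so the following is only a minimal PyTorch sketch of the two components it describes: the outlier-detection features from [9] (per-sample reconstruction MSE combined with the classifier's prediction probabilities) and the proposed "loss racing" autoencoder objective (two classifiers race to predict the data and the better one contributes to the training loss). The function names, the choice of cross-entropy for the classification term, the weight alpha, and the decision to let the racing classifiers score the reconstruction rather than the raw input are all illustrative assumptions, not the thesis's published implementation.

```python
import torch
import torch.nn.functional as F


def detection_features(x, autoencoder, classifier):
    """Feature vector in the spirit of the detector in [9]: the per-sample MSE
    between an input and its autoencoder reconstruction, concatenated with the
    classification model's prediction probabilities. These vectors would then
    be fed to a separate outlier-detection network."""
    x_hat = autoencoder(x)
    # Per-sample reconstruction error, shape (N, 1).
    mse = ((x_hat - x) ** 2).flatten(1).mean(dim=1, keepdim=True)
    # Prediction probabilities for the input, shape (N, num_classes).
    probs = F.softmax(classifier(x), dim=1)
    return torch.cat([mse, probs], dim=1)


def racing_autoencoder_loss(x, x_hat, y_true, base_model, vgg16_model, alpha=1.0):
    """Sketch of the proposed "loss racing" objective: two classifiers race to
    predict the reconstructed input, the better one wins the race, and its
    classification loss is added to the reconstruction loss. The cross-entropy
    term, the weight alpha, and scoring x_hat rather than x are assumptions."""
    # Pixel-level reconstruction loss of the autoencoder.
    recon_loss = F.mse_loss(x_hat, x)

    # Both classifiers predict the reconstruction ("racing").
    loss_base = F.cross_entropy(base_model(x_hat), y_true)
    loss_vgg16 = F.cross_entropy(vgg16_model(x_hat), y_true)

    # Keep the better (smaller) classification loss of the two racers.
    cls_loss = torch.minimum(loss_base, loss_vgg16)

    # Combine the winning classification loss with the model-based reconstruction loss.
    return recon_loss + alpha * cls_loss
```

    In a training loop, `x_hat = autoencoder(x)` would be computed first and the combined loss backpropagated through the autoencoder only, with both classifiers kept frozen; the abstract does not specify this procedure, so it is likewise an assumption.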
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses & Dissertations

    Files in This Item:

    File          Description    Size    Format    Views
    index.html    -              0 KB    HTML      15


    All items in NCUIR are protected by original copyright.

