

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/89946


    Title: A Self-Supervised Learning Model for Adversarial Robustness
    Author: Lee, Ting-Hsuan (李庭瑄)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: deep learning; adversarial attack; AI security; image recognition; self-supervised learning; model robustness
    Date: 2022-08-13
    Upload time: 2022-10-04 12:05:29 (UTC+8)
    Publisher: National Central University
    Abstract: With the rapid development of deep learning, a growing number of deep-learning systems have been integrated into our daily lives, such as autonomous-driving and face-recognition systems. However, we often overlook that a wrong decision by a deep-learning system can cause serious personal injury and property damage. In practice, many deep models can be maliciously attacked into making wrong decisions: an attacker can, for example, insert adversarial perturbations into the input data to mislead the system's judgment. This confirms the insecurity of deep neural networks and brings corresponding risks to downstream tasks. For instance, the speed-limit detection sub-system of an autonomous vehicle may be subject to adversarial attacks, causing the car to misread a sign and suddenly stop or slow down on a highway, raising the traffic risk accordingly.
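    The adversarial perturbations described above can be illustrated with the Fast Gradient Sign Method (FGSM). The abstract does not name a specific attack, so this is only an assumed, minimal sketch on a toy logistic-regression model, not the thesis's actual attack setup:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fgsm(x, y, w, eps):
        """One-step FGSM against a logistic model p(y=1|x) = sigmoid(w.x).

        The gradient of the cross-entropy loss w.r.t. the input is
        (sigmoid(w.x) - y) * w; FGSM moves x by eps in the sign of that
        gradient, i.e. the direction that increases the loss fastest.
        """
        grad = (sigmoid(w @ x) - y) * w
        return x + eps * np.sign(grad)

    # Toy example: a point the clean model classifies correctly as class 1.
    w = np.array([2.0, -1.0])
    x = np.array([1.0, 0.0])
    y = 1.0

    x_adv = fgsm(x, y, w, eps=0.5)
    p_clean = sigmoid(w @ x)      # confidence on the clean input
    p_adv = sigmoid(w @ x_adv)    # confidence drops after the perturbation
    ```

    Even this one-step, bounded perturbation measurably pushes the model's confidence toward the wrong class, which is the failure mode the abstract warns about for systems such as speed-limit detection.
    
    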
    To defend a system against adversarial attacks, the common approach is adversarial training, in which the adversarial examples generated by attacks are added to the training data. Although a model trained this way can resist adversarial examples to some degree, its accuracy on clean samples also drops, reducing the model's generalization. We therefore propose a framework that trains the model with self-supervised learning: without being given correct labels, the model learns to distinguish adversarial examples from the original data. The proposed framework enhances both the robustness and the generalization of the trained model against adversarial attacks, while requiring only a small amount of labeled data.
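    The self-supervised idea above, learning to tell adversarial examples from original data without human labels, can be sketched as a pretext discrimination task where the pseudo-labels come for free (0 = original, 1 = perturbed). Everything in this sketch, the fixed perturbation direction standing in for attack-generated examples and the plain logistic discriminator, is a simplifying assumption for illustration, not the thesis's architecture:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Original samples and their perturbed copies. A real pipeline would
    # generate the copies with an adversarial attack; here a fixed offset
    # stands in for the perturbation so the sketch stays self-contained.
    clean = rng.normal(0.0, 1.0, size=(200, 2))
    delta = np.array([1.5, -1.5])
    adv = clean + delta

    X = np.vstack([clean, adv])
    y = np.concatenate([np.zeros(200), np.ones(200)])  # free pseudo-labels

    # Train a logistic discriminator by gradient descent on cross-entropy.
    w, b, lr = np.zeros(2), 0.0, 0.1
    for _ in range(500):
        g = sigmoid(X @ w + b) - y          # per-sample loss gradient
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()

    acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
    ```

    No human annotation is used anywhere: the supervision signal is manufactured from the data itself, which is what lets the approach in the abstract strengthen robustness while keeping the labeled-data requirement small.
    
    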
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses & Dissertations




