

    Please use this persistent URL to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/98400


    Title: Vul LLaMA: Enhancing Line-Level Vulnerability Localization Performance in Android Code by Fine-Tuning the Self-Attention Mechanism of Code LLaMA
    Authors: Pei-Yu Zhu (朱珮瑜)
    Contributors: Department of Information Management
    Keywords: Android source code vulnerability detection; Large Language Models (LLMs); Fine-tuning; Self-attention mechanism; Explainable techniques
    Date: 2025-08-01
    Uploaded: 2025-10-17 12:44:23 (UTC+8)
    Publisher: National Central University
    Abstract: As Android applications proliferate, mobile security risks continue to grow. The absence of rigorous source code screening in the Google Play review process allows vulnerable apps to reach users, posing serious threats. Given the success of Large Language Models (LLMs) in code-related tasks, this study leverages Code LLaMA’s strengths in large-scale pretraining and long-sequence modeling to develop an accurate and interpretable line-level vulnerability localization framework.
    To address Code LLaMA’s limited global context understanding and opaque decision-making, we propose Vul LLaMA, which introduces a Two-Stage Attention Mechanism by integrating bidirectional self-attention into early decoder layers to strengthen contextual understanding. Additionally, Vul LLaMA incorporates attention-based interpretability: line-level attention scores are computed to reveal the basis of the model’s predictions, providing intrinsic interpretability that helps developers understand why a given line is flagged as vulnerable.
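    The abstract does not spell out how the bidirectional stage is wired in, so the following is only a minimal PyTorch sketch of the general idea: a decoder-only stack whose first few layers drop the causal mask and attend bidirectionally, while the remaining layers stay causal. All names and hyperparameters (TwoStageAttentionDecoder, num_bidirectional_layers, d_model, etc.) are illustrative assumptions, not taken from the thesis.

# Sketch only: the first `num_bidirectional_layers` layers attend bidirectionally,
# the rest keep the usual causal mask of a decoder-only model.
import torch
import torch.nn as nn

class TwoStageAttentionDecoder(nn.Module):
    def __init__(self, d_model=512, nhead=8, num_layers=6, num_bidirectional_layers=2):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
             for _ in range(num_layers)]
        )
        self.num_bidirectional_layers = num_bidirectional_layers

    def forward(self, x):
        seq_len = x.size(1)
        # Causal mask: position i may only attend to positions <= i.
        causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len).to(x.device)
        for i, layer in enumerate(self.layers):
            if i < self.num_bidirectional_layers:
                x = layer(x)                         # stage 1: bidirectional (no mask)
            else:
                x = layer(x, src_mask=causal_mask)   # stage 2: causal, as in Code LLaMA
        return x

# Toy usage: a batch of 1 sequence with 16 token embeddings of width 512.
hidden = TwoStageAttentionDecoder()(torch.randn(1, 16, 512))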
    Experiments are conducted on two tasks: function-level vulnerability classification and line-level vulnerability localization. Results show that Vul LLaMA significantly outperforms the original Code LLaMA, achieving 30–45% improvements in function-level metrics. In line-level tasks, Vul LLaMA attains a Top-1 Accuracy of 35%, surpassing the state-of-the-art LineVul model (33%). This is especially meaningful, as over 70% of vulnerable functions contain only a single vulnerable line—making Top-1 predictions highly critical.
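    To make the line-level evaluation concrete, here is a small sketch of how per-token attention could be aggregated into line scores and how the Top-1 criterion is then checked. The aggregation (summing token attention per line) and the helper names are assumptions for illustration; the thesis’s exact scoring procedure is not given in the abstract.

# Sketch: aggregate token-level attention into line scores, then check Top-1.
from collections import defaultdict

def line_attention_scores(token_attention, token_line_ids):
    """Sum the attention mass assigned to the tokens of each source line."""
    scores = defaultdict(float)
    for attn, line_id in zip(token_attention, token_line_ids):
        scores[line_id] += attn
    return dict(scores)

def top1_hit(scores, vulnerable_lines):
    """Top-1 criterion: is the single highest-scoring line truly vulnerable?"""
    return max(scores, key=scores.get) in vulnerable_lines

# Toy usage: tokens 1-3 belong to line 1, tokens 4-5 to line 2; line 2 is vulnerable.
scores = line_attention_scores([0.1, 0.2, 0.1, 0.4, 0.2], [1, 1, 1, 2, 2])
print(top1_hit(scores, {2}))  # True: line 2 accumulates the most attention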
    Overall, Vul LLaMA enhances both the effectiveness and interpretability of Code LLaMA for line-level vulnerability detection in Android code, offering a practical and scalable solution for secure software development.
    Appears in Collections: [Graduate Institute of Information Management] Theses & Dissertations

    Files in This Item:

    index.html (0 KB, HTML, 18 views)


    All items in NCUIR are protected by copyright, with all rights reserved.

