

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95465


    Title: PXGen: A Post-hoc Explainable Method for Generative Models
    Author: 黃彥龍 (Huang, Yen-Lung)
    Contributor: Executive Master Program, Department of Computer Science and Information Engineering
    Keywords: explainable AI (XAI); generative AI; variational autoencoder (VAE); post-hoc explanation
    Date: 2024-07-31
    Upload time: 2024-10-09 16:52:58 (UTC+8)
    Publisher: National Central University
    Abstract: With the rapid growth of generative AI in numerous applications, explainable AI (XAI)
    plays a crucial role in ensuring the responsible development and deployment of generative
    AI technologies, empowering users to understand, trust, and effectively utilize these
    powerful tools while minimizing potential risks and biases. Explainable AI (XAI) has
    undergone notable advancements and widespread adoption in recent years, reflecting a
    concerted push to enhance the transparency, interpretability, and credibility of AI systems.
    Recent research emphasizes that a proficient XAI method should adhere to a set
    of criteria, primarily focusing on two key areas. Firstly, it should ensure the quality and
    fluidity of explanations, encompassing aspects like faithfulness, plausibility, completeness,
    and tailoring to individual needs. Secondly, the design principles of the XAI system or
    mechanism should cover factors such as reliability, resilience, the verifiability
    of its outputs, and the transparency of its algorithm. However, research in XAI for
    generative models remains relatively scarce, with little exploration into how such methods
    can effectively meet these criteria in that domain.
    In this work, we propose PXGen, a post-hoc explainable method for generative models.
    Given a model that needs to be explained, PXGen prepares two materials for the
    explanation: the Anchor set and the intrinsic and extrinsic criteria. These materials are
    customizable by users according to their purposes and requirements. Via the calculation of
    each criterion, each anchor obtains a set of feature values, and PXGen provides example-based
    explanation methods according to the feature values among all the anchors, which are
    illustrated and visualized for users via tractable algorithms such as k-dispersion or
    k-center. Under this framework, PXGen addresses the abovementioned desiderata and
    provides additional benefits such as low execution time and no additional access requirements.
    Our evaluation shows that PXGen finds representative training samples well
    compared with the state-of-the-art.
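    The selection step sketched in the abstract — choosing a small, representative subset of anchors from their per-criterion feature values — can be illustrated with the standard greedy k-center heuristic. This is a minimal sketch under stated assumptions, not the thesis's implementation; the function name, the feature matrix, and the toy anchor values are hypothetical:

    ```python
    import numpy as np

    def greedy_k_center(features: np.ndarray, k: int) -> list[int]:
        """Pick k representative rows via the classic greedy k-center heuristic.

        features: (n_anchors, n_criteria) matrix of per-anchor feature values.
        Returns the indices of the selected anchors.
        (Illustrative sketch only; not the PXGen implementation.)
        """
        n = features.shape[0]
        # Start from the anchor closest to the overall centroid.
        centroid = features.mean(axis=0)
        first = int(np.argmin(np.linalg.norm(features - centroid, axis=1)))
        selected = [first]
        # Distance from every anchor to its nearest selected anchor.
        dist = np.linalg.norm(features - features[first], axis=1)
        while len(selected) < min(k, n):
            # Greedily add the anchor farthest from the current selection,
            # which maximizes coverage of the feature space.
            nxt = int(np.argmax(dist))
            selected.append(nxt)
            dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
        return selected

    # Toy example: 6 anchors described by 2 criterion values each.
    anchors = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0],
                        [5.1, 5.0], [0.0, 5.0], [5.0, 0.0]])
    print(greedy_k_center(anchors, 3))  # → [1, 3, 4]
    ```

    The greedy farthest-point rule gives a 2-approximation to the optimal k-center cover, which is why it is a common tractable choice for picking well-spread representatives.
    
    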
    Appears in Collections: [Executive Master Program of Computer Science and Information Engineering] Theses & Dissertations
