Thesis 111552017: Full Metadata Record

DC Field / Value / Language
dc.contributor資訊工程學系在職專班zh_TW
dc.creator黃彥龍zh_TW
dc.creatorYen-Lung Huangen_US
dc.date.accessioned2024-07-31T07:39:07Z
dc.date.available2024-07-31T07:39:07Z
dc.date.issued2024
dc.identifier.urihttp://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=111552017
dc.contributor.department資訊工程學系在職專班zh_TW
dc.description國立中央大學zh_TW
dc.descriptionNational Central Universityen_US
dc.description.abstract隨著生成式人工智慧在眾多應用中的迅速成長,可解釋性人工智慧(XAI)在生成式人工智慧技術的發展和部署中扮演著至關重要的角色,賦予使用者理解、信任和有效利用這些強大工具的能力,同時最小化潛在風險和偏見。近年來,可解釋性人工智慧(XAI)取得了顯著的進步和廣泛的應用,這反映出大家共同努力提高人工智慧系統的透明度、可解釋性和可信度。最近的研究強調,一個成熟的XAI方法應遵循一套標準,主要聚焦於兩個關鍵領域。首先,它應確保解釋的品質和流暢性,涵蓋如忠實性、合理性、完整性和針對個體需求的定制等方面。其次,XAI系統或機制的設計原則應該涵蓋以下因素,例如可靠性、韌性、其輸出的可驗證性以及其算法的透明度。然而,針對生成模型的XAI研究相對稀少,對於這樣的方法如何有效在該領域滿足這些標準的探索不多。在這篇論文中,我們提出了PXGen,一種針對生成模型的事後可解釋方法。給定一個需要解釋的模型,PXGen為解釋準備了兩種項目:“錨點集”以及“內在和外在指標”。這些項目可以根據使用者的目的和需求進行自定義。通過計算每個指標,每個錨點都有一組特徵值,且PXGen根據所有錨點的特徵值提供基於實體的解釋方法,並通過如k-dispersion或k-center這樣的容易駕馭的演算法向使用者展示和視覺化。在這個框架下,PXGen處理了上述需求並提供額外好處,如低執行時間、不需介入模型訓練等等。根據我們的評估顯示,與最先進的方法相比,PXGen可以很好地找到代表性的訓練樣本。zh_TW
dc.description.abstractWith the rapid growth of generative AI in numerous applications, explainable AI (XAI) plays a crucial role in ensuring the responsible development and deployment of generative AI technologies, empowering users to understand, trust, and effectively utilize these powerful tools while minimizing potential risks and biases. XAI has undergone notable advancements and widespread adoption in recent years, reflecting a concerted push to enhance the transparency, interpretability, and credibility of AI systems. Recent research emphasizes that a proficient XAI method should adhere to a set of criteria, primarily focusing on two key areas. First, it should ensure the quality and fluency of explanations, encompassing aspects such as faithfulness, plausibility, completeness, and tailoring to individual needs. Second, the design principles of the XAI system or mechanism should cover factors such as reliability, resilience, the verifiability of its outputs, and the transparency of its algorithms. However, research on XAI for generative models remains relatively scarce, with little exploration of how such methods can effectively meet these criteria in that domain. In this work, we propose PXGen, a post-hoc explainable method for generative models. Given a model to be explained, PXGen prepares two materials for the explanation: the anchor set and the intrinsic and extrinsic criteria. Both materials are customizable by users according to their purposes and requirements. By computing each criterion, every anchor obtains a set of feature values, and PXGen provides example-based explanation methods according to the feature values across all anchors, which are illustrated and visualized for users via tractable algorithms such as k-dispersion or k-center. Under this framework, PXGen addresses the abovementioned desiderata and provides additional benefits such as low execution time and no additional access requirements. Our evaluation shows that PXGen identifies representative training samples well compared with the state of the art.en_US
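The abstract describes a pipeline in which each anchor is mapped to a feature vector (one value per criterion) and a tractable algorithm such as k-center then selects a small set of representative anchors to show the user. As a rough illustration of that selection step only, here is the classic greedy 2-approximation for k-center over toy anchor feature vectors; all names and the toy data are hypothetical, and the thesis itself defines the actual criteria, distances, and visualization.

```python
# Hypothetical sketch of the anchor-selection step mentioned in the
# abstract: given one feature vector per anchor, pick k representative
# anchors with the greedy farthest-point heuristic for k-center.
import math

def greedy_k_center(features, k):
    """Return indices of k anchors so every anchor is near some chosen one."""
    n = len(features)
    centers = [0]  # start from an arbitrary anchor
    # distance of each anchor to its nearest chosen center so far
    d = [math.dist(f, features[0]) for f in features]
    while len(centers) < k:
        nxt = max(range(n), key=lambda i: d[i])  # farthest uncovered anchor
        centers.append(nxt)
        d = [min(d[i], math.dist(features[i], features[nxt])) for i in range(n)]
    return centers

# Toy anchors forming two tight clusters; k=2 picks one from each cluster.
anchors = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(greedy_k_center(anchors, 2))
```

The farthest-point heuristic suits the stated goal of representative coverage: each new anchor is the one worst-served by the current selection, so the chosen set spreads across the feature space rather than clustering.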
dc.subject可解釋的人工智慧zh_TW
dc.subject生成式人工智慧zh_TW
dc.subject變分自編碼器zh_TW
dc.subject事後解釋zh_TW
dc.subjectXAIen_US
dc.subjectgenerative AIen_US
dc.subjectVAEen_US
dc.subjectpost-hoc explanationen_US
dc.titlePXGen:生成模型的事後可解釋方法zh_TW
dc.language.isozh-TWzh-TW
dc.titlePXGen: A Post-hoc Explainable Method for Generative Modelsen_US
dc.type博碩士論文zh_TW
dc.typethesisen_US
dc.publisherNational Central Universityen_US
