dc.description.abstract | With the rapid growth of generative AI across numerous applications, explainable AI (XAI)
plays a crucial role in ensuring the responsible development and deployment of generative
AI technologies, empowering users to understand, trust, and effectively utilize these
powerful tools while minimizing potential risks and biases. XAI has undergone
notable advancements and widespread adoption in recent years, reflecting a
concerted push to enhance the transparency, interpretability, and credibility of AI systems.
Recent research emphasizes that a proficient XAI method should satisfy a set
of criteria centered on two key areas. First, it should ensure the quality and
fluidity of explanations, encompassing aspects such as faithfulness, plausibility, completeness,
and tailoring to individual needs. Second, the design of the XAI system or
mechanism should account for factors such as reliability, resilience, the verifiability
of its outputs, and the transparency of its algorithm. However, research on XAI for
generative models remains relatively scarce, with little exploration of how such methods
can effectively meet these criteria in that domain.
In this work, we propose PXGen, a post-hoc explainable method for generative models.
Given a model to be explained, PXGen prepares two materials for the
explanation: the Anchor set and intrinsic & extrinsic criteria. Both materials are customizable
by users according to their purposes and requirements. By calculating
each criterion, each anchor obtains a set of feature values, and PXGen provides example-based
explanations according to these feature values across all anchors,
which are illustrated and visualized for users via tractable algorithms such as k-dispersion or
k-center. Under this framework, PXGen addresses the aforementioned desiderata and
offers additional benefits, including low execution time and no additional access requirements.
Our evaluation shows that PXGen identifies representative training samples well
compared with the state of the art. | en_US |