參考文獻 |
[1] Alejandro Barredo Arrieta, Natalia D´ıaz-Rodr´ıguez, Javier Del Ser, Adrien Bennetot,
Siham Tabik, Alberto Barbado, Salvador Garc´ıa, Sergio Gil-L´opez, Daniel Molina,
Richard Benjamins, et al. Explainable artificial intelligence (xai): Concepts, taxonomies,
opportunities and challenges toward responsible ai. Information fusion,
58:82–115, 2020.
[2] Eric Bauer and Ron Kohavi. An empirical comparison of voting classification algorithms:
Bagging, boosting, and variants. Machine learning, 36:105–139, 1999.
[3] David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review
for statisticians. Journal of the American statistical Association, 112(518):859–877,
2017.
[4] Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi,
and Salvatore Rinzivillo. Benchmarking and survey of explanation methods
for black box models. Data Mining and Knowledge Discovery, 37(5):1719–1778, 2023.
[5] Philippe M Burlina, Neil Joshi, Katia D Pacheco, TY Alvin Liu, and Neil M Bressler.
Assessment of deep generative models for high-resolution synthetic retinal image
generation of age-related macular degeneration. JAMA ophthalmology, 137(3):258–
264, 2019.
[6] Alfonso Cevallos, Friedrich Eisenbrand, and Rico Zenklusen. Max-sum diversity via
convex programming. arXiv preprint arXiv:1511.07077, 2015.
[7] Alfonso Cevallos, Friedrich Eisenbrand, and Rico Zenklusen. Local search for maxsum
diversification. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium
on Discrete Algorithms, pages 130–142. SIAM, 2017.
[8] Raymond Chen. On mentzer’s hardness of the k-center problem on the euclidean
plane. 2021.
[9] Antonia Creswell, Tom White, Vincent Dumoulin, Kai Arulkumaran, Biswa Sengupta,
and Anil A Bharath. Generative adversarial networks: An overview. IEEE
signal processing magazine, 35(1):53–65, 2018.
[10] Tal Daniel and Aviv Tamar. Soft-introvae: Analyzing and improving the introspective
variational autoencoder. In Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition, pages 4391–4400, 2021.
[11] Ryan Daws. Medical chatbot using openai’s gpt-3 told a fake patient to kill themselves.
AI News, 2020.
[12] Li Deng. The mnist database of handwritten digit images for machine learning
research [best of the web]. IEEE signal processing magazine, 29(6):141–142, 2012.
[13] Carl Doersch. Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908,
2016.
[14] Chris Donahue, Julian McAuley, and Miller Puckette. Adversarial audio synthesis.
arXiv preprint arXiv:1802.04208, 2018.
[15] Rudresh Dwivedi, Devam Dave, Het Naik, Smiti Singhal, Rana Omer, Pankesh Patel,
Bin Qian, Zhenyu Wen, Tejal Shah, Graham Morgan, et al. Explainable ai (xai):
Core ideas, techniques, and solutions. ACM Computing Surveys, 55(9):1–33, 2023.
[16] Tom´as Feder and Daniel Greene. Optimal algorithms for approximate clustering. In
Proceedings of the twentieth annual ACM symposium on Theory of computing, pages
434–444, 1988.
[17] Giorgio Franceschelli and Mirco Musolesi. Copyright in generative deep learning.
Data & Policy, 4:e17, 2022.
[18] Jun Gao, Tianchang Shen, Zian Wang, Wenzheng Chen, Kangxue Yin, Daiqing Li,
Or Litany, Zan Gojcic, and Sanja Fidler. Get3d: A generative model of high quality
3d textured shapes learned from images. Advances In Neural Information Processing
Systems, 35:31841–31854, 2022.
[19] Albert Gatt and Emiel Krahmer. Survey of the state of the art in natural language
generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence
Research, 61:65–170, 2018.
[20] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley,
Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets.
Advances in neural information processing systems, 27, 2014.
[21] Riccardo Guidotti. Counterfactual explanations and how to find them: literature
review and benchmarking. Data Mining and Knowledge Discovery, pages 1–55, 2022.
[22] David Gunning, Mark Stefik, Jaesik Choi, Timothy Miller, Simone Stumpf, and
Guang-Zhong Yang. Xai—explainable artificial intelligence. Science robotics,
4(37):eaay7120, 2019.
[23] Han Guo, Nazneen Fatema Rajani, Peter Hase, Mohit Bansal, and Caiming Xiong.
Fastif: Scalable influence functions for efficient model interpretation and debugging.
arXiv preprint arXiv:2012.15781, 2020.
[24] Isobel Asher Hamilton. An ai tool which reconstructed a pixelated picture of barack
obama to look like a white man perfectly illustrates racial bias in algorithms. Business
Insider, 2020.
[25] Zayd Hammoudeh and Daniel Lowd. Training data influence analysis and estimation:
A survey. arXiv preprint arXiv:2212.04612, 2022.
[26] Refael Hassin, Shlomi Rubinstein, and Arie Tamir. Approximation algorithms for
maximum dispersion. Operations research letters, 21(3):133–137, 1997.
[27] Xin He, Kaiyong Zhao, and Xiaowen Chu. Automl: A survey of the state-of-the-art.
Knowledge-based systems, 212:106622, 2021.
[28] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp
Hochreiter. Gans trained by a two time-scale update rule converge to a local nash
equilibrium. Advances in neural information processing systems, 30, 2017.
[29] Irina Higgins, Loic Matthey, Arka Pal, Christopher P Burgess, Xavier Glorot,
Matthew M Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning
basic visual concepts with a constrained variational framework. ICLR (Poster),
3, 2017.
[30] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models.
Advances in neural information processing systems, 33:6840–6851, 2020.
[31] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv
preprint arXiv:1312.6114, 2013.
[32] Diederik P Kingma, Max Welling, et al. An introduction to variational autoencoders.
Foundations and Trends® in Machine Learning, 12(4):307–392, 2019.
[33] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence
functions. In International conference on machine learning, pages 1885–1894. PMLR,
2017.
[34] Zhifeng Kong and Kamalika Chaudhuri. Understanding instance-based interpretability
of variational auto-encoders. Advances in Neural Information Processing Systems,
34:2400–2412, 2021.
[35] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from
tiny images. 2009.
[36] Solomon Kullback and Richard A Leibler. On information and sufficiency. The annals
of mathematical statistics, 22(1):79–86, 1951.
[37] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at
scale. arXiv preprint arXiv:1611.01236, 2016.
[38] Pantelis Linardatos, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. Explainable
ai: A review of machine learning interpretability methods. Entropy, 23(1):18, 2020.
[39] James Lucas, George Tucker, Roger B Grosse, and Mohammad Norouzi. Don’t
blame the elbo! a linear vae perspective on posterior collapse. Advances in Neural
Information Processing Systems, 32, 2019.
[40] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions.
Advances in neural information processing systems, 30, 2017.
[41] Calvin Luo. Understanding diffusion models: A unified perspective. arXiv preprint
arXiv:2208.11970, 2022.
[42] Qing Lyu, Marianna Apidianaki, and Chris Callison-Burch. Towards faithful model
explanation in nlp: A survey. Computational Linguistics, pages 1–70, 2024.
[43] Stuart G Mentzer. Approximability of metric clustering problems. Unpublished
manuscript, March, 2016.
[44] Douglas C Montgomery, Elizabeth A Peck, and G Geoffrey Vining. Introduction to
linear regression analysis. John Wiley & Sons, 2021.
[45] Leif E Peterson. K-nearest neighbor. Scholarpedia, 4(2):1883, 2009.
[46] Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. Estimating
training data influence by tracing gradient descent. Advances in Neural Information
Processing Systems, 33:19920–19930, 2020.
[47] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. ” why should i trust you?”
explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD
international conference on knowledge discovery and data mining, pages 1135–1144,
2016.
[48] Johanes Schneider and Joshua Handali. Personalized explanation in machine learning:
A conceptualization. arXiv preprint arXiv:1901.00770, 2019.
[49] Johannes Schneider. Explainable generative ai (genxai): A survey, conceptualization,
and research agenda. arXiv preprint arXiv:2404.09554, 2024.
[50] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam,
Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep
networks via gradient-based localization. In Proceedings of the IEEE international
conference on computer vision, pages 618–626, 2017.
[51] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional
networks: Visualising image classification models and saliency maps. arXiv preprint
arXiv:1312.6034, 2013.
[52] Yan-Yan Song and LU Ying. Decision tree methods: applications for classification
and prediction. Shanghai archives of psychiatry, 27(2):130, 2015.
[53] Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi,
Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias
Nießner, et al. State of the art on neural rendering. In Computer Graphics Forum,
volume 39, pages 701–727. Wiley Online Library, 2020.
[54] Andrea Tirinzoni, Riccardo Poiani, and Marcello Restelli. Sequential transfer in
reinforcement learning with a generative model. In International Conference on
Machine Learning, pages 9481–9492. PMLR, 2020.
[55] Luan Tran, Xi Yin, and Xiaoming Liu. Disentangled representation learning gan for
pose-invariant face recognition. In Proceedings of the IEEE conference on computer
vision and pattern recognition, pages 1415–1424, 2017.
[56] Andrea Vattani. K-means requires exponentially many iterations even in the plane.
In Proceedings of the twenty-fifth annual symposium on Computational geometry,
pages 324–332, 2009.
[57] Andrey Voynov and Artem Babenko. Unsupervised discovery of interpretable directions
in the gan latent space. In International conference on machine learning, pages
9786–9796. PMLR, 2020.
[58] W Patrick Walters and Mark Murcko. Assessing the impact of generative ai on
medicinal chemistry. Nature biotechnology, 38(2):143–145, 2020.
[59] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality
assessment: from error visibility to structural similarity. IEEE transactions on image
processing, 13(4):600–612, 2004.
[60] Chih-Kuan Yeh, Joon Kim, Ian En-Hsu Yen, and Pradeep K Ravikumar. Representer
point selection for explaining deep neural networks. Advances in neural information
processing systems, 31, 2018.
[61] Kayo Yin and Graham Neubig. Interpreting language models with contrastive explanations.
arXiv preprint arXiv:2202.10419, 2022.
[62] Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial
network. arXiv preprint arXiv:1609.03126, 2016.
[63] Joyce Zhou and Thorsten Joachims. How to explain and justify almost any decision:
Potential pitfalls for accountability in ai decision-making. In Proceedings of the 2023
ACM Conference on Fairness, Accountability, and Transparency, pages 12–21, 2023. |