One important challenge in machine learning is the “black box” problem, in which an artificial intelligence reaches a result without any human being able to explain why. This problem is typically present in deep artificial neural networks, whose hidden layers are impenetrable. To tackle this problem, researchers have introduced the notion of explainable AI (XAI): artificial intelligence whose results can be understood by humans. The XAI position is usually characterised in terms of three properties: transparency, interpretability, and explainability. While the first two have standard definitions, explainability is not understood in a uniform manner. What does explainability mean? What kind of AI is explainable? Can there be properly explainable machine learning systems? In this workshop, we discuss a variety of approaches to these topics in connection with fundamental questions in artificial intelligence. What are explanations in AI? What do AI systems explain, and how? How does AI explanation relate to human understanding and intelligence?
Confirmed speakers: Jobst Landgrebe (Cognotekt Köln), Markus Pantsar (University of Helsinki, c:o/re), Frederik Stjernfelt (Aalborg University Copenhagen, c:o/re), Gabriele Gramelsberger (c:o/re Aachen), Ana L. C. Bazzan (Universidade Federal do Rio Grande do Sul, c:o/re Aachen), Joffrey Becker (Laboratoire d’Anthropologie Sociale, c:o/re Aachen), Daniel Wenz (CSS Lab, RWTH Aachen), and Andreas Kaminski (High Performance Computing Center Stuttgart).
Information and Program: https://khk.rwth-aachen.de/2022/01/27/2281/explainable-ai-explanations-in-ai/