In recent years, research on intelligent systems that can explain their inferences and decisions to (human and machine) users has emerged as an important subfield of Artificial Intelligence (AI). In this context, the interest in symbolic and hybrid approaches to AI and their ability to facilitate explainable and trustworthy reasoning and decision-making -- often in combination with machine learning algorithms -- is increasing. Computational argumentation is considered a particularly promising paradigm for facilitating explainable AI (XAI). This trend is reflected by the fact that many researchers who study argumentation have started to:
- apply argumentation as a method of explainable reasoning;
- combine argumentation with other subfields of AI, such as knowledge representation and reasoning (KR) and machine learning (ML), to facilitate their explainability;
- study explainability properties of argumentation.
To get an overview of argumentation and XAI, interested researchers may consult Francesca Toni's invited talk about explainable reasoning at KR 2021, as well as the following two survey papers:
- Alexandros Vassiliades, Nick Bassiliades and Theodore Patkos. Argumentation and Explainable Artificial Intelligence: A Survey. The Knowledge Engineering Review, 2021.
- Kristijonas Čyras, Antonio Rago, Emanuele Albini, Pietro Baroni and Francesca Toni. Argumentative XAI: A Survey. 30th International Joint Conference on Artificial Intelligence (IJCAI 2021).
Topics
Topics include, but are not limited to:

Argumentative Explainability
- Formal definitions of explanations
- Computational properties of explanations
- Neuro-symbolic explainable argumentation
- Explanation as a form of argumentation
- Human intelligibility of formal argumentation
- Dialectical, dialogical and conversational explanations
- AI methods to support argumentative explainability
Argumentation for XAI
- Applications of argumentation for explainability in the fields of AI (e.g. machine learning, machine reasoning, multi-agent systems, natural language processing) and overlapping fields of research (e.g. optimisation, human-computer interaction, philosophy and social sciences)
- User-acceptance and evaluation of argumentation-based explanations
- Software systems that provide argumentation-based explanations
- Other defeasible reasoning approaches to XAI
Program
09:15 - 09:30: Welcome
09:30 - 10:30: Invited talk: Nick Bassiliades - Using Argumentation for Explaining AI Systems' Decisions
Argumentation is the study of how conclusions can be supported or undermined by premises, mainly through logical reasoning. Explainable AI (XAI) is artificial intelligence whose decisions or predictions humans can understand. Explaining an AI system's decision, i.e., tracking the steps that led to it, was an easy task in the early stages of AI, as most systems were logic-based. This has changed: data-driven methods based on complex numerical computation now produce highly accurate "black box" predictive models that cannot easily be captured by logic-based approaches, and whose decisions even their designers cannot explain. Argumentation has strong explainability capabilities, as it can translate the decision of an AI system into an argumentation procedure that shows, step by step, how a result is reached. Argumentation also supports reasoning under uncertainty and can find solutions when faced with conflicting information. In this talk, we will elaborate on the combination of argumentation and XAI, reviewing the important methods, studies and implementations that use argumentation to provide explainability in AI. More specifically, we will present how argumentation can enable explainability for solving various types of problems in decision-making, justification of an opinion, and dialogues. Subsequently, we will review argumentation approaches for explainable systems in various application domains, such as medical informatics, law, the Semantic Web, security, robotics, and some general-purpose systems. Finally, we will overview approaches that combine machine learning and argumentation theory towards more interpretable predictive models. The talk will conclude by elaborating on research efforts at the Intelligent Systems Lab of the Aristotle University of Thessaloniki to combine explainability, argumentation and machine learning.
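As a toy illustration of the step-by-step explanations described above (a minimal sketch, not code from the talk; all rules, priorities and data are hypothetical), the following Python snippet derives a decision from conflicting pro and con arguments and prints the dialectical trace that justifies it:

```python
# Illustrative sketch only (not code from the talk): a decision is
# explained by listing the pro and con arguments behind it and showing
# how their conflict is resolved. Rules, priorities and data are
# hypothetical examples.

from dataclasses import dataclass

@dataclass
class Argument:
    claim: str      # the decision this argument supports
    reason: str     # the premise it rests on
    priority: int   # higher priority wins on conflict

def explain(arguments):
    """Print a step-by-step argumentative trace and return the decision."""
    for arg in arguments:
        print(f"Argument for '{arg.claim}': {arg.reason} (priority {arg.priority})")
    winner = max(arguments, key=lambda a: a.priority)
    for arg in arguments:
        if arg.claim != winner.claim:
            print(f"Conflict: '{winner.reason}' defeats '{arg.reason}'")
    print(f"Decision: {winner.claim}")
    return winner.claim

explain([
    Argument("approve", "long positive repayment history", 1),
    Argument("reject", "debt above the policy threshold", 2),
])
```

Running it prints each argument, the conflict resolution, and the final decision, which is the kind of trace an argumentation-based explainer exposes to its users.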
10:30 - 11:00: Coffee Break
11:00 - 12:45: Session 1 - Justifications, Dialogues, Dispute Trees
- Joeri Peters, Floris Bex and Henry Prakken. Justifications derived from inconsistent case bases using authoritativeness. (Slides)
- Wijnand van Woerkom, Davide Grossi, Henry Prakken and Bart Verheij. Justification in Case-Based Reasoning. (Slides)
- Guilherme Paulino-Passos and Francesca Toni. On Monotonicity of Dispute Trees as Explanations for Case-Based Reasoning with Abstract Argumentation. (Slides)
- Kristijonas Čyras, Timotheus Kampik and Qingtao Weng. Dispute Trees as Explanations in Quantitative (Bipolar) Argumentation. (Slides)
- Tjitze Rienstra, Jesse Heyninck, Gabriele Kern-Isberner, Kenneth Skiba and Matthias Thimm. Explaining Argument Acceptance in ADFs. (Slides)
14:00 - 15:00: Invited talk: Roberta Calegari - Computational argumentation and symbolic reasoning for explainable AI
New research efforts toward explainable artificial intelligence (XAI) aim to mitigate the opacity issue and pursue the ultimate goal of building understandable, accountable and trustable intelligent systems, although there is still a long way to go. In this context, it is increasingly recognised that symbolic approaches to machine intelligence may have a critical role to play in overcoming the current limitations arising from the intrinsic opacity of sub-symbolic approaches. In particular, among various approaches to XAI, argumentative models have been advocated in both the AI and social science literature, as their dialectical nature appears to fit some desirable features of the explanation activity. Computational argumentation is a well-established paradigm in AI, at the intersection of knowledge representation and reasoning, natural language processing and multi-agent systems. It is based on defining argumentation frameworks comprising sets of arguments and dialectical relations between them (e.g., attack, support), as well as semantics (e.g., definitions of dialectically acceptable sets of arguments, or of the dialectical strength of arguments) with accompanying computational machinery. In this talk, I will show how computational argumentation, combined with a variety of mechanisms for mining argumentation frameworks, can be used to support various forms of XAI as well as existing approaches in the field.
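For readers unfamiliar with the formalism, here is a minimal Python sketch (an illustrative toy, not code from the talk) of such an argumentation framework: a set of arguments, an attack relation, and the grounded semantics computed as the least fixed point of the characteristic function F(S) = {x | S defends x}:

```python
# Minimal sketch of a Dung-style abstract argumentation framework:
# a set of arguments plus an attack relation, with the grounded
# extension computed as the least fixed point of the characteristic
# function. The three-argument framework below is a hypothetical toy.

def defends(attacks, defenders, x):
    """True iff `defenders` counter-attacks every attacker of x."""
    attackers = {a for (a, b) in attacks if b == x}
    return all(any((d, a) in attacks for d in defenders) for a in attackers)

def grounded_extension(arguments, attacks):
    """Iterate F(S) = {x | S defends x} from the empty set to a fixed point."""
    extension = set()
    while True:
        defended = {x for x in arguments if defends(attacks, extension, x)}
        if defended == extension:
            return extension
        extension = defended

arguments = {"a", "b", "c"}
attacks = {("b", "a"), ("c", "b")}   # b attacks a, c attacks b
print(sorted(grounded_extension(arguments, attacks)))  # ['a', 'c']
```

Here c is unattacked, so it is accepted; c defeats b, thereby defending a, which is accepted in turn. This fixed-point computation is one of the simplest instances of the "accompanying computational machinery" mentioned above.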
15:00 - 15:05: Micro-break
15:05 - 15:45: Session 2 - ArgXAI & Machine Learning
- Nico Potyka, Xiang Yin and Francesca Toni. On the Tradeoff Between Correctness and Completeness in Argumentative Explainable AI. (Slides)
- Giulia Vilone and Luca Longo. An XAI method for the Automatic Formation of an Abstract Argumentation Framework from a Neural Network and its Objective Evaluation. (Slides)
16:10 - 17:10: Session 3 - Applications
- Nikolaos Spanoudakis, Antonis Kakas and Adamos Koumi. Application Level Explanations for Argumentation-based Decision Making. (Slides)
- Atefeh Keshavarzi Zafarghandi and Davide Ceolin. Fostering Explainable Online Review Assessment Through Computational Argumentation. (Slides)
- Loan Ho, Victor de Boer, M. Birna van Riemsdijk, Stefan Schlobach and Myrthe Tielman. Abstract Argumentation for Hybrid Intelligence Scenarios. (Slides)
- Lars Bengel, Lydia Blümel, Tjitze Rienstra and Matthias Thimm. Argumentation-based Causal and Counterfactual Reasoning. (Slides)
Submission
Submissions should be up to 12 pages in PDF format, including abstract, figures and references, and follow the CEUR-WS template (single column). Reviewing will be single-blind. All submissions must be made electronically through the EasyChair conference system. Accepted papers will be included in the CEUR-WS workshop proceedings after a careful review process. For a paper to be included in the proceedings, at least one of its authors must register for and attend the COMMA conference to present it.
Important Dates
- Submission deadline: 15th July 2022 (AoE, final deadline extension)
- Notification: 15th August 2022
- Camera-ready deadline: 1st September 2022
- Workshop: 12th September 2022
Organisation
- Kristijonas Čyras, Ericsson Research
- Timotheus Kampik, Umeå University & SAP Signavio
- Oana Cocarascu, King's College London
- Antonio Rago, Imperial College London
Contact us at argxai22@easychair.org.
Programme Committee
- Kristijonas Čyras, Ericsson Research
- Timotheus Kampik, Umeå University and SAP BPI
- Oana Cocarascu, King's College London
- Antonio Rago, Imperial College London
- Nico Potyka, Imperial College London
- Johannes Wallner, Graz University of Technology
- Tjitze Rienstra, Maastricht University
- Pietro Baroni, University of Brescia
- Henry Prakken, Utrecht University, University of Groningen
- Wijnand van Woerkom, Utrecht University
- Leila Amgoud, CNRS
- Alejandro Javier García, Universidad Nacional del Sur
- Fernando Tohme, Universidad Nacional del Sur
- Antonis Kakas, University of Cyprus
- Beishui Liao, Zhejiang University
- Nadin Kökciyan, University of Edinburgh
- Isabel Sassoon, Brunel University London
- Jérôme Delobelle, University of Paris
- Anna Collins, King's College London
- Xiuyi Fan, Nanyang Technological University
- Alexandros Vassiliades, Aristotle University of Thessaloniki
- Nick Bassiliades, Aristotle University of Thessaloniki
- Alison R. Panisson, Federal University of Santa Catarina
- Mariela Morveli-Espinoza, Federal University of Technology - Paraná
- Juan Carlos Nieves, Umeå University
- Jieting Luo, Zhejiang University
- Roberta Calegari, University of Bologna
- Francesca Mosca, King's College London
- Elizabeth Sklar, University of Lincoln
- Simon Parsons, University of Lincoln
- Madalina Croitoru, University of Montpellier
- Zeynep G. Saribatur, Vienna University of Technology
- Christian Straßer, Ruhr-University Bochum
- Elizabeth Black, King's College London
- Odinaldo Rodrigues, King's College London
- Serena Villata, CNRS
- Anthony Hunter, University College London
- Markus Ulbricht, Leipzig University
We thank CEUR-WS.org for supporting this workshop.