Workshop - DREXA: Dialogues-Reasoning-Ethics-Explainability-Argumentation

Date: June 19, 2025
Venue: Clayton Hotel Chiswick, London, UK (in conjunction with the Computing Conference 2025)

When faced with incomplete or inconsistent information, humans reason through argumentation: they provide arguments for and against a topic, examine the relationships between them, and then decide which of them are acceptable. In this way, the epistemic state of the discussed topic can be evaluated. In AI, the field of computational argumentation refers to the use of computational methods and tools to construct, analyse, and evaluate arguments across a range of domains.
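
To make the idea of deciding which arguments are acceptable concrete, here is a minimal sketch in Python of a Dung-style abstract argumentation framework evaluated under grounded semantics. The arguments and attack relation below are illustrative examples only, not taken from any particular system discussed at the workshop.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (arguments, attacks) by iterating the characteristic
    function F(S) = {a | S defends a} from the empty set to a fixpoint."""
    # For each argument, collect the set of its attackers.
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}

    def defended(a, s):
        # a is defended by s if every attacker of a is itself
        # attacked by some member of s.
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    s = set()
    while True:
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# Example: a attacks b, and b attacks c. Argument a is unattacked,
# so it is accepted; b is defeated by a; c is defended by a.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(grounded_extension(args, atts))  # accepted arguments: a and c
```

The fixpoint iteration mirrors the informal description above: unattacked arguments are accepted first, arguments they defeat are rejected, and arguments they defend are accepted in turn.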

Computational argumentation leverages logical foundations and rule-governed mechanisms to power reasoning engines, while its dialectical nature and affinity with common-sense reasoning make it readily comprehensible to users. It also enables AI systems to justify their conclusions by providing a structured representation of the arguments supporting them. This transparency not only enhances trust and accountability but also allows users to understand and critique the reasoning process, leading to more informed decision-making.

This workshop is designed to delve into the dynamic and interdisciplinary field of computational argumentation, focusing on a range of pertinent topics:

  • Argumentation Theory
  • Reasoning
  • Non-monotonic Logic
  • Explainable AI (XAI)
  • AI Ethics
  • Dialogue Systems
  • Human-AI Collaboration
  • Machine Learning Algorithms for Argument Mining

Papers intended to foster discussion and the exchange of ideas are welcome from academics, researchers, practitioners, students, the private sector, and anyone else with an interest in the field.

Important Dates

  • Submission Deadline: 17 March 2025
  • Notification of Acceptance: 24 April 2025
  • Registration Deadline: 15 May 2025
  • Camera-Ready Paper Submission: 15 May 2025
  • Workshop Date: 19 June 2025

Submission Guidelines

  • The workshop accepts submissions of up to 8 pages (double column), excluding references.
  • The author(s) of an accepted paper will be expected to give a 15-minute presentation of their work, followed by a 5-minute Q&A session.
  • The review process is double-blind: reviewers and authors remain anonymous to each other, and there is no rebuttal phase.
  • A condition for inclusion in the workshop proceedings is that at least one co-author presents the paper at the workshop.
  • Papers must be written in English.
  • Papers should be thoroughly checked and proofread before submission. Once a paper has been submitted, no changes can be made during the refereeing process; if the paper is accepted, the authors will have a chance to make minor revisions before the final submission.
  • No additions to or deletions from the author list may be made after paper submission, either during the review period or, in case of acceptance, at the camera-ready stage.

Submission Process

Electronic submissions will be handled via OpenReview.
Authors who submit their work commit to presenting their paper at the workshop in case of acceptance.

Submit Now

Publication

Workshop proceedings will be published through the OHAAI outlet (https://ohaai.github.io/).

Organizers

This workshop is proposed by the organisers of the Online Handbook of Argumentation for AI (OHAAI). OHAAI is a curation of selected peer-reviewed papers summarising ongoing PhD work on Argumentation in AI, published as an annual online Open Access handbook.

  • Elfia Bezou-Vrakatseli is a PhD candidate in the UKRI Centre for Doctoral Training (CDT) in Safe and Trusted Artificial Intelligence (STAI) at King’s College London and Imperial College London, where she is also a representative of the Equality, Diversity, and Inclusion (EDI) committee. She is an Affiliate of the King's Institute for Artificial Intelligence and is currently part of The Alan Turing Institute's Enrichment Scheme. Her expertise is in argumentation, and her research focuses on using argumentation tools to enhance communication between humans, and between humans and AI systems.
  • Andreas Xydis is a Post-Doctoral Research Associate in Intelligent Systems at the Lincoln Institute for Agri-food Technology (LIAT) at the University of Lincoln, UK. His expertise is in argumentation, and his research focuses on argumentation-based dialogues, non-monotonic reasoning, and the use of argumentation tools for explainability. His work aims to bridge the gap between formal logic-based models of dialogue and communication as witnessed in real-world dialogue, enhancing communication between humans and/or AI systems as well as human trust in AI systems.