
Irrelevant Explanations: a Logical Formalization and a Case Study

EasyChair Preprint 13141

10 pages · Date: April 30, 2024

Abstract

Explaining the behavior of AI-based tools, whose results may be unexpected even to experts, has become a major request from society and a major concern of AI practitioners and theoreticians. In this position paper we raise two points: (1) irrelevance is more amenable to a logical formalization than relevance; (2) since effective explanations must take into account both the context and the receiver of the explanation (called the explainee), so should the definition of irrelevance. We propose a general logical framework characterizing context-aware and receiver-aware irrelevance, and provide a case study on an existing tool, based on Semantic Web technologies, that prunes irrelevant parts of an explanation.
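To make the idea of pruning concrete, the following is a minimal, illustrative sketch (not the authors' formalization or tool) in Python with rdflib: an explanation is taken to be an RDF graph, and triples whose predicates fall outside a vocabulary assumed relevant to a given explainee and context are dropped. All names, namespaces, and the choice of predicate-based filtering are hypothetical assumptions for illustration only.

```python
# Illustrative sketch: pruning an RDF "explanation" graph by keeping only
# triples whose predicates an explainee is assumed to care about.
# The namespace and predicate names are invented for this example.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")

# A toy explanation graph, e.g. why an item was recommended.
explanation = Graph()
explanation.parse(data="""
@prefix ex: <http://example.org/> .
ex:item42 ex:recommendedBecause ex:sharedGenre .
ex:item42 ex:catalogId "B-9917" .
ex:sharedGenre ex:label "jazz" .
""", format="turtle")

# Predicates assumed relevant to this explainee in this context;
# everything else is treated as irrelevant and pruned.
relevant_predicates = {EX.recommendedBecause, EX.label}

pruned = Graph()
for s, p, o in explanation:
    if p in relevant_predicates:
        pruned.add((s, p, o))

# The internal catalog identifier is dropped as irrelevant to the end user.
print(pruned.serialize(format="turtle"))
```

In a receiver-aware setting, the set of relevant predicates would itself depend on who the explainee is (end user, domain expert, developer) and on the context of the explanation, rather than being fixed as in this toy example.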

Keyphrases: Semantic Web, XAI, logic

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:13141,
  author       = {Simona Colucci and Francesco M. Donini and Tommaso Di Noia and Claudio Pomo and Eugenio Di Sciascio},
  title        = {Irrelevant Explanations: a Logical Formalization and a Case Study},
  howpublished = {EasyChair Preprint 13141},
  year         = {EasyChair, 2024}
}