A central quest in explainable AI is understanding the decisions made by learned classifiers. Three dimensions of this understanding have received significant attention in recent years. The first relates to characterizing conditions on instances that are necessary and sufficient for decisions, thereby providing abstractions of instances that can be viewed as the "reasons behind decisions." The second relates to characterizing minimal conditions that are sufficient for a decision, thereby identifying maximal aspects of the instance that are irrelevant to the decision. The third relates to characterizing minimal conditions that are necessary for a decision, thereby identifying minimal perturbations to the instance that yield alternate decisions. This tutorial discusses a comprehensive, semantic, and computational theory of explainability along these dimensions, based on recent developments in symbolic logic. The tutorial also discusses how this theory is particularly applicable to non-symbolic classifiers such as those based on Bayesian networks, decision trees, random forests, and some types of neural networks.
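The "minimal sufficient conditions" dimension above (sufficient reasons, aka PI-explanations) can be illustrated with a brute-force sketch. The toy classifier below is our own hypothetical example, not one from the tutorial; it simply enumerates the minimal subsets of an instance's feature settings that fix the classifier's decision regardless of how the remaining features are set:

```python
from itertools import combinations, product

def classifier(x):
    # Hypothetical toy classifier for illustration:
    # positive iff x1 is set and at least one of x2, x3 is set.
    x1, x2, x3 = x
    return bool(x1 and (x2 or x3))

def is_sufficient(instance, subset, f):
    """A subset of the instance's feature settings is sufficient for the
    decision if every completion of the free features yields the same
    decision as the full instance."""
    decision = f(instance)
    free = [i for i in range(len(instance)) if i not in subset]
    for values in product([0, 1], repeat=len(free)):
        candidate = list(instance)
        for i, v in zip(free, values):
            candidate[i] = v
        if f(tuple(candidate)) != decision:
            return False
    return True

def sufficient_reasons(instance, f):
    """Enumerate the minimal sufficient subsets (PI-explanations),
    smallest first, skipping supersets of reasons already found."""
    n = len(instance)
    reasons = []
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if any(set(r) <= set(subset) for r in reasons):
                continue  # a smaller reason is contained: not minimal
            if is_sufficient(instance, subset, f):
                reasons.append(subset)
    return reasons

print(sufficient_reasons((1, 1, 0), classifier))  # → [(0, 1)]
```

For the instance (x1=1, x2=1, x3=0), the only sufficient reason is {x1, x2}: fixing those two settings forces a positive decision whatever x3 is, and x3=0 is irrelevant to the decision. Real implementations (as the tutorial covers) avoid this exponential enumeration by compiling the classifier into a tractable circuit.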
-- Tutorial paper, "Logic for Explainable AI":
-- Tutorial slides: ~darwiche/XAI/
-- This recording is a slightly expanded version of the live tutorial that Adnan Darwiche of UCLA gave at the 2023 ACM/IEEE Symposium on Logic in Computer Science (LICS).
00:00 Introduction
02:39 From numeric to symbolic classifiers
10:33 Representing classifiers using tractable circuits
14:28 Representing classifiers using class formulas
18:52 Discrete logic (vs Boolean logic)
25:11 The sufficient reasons for decisions: why was a decision made? (aka abductive explanations, PI-explanations)
34:50 The complete reasons for decisions: instance abstraction
45:26 The necessary reasons for decisions: how can a decision be changed? (aka contrastive explanations, counterfactual explanations)
51:04 Terminology: PI-explanations, abductive explanations, contrastive explanations, counterfactual explanations
52:38 A logical operator for computing instance abstractions (complete reasons)
1:00:54 The first theory of explanation: A summary
1:05:23 Beyond simple explanations: A key insight
1:10:33 The general reasons for decisions: instance abstraction
1:15:14 Complete vs general reasons (two notions of instance abstraction)
1:19:05 The general sufficient and general necessary reasons for decisions
1:26:22 The second theory of explanation: A summary
1:32:21 Targeting a new decision
1:35:39 Selection semantics of complete and general reasons (instance abstractions)
1:40:43 Compiling classifiers into class formulas: decision trees, random forests, Bayesian networks, and (binary) neural networks
1:53:54 Conclusion