Principle-based and Explainable Reasoning: From Humans to Machines
Timotheus Kampik
### About Me
* 4th year PhD Student
* Research interests:
* Automated reasoning
* Engineering intelligent systems
* Also *Scientist in Residence, Product*, Signavio / SAP
(Business Process Intelligence)
#### Outline
* In many real-world application scenarios we need machines that
**learn** and **reason**
* We can explain reasoning by making use of *principles*
* *Human* reasoning has been studied from descriptive and prescriptive perspectives for (at least) centuries
* Can the principles according to which humans (should) reason inform the way machines reason?
By now, we know that economic rationality is not a good model of human (intelligent) decision-making.
Economists try to adjust their models accordingly.
Key improvement: modeling knowledge in decision scenarios.
Kahneman, Daniel. *Maps of bounded rationality.*
Rubinstein, Ariel. *Modeling bounded rationality.*
#### Consistent Preferences in Knowledge-based Systems
* We want to determine the relevant citizenship (passports) of a client
* Example: case handling of immigration or tax administration
* We use decision management software (a real-world system)
* The decision models can be deployed to high-scalability engines such as [jDMN](https://goldmansachs.github.io/jdmn/)
Kampik & Nieves. *Abstract Argumentation and the Rational Man.*
#### Example: Decision Model and Notation (DMN)
* Decision:
* Set of ``if ... then ...`` rules
* Aggregation function or order on all rules
* Graphical/XML model of data sources and hierarchical decisions
* Open standard (OMG)
DMN Example Decision
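As a toy illustration (not the standard's full semantics), a decision table with the ``FIRST`` hit policy can be sketched as an ordered list of ``if ... then ...`` rules; the passport rules below are made up:

```python
# Minimal sketch of a DMN-style decision table with the FIRST hit policy:
# rules are tried in priority order, the first match determines the output.
def evaluate(table, inputs):
    for condition, output in table:
        if condition(inputs):
            return output
    return None  # table is incomplete for these inputs

# Hypothetical passport-relevance rules, highest priority first.
table = [
    (lambda i: i["passport"] in {"DE", "FR"}, "EU passport: relevant"),
    (lambda i: i["passport"] == "NO", "EEA passport: relevant"),
    (lambda i: True, "other passport: check manually"),
]

evaluate(table, {"passport": "NO"})  # "EEA passport: relevant"
```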
#### Rough Formalization Attempt, Decision Table
Tuple $\langle T, I, O, type, facet, R, P, C, H \rangle$
* $T$: table name
* $I, O$ finite disjoint sets of input and output attributes
* $type$: function that maps each attribute in $I \cup O$ to a data type
* $facet$: function that maps each attribute in $I \cup O$ to a set of *acceptable values*
* $R$: finite set of 'if ... then ...' rules
* $P$: total order on rules
* $C$: boolean completeness indicator
* $H$: hit policy indicator
Calvanese *et al.* *Semantics and Analysis of DMN Decision Tables.*
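The tuple above can be mirrored directly as a data structure; a rough sketch (all field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class DecisionTable:
    name: str          # T: table name
    inputs: frozenset  # I: input attributes
    outputs: frozenset # O: output attributes
    type_of: dict      # type: attribute -> data type
    facet: dict        # facet: attribute -> set of acceptable values
    rules: list        # R: finite set of (condition, conclusion) rules
    priority: list     # P: rule indices in total order
    complete: bool     # C: boolean completeness indicator
    hit_policy: str    # H: e.g. "UNIQUE", "FIRST", "COLLECT"
```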
#### Consistent Preferences in Knowledge-based Systems
* First, insert ``NO`` (Norwegian citizenship)
→ ``NO`` considered relevant
* Then, insert ``UK`` (UK citizenship) as additional option
→ neither ``NO`` nor ``UK`` relevant: not rational!
* Automated checks of decision management software don't detect this problem
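The inconsistency can be reconstructed with a toy rule base; the membership lists and rule logic below are hypothetical simplifications of the real decision model:

```python
# Toy reconstruction of the inconsistency: rule r1 still treats UK as an
# EU country, rule r2 does not - so adding UK first removes NO (as non-EU)
# and then UK itself is not considered EU-relevant.
EU_IN_R1 = {"DE", "FR", "UK"}   # outdated EU membership list used by r1
EU_IN_R2 = {"DE", "FR"}         # updated EU membership list used by r2

def relevant_passports(passports):
    ps = set(passports)
    # r1: if any passport is an EU passport, remove the non-EU passports
    if ps & EU_IN_R1:
        ps = ps & EU_IN_R1
    # r2: of the remaining passports, EU passports are relevant; a non-EU
    # passport is only relevant if no EU passport was found at all
    if ps & EU_IN_R2:
        return ps & EU_IN_R2
    return ps if not (set(passports) & EU_IN_R1) else set()

relevant_passports({"NO"})        # {'NO'}  -> NO considered relevant
relevant_passports({"NO", "UK"})  # set()   -> neither NO nor UK relevant
```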
#### There are more principles
* Example: legal reasoning, *burden of persuasion*
* If several conclusions/decisions are possible
* If in doubt, remain consistent with previous decision
#### "Reasoning Backwards"
We also know that humans "reason backwards".
We commit to a decision intuitively.
We make up a line of reasoning if necessary.
Haidt, Jonathan. *The emotional dog and its rational tail: a social intuitionist approach to moral judgment.*
#### "Reasoning backwards": Find an explanation that happens to be satisfied
* **Journalist**: *When you were at Chelsea, you were asked whether you would ever come to the Spurs and you said: 'Never, I love the Chelsea fans too much.' What has changed?*
* **Mourinho**: [*That was*] *before I was sacked* [*at Chelsea*].
#### Alternative to Reasoning Backwards
* Principle-based and evidence-based reasoning
* Explaining change
#### *Mourinho* Example
* Rule: maximize expected utility/payoff
* ``Payoff_Tottenham`` << ``Payoff_Chelsea``
changed to
``Payoff_Tottenham`` >> ``Payoff_Chelsea``
#### Example
* Change: new passport reported: ``UK``
* Principle violated: *reference independence*
* Explanation: "new" rules that fire
* If any passport is EU passport, remove non-EU passports
* If ``r1`` then ``r2``
* ``UK`` is ``EU`` in ``r1`` but is not ``EU`` in ``r2``
#### A Formal Perspective
Kampik \& Nieves. Abstract Argumentation and the Rational Man.
Kampik \& Gabbay. Explainable Reasoning in Face of Contradictions: From Humans to Machines.
#### Abstract Argumentation I
Dung, Phan Minh. *On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games.*
#### Abstract Argumentation II
Dung, Phan Minh. *On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games.*
#### Economic Rationality & Abstract Argumentation
* $AF = (AR, AT)$; arguments $AR$, e.g.: $\{a, b, c\}$, attacks $AT$, e.g.: $\{(a, b), (b, c)\}$
* Semantics $\sigma(AF)$ returns set of extensions $ES \subseteq 2^{AR}$
* Extension $E \in ES, E \subseteq AR$ **implies** preferences: $\forall S \subseteq AR, E \succeq S$
* Consistent preferences when **normally expanding** $AF$ (Economics' *ceteris paribus* assumption)
Dung, Phan Minh. *On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games.*
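For the small AF above, extensions can be computed by brute force; a sketch of naive semantics (maximal conflict-free sets), practical only for tiny frameworks:

```python
from itertools import combinations

AR = ("a", "b", "c")           # arguments
AT = {("a", "b"), ("b", "c")}  # attacks

def conflict_free(s):
    """A set is conflict-free if no member attacks another member."""
    return not any((x, y) in AT for x in s for y in s)

def naive_extensions():
    """Naive semantics: maximal (w.r.t. set inclusion) conflict-free sets."""
    cf = [set(s) for r in range(len(AR) + 1)
          for s in combinations(AR, r) if conflict_free(s)]
    return [e for e in cf if not any(e < f for f in cf)]

# For this AF there are two naive extensions: {b} and {a, c}
```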
#### Normal Expansion
* Given $AF = (AR, AT)$ and $AF' = (AR', AT')$, $AF'$ normally expands $AF$ iff:
* $AR \subseteq AR', AT \subseteq AT'$
* $(AT' \setminus AT) \cap (AR \times AR) = \emptyset$
* Only add arguments and attacks, don't change attacks between existing arguments
* Denoted by $AF \preceq_N AF'$ (Baumann, Brewka)
Baumann & Brewka. *Expanding Argumentation Frameworks: Enforcing and Monotonicity Results.*
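The definition translates almost directly into a check; a minimal sketch, with AFs represented as (arguments, attacks) pairs of sets:

```python
# Direct sketch of the normal-expansion check from the definition above.
def normally_expands(AF, AF2):
    """True iff AF2 = (AR2, AT2) normally expands AF = (AR, AT)."""
    AR, AT = AF
    AR2, AT2 = AF2
    return (AR <= AR2 and AT <= AT2
            # no new attacks between pre-existing arguments
            and not any(x in AR and y in AR for (x, y) in AT2 - AT))

AF1 = ({"a", "b"}, {("a", "b")})
AF2 = ({"a", "b", "c"}, {("a", "b"), ("c", "a")})
normally_expands(AF1, AF2)  # True: only a new argument and its attack were added
```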
#### (Weak) Reference Independence Principle
* Given semantics $\sigma$, $AF = (AR, AT), AF' = (AR', AT'), AF \preceq_N AF'$
* **Weak**: no matter what conclusion/extension we select from $AF$,
we can infer a conclusion from $AF'$ that implies consistent preferences
Kampik & Nieves. *Abstract Argumentation and the Rational Man.*
#### Example
If all newly added arguments are not valid conclusions, $a$ should remain a valid conclusion.
Because we make clear decisions, we consider arguments either valid conclusions or not (no undecided arguments)
Which semantics allow us to be economically rational in this scenario?
#### Semantics Families

| Family | Admissibility-Based | Weak Admissibility-Based | Naive-Based |
| --- | --- | --- | --- |
| Satisfied by any established semantics$^*$ | No | No | Yes |
| Satisfied by | - | - | Naive, CF2, presumably SCF2 and nsa(CF2) |

$^*$ Could potentially be satisfied by a semantics that always returns the empty set and hence is in all families.
#### *Degrees of Monotony* to Ensure Consistency
#### Limitations of Reference Independence
#### *Degrees of Monotony* Approach
* Given $AF, AF', AF \preceq_N AF', \sigma, E \in \sigma(AF)$
* Select an $E' \in \sigma(AF')$ that is *as monotonic as possible*
* Degree of monotony for non-empty $E$:
$\frac{|E \cap E'|}{|E|}$
* Property is not transitive
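The selection rule above can be sketched in a few lines (extension contents are illustrative):

```python
# Sketch of the degree-of-monotony selection described above.
def degree_of_monotony(E, E2):
    """|E ∩ E2| / |E| for a non-empty extension E."""
    return len(E & E2) / len(E)

def most_monotonic(E, candidates):
    """Pick an extension of the expanded AF that is as monotonic as possible."""
    return max(candidates, key=lambda E2: degree_of_monotony(E, E2))

E = {"a", "b"}
most_monotonic(E, [{"a", "c"}, {"a", "b", "d"}, {"c"}])  # {'a', 'b', 'd'}
```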
#### 'Degrees of Monotony'-Dilemma
#### Learning and Principle-based Reasoning
* We know that we can mine knowledge and reason about it
* Most principles are not generally applicable
* Can we learn which principles should be satisfied?
* How can we learn (and reason about) new principles?
#### Learning Knowledge and Reasoning About It I
* Example: process mining
* Mine Petri nets from event log data
* Formally analyze properties such as liveness and deadlock freedom
Van der Aalst, Wil. *Process mining.*
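The analysis step can be sketched for a toy net (the net itself is made up, and real process-mining tools work on much richer models): explore the reachable markings and report deadlocks, i.e. markings in which no transition is enabled.

```python
from collections import deque

# transitions: name -> (tokens consumed per place, tokens produced per place)
transitions = {
    "t1": ({"p1": 1}, {"p2": 1}),
    "t2": ({"p2": 1}, {"p3": 1}),
}

def enabled(marking, consume):
    return all(marking.get(p, 0) >= n for p, n in consume.items())

def fire(marking, consume, produce):
    m = dict(marking)
    for p, n in consume.items():
        m[p] -= n
    for p, n in produce.items():
        m[p] = m.get(p, 0) + n
    return m

def deadlocks(initial):
    """Breadth-first search over reachable markings; collect dead markings."""
    seen, queue, dead = set(), deque([initial]), []
    while queue:
        m = queue.popleft()
        key = frozenset(m.items())
        if key in seen:
            continue
        seen.add(key)
        succs = [fire(m, c, p) for (c, p) in transitions.values() if enabled(m, c)]
        if not succs:
            dead.append(m)
        queue.extend(succs)
    return dead

deadlocks({"p1": 1})  # [{'p1': 0, 'p2': 0, 'p3': 1}]
```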
#### Learning Knowledge and Reasoning About It II
* Example: explainable recommender systems
* Mine argumentation graphs from (movie) review data
* Enforce relaxed monotony principles and facilitate explainability
Rago *et al.* *Argumentation as a Framework for Interactive Explanations for Recommendations*
#### What about "Discrete" Principles in a "Gradual" Context?
* Principles of gradual argumentation are well-researched (Baroni, Rago, Toni)
* Assumption: in a decision-making context, it sometimes makes sense to discretize
* Open question: does it then make sense to apply some of the aforementioned principles?
Baroni *et al.* *From fine-grained properties to broad principles for gradual argumentation: A principled spectrum*
#### Learning Knowledge and Reasoning About It III
* Gap between technology ecosystems
#### Learning to Select Principles
* Connect historic data to KPIs
* Enforce different principles and select a set of non-mutually exclusive principles that maximizes KPI achievement
#### Learning New/Refined Principles
* Humans do this (legal system of any advanced society)
* Requires automated reasoning about reasoning
* Is in its infancy but a hot topic
[Popular science overview of SOTA](https://www.quantamagazine.org/building-the-mathematical-library-of-the-future-20201001/)
#### Questions?
*Explainable AI workshop*:
[https://extraamas.ehealth.hevs.ch/](https://extraamas.ehealth.hevs.ch/)
Special Issue in the Journal of Applied Logics - IfCoLog Journal:
*Explainable Reasoning in Face of Contradictions: Cross-disciplinary Perspectives* ([CFP](https://people.cs.umu.se/tkampik/CFP_Special_Issue_IfCoLoG_Journal.pdf))