Thesis Contents

Papers

These are the papers published during my PhD. In red are the papers on which my thesis is based.
12. The Mirror Agent Model: a Bayesian Architecture for Interpretable Agent Behavior. Michele Persiani, Thomas Hellström. EXTRAAMAS workshop, 2022. (file)
11. Informative Communication of Robot Plans. Michele Persiani, Thomas Hellström. PAAMS, 2022. (file)
10. Policy Regularization Techniques for Legible Behavior (full paper). Michele Persiani, Thomas Hellström. Conditionally accepted at NCAA, 2022. (file)
9. Policy Regularization for Legible Behavior (extended abstract). Michele Persiani. HARL workshop (ICDL), 2021. (file)
8. Inference of the Intentions of Unknown Agents in a Theory of Mind Setting. Michele Persiani, Thomas Hellström. PAAMS, 2021. (file)
7. Towards We-intentional Human-Robot Interaction using Theory of Mind and Hierarchical Task Network. Maitreyee Tewari, Michele Persiani. CHIRA, 2021. (file)
6. Probabilistic Plan Legibility with Off-the-shelf Planners. Michele Persiani, Thomas Hellström. PlanRob workshop (ICAPS), 2021. (file)
5. Mediating Joint Intentions with a Dialogue Management System (extended abstract). Michele Persiani, Maitreyee Tewari. NeHuAI workshop (ECAI), 2020. (file)
4. Traveling Drinksman. Michele Persiani, Cagatay Odabasi, Florenz Graf, Mohit Kalra, Thomas Hellström, Birgit Graf. ISR, 2020. (file)
3. Intent Recognition from Speech and Plan Recognition. Michele Persiani, Thomas Hellström. PAAMS, 2020. (file)
2. Variational Autoencoding Dialogue Sub-structures Using a Novel Hierarchical Annotation Scheme. Maitreyee Tewari, Michele Persiani. CiSt, 2020. (file)
1. Unsupervised Inference of Object Affordances from Text Corpora. Michele Persiani, Thomas Hellström. NoDaLiDa, 2019. (file)

Brief Research Overview

My research focused on two complementary problems regarding intentionality in artificial agents: intent recognition, which detects and classifies the intentions expressed by observations of an agent under an assumed agent model, and interpretable behavior, which generates behavior whose underlying intention is easy to discern, and for which the correct agent model is easy to identify.
Intent recognition can be cast as first-order theory of mind reasoning, in which an observer attempts to reconstruct part of the mind of an actor, i.e., its intention. The following Mirror Agent Model (see thesis and papers) describes the intent recognition process in computational form, comprising an observer (left) and an actor (right). The agents in the picture are two BDI agents.
For additional details you can refer to papers 3 and 7.
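To make the first-order reading concrete, here is a minimal Bayesian sketch of intent recognition, under assumptions of my own: a discrete set of candidate intentions and a known action likelihood. The function names and toy action model below are hypothetical, not taken from the papers.

```python
# Hypothetical sketch of first-order intent recognition as Bayesian inference:
# the observer scores each candidate intention by how well it explains the
# observed actions, P(intention | obs) ∝ P(obs | intention) * P(intention).

def infer_intention(observations, intentions, likelihood, prior):
    """Return the observer's posterior distribution over candidate intentions."""
    posterior = {}
    for g in intentions:
        p = prior[g]
        for o in observations:
            p *= likelihood(o, g)  # probability of action o if the actor pursues g
        posterior[g] = p
    z = sum(posterior.values()) or 1.0  # normalize (guard against all-zero scores)
    return {g: p / z for g, p in posterior.items()}

# Toy usage: two candidate goals, the actor is observed moving "left" twice.
intentions = ["go_left", "go_right"]
prior = {"go_left": 0.5, "go_right": 0.5}
likelihood = lambda o, g: 0.8 if o in g else 0.2  # made-up action model
print(infer_intention(["left", "left"], intentions, likelihood, prior))
# posterior strongly favors "go_left" (~0.94)
```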


On the other hand, legible behavior is behavior whose underlying intention is easily discernible by an observer. This is equivalent to a second-order theory of mind setting, in which an agent acts knowing that its mind is being inferred, and reasons about which behaviors are interpretable by the observer. The following Mirror Agent Model describes a second-order theory of mind setting between an agent (left) and an observer (right).
For additional details you can refer to papers 6, 9, 11 and 12.
Notice that the model is essentially the same as in the first-order case; what changes is how the framework is used, as the sketch below illustrates. In our current research we aim to unify intent recognition and interpretable behavior in theory of mind using the Mirror Agent Model. See the papers and thesis.
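As a hypothetical illustration of that second-order use, a legible actor can simulate the observer's first-order inference and prefer the action that makes its true intention most probable in the observer's eyes. The sketch reuses the infer_intention function and toy setup from above; again, these names are assumptions for illustration, not the method from the papers.

```python
def most_legible_action(actions, true_goal, history, intentions, likelihood, prior):
    """Pick the action that maximizes the observer's inferred probability of the
    actor's true goal: a minimal second-order theory of mind sketch."""
    def observer_belief(action):
        # Simulate the observer's first-order inference on the extended history.
        posterior = infer_intention(history + [action], intentions, likelihood, prior)
        return posterior[true_goal]
    return max(actions, key=observer_belief)

# Toy usage: with the same made-up action model as above, "left" is the most
# legible next action for an actor whose true goal is "go_left".
print(most_legible_action(["left", "right"], "go_left", [], intentions, likelihood, prior))
```

The point of the sketch is that legibility needs no new inference machinery: the actor runs the observer's model of itself, which is exactly the mirroring idea.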


Kappa of the thesis

The kappa of my PhD thesis, Expressing and Recognizing Intentions, can be found at the following link. The thesis extends some of the work from the licentiate, but mostly focuses on the Mirror Agent Model, which the licentiate does not cover.
Almost all of the thesis content is already in place, but it is still subject to changes; the final version will be available 1-2 months before the defense. Also, Papers IV and V (as numbered in the thesis) are still under review and are likewise subject to change.

Licentiate

On 7 September 2020 I received my licentiate degree with the thesis Computational Models for Intent Recognition in Robotic Systems (link).