DiaGest2: Multimodal Dialogue Acts in Task-oriented Dialogues

GENERAL DESCRIPTION
The aim of the project is to build models of selected dialogue act categories for task-oriented dialogues. The study is based on a corpus of recorded task-oriented dialogues in which the task was to reconstruct a figure made of paper and some common artefacts (batteries, safety matches). The instruction giver (IG) was provided with the complete figure, and the instruction follower (IF) with all the materials from which it had been made. The experiment was conducted in two settings: (a) the IG and the IF could see each other, and the IG could see the actions of the IF, but the IF could not see the figure the IG was provided with; (b) the IG and the IF could not see each other, and the IG could not see the figure being reconstructed by the IF.

Synchronised footage of the IGs and their respective IFs will be analysed for interactional behaviour.
The models of dialogue acts will include both speech and gesture. In the spoken component, the lexical content, syntactic structure, intonation, and prominence will be described. Gesture analysis will focus on hand and head movements.
Three-dimensional reconstructions of hand gestures and head movements, descriptions of the verbal content, and phonetic features of their realisations (mostly prosody) will be incorporated into the models of multimodal dialogue acts.

While our primary aim is to discover patterns of human multimodal communicative behaviour, the results of our study may also be applied in several areas, including the creation of animated agents and avatars.

We would like to express our gratitude to all the people who agreed to take part in the recordings, especially the students of the Faculty of Modern Languages and Literature, AMU, Poznań.

MODELS OF MULTIMODAL DIALOGUE ACTS
As a result of our project, we propose complex, multimodal models of four categories of dialogue acts:

* Instruction @ Task (i.e., instructions directly related to the task)
* Positive Feedback (both allo- and autofeedback in Bunt's categorisation)
* Negative Feedback (both allo- and autofeedback in Bunt's categorisation)
* Communication Management

The following aspects of realisation are taken into account (a schematic sketch of the resulting record follows the list):

* Gestural realisation (hand movements: gestural phrase and its phases, gesture category, and gesture space)
* Head movement (for selected realisations)
* Gaze direction (for selected realisations)
* Syntactic frame and its components
* Lexical content
* Semantic structure (for Instructions) in terms of a basic ontology
* Prosodic realisation (prosody-driven focusing and delimiting mechanisms)
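
To make the structure of these models concrete, here is a minimal sketch of how a single annotated dialogue act could be represented as a data record. The category names mirror the list above; all class and field names (DialogueAct, GestureRealisation, and so on) are our own illustrative assumptions, not the project's actual annotation format.

```python
# Illustrative sketch of a multimodal dialogue act record.
# Class and field names are hypothetical; they only mirror the
# categories and aspects of realisation listed above.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class DialogueActCategory(Enum):
    INSTRUCTION_AT_TASK = "Instruction@Task"
    POSITIVE_FEEDBACK = "Positive Feedback"    # allo- and autofeedback
    NEGATIVE_FEEDBACK = "Negative Feedback"    # allo- and autofeedback
    COMMUNICATION_MANAGEMENT = "Communication Management"

@dataclass
class GestureRealisation:
    phases: list[str]      # e.g. ["preparation", "stroke", "retraction"]
    category: str          # e.g. "deictic", "iconic"
    gesture_space: str     # e.g. "centre-centre" in McNeill-style coordinates

@dataclass
class DialogueAct:
    category: DialogueActCategory
    lexical_content: str                      # transcribed words
    syntactic_frame: str                      # e.g. "imperative VP"
    gesture: Optional[GestureRealisation] = None
    head_movement: Optional[str] = None       # for selected realisations
    gaze_direction: Optional[str] = None      # for selected realisations
    semantic_structure: Optional[str] = None  # Instructions only; basic ontology
    prosody: Optional[str] = None             # focusing / delimiting cues

# Example: a simple instruction with an accompanying deictic gesture.
act = DialogueAct(
    category=DialogueActCategory.INSTRUCTION_AT_TASK,
    lexical_content="put the battery on the left",
    syntactic_frame="imperative VP",
    gesture=GestureRealisation(
        phases=["preparation", "stroke", "retraction"],
        category="deictic",
        gesture_space="centre-centre",
    ),
    prosody="nuclear accent on 'left'",
)
```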

Our avatar, Ludwik, is a ready-made, freely available, fully articulated Blender figure that we use to illustrate some representative gestures and body movements.
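
For readers who want to reproduce such illustrations, here is a minimal sketch of posing one bone of a rigged figure from Blender's Python console. The object name "Ludwik" and the bone name "hand.R" are assumptions for illustration; actual names depend on the armature in use.

```python
# Minimal sketch: posing and keyframing one bone of a rigged Blender
# figure via the bpy API. Object and bone names are assumed; adjust
# them to the actual armature.
import math
import bpy

armature = bpy.data.objects["Ludwik"]   # hypothetical object name
bone = armature.pose.bones["hand.R"]    # hypothetical bone name

bone.rotation_mode = 'XYZ'
bone.rotation_euler = (0.0, 0.0, math.radians(40))  # rotate the hand

# Keyframe the pose so the gesture can be rendered as an animation frame.
bone.keyframe_insert(data_path="rotation_euler", frame=1)
```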

The project started in October 2009 and was completed in August 2010.

For more information, please contact Maciej Karpinski (maciej.karpinski [at] amu.edu.pl).