Aligning speech and co-speech gesture in a constraint-based grammar


Katya Alahverdzhieva, School of Informatics, University of Edinburgh, United Kingdom
Alex Lascarides, School of Informatics, University of Edinburgh, United Kingdom
Dan Flickinger, Stanford University

Abstract


This paper concerns the form-meaning mapping of communicative
actions consisting of speech and improvised co-speech
gestures. Based on the findings of previous cognitive and
computational approaches, we advance a new theory in which this
form-meaning mapping is analysed in a constraint-based grammar.
Motivated by observations in naturally occurring examples, we
propose several construction rules, which use linguistic form,
gesture form and their relative timing to constrain the derivation
of a single speech-gesture syntax tree, from which a meaning
representation can be composed via standard methods for semantic
composition. The paper further reports on implementing these
speech-gesture construction rules within the English Resource
Grammar (Copestake and Flickinger 2000). Since gestural form often
underspecifies its meaning, the logical formulae that are composed
via syntax are underspecified so that current models of the
semantics/pragmatics interface support the range of possible
interpretations of the speech-gesture act in its context of use.


Keywords


Co-speech gesture, multimodal grammar, syntax, underspecified semantics



DOI: http://dx.doi.org/10.15398/jlm.v5i3.167

ISSN of the paper edition: 2299-856X