Aligning speech and co-speech gesture in a constraint-based grammar

Authors

  • Katya Alahverdzhieva, School of Informatics, University of Edinburgh
  • Alex Lascarides, School of Informatics, University of Edinburgh
  • Dan Flickinger, Stanford University

Keywords:

co-speech gesture, constraint-based grammar, compositional semantics, underspecification

Abstract

This paper concerns the form-meaning mapping of communicative actions consisting of speech and improvised co-speech gestures. Based on the findings of previous cognitive and computational approaches, we advance a new theory in which this form-meaning mapping is analysed in a constraint-based grammar. Motivated by observations in naturally occurring examples, we propose several construction rules, which use linguistic form, gesture form and their relative timing to constrain the derivation of a single speech-gesture syntax tree, from which a meaning representation can be composed via standard methods for semantic composition. The paper further reports on implementing these speech-gesture construction rules within the English Resource Grammar (Flickinger 2000). Since gestural form often underspecifies its meaning, the logical formulae that are composed via syntax are underspecified so that current models of the semantics/pragmatics interface support the range of possible interpretations of the speech-gesture act in its context of use.

DOI:

https://doi.org/10.15398/jlm.v5i3.167

Published

2018-01-18

How to Cite

Alahverdzhieva, K., Lascarides, A., & Flickinger, D. (2018). Aligning speech and co-speech gesture in a constraint-based grammar. Journal of Language Modelling, 5(3), 421–464. https://doi.org/10.15398/jlm.v5i3.167

Section

Articles