Gesture Use - From Real to Virtual Humans and Back

Kirsten Bergmann
Bielefeld University, Faculty of Technology, CITEC, Germany


In: Proceedings of the 2nd European and the 5th Nordic Symposium on Multimodal Communication, August 6-8, 2014, Tartu, Estonia

Linköping Electronic Conference Proceedings 110:1, pp. 1-3


Published: 2015-05-26

ISBN: 978-91-7519-074-7

ISSN: 1650-3686 (print), 1650-3740 (online)


When we are face to face with others, we use not only speech but also a multitude of nonverbal behaviors to communicate with each other. A head nod expresses accordance with what someone else said before. A facial expression like a frown indicates doubts or misgivings about what one is hearing or seeing. A pointing gesture is used to refer to something. More complex movements or configurations of the hands depict the shape or size of an object. Of all these nonverbal behaviors, gestures, the spontaneous and meaningful hand motions that accompany speech, stand out because they are very closely linked to the semantic content of the speech they accompany, in both form and timing. Speech and gesture together comprise an utterance and externalize thought; they are believed to emerge from the same underlying cognitive representation and to be governed, at least in part, by the same cognitive processes (Kendon, 2004; McNeill, 2005). Despite this important role of co-speech gestures in communication, little is known about the mechanisms that underlie gesture production in human speakers (cf. Bavelas et al., 2008; de Ruiter, 2007), or about the functions gesture use fulfills in communication and beyond, e.g., in educational or therapeutic contexts. My talk at the symposium showed how building computational simulation models of natural communicative behavior, and employing these models in virtual humans, allows us to address these research questions.




J. Bavelas, J. Gerwing, C. Sutton, and D. Prevost. 2008. Gesturing on the telephone: Independent effects of dialogue and visibility. Journal of Memory and Language, 58:495–520.

K. Bergmann and S. Kopp. 2009. GNetIc—Using Bayesian decision networks for iconic gesture generation. In Z. Ruttkay, M. Kipp, A. Nijholt, and H. Vilhjalmsson, editors, Proceedings of the 9th International Conference on Intelligent Virtual Agents, pages 76–89. Springer, Berlin/Heidelberg.

K. Bergmann and M. Macedonia. 2013. A virtual agent as vocabulary trainer: Iconic gestures help to improve learners’ memory performance. In Proceedings of the 13th International Conference on Intelligent Virtual Agents, pages 139–148, Berlin/Heidelberg. Springer.

K. Bergmann, S. Kopp, and F. Eyssel. 2010. Individualized gesturing outperforms average gesturing: Evaluating gesture production in virtual humans. In J. Allbeck, N. Badler, T. Bickmore, C. Pelachaud, and A. Safonova, editors, Proceedings of the 10th International Conference on Intelligent Virtual Agents, pages 104–117. Springer, Berlin/Heidelberg.

J.P. de Ruiter. 2007. Some multimodal signals in humans. In Proceedings of the 1st Workshop on Multimodal Output Generation, pages 141–148. CTIT.

A. Kendon. 2004. Gesture—Visible Action as Utterance. Cambridge University Press.

A. Lücking, K. Bergmann, F. Hahn, S. Kopp, and H. Rieser. 2013. Data-based analysis of speech and gesture: The Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications. Journal on Multimodal User Interfaces, 7(1-2):5–18.

M. Macedonia, K. Müller, and A.D. Friederici. 2011. The impact of iconic gestures on foreign language word learning and its neural substrate. Human Brain Mapping, 32:982–998.

D. McNeill. 1992. Hand and Mind—What Gestures Reveal about Thought. University of Chicago Press, Chicago.

D. McNeill. 2005. Gesture and Thought. University of Chicago Press, Chicago.
