Conference article

On the Attribution of Affective-Epistemic States to Communicative Behavior in Different Modes of Recording

Stefano Lanzini
SCCIIL (SSKKII), Gothenburg University, Sweden

Jens Allwood
SCCIIL (SSKKII), Gothenburg University, Sweden


In: Proceedings of the 2nd European and the 5th Nordic Symposium on Multimodal Communication, August 6-8, 2014, Tartu, Estonia

Linköping Electronic Conference Proceedings 110:7, pp. 47-52


Published: 2015-05-26

ISBN: 978-91-7519-074-7

ISSN: 1650-3686 (print), 1650-3740 (online)

Abstract

Face-to-face communication is multimodal, with varying contributions from all sensory modalities; see e.g. Kopp (2013), Kendon (1980) and Allwood (1979). This paper reports a study of respondents interpreting vocal and gestural, verbal and non-verbal behavior. 10 clips from 5 different short video + audio recordings of two persons meeting for the first time were used as stimuli in a perception/classification study. The respondents were divided into 3 groups. The first group watched only the video part of the clips, without any sound. The second group listened to the audio track without video. The third group was exposed to both the audio and video tracks of the clips. To collect the data, we used a crowdsourcing questionnaire. The study reports on how respondents classified clips containing 4 different types of behavior (looking up, looking down, nodding and laughing), found to be frequent in a previous study (Lanzini 2013), according to which Affective-Epistemic State (AES) the behaviors were perceived as expressing. We grouped the linguistic terms for the affective-epistemic states that the respondents used into 27 different semantic fields. In this paper we focus on the 7 most common fields, i.e. Thinking, Nervousness, Happiness, Assertiveness, Embarrassment, Indifference and Interest. The aim of the study is to increase understanding of how exposure to the video and/or audio modality affects the interpretation of vocal and gestural, verbal and non-verbal behavior when it is displayed uni-modally and multi-modally.
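The classification step described above (mapping respondents' free-text AES terms to semantic fields and comparing attributions across modality conditions) can be illustrated with a small Python sketch. This is not the authors' actual pipeline; the term-to-field mapping, condition labels, behaviors and responses below are invented purely for illustration.

from collections import Counter, defaultdict

# Hypothetical mapping from respondents' free-text AES terms to semantic fields.
# The terms and fields here are invented examples, not the study's data.
TERM_TO_FIELD = {
    "thoughtful": "Thinking",
    "pensive": "Thinking",
    "nervous": "Nervousness",
    "anxious": "Nervousness",
    "happy": "Happiness",
    "amused": "Happiness",
    "confident": "Assertiveness",
    "embarrassed": "Embarrassment",
    "bored": "Indifference",
    "curious": "Interest",
}

def tally(responses):
    """Count how often each semantic field is attributed to each behavior
    in each modality condition (video-only, audio-only, audio+video).

    `responses` is an iterable of (condition, behavior, term) tuples,
    e.g. ("video-only", "nodding", "thoughtful").
    """
    counts = defaultdict(Counter)
    for condition, behavior, term in responses:
        field = TERM_TO_FIELD.get(term.lower(), "Other")
        counts[(condition, behavior)][field] += 1
    return counts

# Invented example responses.
example = [
    ("video-only", "nodding", "thoughtful"),
    ("audio-only", "laughing", "amused"),
    ("audio+video", "laughing", "happy"),
    ("video-only", "looking down", "embarrassed"),
]
for key, fields in tally(example).items():
    print(key, dict(fields))

Comparing the resulting counts for the same behavior across the three conditions is one simple way to see how uni-modal versus multi-modal exposure shifts the attributed semantic fields.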

Keywords

No keywords are available

References

Allwood, J. 1979. "Ickeverbal kommunikation - en översikt" in Stedje and af Trampe (eds.) Tvåspråkighet. Stockholm: Akademilitteratur. Also in Invandrare och Minoriteter nr 3, 1979, pp. 16-24.

Allwood, J. and Cerrato, L. 2003. A Study of Gestural Feedback Expressions. First Nordic Symposium on Multimodal Communication. Paggio, P., Jokinen, K. and Jönsson, A. (eds.). Copenhagen, 23-24 September 2003, pp. 7-22.

Boholm, M. and Lindblad, G. 2011. Head movements and prosody in multimodal feedback. NEALT Proceedings Series: 3rd Nordic Symposium on Multimodal Communication, 15, pp. 25-32.

Chindamo, Massimo, Allwood, Jens & Ahlsén, Elisabeth. 2012. Some suggestions for the study of stance in communication. Proceedings of IEEE SocialCom Amsterdam 2012, 3-5.

Kopp, Stefan. 2013. Giving interaction a hand: deep models of co-speech gesture in multimodal systems. Proceedings of the 15th ACM International Conference on Multimodal Interaction. ACM, pp. 245-246.

Kendon, Adam. 1980. Gesticulation and speech: two aspects of the process of utterance. In M. R. Key (ed.), The Relationship of Verbal and Nonverbal Communication, pp. 207-227. The Hague: Mouton and Co.

Lanzini, Stefano. 2013. How do different modes contribute to the interpretation of affective epistemic states. Master's thesis, University of Gothenburg, Division of Communication and Cognition, Department of Applied IT.

Schröder, Marc, Bevacqua, Elisabetta, Cowie, Roddy, Eyben, Florian, Gunes, Hatice, Heylen, Dirk, ter Maat, Mark, McKeown, Gary, Pammi, Sathish, Pantic, Maja, Pelachaud, Catherine, Schuller, Björn, de Sevin, Etienne, Valstar, Michel, and Wöllmer, Martin. 2011. Building Autonomous Sensitive Artificial Listeners. IEEE Transactions on Affective Computing, 9(1), p. 1.
