Conference article

The Acorformed Corpus: Investigating Multimodality in Human-Human and Human-Virtual Patient Interactions

Magalie Ochs
LIS UMR 7020, Aix Marseille Université, Université de Toulon, CNRS, France

Philippe Blache
LPL UMR 7309, Aix Marseille Université, Université de Toulon, CNRS, France

Grégoire Montcheuil
LPL UMR 7309, Boréal Innovation, Aix Marseille Université, Université de Toulon, CNRS, France

Jean-Marie Pergandi
ISM UMR 7287, Aix Marseille Université, Université de Toulon, CNRS, France

Roxane Bertrand
LPL UMR 7309, Aix Marseille Université, Université de Toulon, CNRS, France

Jorane Saubesty
LPL UMR 7309, Aix Marseille Université, Université de Toulon, CNRS, France

Daniel Francon
Institut Paoli-Calmettes (IPC), Marseille, France

Daniel Mestre
ISM UMR 7287, Aix Marseille Université, Université de Toulon, CNRS, France


In: Selected papers from the CLARIN Annual Conference 2018, Pisa, 8–10 October 2018

Linköping Electronic Conference Proceedings 159:12, pp. 113–120


Published: 2019-05-28

ISBN: 978-91-7685-034-3

ISSN: 1650-3686 (print), 1650-3740 (online)

Abstract

This paper presents the Acorformed corpus, composed of human-human and human-machine interactions in French in the specific context of training doctors to break bad news to patients. For the human-human part, an audiovisual corpus of interactions between doctors and actors playing the role of patients during real training sessions in French medical institutions has been collected and annotated. This corpus has then been used to develop a platform for training doctors to break bad news with a virtual patient. The platform has in turn been used to collect a corpus of human-virtual patient interactions, annotated semi-automatically and recorded in virtual reality environments with different degrees of immersion (PC, virtual reality headset, and virtual reality room).
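
As an illustration of how such multimodal annotations might be explored, the sketch below assumes the annotations are exported as ELAN (.eaf) files, a common format for multimodal corpora, and uses the third-party pympi-ling library; neither the file name nor the tooling is specified in the abstract, so both are assumptions.

    # Minimal sketch: inspect the tiers of one (hypothetical) Acorformed annotation file.
    # Assumes an ELAN .eaf export and the pympi-ling package (pip install pympi-ling);
    # neither is named by the authors here.
    import pympi

    eaf = pympi.Elan.Eaf("acorformed_session01.eaf")  # hypothetical file name

    # Each tier could hold one annotated modality (e.g. transcription, gestures, feedback).
    for tier in eaf.get_tier_names():
        for annotation in eaf.get_annotation_data_for_tier(tier):
            start_ms, end_ms, label = annotation[0], annotation[1], annotation[2]
            print(f"{tier}\t{start_ms}-{end_ms} ms\t{label}")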

Keywords

Multimodal corpora, Multimodal annotation, Virtual reality, Embodied Conversational Agents, Doctor-patient interaction

