Two more recent papers: Virtual Agents, and Dysfluencies

Paper at DiSS 2013, the 6th Workshop on Disfluency in Spontaneous Speech: Ginzburg, J., Fernández, R., & Schlangen, D. (2013). Self Addressed Questions in Dysfluencies. In: Proceedings of the 6th Workshop on Disfluency in Spontaneous Speech (DiSS 2013), Stockholm, 2013.

Short paper at IVA 2013: van Welbergen, H., Baumann, T., Kopp, S., & Schlangen, D. (2013). "Incremental, Adaptive and Interruptive Speech Realization for Fluent Conversation with ECAs". In: Proceedings of the Thirteenth International Conference on Intelligent Virtual Agents (IVA 2013), Edinburgh, August 2013.

Links to PDFs to follow.

Paper at Interspeech 2013, II: Tools for Multimodal Data Recording, Handling, and Analysis

And the other paper: MINT.tools: Tools and Adaptors Supporting Acquisition, Annotation and Analysis of Multimodal Corpora; Spyros Kousidis, Thies Pfeiffer, David Schlangen.

Abstract:

This paper presents a collection of tools (and adaptors for existing tools) that we have recently developed, which support acquisition, annotation and analysis of multimodal corpora. For acquisition, an extensible architecture is offered that integrates various sensors, based on existing connectors (e.g. for motion capturing via VICON, or ART) and on connectors we contribute (for motion tracking via Microsoft Kinect as well as eye tracking via Seeingmachines FaceLAB 5). The architecture provides live visualisation of the multimodal data in a unified virtual reality (VR) view (using Fraunhofer Instant Reality) for control during recordings, and enables recording of synchronised streams. For annotation, we provide a connection between the annotation tool ELAN (MPI Nijmegen) and the VR visualisation. For analysis, we provide routines in the programming language Python that read in and manipulate (aggregate, transform, plot, analyse) the sensor data, as well as text annotation formats (Praat TextGrids). Use of this toolset in multimodal studies proved to be efficient and effective, as we discuss. We make the collection available as open source for use by other researchers.
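The actual analysis routines live in the toolset itself; as a rough illustration of what reading Praat TextGrids involves, here is a minimal, self-contained sketch. This is not the MINT.tools API: the inlined sample file and the regex-based parser are simplifications made up for this post.

```python
import re

# A minimal long-format Praat TextGrid, inlined for illustration.
# (Real files come from Praat; this is a hand-made sample.)
SAMPLE = '''File type = "ooTextFile"
Object class = "TextGrid"

xmin = 0
xmax = 2.5
tiers? <exists>
size = 1
item []:
    item [1]:
        class = "IntervalTier"
        name = "words"
        xmin = 0
        xmax = 2.5
        intervals: size = 2
        intervals [1]:
            xmin = 0
            xmax = 1.2
            text = "hello"
        intervals [2]:
            xmin = 1.2
            xmax = 2.5
            text = "world"
'''

# Pattern for one labelled interval in the long TextGrid format.
INTERVAL = re.compile(
    r'intervals \[\d+\]:\s*'
    r'xmin = ([\d.]+)\s*'
    r'xmax = ([\d.]+)\s*'
    r'text = "([^"]*)"')

def parse_intervals(textgrid: str):
    """Return (xmin, xmax, label) triples for every interval in the file."""
    return [(float(a), float(b), t) for a, b, t in INTERVAL.findall(textgrid)]

print(parse_intervals(SAMPLE))  # [(0.0, 1.2, 'hello'), (1.2, 2.5, 'world')]
```

Once intervals are plain (start, end, label) tuples, aggregating, transforming, and plotting them alongside sensor streams becomes ordinary Python data wrangling.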

BibTeX, PDF: here. There’s also a dedicated website for the tool set.

Paper at Interspeech 2013: Cross-Linguistic Study on Turn-Taking

We will be presenting two papers this year at Interspeech (in Lyon). The first is

A cross-linguistic study on turn-taking and temporal alignment in verbal interaction; Spyros Kousidis, David Schlangen, Stavros Skopeteas

Abstract:

That speakers take turns in interaction is a fundamental fact across languages and speaker communities. How this taking of turns is organised is less clearly established. We have looked at interactions recorded in the field using the same task, in a set of three genetically and regionally diverse languages: Georgian, Cabécar, and Fongbe. As in previous studies, we find evidence for avoidance of gaps and overlaps in floor transitions in all languages, but also find contrasting differences between them on these features. Further, we observe that interlocutors align on these temporal features in all three languages. (We show this by correlating speaker averages of temporal features, which has been done before, and further ground it by ruling out potential alternative explanations, which is novel and a minor methodological contribution.) The universality of smooth turn-taking and alignment despite potentially relevant grammatical differences suggests that the different resources that each of these languages makes available are nevertheless used to achieve the same effects. This finding has potential consequences both from a theoretical point of view as well as for modeling such phenomena in conversational agents.

BibTeX, PDF here.