Paper at Interspeech 2013, II: Tools for Multimodal Data Recording, Handling, and Analysis

And the other paper: “MINT.tools: Tools and Adaptors Supporting Acquisition, Annotation and Analysis of Multimodal Corpora”, by Spyros Kousidis, Thies Pfeiffer, and David Schlangen.

Abstract:

This paper presents a collection of tools (and adaptors for existing tools) that we have recently developed, which support acquisition, annotation and analysis of multimodal corpora. For acquisition, an extensible architecture is offered that integrates various sensors, based on existing connectors (e.g. for motion capturing via VICON, or ART) and on connectors we contribute (for motion tracking via Microsoft Kinect as well as eye tracking via Seeingmachines FaceLAB 5). The architecture provides live visualisation of the multimodal data in a unified virtual reality (VR) view (using Fraunhofer Instant Reality) for control during recordings, and enables recording of synchronised streams. For annotation, we provide a connection between the annotation tool ELAN (MPI Nijmegen) and the VR visualisation. For analysis, we provide routines in the programming language Python that read in and manipulate (aggregate, transform, plot, analyse) the sensor data, as well as text annotation formats (Praat TextGrids). Use of this toolset in multimodal studies proved to be efficient and effective, as we discuss. We make the collection available as open source for use by other researchers.
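To give a flavour of the analysis routines the abstract mentions, here is a minimal sketch of the "aggregate, transform, plot" workflow on a tracked sensor stream, using pandas. The file name and column names (kinect_head.csv, time, head_x, ...) are invented for illustration; this is not the actual MINT.tools log format or API.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Load a logged tracker stream; the CSV layout (a float timestamp in
    # seconds plus head coordinates) is assumed here for illustration.
    df = pd.read_csv("kinect_head.csv")

    # Transform: turn the float timestamps into a time index so that
    # time-based resampling works.
    df.index = pd.to_timedelta(df["time"], unit="s")

    # Aggregate: downsample the raw (jittery) stream to 10 Hz means.
    smoothed = df[["head_x", "head_y", "head_z"]].resample("100ms").mean()

    # Analyse: Euclidean displacement between successive frames as a
    # crude head-movement measure.
    movement = smoothed.diff().pow(2).sum(axis=1).pow(0.5)

    # Plot the movement trace.
    movement.plot(title="Head movement (10 Hz)")
    plt.xlabel("time")
    plt.ylabel("displacement per frame")
    plt.show()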

BibTeX, pdf: here. There’s also a dedicated website for the toolset.
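In the same spirit, here is a sketch of reading the Praat TextGrid annotations the abstract mentions, using only the Python standard library. The parser is deliberately simplified (interval tiers only, well-formed long-format files, no escaped quotes) and is not the actual MINT.tools code.

    import re
    from collections import namedtuple

    Interval = namedtuple("Interval", ["tier", "xmin", "xmax", "text"])

    def read_textgrid(path):
        """Parse the interval tiers of a long-format Praat TextGrid into a
        flat list of (tier, xmin, xmax, text) tuples."""
        with open(path, encoding="utf-8") as f:
            data = f.read()
        # One match per interval tier: the tier name plus the tier's body.
        tier_re = re.compile(
            r'class = "IntervalTier"\s*\n\s*name = "([^"]*)"(.*?)(?=class = "|\Z)',
            re.S)
        # Within a tier body, one match per interval.
        int_re = re.compile(
            r'xmin = ([\d.]+)\s*\n\s*xmax = ([\d.]+)\s*\n\s*text = "([^"]*)"')
        intervals = []
        for tier_name, body in tier_re.findall(data):
            for xmin, xmax, text in int_re.findall(body):
                intervals.append(
                    Interval(tier_name, float(xmin), float(xmax), text))
        return intervals

    # Example use: total labelled time per tier.
    if __name__ == "__main__":
        from collections import defaultdict
        total = defaultdict(float)
        for iv in read_textgrid("session1.TextGrid"):
            if iv.text.strip():
                total[iv.tier] += iv.xmax - iv.xmin
        for tier, secs in total.items():
            print("%s: %.2f s labelled" % (tier, secs))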
