Projects & Software: Overview

At DSG, we combine foundational research with supporting infrastructure-building work. A major topic in recent years has been incremental processing, which we’ve studied both from a theoretical / principled angle and by building prototypes and toolkits that support this research. More recently, we’ve moved into (even more) situated dialogue and multimodal input and output. Again, we’re studying this from a theoretical / principled angle, but we’re also building a technical environment in which to study these questions.

The individual projects are:

  • DUEL: Disfluencies, Exclamations and Laughter in Dialogue (2014-2016). Funded by DFG; cooperation with Jonathan Ginzburg in Paris (joint DFG/ANR programme). Our part is the computational modelling of disfluencies in dialogue systems, as well as the recognition and generation of intra-utterance laughter. Julian Hough (formerly Queen Mary London) is Post-Doc on this project.
  • CITEC LSP: Cognitive Service Apartment (2014-2016). This is a large-scale project (around 10 PIs) within the “Centre of Excellence on Cognitive Interaction Technology” (CITEC). The aim is to build an “intelligent apartment” that consists of networked appliances as well as a humanoid robot. Our part is the dialogue management for the robot. Birte Carlmeyer is a PhD student on this project.
  • CITEC LSP: Intelligent Coaching Space (2014-2016). This is another large-scale project within CITEC. The aim here is to build a virtual environment (in a CAVE) where sports coaching can be done with a fine degree of control. Our part (together with Stefan Kopp’s sociable agents group) is the online generation of corrective verbal feedback (“the left arm higher, .. higher, a bit more, that’s perfect”). Iwan de Kok is Post-Doc on this project.
  • Virtual Deixis in Situated Interaction (2013-2016). PhD stipend to Ting Han, funded by China Scholarship Council. Ting is looking into abstract pointing gestures in route descriptions, trying to build a computational end-to-end model (recognition, online interpretation).
  • Incremental Natural Language Understanding (2011-2015). PhD stipend to Casey Kennington, funded by CITEC (Center of Excellence Cognitive Interaction Technology). Casey is exploring statistical models for understanding spoken utterances incrementally, most recently also taking into account multimodal input such as gaze information and gesture.
  • Multimodal Interaction Lab (2011-). Partially funded by SFB 673. An infrastructure project in which we are building an environment for the collection, manipulation and analysis of multimodal conversational data: speech (obviously), motion capture, and eye gaze, but also data from more off-beat sensors such as breathing belts. Dedicated webpage here.
  • X1: Multimodal alignment corpora (2010-2014). Part of SFB 673. Another infrastructure project, devoted to managing research corpora. (Co-PIs: Philipp Cimiano, Sven Wachsmuth)
  • InPro: Incrementality and Projection in Dialogue Management (2006-2012). Funded by DFG in the Emmy Noether Programme. Laid the groundwork for further exploration of the incremental processing paradigm, both theoretically (Schlangen & Skantze 2009, A General Abstract Model) and practically (InProTK, a toolkit for incremental processing); a small illustrative sketch of the underlying idea follows this list. See the dedicated Inpro/InproTK site.
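
To give a flavour of what the incremental processing paradigm amounts to in practice, here is a minimal sketch of the incremental-unit (IU) idea from the general abstract model (Schlangen & Skantze 2009): modules pass small units of information from their right (output) buffers to the left (input) buffers of downstream modules, each output unit records which input units it is grounded in, and later revisions are handled by revoking units together with everything grounded in them. This is an illustrative Python sketch, not InProTK's actual API; all class and variable names in it are hypothetical.

    # A minimal, illustrative sketch of the incremental-unit (IU) idea, loosely
    # following the general abstract model (Schlangen & Skantze 2009). This is
    # NOT InProTK's API; every name here is made up for illustration.

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class IU:
        """An incremental unit: a small piece of information, linked to the
        units it is grounded in (e.g. a word hypothesis grounded in audio)."""
        payload: str
        grounded_in: List["IU"] = field(default_factory=list)
        revoked: bool = False


    class Module:
        """A processing module with a left buffer (input) and a right buffer
        (output). Subclasses turn new input IUs into output IUs."""

        def __init__(self):
            self.left_buffer: List[IU] = []
            self.right_buffer: List[IU] = []
            self.listeners: List["Module"] = []

        def add(self, iu: IU) -> None:
            """Receive a new IU and pass any resulting output IUs downstream."""
            self.left_buffer.append(iu)
            for out in self.process_iu(iu):
                self.right_buffer.append(out)
                for listener in self.listeners:
                    listener.add(out)

        def revoke(self, iu: IU) -> None:
            """Handle later revision: retract an IU and, transitively,
            everything in the right buffer that is grounded in it."""
            iu.revoked = True
            for out in self.right_buffer:
                if iu in out.grounded_in and not out.revoked:
                    out.revoked = True
                    for listener in self.listeners:
                        listener.revoke(out)

        def process_iu(self, iu: IU) -> List[IU]:
            raise NotImplementedError


    class Capitaliser(Module):
        """Toy module: one output IU per input word IU."""

        def process_iu(self, iu: IU) -> List[IU]:
            return [IU(payload=iu.payload.upper(), grounded_in=[iu])]


    # Toy run: word hypotheses arrive one by one; one gets revised later.
    cap = Capitaliser()
    the, read = IU("the"), IU("read")
    cap.add(the)
    cap.add(read)
    cap.revoke(read)      # the recogniser retracts "read" ...
    cap.add(IU("red"))    # ... and replaces it with "red"
    print([iu.payload for iu in cap.right_buffer if not iu.revoked])
    # -> ['THE', 'RED']

The grounded-in links are what makes revision cheap in this scheme: revoking an early hypothesis tells every downstream module which of its own outputs no longer hold, without recomputing anything that was unaffected.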