DUEL project launch: laughing all the way!

Last month, DUEL (“Disfluencies, exclamations and laughter in dialogue”), a joint project between the Bielefeld DSG and Université Paris Diderot (Paris 7), launched in Paris.

The project aims to investigate how and why people’s talk is filled with disfluent material such as filled pauses (“um”, “uh”), repairs (e.g. “I, uh, I really want to go”), exclamations such as “oops”, and laughter of all different kinds, from the chortle to the titter.

Traditionally in theoretical linguistics, such phenomena are placed outside the human linguistic faculty, an opinion held since the dawn of the modern field, owing particularly to Chomsky’s early competence/performance distinction (Chomsky, 1965). However, as Jonathan Ginzburg and our own group head David Schlangen claim in their recent paper, disfluency is analogous to friction in physics: while an idealized theory of language can do without it, one that purports to model what actually happens in dialogue cannot throw such frequent phenomena aside.

The project investigates the interactive contribution of the disfluency and laughter that fill our every conversation through a three-fold attack: empirical observation, theory building and, of course, dialogue system implementation. We will study how the phenomena vary across languages and use the insights gained from data analyses and formal modelling to incorporate them into the interpretation and generation components of a working spoken dialogue system. We aim to build a system that can be disfluent in a natural way, and that is also capable of interactionally appropriate laughter when interacting with users. These are milestones towards more natural spoken conversations between humans and machines; despite recent questionable press claims of a great leap forward, this remains a far-from-solved problem.

You can follow the progress of the DUEL project on its new website. Which- uh, I mean, haha, watch this space..


We announce the release of InproTKs, which is a set of extension modules for InproTK that allow for easier integration of multimodal sensors for situated (hence the s) dialogue. Also included in this release is a module that can directly use the Google Web Speech interface, allowing a dialogue system to have access to a large-domain speech recognition engine.

You can read about it in our 2014 SigDial paper. An explanation is given below. Download instructions can be found at the end of this page.

InproTK is an implementation of the incremental unit processing architecture described in Schlangen & Skantze (2011). It includes modules for several components, including an incremental version of Sphinx ASR, incremental speech synthesis using Mary TTS, as well as some example modules for getting you up and running with your own incremental dialogue system.

The extensions released with InproTKs make it possible to get information into InproTK from outside sources (even from network devices on various platforms), as well as to get information from InproTK out to other modules.

Below is an example. A human user can interact with the system using speech and gestures. InproTK provides the ASR (speech recognition) as a native module. Using the extensions in InproTKs, information from a motion sensor (such as a Microsoft Kinect) can be fed into an external processing module which, for example, can detect certain gesture types, like when the human is pointing to an object. That information can then be sent into InproTK via the extension modules (denoted in the figure as a Listener) and used to help the dialogue system make decisions. Information can also be sent out to external modules (such as the logger in the figure).
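To make the data flow above concrete, here is a schematic sketch in Python. All class and method names here (`Listener`, `GestureEvent`, `decide`, and so on) are illustrative only, not InproTK's actual API: the point is just that an external detector produces events, a Listener-style module receives them, and the dialogue system uses them to make a decision.

```python
# Schematic sketch of the Listener data flow; names are hypothetical,
# not taken from InproTK itself.
from dataclasses import dataclass


@dataclass
class GestureEvent:
    kind: str        # e.g. "pointing"
    target: str      # the object the user points at
    timestamp: float # time of the event, in seconds


class ToyDialogueSystem:
    """Stands in for the dialogue system that consumes incoming events."""

    def decide(self, event: GestureEvent) -> str:
        if event.kind == "pointing":
            return f"user referred to {event.target}"
        return "no action"


class Listener:
    """Receives events from an external module (e.g. a gesture detector)
    and hands them to the dialogue system for decision making."""

    def __init__(self, dialogue_system: ToyDialogueSystem):
        self.dialogue_system = dialogue_system

    def on_event(self, event: GestureEvent) -> str:
        return self.dialogue_system.decide(event)


# An external gesture detector would call on_event whenever it fires:
listener = Listener(ToyDialogueSystem())
result = listener.on_event(GestureEvent("pointing", "red cross", 1.25))
print(result)  # user referred to red cross
```

An Informer would simply run this flow in the other direction, pushing information produced inside the dialogue system out to external consumers such as a logger.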


There are three methods one can use to get information into and out of InproTK: XmlRpc, the Robotics Service Bus, and InstantReality. InproTKs provides modules for each: modules for getting data into InproTKs, known as Listeners, and modules for getting data out of InproTKs, known as Informers. Each method will now be briefly explained.

XmlRpc is a remote procedure call protocol that can be used to send information across processes (potentially running on different machines). Libraries exist in most programming languages.
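As a small illustration of the protocol itself (not of InproTK's own modules), the following self-contained Python sketch uses the standard library's XML-RPC support to run a server in one thread and call it from the same process; the procedure name `push_word` and the payload are made up for the example.

```python
# Minimal XML-RPC round trip using only the Python standard library.
# The service name "push_word" and its payload are illustrative; an
# external process could push recognized words to a Listener this way.
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer


def push_word(word: str, timestamp: float) -> str:
    """Hypothetical remote procedure: acknowledge an incoming word."""
    return f"received '{word}' at {timestamp}"


# Bind to port 0 so the OS picks a free port.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(push_word, "push_word")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "remote" side: marshal the call over HTTP and get the reply back.
client = ServerProxy(f"http://localhost:{port}")
reply = client.push_word("hello", 0.42)
print(reply)  # received 'hello' at 0.42

server.shutdown()
```

Because the arguments are marshalled as XML over HTTP, the two endpoints can be written in different languages and run on different machines, which is what makes this useful for hooking external sensors or loggers up to a dialogue system.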

Robotics Service Bus (from their website): “The Robotics Service Bus (RSB) is a message-oriented, event-driven middleware aiming at scalable integration of robotics systems in diverse environments. Being fundamentally a bus architecture, RSB structures heterogeneous systems of service providers and consumers using broadcast communication over a hierarchy of logically unified channels instead of a large number of point-to-point connections. Nevertheless RSB comes with a collection of communication patterns and other tools for structuring communication, but does not require a particular functional or decomposition style.”

RSB has bindings for Java, Python, and C/C++ languages.

InstantReality (from their website): “The instantreality-framework is a high-performance Mixed-Reality (MR) system, which combines various components to provide a single and consistent interface for AR/VR developers.”

InstantReality only has bindings for Java and C++, but it can be used directly with the InstantReality player, in which one can create a virtual reality scene.

Download Instructions

InproTKs is found in the InproTK git repository under the “develop” branch.

Open a terminal and run the following commands (note that you need to change into the cloned directory before checking out the branch):
>git clone https://[your-user-name]@bitbucket.org/inpro/inprotk.git
>cd inprotk
>git fetch && git checkout develop

This will reveal the inpro.io package in the src folder. The package contains a README file that explains the examples. You will need the instantreality.jar from the InstantReality website (simply download and install the software and you will find the jar included) as well as the protobuf jar from the Google protobuf website. Eclipse users can import InproTK as a project; note that the inpro.io.instantreality package might be excluded from the build because it relies on a jar that we cannot distribute. Add the two jars above to the classpath and include the inpro.io.instantreality package in the build.