Accepted Papers: RefNet Workshop

We have three papers recently accepted to the RefNet workshop, which will take place in Edinburgh.

Title: A Corpus of Virtual Pointing Gestures
Authors: Ting Han, Spyros Kousidis, David Schlangen

Title: Comparing Listener Gaze with Predictions of an Incremental Reference Resolution Model
Authors: Casey Kennington, Spyros Kousidis, David Schlangen
Abstract:
In situated dialogue, listeners resolve referring expressions incrementally (on-line), and their gaze often attends to objects in the context as those objects are being described. In this work, we look at how listener gaze compares to a statistical reference resolution model that works incrementally. We find that listeners gaze at referred objects even before a referring expression begins, suggesting that salience and prior information are important in reference resolution models.

Title: Lattice Theoretic Relevance in Incremental Reference Processing
Authors: Julian Hough and Matthew Purver
Abstract: We build on Hough and Purver (2014)’s integration of Knuth (2005)’s lattice theoretic characterization of probabilistic inference to model incremental interpretation of repaired instructions in a small reference domain.

Accepted Papers: SemDial 2014

We have four short papers accepted for the SemDial 2014 conference, which will take place in Edinburgh:

Title: Towards Automatic Understanding of ‘Virtual Pointing’ in Interaction
Authors: Ting Han, Spyros Kousidis, David Schlangen

Title: Multimodal Incremental Dialogue with InproTKs
Authors: Casey Kennington, Spyros Kousidis, David Schlangen
Abstract: We present extensions of the incremental processing toolkit InproTK which make it possible to plug in sensors and to achieve situated, real-time, multimodal dialogue. We also describe a new module which enables the use in InproTK of the Google Web Speech API, which offers speech recognition with a very large vocabulary and a wide choice of languages. We illustrate the use of these extensions with a description of two systems handling different situated settings.

Title: Dialogue Structure of Coaching Sessions
Authors: Iwan de Kok, Julian Hough, Cornelia Frank, David Schlangen and Stefan Kopp
Abstract: We report initial findings of the ICSPACE (‘Intelligent Coaching Space’) project on virtual coaching. We describe the gathering of a corpus of dyadic squat coaching interactions and initial high-level models of the structure of these sessions.

Title: The Disfluency, Exclamation and Laughter in Dialogue (DUEL) Project
Authors: Jonathan Ginzburg, David Schlangen, Ye Tian and Julian Hough

Accepted Paper: COLING 2014

We have a paper recently accepted at the COLING 2014 conference, which will take place in Dublin, Ireland.

Title: Situated Incremental Natural Language Understanding using a Multimodal, Linguistically-driven Update Model

Authors: Casey Kennington, Spyros Kousidis, David Schlangen

Abstract:
A common site of language use is interactive dialogue between two people situated together in shared time and space. In this paper, we present a statistical model for understanding natural human language that works incrementally (i.e., does not wait until the end of an utterance to begin processing), and is grounded by linking semantic entities with objects in a shared space. We describe our model, show how a semantic meaning representation is grounded with properties of real-world objects, and further show that it can ground with embodied, interactive cues such as pointing gestures or eye gaze.

Accepted Paper: AutomotiveUI 2014

We have a paper recently accepted at the upcoming AutomotiveUI 2014 conference, which will take place in Seattle, U.S.A.

Title: Better Driving and Recall When In-car Information Presentation Uses Situationally-Aware Incremental Speech Output Generation

Authors: Spyros Kousidis, Casey Kennington, Timo Baumann, Hendrik Buschmeier, Stefan Kopp, David Schlangen

Abstract:
It is by now established that driver distraction results from sharing cognitive resources between the primary task (driving) and any other secondary task. In the case of holding conversations, a human passenger who is aware of the driving conditions can choose to interrupt his/her speech in situations potentially requiring more attention from the driver; but in-car information systems typically do not exhibit such sensitivity. We have designed and tested such a system in a driving simulation environment. Unlike other systems, our system delivers information via speech (calendar entries with scheduled meetings) but is able to react to signals from the environment, interrupting and subsequently resuming its delivery when the driver needs to be fully attentive to the driving task. Distraction is measured by a secondary short-term memory task. In both tasks, drivers perform significantly worse when the system does not adapt its speech, while they perform as well as in the control condition (no concurrent task) when the system intelligently interrupts and resumes.