I'm a researcher who builds computational models of human communication through natural language and physical action, and then implements them in artificial systems to make interaction more natural. Anyone afraid that AI is a done deal and that the machines are coming for us is welcome to interact with state-of-the-art systems that interpret and generate speech and gesture: you will probably walk away rather relieved, and with a better understanding of why people like me have a job!
My focus is generally on incremental processing and interactive responsiveness. I try to explain my work using the 1000 most common words of English on #UPGOERFIVE.
I'm currently a Wissenschaftlicher Mitarbeiter (post-doc research associate) at Bielefeld University in the Dialogue Systems Group headed by David Schlangen, working mainly on the DUEL project. There, together with Ye Tian and Jonathan Ginzburg in Paris, we investigate disfluency and laughter empirically, recording and analysing people in real conversations and then designing systems that understand these (much misunderstood!) phenomena. The other part of my job is helping to build a virtual coach within the ICSPACE project, working with David, Iwan de Kok and Stefan Kopp.
Previously I was a PhD student and research assistant in the Cognitive Science group at Queen Mary University of London, working with my supervisor Matthew Purver. My PhD put self-repair capabilities into incremental parsing and generation; some of this work is summarized in this video (if you can stand the intro music and have 25 minutes).
While at Queen Mary I also worked on the DynDial project with Ruth Kempson, Pat Healey, Eleni Gregoromichelaki and Chris Howes, and particularly with Matt and Arash Eshghi on developing the DyLan dialogue system. I also worked on incremental semantic grammar learning within the RISER project.