The main scientific goal of the SmartKom project is to develop a new human-machine interaction metaphor for multimodal dialog systems. It combines speech, gesture, and facial expression input with speech, gesture, and graphics output. The system is realized as a distributed collection of communicating and cooperating autonomous modules based on a multi-blackboard architecture. Multimodal output generation is consequently separated into two steps. First, the modality-specific output data are generated. Second, an inter-media synchronization of these data is carried out on independent media devices to perform the multimodal presentation to the user. This paper describes the generation of appropriate lip animations that are based on a phonetic r...
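The lip-animation step described above can be illustrated with a minimal sketch. This is an assumption-laden example, not the SmartKom implementation: it maps a time-stamped phoneme sequence (as a speech synthesizer might emit) onto viseme keyframes that a renderer could play back in sync with the audio. The phoneme-to-viseme table and the function name are hypothetical; real systems use larger, language-specific tables.

```python
# Hypothetical many-to-one phoneme-to-viseme mapping (illustrative only;
# production systems use larger, language-specific tables).
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "a": "open", "o": "rounded", "u": "rounded",
    "i": "spread", "e": "spread",
    "sil": "closed",
}

def phonemes_to_keyframes(phoneme_track):
    """Convert [(phoneme, start_sec, end_sec), ...] into viseme keyframes.

    Adjacent identical visemes are merged, so the renderer receives one
    keyframe per mouth shape with its onset and offset times; the times
    come from the audio track, which keeps lips and speech in sync.
    """
    keyframes = []
    for phoneme, start, end in phoneme_track:
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        if keyframes and keyframes[-1][0] == viseme:
            # Extend the previous keyframe instead of emitting a duplicate.
            keyframes[-1] = (viseme, keyframes[-1][1], end)
        else:
            keyframes.append((viseme, start, end))
    return keyframes

# Example: a short utterance with silence, consonants, and vowels.
track = [("sil", 0.00, 0.10), ("b", 0.10, 0.18), ("a", 0.18, 0.35),
         ("m", 0.35, 0.42), ("a", 0.42, 0.60)]
print(phonemes_to_keyframes(track))
```

Because the keyframes carry absolute timestamps taken from the synthesized audio, the animation can be handed to an independent media device and still stay synchronized, matching the two-step separation of generation and inter-media synchronization described in the abstract.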
For many audio-visual applications, the integration and synchronization of audio and video signals i...
Kopp S, Wachsmuth I. Synthesizing multimodal utterances for conversational agents. Computer Animatio...
Abstract—This paper describes a morphing-based audio driven facial animation system. Based on an inc...
Abstract—This paper presents a novel multimodal system designed for multi-party human-machine intera...
Kopp S, Bergmann K, Wachsmuth I. Multimodal Communication from Multimodal Thinking - Towards an Inte...
We introduce the notion of symmetric multimodality for dialogue syst...
One of the fastest developing areas in the entertainment industry is digital animation. Television p...
Throughout the past several decades, much research has been done in the area of signal processing. T...
Audio-to-visual synchronization is important for multimedia applications involving talki...
It is envisioned that autonomous software agents that can communicate using speech and gesture will ...
The research presents MARTI (Man-machine Animation Real-Time Interface) for the realisation of autom...
Bergmann K, Kopp S. Multimodal Content Representation for Speech and Gesture Production. In: Theune ...
In recent times, the importance of human-computer interaction is increasing. Users now prefer more i...
This paper presents an algorithm for the offline generation of lip-sync animation. It redefines vise...