van Welbergen H, Ding Y, Sattler K, Pelachaud C, Kopp S. Real-Time Visual Prosody for Interactive Virtual Agents. In: Brinkman W-P, Broekens J, Heylen D, eds. Intelligent Virtual Agents. Lecture Notes in Computer Science. Vol 9238. Cham: Springer International Publishing; 2015: 139-151.
Speakers accompany their speech with incessant, subtle head movements. It is important to implement such "visual prosody" in virtual agents, not only to make their behavior more natural, but also because it has been shown to help listeners understand speech. We contribute a visual prosody model for interactive virtual agents that shall be capable of having live, non-scripted interactions with humans and thus have to use Text-To-Speech rather than recorded sp...
This paper presents a model for automatically producing prosodically appropriate speech and correspo...
Intelligent Virtual Agents are suitable means for interactive storytelling fo...
In this article, we present two models to jointly and automatically generate t...
This paper presents an implemented system for automatically producing prosodically appropriate speec...
Head and eyebrow movements are an important means of communication. They are highl...
This paper introduces a new model to generate rhythmically relevant non-verbal facial behaviors for ...
The work presented in this thesis addresses the problem of generating audio-visual expressive perfor...
An important problem in the animation of virtual characters is the expression ...