To be widely adopted, 3D facial avatars must be animated easily, realistically, and directly from speech signals. While the best recent methods generate 3D animations that are synchronized with the input audio, they largely ignore the impact of emotions on facial expressions. Realistic facial animation requires lip-sync together with the natural expression of emotion. To that end, we propose EMOTE (Expressive Model Optimized for Talking with Emotion), which generates 3D talking-head avatars that maintain lip-sync from speech while enabling explicit control over the expression of emotion. To achieve this, we supervise EMOTE with decoupled losses for speech (i.e., lip-sync) and emotion. These losses are based on two key observations: (1) defo...
Motion capture-based facial animation has recently gained popularity in many applications, such as m...
Speech-driven facial motion synthesis is a well-explored research topic. However, little has been do...
Communication between humans deeply relies on our capability of experiencing, expressing, and recogn...
Speech-driven 3D face animation aims to generate realistic facial expressions that match the speech ...
With the continuous development of cross-modality generation, audio-driven talking face generation h...
There is high demand for generating facial animation with high realism, but it remains a challenging ta...
This thesis aims to create a chat program that allows users to communicate via an animated avatar th...
The paper presents emotional voice puppetry, an audio-based facial animation approach to portray cha...
Lip synchronization of 3D face models is now being used in a multitude of important fields. It brings...
Thesis (Ph.D.)--University of Washington, 2019. For decades, animation has been a popular storytelling...
Eye movement combined with lip synchronization, eye movements, and emotional facial expressi...
This paper presents FaceXHuBERT, a text-less speech-driven 3D facial animation generation method tha...
Although significant progress has been made to audio-driven talking face generation, existing method...
Speech is one of the most important interaction methods between humans. Therefore, most of avata...
Facial model lip-sync is a large field of research within the animation industry. The mouth is a com...