Can we teach a robot to recognize and make predictions for activities that it has never seen before? We tackle this problem by learning models for video from text. This paper presents a hierarchical model that generalizes instructional knowledge from large-scale text corpora and transfers the knowledge to video. Given a portion of an instructional video, our model recognizes and predicts coherent and plausible actions multiple steps into the future, all in rich natural language. To demonstrate the capabilities of our model, we introduce the \emph{Tasty Videos Dataset V2}, a collection of 4022 recipes for zero-shot learning, recognition and anticipation. Extensive experiments with various evaluation metrics demonstrate the potential of our m...
In this paper we consider the problem of classifying fine-grained, multi-step activities (e.g., cook...
The goal of this work is to build flexible video-language models that can generalize to various vide...
Recently, several approaches have explored the detection and classification of objects in videos to ...
People often watch videos on the web to learn how to cook new recipes, assemble furniture or repair ...
Image-based visual-language (I-VL) pre-training has shown great success for learning joint visual-te...
We study how visual representations pre-trained on diverse human video data can enable data-efficien...
We propose a novel method for learning visual concepts and their correspondence to the words of a na...
I present my work on learning from video and robotic input. This is an important problem, with numer...
Few-shot action recognition in videos is challenging for its lack of supervision and difficulty in g...
Automatic assistants could guide a person or a robot in performing new tasks, ...
The growth of videos in our digital age and the users' limited time raise the demand for processing ...
Humans have a surprising capacity to induce general rules that describe the specific actions portray...
This work explores an efficient approach to establish a foundational video-text model for tasks incl...
Video-and-language pre-training has shown promising results for learning generalizable representatio...