We present a video-based gesture dataset and a methodology for annotating video-based gesture datasets. Our dataset consists of user-defined gestures generated by 18 participants from a previous investigation of gesture memorability. We design and use a crowd-sourced classification task to annotate the videos. The results are made available through a web-based visualization that allows researchers and designers to explore the dataset. Finally, we perform a descriptive analysis and quantitative modeling exercise that provide further insights into the results of the original study. To facilitate the use of the presented methodology by other researchers, we share the data, the source of the human intelligence tasks for crowdsourc...
In this paper, we investigate new ways to understand and to analyze human gest...
We introduce the UC2017 static and dynamic gesture dataset. Most researchers use vision-based system...
Human communication is multimodal and includes elements such as gesture and facial expression along ...
We propose a pipeline to collect, visualize, annotate and analyze motion capture (mocap) data for ge...
In this paper we study the memorization of user created gestures for 3D...
This dataset provides valuable insights into hand gestures and their associated measurements. Hand g...