We propose a novel approach to segment hand regions in egocentric video that requires no manual labeling of training samples. The user wearing a head-mounted camera is prompted to perform a simple gesture during an initial calibration step. A combination of color and motion analysis that exploits knowledge of the expected gesture is applied to the calibration video frames to automatically label hand pixels in an unsupervised fashion. The hand pixels identified in this manner are used to train a statistical-model-based hand detector. Superpixel region growing is used to perform segmentation refinement and improve robustness to noise. Experiments show that our hand detection technique based on the proposed on-the-fly training approach...
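The statistical-model-based detector described above can be sketched in a few lines: fit a Gaussian color model to the automatically labeled hand pixels, then classify new pixels by Mahalanobis distance. This is a minimal illustration, not the authors' implementation; the synthetic calibration pixels, the single-Gaussian choice, and the distance threshold are all assumptions made here for the sake of a runnable example.

```python
import numpy as np

def fit_color_model(hand_pixels):
    """Fit a Gaussian (mean, inverse covariance) to Nx3 hand-pixel colors.

    `hand_pixels` stands in for the pixels auto-labeled during calibration.
    """
    mean = hand_pixels.mean(axis=0)
    cov = np.cov(hand_pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularize
    return mean, np.linalg.inv(cov)

def classify(pixels, mean, inv_cov, thresh=9.0):
    """Return True where the squared Mahalanobis distance is below `thresh`.

    The threshold 9.0 (~3 sigma) is an illustrative assumption.
    """
    d = pixels - mean
    m2 = np.einsum('ij,jk,ik->i', d, inv_cov, d)  # squared Mahalanobis distance
    return m2 < thresh

rng = np.random.default_rng(0)
# Synthetic "calibration" hand pixels clustered around a skin-like RGB color.
hand = rng.normal(loc=[180.0, 120.0, 90.0], scale=8.0, size=(500, 3))
mean, inv_cov = fit_color_model(hand)

# Two query pixels: one skin-like, one background-like.
query = np.array([[182.0, 118.0, 92.0],
                  [30.0, 200.0, 30.0]])
mask = classify(query, mean, inv_cov)
print(mask)  # skin-like pixel accepted, background pixel rejected
```

In a full pipeline, the resulting per-pixel mask would then be refined by superpixel region growing, as the abstract describes, to suppress isolated misclassified pixels.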
We present a fast and accurate algorithm for the detection of human hands in real-life 2D image sequ...
Portable devices for first-person camera views will play a central role in future interactive system...
In this project, we propose an action estimation pipeline based on the simultaneous recognition of t...
Hand segmentation is one of the most fundamental and crucial steps for egocentric human-com...
Hands appear very often in egocentric video, and their appearance and pose give important cues about...
A large number of works in egocentric vision have concentrated on action and object recognition. Det...
Hand detection is one of the most explored areas in Egocentric Vision Video Analysis for wearable de...
We present a novel method for monocular hand gesture recognition in ego-vision scenarios that deals ...
Wearable cameras allow people to record their daily activities from a user-centered (First Person Vi...
The topic of this dissertation is the analysis and understanding of egocentric (first-person) videos...
We address the task of pixel-level hand detection in the context of ego-centric cameras. Extracting ...