We present a machine learning technique for recognizing discrete gestures and estimating continuous 3D hand position for mobile interaction. Our multi-stage random forest pipeline jointly classifies hand shapes and regresses metric depth of the hand from a single RGB camera. Our technique runs in real time on unmodified mobile devices, such as smartphones, smartwatches, and smartglasses, complementing existing interaction paradigms with in-air gestures.
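To make the staged classification-plus-regression idea concrete, here is a minimal sketch that pairs a random-forest classifier for hand shape with a random-forest regressor for metric depth. The feature representation, dataset, hyperparameters, and the way the regressor is conditioned on the classifier output are placeholder assumptions for illustration, not the pipeline described in the abstract.

```python
# Sketch of a two-stage random-forest pipeline: stage 1 classifies the hand
# shape, stage 2 regresses metric hand depth. All data and parameters below
# are hypothetical placeholders, not the authors' implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: per-frame feature vectors computed from a single
# RGB image, with a gesture label and a ground-truth metric depth in mm.
X_train = rng.normal(size=(1000, 64))          # image features (assumed)
y_shape = rng.integers(0, 4, size=1000)        # hand-shape class ids
y_depth = rng.uniform(150, 600, size=1000)     # hand depth in millimetres

# Stage 1: discrete gesture (hand-shape) classification.
shape_clf = RandomForestClassifier(n_estimators=50, max_depth=12, random_state=0)
shape_clf.fit(X_train, y_shape)

# Stage 2: continuous depth regression. Conditioning the regressor on the
# classifier's soft output is one plausible way to couple the two stages.
shape_posterior = shape_clf.predict_proba(X_train)
depth_reg = RandomForestRegressor(n_estimators=50, max_depth=12, random_state=0)
depth_reg.fit(np.hstack([X_train, shape_posterior]), y_depth)

# Inference on a new frame.
x = rng.normal(size=(1, 64))
p_shape = shape_clf.predict_proba(x)
pred_class = shape_clf.predict(x)[0]
pred_depth = depth_reg.predict(np.hstack([x, p_shape]))[0]
print(f"gesture class: {pred_class}, estimated depth: {pred_depth:.0f} mm")
```

In practice the per-frame features would come from the camera image (for example, learned or hand-crafted pixel comparisons) rather than random noise, and the two stages could share a single forest; the sketch only illustrates the joint classify-then-regress structure.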
Gesture is a promising mobile User Interface modality that enables eyes-free interaction without sto...
This contribution presents a novel approach of utilizing Time-of-Flight (ToF) technology for mid-air...
The goal of this work is to build the basis for a smartphone application that provides fun...
Figure 1: Touch input is expressive but can occlude large parts of the screen (A). We propose a mach...
We present a machine learning technique to recognize gestures and estimate metric depth of hands fo...
We present a light-weight real-time applicable 3D-gesture recognition system on mobile devices for i...
The spread of IoT and wearable devices is bringing out gesture interfaces as a solution for a more n...
The need to improve communication between humans and computers has been instrumental in defining new...
We present a pipeline for recognizing dynamic freehand gestures on mobile devices based on extractin...