Gaze is attractive for interaction, as we naturally look at the objects we are interested in. As a result, gaze has received significant attention within human-computer interaction as an input modality. However, gaze input has typically been reduced either to eye movements alone, in situations where head movements are not expected, or to head movements used as an approximation of gaze when an eye tracker is unavailable. From these observations arise an opportunity and a challenge: we propose to treat gaze as multi-modal, in line with psychology and neuroscience research, to more accurately represent user movements. The natural coordination of eye and head movements could then enable the development of novel interaction techniques to further the possibilities of g...
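As a loose illustration of the "gaze as coordinated eye and head movement" view described above, the following minimal sketch composes a head pose with an eye-in-head direction to obtain a world-space gaze ray. The function names, coordinate conventions, and angles are my own assumptions for illustration and are not taken from any of the works listed here.

# Minimal sketch (assumptions, not from any cited paper): combining head
# orientation with an eye-in-head direction to obtain a world-space gaze ray,
# i.e. treating gaze as the coordinated result of eye and head movement.
import numpy as np


def yaw_pitch_to_direction(yaw_rad: float, pitch_rad: float) -> np.ndarray:
    """Unit direction vector from yaw (left/right) and pitch (up/down) angles."""
    return np.array([
        np.cos(pitch_rad) * np.sin(yaw_rad),   # x: right
        np.sin(pitch_rad),                      # y: up
        np.cos(pitch_rad) * np.cos(yaw_rad),    # z: forward
    ])


def world_gaze_ray(head_rotation: np.ndarray,
                   head_position: np.ndarray,
                   eye_yaw_rad: float,
                   eye_pitch_rad: float):
    """Compose head pose (3x3 rotation, position) with eye-in-head angles.

    Returns (origin, direction) of the gaze ray in world coordinates.
    """
    eye_in_head = yaw_pitch_to_direction(eye_yaw_rad, eye_pitch_rad)
    direction = head_rotation @ eye_in_head
    return head_position, direction / np.linalg.norm(direction)


if __name__ == "__main__":
    # Example: head turned 30 degrees to the right, eyes a further 10 degrees right.
    yaw = np.radians(30.0)
    head_rot = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                         [0.0,         1.0, 0.0],
                         [-np.sin(yaw), 0.0, np.cos(yaw)]])
    origin, direction = world_gaze_ray(head_rot, np.zeros(3),
                                       np.radians(10.0), 0.0)
    print(origin, direction)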
This thesis investigates the affective and attentive gaze-based interaction with virtual humans and ...
This paper examines and seeks to enhance gaze-based pointing and interaction in virtual 3D environme...
Gaze pointing is the de facto standard to infer attention and interact in 3D environments but is lim...
Humans perform gaze shifts naturally through a combination of eye, head and body movements. Although...
Eye gaze involves the coordination of eye and head movement to acquire gaze targets, but existing ap...
Gaze as a sole input modality must support complex navigation and selection tasks. Gaze interaction ...
We present a suite of interaction techniques that fundamentally leverages the user’s gaze direction ...
Pfeiffer T. Towards Gaze Interaction in Immersive Virtual Reality: Evaluation of a Monocular Eye Tra...
In this paper, we investigate the probability and timing of attaining gaze fixations on interacted o...
Hülsmann F, Dankert T, Pfeiffer T. Comparing gaze-based and manual interaction in a fast-paced gamin...
on how the interaction with virtual 3D interfaces may benefit from integrating gaze input. On the on...
This thesis deals with selection strategies in gaze interaction, specifically for a context where ga...
Inputs with multimodal information provide more natural ways to interact with virtual 3...
For efficient collaboration between participants, eye gaze is seen as critical for interaction...
Renner P, Lüdike N, Wittrowski J, Pfeiffer T. Towards Continuous Gaze-Based Interaction in 3D Enviro...