With eye-tracking increasingly available in Augmented Reality, we explore how gaze can be used to assist freehand gestural text entry. Here the eyes are often coordinated with manual input across the spatial positions of the keys. Inspired by this, we investigate gaze-assisted selection-based text entry through the concept of spatial alignment of both modalities. Users can enter text by aligning gaze and manual pointer at each key, as a novel alternative to existing dwell-time or explicit manual triggers. We present a text entry user study comparing two such alignment techniques to a gaze-only and a manual-only baseline. The results show that one alignment technique reduces physical finger movement by more than half compared to stan...
Proper feedback is essential in gaze-based interfaces, where the same modality is used for both perc...
We present the VISAR keyboard: an augmented reality (AR) head-mounted display (HMD) system that supp...
© 2017 IEEE. Inputs with multimodal information provide more natural ways to interact with virtual 3...
Gaze and freehand gestures suit Augmented Reality as users can interact with objects at a distance w...
Purpose – In this paper we consider the two main existing text input techniques based on “eye gestur...
This paper introduces a bi-modal typing interface, HGaze Typing, which combines the simplicity of he...
Gaze-based text spellers have proved useful for people with severe motor diseases, but lack acceptan...
While using VR, efficient text entry is a challenge: users cannot easily locate standard physical ke...
Despite its potential, gaze interaction is still not a widely used interaction concept. Major drawbac...
Although eye tracking technology has greatly advanced in recent years, gaze-based interaction is sti...
In this work, we investigate gaze selection in the context of mid-air hand gestural manipulation of ...
In this paper, we propose a calibration-free gaze-based text entry system that uses smooth pursuit e...