Visual attention is one of the most important aspects of human social behavior, visual navigation, and interaction with the world, revealing information about a person's social, cognitive, and affective states. Although monitor-based and wearable eye trackers are widely available, they are not sufficient to support the large-scale collection of naturalistic gaze data in face-to-face social interactions or during interactions with 3D environments. Wearable eye trackers are burdensome to participants and bring issues of calibration, compliance, cost, and battery life. The ability to automatically measure attention from ordinary videos would deliver scalable, dense, and objective measurements for use in practice. This thesis investigates several c...
The central tenet of this paper is that by determining where people are looking, other tasks involve...
Unlike state-of-the-art batch machine learning methods, children have a remarkable facility for lear...
Description: This database contains automatically extracted features (head pose, gaze, speaking stat...
Monitoring others’ actions, and our control over those actions, is essential to human social recipro...
Advances in sensor miniaturization, low-power computing, and battery life have enabled the first gen...
In an average human life, the eyes not only passively scan visual scenes, but most times end up acti...
MPhil. The aim of the thesis is to create and validate models of visual attention. To this end, a ...
The ability to direct a viewer's attention has important applications in computer graphics, data ...
Eye contact is fundamental to many questions in psychology and cognitive science. Human g...
Recent advances in camera technology have made it possible to build a comfortable, wearable system w...
Videos captured from wearable cameras, known as egocentric videos, create a continuous record of hum...
Visual processing areas form a hierarchical network; here, the network consists of the brain areas ...
Pfeiffer T. Measuring and visualizing attention in space with 3D attention volumes. In: Spencer SN, ...
Systems based on bag-of-words models from image features collected at maxima of sparse inte...
This thesis examines the way in which meaningful facial signals (i.e., eye gaze and emotional facial...