Understanding how we spontaneously scan the visual world through eye movements is crucial for characterizing both the strategies and inputs of vision [1–27]. Despite the importance of the third, or depth, dimension for perception and action, little is known about how the specifically three-dimensional aspects of scenes affect looking behavior. Here we show that three-dimensional surface orientation has a surprisingly large effect on spontaneous exploration, and we demonstrate that a simple rule predicts eye movements given surface orientation in three dimensions: saccades tend to follow surface depth gradients. The rule proves to be quite robust: it generalizes across depth cues, holds in the presence or absence of a task, and applies ...
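To make the depth-gradient rule concrete, here is a minimal, hypothetical sketch (not the authors' model) of a predictor that, given a depth map and the current fixation, returns a saccade direction aligned with the local surface depth gradient. The function name `predicted_saccade_direction`, the NumPy-based gradient estimate, and the toy depth map are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the "saccades follow surface depth gradients" rule.
# Given a depth map (greater value = farther) and a fixation point, predict
# the saccade direction as the local depth gradient, normalized to unit length.
import numpy as np

def predicted_saccade_direction(depth_map, fixation):
    """Return a unit vector (dx, dy) along the depth gradient at `fixation` = (row, col)."""
    # np.gradient on a 2D array returns derivatives along rows (y) and columns (x).
    d_dy, d_dx = np.gradient(depth_map.astype(float))
    y, x = fixation
    g = np.array([d_dx[y, x], d_dy[y, x]])
    norm = np.linalg.norm(g)
    return g / norm if norm > 0 else g  # zero vector on locally flat surfaces

# Example: a planar surface whose depth increases with row index.
depth = np.tile(np.linspace(1.0, 5.0, 100)[:, None], (1, 100))
print(predicted_saccade_direction(depth, fixation=(50, 50)))  # ~[0., 1.], i.e. along the gradient
```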
When an object is tracked with the eyes, veridical perception of the motion of that object and other...
Purpose. Can disconjugate eye movements be triggered by pictorial depth cues? If they could, this wo...
Our ability to process information about an object's location in depth varies...
Where we look when we scan visual scenes is an old question that continues to inspire both fundament...
In studies of 2D visual attention, eye-tracking data show a so-called “center-bias”, which means tha...
Previous evidence suggests that attention can operate on object-based representations. It is not kno...
We studied the influence of perceived surface orientation on vergence accompanying a saccade while v...
Visual attention has long been described in terms of the spotlight metaphor, which assumes that two-...
Eye movements provide insight into how the visual system extracts specific information from the environment...
The ability to detect changes in the environment is necessary for appropriate interactions...
Previous studies have shown that spatial attention can shift in three-dimensional (3-D) space determ...
The visual perception of monocular stimuli perceived as 3-D objects has received considerable attent...
Images projected onto the retinas of our two eyes come from slightly different directions in...
Visual perception is facilitated by the ability to selectively attend to relevant parts of the...