This paper presents a novel approach to visual saliency that relies on a contextually adapted representation produced through adaptive whitening of color and scale features. Unlike previous models, the proposal is grounded in the specific adaptation of the basis of low-level features to the statistical structure of the image. Adaptation is achieved through decorrelation and contrast normalization in several steps of a hierarchical approach, consistent with coarse features described in biological visual systems. Saliency is simply computed as the square of the vector norm in the resulting representation. The performance of the model is compared with several state-of-the-art approaches in predicting human fixations using three different e...
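The whitening-plus-norm computation described above can be illustrated with a minimal sketch. Here ZCA whitening of per-pixel color features stands in for the model's full hierarchical adaptation (the actual model also whitens multi-scale responses in several steps); saliency is then the squared norm of each whitened feature vector:

```python
import numpy as np

def whitened_saliency(image, eps=1e-5):
    """Minimal sketch: ZCA-whiten per-pixel color features and take the
    squared vector norm as saliency. The full model additionally whitens
    multi-scale responses in a hierarchical fashion (assumption for brevity)."""
    h, w, c = image.shape
    X = image.reshape(-1, c).astype(np.float64)
    X -= X.mean(axis=0)                                       # center the features
    cov = X.T @ X / X.shape[0]                                # feature covariance
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T    # ZCA whitening matrix
    Xw = X @ W                                                # adapted representation
    sal = np.sum(Xw ** 2, axis=1)                             # saliency = squared norm
    return sal.reshape(h, w)
```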
Bottom-up saliency models have been developed to predict the location of gaze ...
Visual Saliency aims to detect the most important regions of an image from a perceptual point of vie...
Many successful models for predicting attention in a scene involve three main steps: convolution wit...
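The excerpt cuts off after mentioning convolution as the first step. As a hedged illustration of what such a first step typically looks like, the sketch below convolves an intensity image with a small bank of Gabor filters; the filter type and parameters are assumptions for illustration, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, sigma=4.0, wavelength=8.0, size=21):
    """Illustrative Gabor kernel; parameters are assumptions, not from the paper."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def filter_bank_responses(gray, n_orientations=4):
    """Step 1 of a typical pipeline: convolve the image with a bank of filters."""
    gray = np.asarray(gray, dtype=np.float64)
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    return [convolve(gray, gabor_kernel(t), mode='reflect') for t in thetas]
```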
A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed. It consists ...
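Although the excerpt is truncated, the graph-based step in GBVS is commonly described as building a Markov chain over map locations, with transition weights combining feature dissimilarity and spatial proximity, and reading activation off its equilibrium distribution. A minimal sketch under those assumptions (not the authors' exact implementation), intended for small, downsampled feature maps:

```python
import numpy as np

def graph_based_activation(feature_map, sigma=0.15, n_iter=100):
    """Minimal sketch: activation as the stationary distribution of a Markov
    chain whose edge weights combine feature dissimilarity with a Gaussian
    falloff in spatial distance (assumed weighting, for illustration)."""
    h, w = feature_map.shape
    f = feature_map.astype(np.float64).ravel()
    ys, xs = np.divmod(np.arange(h * w), w)
    dissim = np.abs(f[:, None] - f[None, :])                       # pairwise dissimilarity
    dist2 = (ys[:, None] - ys[None, :])**2 + (xs[:, None] - xs[None, :])**2
    W = dissim * np.exp(-dist2 / (2 * (sigma * max(h, w))**2)) + 1e-12
    P = W / W.sum(axis=1, keepdims=True)                           # row-stochastic transitions
    pi = np.full(h * w, 1.0 / (h * w))
    for _ in range(n_iter):                                        # power iteration
        pi = pi @ P
    return pi.reshape(h, w)
```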
Inspired by the primate visual system, computational saliency models decompose visual input into a s...
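As a concrete illustration of decomposing visual input across spatial scales, a minimal sketch using a Gaussian pyramid over an intensity channel; the actual feature channels and scale structure vary from model to model:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(gray, n_levels=5):
    """Minimal sketch: decompose an intensity image into maps at progressively
    coarser spatial scales by repeated blur-and-subsample."""
    levels = [np.asarray(gray, dtype=np.float64)]
    for _ in range(n_levels - 1):
        blurred = gaussian_filter(levels[-1], sigma=1.0)
        levels.append(blurred[::2, ::2])      # subsample by a factor of two
    return levels
```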
Saliency-based visual attention models provide visual saliency by combining the conspicuity maps rel...
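A simple way to picture the combination of conspicuity maps into a single saliency map is shown below; each per-feature map is range-normalized and then averaged. Individual models differ in how they weight or normalize the maps before combination, so this is an illustrative sketch rather than any particular model's rule:

```python
import numpy as np

def combine_conspicuity_maps(maps):
    """Minimal sketch: normalize each per-feature conspicuity map to [0, 1]
    and average them into a single saliency map."""
    normalized = []
    for m in maps:
        m = m.astype(np.float64)
        rng = m.max() - m.min()
        normalized.append((m - m.min()) / rng if rng > 0 else np.zeros_like(m))
    return np.mean(normalized, axis=0)
```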
We propose a saliency model termed SIM (saliency by induction mechanisms), which is based on a low-l...
A salient image region is defined as an image part that is clearly different from its surround in te...
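A common way to operationalize "clearly different from its surround" is a center-surround difference. In the minimal sketch below, Gaussian blurs at two scales stand in for center and surround responses; this is an assumption for illustration, not the paper's exact operator:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(gray, center_sigma=2.0, surround_sigma=8.0):
    """Minimal sketch: approximate center and surround responses with Gaussian
    blurs at two scales and take their absolute difference as local saliency."""
    gray = np.asarray(gray, dtype=np.float64)
    center = gaussian_filter(gray, center_sigma)
    surround = gaussian_filter(gray, surround_sigma)
    return np.abs(center - surround)
```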
To detect visually salient elements of complex natural scenes, computational bottom-up saliency mode...
This paper addresses the bottom-up influence of local image information on human eye movements. Most...