Several spatiotemporal feature point detectors have recently been used in video analysis for action recognition. Feature points are detected using a number of measures, namely saliency, cornerness, periodicity, motion activity, etc. Each of these measures is usually intensity-based and provides a different trade-off between density and informativeness. In this paper, we use saliency for feature point detection in videos and incorporate color and motion in addition to intensity. Our method uses a multi-scale volumetric representation of the video and involves spatiotemporal operations at the voxel level. Saliency is computed by a global minimization process constrained by pure volumetric constraints, each of them being related to an informativ...
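The abstract above describes multi-scale, voxel-level saliency computed from intensity, color, and motion; its actual formulation is a constrained global minimization, which is not reproduced here. As a rough, hedged illustration of the kind of computation involved, the sketch below builds a simple center-surround saliency volume from those three cues. The function name, the choice of scales, the opponent-color channels, and the frame-difference motion cue are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatiotemporal_saliency(frames_rgb, scales=(2, 4, 8)):
    """Toy voxel-level saliency volume from intensity, color, and motion cues.

    frames_rgb: float array of shape (T, H, W, 3) with values in [0, 1].
    Returns a (T, H, W) saliency volume normalized to [0, 1].
    """
    # Feature volumes: intensity, two opponent-color channels, and a crude
    # frame-difference motion cue (an assumption, not the paper's motion model).
    intensity = frames_rgb.mean(axis=3)
    rg = frames_rgb[..., 0] - frames_rgb[..., 1]            # red-green opponency
    by = frames_rgb[..., 2] - frames_rgb[..., :2].mean(3)   # blue-yellow opponency
    motion = np.zeros_like(intensity)
    motion[1:] = np.abs(intensity[1:] - intensity[:-1])     # temporal gradient

    saliency = np.zeros_like(intensity)
    for feat in (intensity, rg, by, motion):
        for s in scales:
            # Center-surround contrast over the whole (t, y, x) volume:
            # a fine-scale smoothing minus a coarser surround at each voxel.
            center = gaussian_filter(feat, sigma=(1, s, s))
            surround = gaussian_filter(feat, sigma=(1, 2 * s, 2 * s))
            saliency += np.abs(center - surround)

    # Normalize to [0, 1] so feature points could be picked by thresholding
    # or local maxima detection on the resulting volume.
    saliency -= saliency.min()
    if saliency.max() > 0:
        saliency /= saliency.max()
    return saliency
```

A caller would pass a short clip as a (T, H, W, 3) float array and threshold or locate local maxima in the returned volume to obtain candidate spatiotemporal feature points; the paper itself obtains these via its constrained minimization rather than this simplified center-surround scheme.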
Action recognition, aiming to automatically classify actions from a series of observations, has attr...
This paper proposes a superpixel-based spatiotemporal saliency model for salie...
In this paper, we propose a novel spatiotemporal saliency model based on super...
Computer vision applications often need to process only a representative part of the visual input ra...
A video saliency detection algorithm based on feature learning, called ROCT, is proposed in this wor...
In a world of ever-increasing amounts of video data, we are forced to abandon traditional methods of...
Recognizing actions is one of the important challenges in computer vision with respect to video data...
Local spatio-temporal salient features are used for a sparse and compact representation of video con...
Human actions are spatio-temporal patterns. A popular representation is to describe the action by fe...
In this paper, we propose a computationally efficient and consistently accurate spatiotemporal salie...
Human action recognition is valuable for numerous practical applications, e.g., gaming, video survei...
In this paper we propose a method for automatic detection of salient objects i...
Multimedia applications such as retrieval, copy detection, etc., can benefit from saliency detection, which ...
An adaptive spatiotemporal saliency algorithm for video attention detection using motion vector deci...
The human visual system actively seeks salient regions and movements in video sequences to reduce the se...