In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. The approach models each action with a Gaussian mixture model (GMM) built on robust low-dimensional action features. Segmentation is achieved by classifying overlapping temporal windows, whose labels are then merged to produce the final result. This approach is considerably less complicated than previous methods that rely on dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3%, outperforming a recent HMM-based approach, which obtained 71.2%.
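The overlapping-window scheme described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes per-frame log-likelihood scores under each class model (e.g. a per-action GMM) have already been computed, labels each window with the class maximising the total log-likelihood over its frames, and merges the window labels into per-frame labels by majority vote over the windows covering each frame. The function names and the voting merge rule are assumptions for illustration.

```python
import numpy as np

def classify_windows(frame_scores, window_size, step):
    """Label each overlapping temporal window.

    frame_scores: (n_frames, n_classes) array of per-frame log-likelihoods
    under each class model (e.g. a GMM per action). Each window gets the
    class with the highest total log-likelihood over its frames.
    """
    n_frames = frame_scores.shape[0]
    labels = []
    for start in range(0, n_frames - window_size + 1, step):
        window = frame_scores[start:start + window_size]
        labels.append(int(window.sum(axis=0).argmax()))
    return labels

def merge_window_labels(n_frames, window_size, step, window_labels, n_classes):
    """Merge per-window labels into per-frame labels by majority vote
    over all overlapping windows that cover each frame."""
    votes = np.zeros((n_frames, n_classes), dtype=int)
    for i, label in enumerate(window_labels):
        start = i * step
        votes[start:start + window_size, label] += 1
    return votes.argmax(axis=1)
```

Runs of identical frame labels then directly give the segmentation boundaries and the class of each segment.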