Convolutional Neural Networks (ConvNets) have recently shown promising performance in many computer vision tasks, especially image-based recognition. How to effectively apply ConvNets to sequence-based data is still an open problem. This paper proposes an effective yet simple method to encode the spatio-temporal information carried in 3D skeleton sequences into three 2D images, referred to as Joint Trajectory Maps (JTM), by mapping the joint trajectories and their dynamics to color distributions in the images, and adopts ConvNets to learn discriminative features for human action recognition. Such an image-based representation enables us to fine-tune existing ConvNet models for the classification of skeleton sequences without training t...
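The abstract above describes the JTM encoding only at a high level. The following is a minimal, illustrative sketch, assuming a skeleton sequence stored as a NumPy array of shape (num_frames, num_joints, 3); the function name, the pixel-per-sample rendering (rather than connected trajectory segments), and the hue-over-time color coding are assumptions for illustration, not the authors' implementation.

```python
import colorsys
import numpy as np

def joint_trajectory_map(skeleton, plane=(0, 1), size=256):
    """Rasterise the projected joint trajectories of one sequence into an RGB image,
    using colour (hue) to encode the temporal order of the samples.
    NOTE: an illustrative sketch only, not the published JTM algorithm."""
    T, J, _ = skeleton.shape
    pts = skeleton[:, :, plane]                      # (T, J, 2) orthogonal projection
    flat = pts.reshape(-1, 2)
    mn, mx = flat.min(axis=0), flat.max(axis=0)
    # normalise the projected coordinates into the image grid
    pix = ((pts - mn) / (mx - mn + 1e-8) * (size - 1)).astype(int)
    img = np.zeros((size, size, 3), dtype=np.float32)
    hues = np.linspace(0.0, 0.9, T)                  # hue sweeps with frame index
    for t in range(T):
        r, g, b = colorsys.hsv_to_rgb(float(hues[t]), 1.0, 1.0)
        for j in range(J):
            x, y = pix[t, j]
            img[size - 1 - y, x] = (r, g, b)         # flip y so "up" points up
    return img

# Three maps from the three orthogonal projection planes (xy, xz, yz):
# seq = np.load("skeleton_sequence.npy")             # hypothetical (T, J, 3) array
# maps = [joint_trajectory_map(seq, plane=p) for p in [(0, 1), (0, 2), (1, 2)]]
```

Each of the three resulting maps could then be fed to an ImageNet-pretrained ConvNet and fine-tuned, as the abstract suggests.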
Human action recognition (HAR) from skeleton data is considered a promising research direction in compute...
This paper has been presented at the 25th IEEE International Conference on Image Processing (ICIP). We p...
In this paper, we present a method (Action-Fusion) for human action recognition from depth maps and ...
With the advance of deep learning, deep-learning-based action recognition has become an important research t...
Recognizing human actions in untrimmed videos is an important and challenging task. An effective 3D moti...
Motivated by the promising performance achieved by deep learning, an effective yet si...
Action recognition using depth sequences plays an important role in many fields, e.g., intelligent surv...
Designing motion representations for 3D human action recognition from skeleton...
This letter presents an effective method to encode the spatiotemporal information of a sk...
This paper presents a new method for 3D action recognition with skeleton sequences (i.e...
It remains a challenge to efficiently represent spatial-temporal data for 3D action recognition. To ...
Action recognition based on a human skeleton is an extremely challenging research problem. The tempo...
This paper proposes a new method, i.e., weighted hierarchical depth motion maps (WHDMM) + three-chan...
RGB-D based action recognitio...
The computer vision community is currently focusing on solving action recognit...