Background: Gesture recognition has attracted significant attention because of its wide range of potential applications. Although multi-modal gesture recognition has made significant progress in recent years, a popular approach is still to simply fuse prediction scores at the end of each branch, which often ignores complementary features among the different modalities at an early stage and does not fuse them into a more discriminative representation. Methods: This paper proposes an Adaptive Cross-modal Weighting (ACmW) scheme to exploit complementary features from RGB-D data. The scheme learns the relations among different modalities by combining the features of the different data streams. The proposed ACmW module contains ...
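To make the contrast with late score fusion concrete, here is a minimal, hypothetical sketch of input-dependent cross-modal feature weighting for two streams (RGB and depth). The ACmW module itself is only partially described above, so the class name, tensor shapes, and gating design below are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only: a generic adaptive cross-modal weighting layer
# under assumed shapes, not the authors' exact ACmW module.
import torch
import torch.nn as nn

class AdaptiveCrossModalWeighting(nn.Module):
    """Fuses RGB and depth feature vectors with learned, input-dependent weights."""
    def __init__(self, dim: int):
        super().__init__()
        # Produces one scalar weight per modality from the concatenated features.
        self.gate = nn.Linear(2 * dim, 2)

    def forward(self, f_rgb: torch.Tensor, f_depth: torch.Tensor) -> torch.Tensor:
        # f_rgb, f_depth: (batch, dim) features from each stream.
        joint = torch.cat([f_rgb, f_depth], dim=-1)
        w = torch.softmax(self.gate(joint), dim=-1)       # (batch, 2), weights sum to 1
        fused = w[:, :1] * f_rgb + w[:, 1:] * f_depth     # weighted feature-level fusion
        return fused

# Late score fusion, by contrast, would simply average per-branch logits, e.g.:
#   scores = 0.5 * head_rgb(f_rgb) + 0.5 * head_depth(f_depth)
```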
Dynamic gestures have attracted much attention in recent years due to their user-friendly interactiv...
A multi-modal or multi-view dataset captured from various resources (e.g., RGB and Depth) of a...
We present a method for gesture detection and localisation based on multi-scal...
Gesture recognition is a much studied research area which has myriad real-world applications includi...
Multimodal input is a real-world situation in gesture recognition applications such as sign languag...
Video-based gesture recognition has a wide spectrum of applications, ranging from sign language unde...
Gestures can serve as an important means of human–robot interaction, since they are able to give accura...
Hand gesture recognition (HGR) based on surface electromyogram (sEMG) and accelerometer (ACC) signal...
We present a new approach to multi-signal gesture recognition that attends to simultaneo...
We describe in this paper our gesture detectio...
Gesture recognition has attracted considerable attention owing to its great potential in a...
Recent advances in multiple-kernel learning (MKL) demonstrate the effectiveness of fusing multiple...
RGB and depth modalities contain more abundant and interactive information, and convolutional neural...
Noise and constant empirical motion constraints affect the extraction of distinctive spatiotemporal ...