We present baseline results for a new task of automatic segmentation of Sign Language video into sentence-like units. We use a corpus of natural Sign Language video with accurately aligned subtitles to train a spatio-temporal graph convolutional network with a BiLSTM on 2D skeleton data to automatically detect the temporal boundaries of subtitles. In doing so, we segment Sign Language video into subtitle-units that can be translated into phrases in a written language. We achieve a ROC-AUC statistic of 0.87 at the frame level and 92% label accuracy within a time margin of 0.6s of the true labels.
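The abstract above reports 92% label accuracy within a 0.6 s margin of the true boundaries. A minimal sketch of such a margin-based metric is shown below; the 25 fps frame rate and the nearest-boundary matching rule are assumptions for illustration, not details taken from the paper, and `margin_accuracy` is a hypothetical helper name.

```python
import numpy as np

def margin_accuracy(pred_boundaries, true_boundaries, fps=25, margin_s=0.6):
    """Fraction of true boundary frames matched by a prediction within ±margin_s.

    Both arguments are lists of frame indices. The 25 fps default and the
    nearest-prediction matching rule are illustrative assumptions.
    """
    margin = int(round(margin_s * fps))  # 0.6 s -> 15 frames at 25 fps
    pred = np.asarray(pred_boundaries)
    hits = sum(
        1 for t in true_boundaries
        if pred.size and np.min(np.abs(pred - t)) <= margin
    )
    return hits / max(len(true_boundaries), 1)

# Toy example: the first two true boundaries are matched within the margin,
# the third (frame 300) has no prediction closer than 100 frames.
acc = margin_accuracy([10, 100, 400], [12, 110, 300])
```

With the toy inputs above, two of the three true boundaries are counted as hits, giving an accuracy of 2/3.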
We present a novel approach to automatic Sign Language Production using recent developments in Neura...
The automatic recognition of Sign Languages is the main focus of most of the w...
Sign languages, vital for communication among deaf and hard-of-hearing (DHH) people, face a sign...
The goal of this work is to temporally align asynchronous subtitles in sign la...
This paper introduces a fully-automated, unsupervised method to recognise sign from subtitles. It do...
The objective of this work is to determine the location of temporal boundaries...
How well can a sequence of frames be represented by a subset of the frames? Video sequences of Ameri...
Millions of hearing impaired people around the world routinely use some variants of sign languages t...
Recent progress in fine-grained gesture and action classification, and machine translation, point to...
The objective of this work is to annotate sign instances across a broad vocabulary in continuous sig...
The focus of this work is sign spotting: given a video of an isolated sign, our task is to identify w...
We present a novel approach to automatic Sign Language Production using state-of-the-art Neural Mach...