2017 marked the release of a new version of SignStream® software, designed to facilitate linguistic analysis of ASL video. SignStream® provides an intuitive interface for labeling and time-aligning manual and non-manual components of the signing. Version 3 has many new features. For example, it enables representation of morpho-phonological information, including display of handshapes. An expanding ASL video corpus, annotated through use of SignStream®, is shared publicly on the Web. This corpus (video plus annotations) is Web-accessible—browsable, searchable, and downloadable—thanks to a new, improved version of our Data Access Interface: DAI 2. DAI 2 also offers Web access to a brand new Sign Bank, containing about 10,000 examples of about...
There have been recent advances in computer-based recognition of isolated, citation-form signs from ...
American Sign Language (ASL) is a visual gestural language which is used by many people who are deaf...
Recent progress in fine-grained gesture and action classification, and machine translation, point to...
The American Sign Language Linguistic Research Project (ASLLRP) provides Internet access to high-qu...
The WLASL purports to be “the largest video dataset for Word-Level American Sign Language (ASL) reco...
A significant obstacle to broad utilization of corpora is the difficulty in gaining access to the sp...
The American Sign Language Lexicon Video Dataset (ASLLVD) consists of videos of >3,300 ASL signs in ...
The American Sign Language Lexicon Video Dataset (ASLLVD) consists of videos of >3,300 ASL signs i...
We are four researchers who study psycholinguistics, linguistics, neuroscience and deaf education. O...
We report on the high success rates of our new, scalable, computational approach for sign recognitio...
ASL-LEX is a publicly available, large-scale lexical database for American Sign Language (ASL). We r...
One of the factors that have hindered progress in the areas of sign language recognition, translatio...
Existing work on sign language translation--that is, translation from sign language videos into sent...
We present a new approach for isolated sign recognition, which combines a spatial-temporal Graph Con...
Technology to automatically synthesize linguistically accurate and natural-looking animations of Ame...