An Isolated-Signing RGBD Dataset of 100 American Sign Language Signs Produced by Fluent ASL Signers

Abstract

We have collected a new dataset consisting of color and depth videos of fluent American Sign Language (ASL) signers performing sequences of 100 ASL signs, recorded with a Kinect v2 sensor. This directed dataset was originally collected as part of an ongoing collaborative project, to aid in the development of a sign-recognition system for identifying occurrences of these 100 signs in video. The set of signs consists of vocabulary items that would commonly be learned in a first-year university ASL course, although the specific signs selected for inclusion in the dataset were motivated by project-related factors. Given increasing interest among sign-recognition and other computer-vision researchers in red-green-blue-depth (RGBD) video, we release this dataset for use by the research community. In addition to the RGB video files, we share depth and HD face data, as well as additional features of the face, hands, and body produced through post-processing of these data.
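
As a rough illustration of how the paired RGB and depth streams from an RGBD release like this one might be read together, the Python sketch below steps through an RGB video frame by frame alongside per-frame 16-bit depth images. The file names and directory layout here are hypothetical placeholders, not the dataset's actual structure; only the Kinect v2 resolutions and millimeter depth units are sensor facts.

```python
# Minimal sketch: iterating RGB frames alongside Kinect v2 depth frames.
# Paths and file layout are hypothetical; adapt to the released structure.
from pathlib import Path

import cv2          # pip install opencv-python
import numpy as np

rgb_video = Path("session01/rgb.mp4")          # hypothetical RGB video file
depth_dir = Path("session01/depth")            # hypothetical dir of depth PNGs

cap = cv2.VideoCapture(str(rgb_video))
depth_files = sorted(depth_dir.glob("*.png"))  # assumed one 16-bit PNG per frame

for idx, depth_path in enumerate(depth_files):
    ok, rgb = cap.read()                       # BGR uint8; Kinect v2 RGB is 1920x1080
    if not ok:
        break
    # Kinect v2 depth frames are 512x424, in millimeters; read without conversion
    # so the 16-bit values are preserved.
    depth = cv2.imread(str(depth_path), cv2.IMREAD_UNCHANGED).astype(np.uint16)
    print(f"frame {idx}: rgb {rgb.shape}, depth {depth.shape}, "
          f"max depth {int(depth.max())} mm")

cap.release()
```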

Publication
In Proceedings of the LREC 2020 9th Workshop on the Representation and Processing of Sign Languages
Saad Hassan
Assistant Professor
Larwan Berke
Former Graduate Research Assistant at RIT
Elahe Vahdini
PhD Student at CUNY
Longlong Jing
Research Scientist at Waymo LLC
Yingli Tian
Professor at CUNY
Matt Huenerfauth
Professor and Dean at RIT