Design and Evaluation of Hybrid Search for American Sign Language to English Dictionaries: Making the Most of Imperfect Sign Recognition

Abstract

Searching for the meaning of an unfamiliar sign-language word in a dictionary is difficult for learners, but emerging sign-recognition technology will soon enable users to search by submitting a video of themselves performing the word they recall. However, sign-recognition technology is imperfect, and users may need to scan a long list of candidate results to find the sign they seek. To speed this search, we present a hybrid-search approach, in which users begin with a video-based query and then filter the search results by linguistic properties, e.g., handshape. We interviewed 32 ASL learners about their preferences for the content and appearance of the search-results page and filtering criteria. A between-subjects experiment with 20 ASL learners revealed that our hybrid search system outperformed a video-based search system along multiple satisfaction and performance metrics. Our findings provide guidance for designers of video-based sign-language dictionary search systems, with implications for other search scenarios.

Publication
In CHI Conference on Human Factors in Computing Systems (CHI ’22), April 29-May 5, 2022, New Orleans, LA, USA.
Saad Hassan
Assistant Professor

Research interests: human-computer interaction (HCI), accessibility, and computational social science.

Akhter Al Amin
Software Engineer at Amazon
Alexis Gordon
Former Research Assistant at RIT
Sooyeon Lee
Assistant Professor at NJIT
Matt Huenerfauth
Professor and Dean at RIT