Sign Spotter: Design and Initial Evaluation of an Automatic Video-Based American Sign Language Dictionary System

Abstract

Looking up an unfamiliar American Sign Language (ASL) word in a dictionary is challenging for learners, as it requires recalling the sign from memory and supplying specific linguistic details. Fortunately, emerging sign-recognition technology will soon let users search by submitting a video of themselves performing the sign. Although prior research has addressed algorithmic improvements and the design of ASL dictionaries independently, little work has integrated the two. This paper presents the design of an end-to-end sign language dictionary system that incorporates design recommendations from recent human–computer interaction (HCI) research. We also share preliminary findings from an interview-based user study with four ASL learners.
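At its core, the search-by-video idea described above is a retrieval problem: embed the submitted video and rank dictionary entries by similarity. The sketch below illustrates only that retrieval step under stated assumptions; the embedding dimension, the example glosses in `DICTIONARY`, and the placeholder `embed_query_video` feature extractor are hypothetical, not the paper's actual model or pipeline.

```python
import numpy as np

# Hypothetical sign dictionary: each entry maps a gloss to a
# fixed-length feature vector that an upstream sign-recognition
# model would produce (random vectors here, for illustration only).
DICTIONARY = {
    "HELLO": np.random.rand(128),
    "THANK-YOU": np.random.rand(128),
    "BOOK": np.random.rand(128),
}

def embed_query_video(video_frames: np.ndarray) -> np.ndarray:
    """Placeholder: a real system would run a sign-recognition
    network over the frames; here we just average raw pixel
    features and truncate to the dictionary's dimensionality."""
    return video_frames.reshape(len(video_frames), -1).mean(axis=0)[:128]

def rank_signs(query: np.ndarray, top_k: int = 3) -> list[tuple[str, float]]:
    """Rank dictionary entries by cosine similarity to the query."""
    scores = []
    for gloss, vec in DICTIONARY.items():
        sim = float(query @ vec / (np.linalg.norm(query) * np.linalg.norm(vec)))
        scores.append((gloss, sim))
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

# Example: a fake 16-frame, 64x64 grayscale query video.
frames = np.random.rand(16, 64, 64)
for gloss, score in rank_signs(embed_query_video(frames)):
    print(f"{gloss}: {score:.3f}")
```

Returning a ranked list rather than a single match fits the dictionary use case: the learner scans the top candidates and picks the sign they intended, which tolerates imperfect recognition.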

Publication
In Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2023)
Matyáš Boháček
BS CS Student at Stanford
Saad Hassan
Assistant Professor
