ASL-LEX: Mapping the ASL Lexicon
ASL-LEX is a lexical database that catalogues information about signs in American Sign Language (Caselli et al., 2016; Sevcikova, Caselli, Cohen-Goldberg, & Emmorey, 2021). It currently includes information about frequency (how often signs are used in everyday conversation), iconicity (how much signs look like what they mean), and phonology (which handshapes, locations, movements, etc. are used). ASL teachers can use ASL-LEX to support vocabulary acquisition (e.g., to develop vocabulary lessons that prioritize commonly used signs). Students can also look up signs based on their form, without knowing a sign's English translation, and begin to learn about linguistic patterns in how signs are built. ASL researchers can also use it to design experiments or to develop language technologies. This project is supported by the National Science Foundation (1625793, 1918252, and 1749384).
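To make the kind of query a teacher might run concrete, here is a minimal sketch of an ASL-LEX-style lookup. The field names, glosses, and rating values below are all hypothetical placeholders, not ASL-LEX's actual schema or data; real entries are available through the ASL-LEX website.

```python
from dataclasses import dataclass

# Hypothetical entry structure; field names and values are illustrative,
# not the actual ASL-LEX schema.
@dataclass
class SignEntry:
    gloss: str         # English identifier for the sign
    frequency: float   # subjective frequency rating (assumed 1-7 scale)
    iconicity: float   # resemblance between form and meaning (assumed 1-7)
    handshape: str     # dominant handshape
    location: str      # major place of articulation

# Made-up example entries for illustration only
entries = [
    SignEntry("BOOK", 6.2, 5.8, "open-B", "neutral space"),
    SignEntry("PHYSICS", 2.1, 3.0, "bent-V", "neutral space"),
    SignEntry("MOTHER", 6.5, 4.9, "open-5", "chin"),
]

def high_frequency_signs(entries, threshold=5.0):
    """Select commonly used signs, e.g., to prioritize in a vocabulary lesson."""
    return sorted(
        (e for e in entries if e.frequency >= threshold),
        key=lambda e: e.frequency,
        reverse=True,
    )

lesson = high_frequency_signs(entries)
```

A lesson plan built this way surfaces the most frequent signs first, mirroring the frequency-based prioritization described above.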
ASL Vocabulary Acquisition
Many deaf children have limited access to language early in life: they often do not have signing role models and cannot hear the sounds of spoken language. This puts them at risk of language deprivation. This line of research explores how deaf children learn vocabulary and how these early experiences with language affect later language learning. The goal is to identify how children learn signs, the factors that promote vocabulary acquisition, and to develop assessment tools for measuring early ASL vocabulary. This project is supported by the National Institute on Deafness and Other Communication Disorders (R21DC016104 and R01DC018279).
Sign Language Computing
The vast majority of communication technologies are designed for written and spoken languages (e.g., voice recognition, automatic translation), and exclude people who use sign languages. With advances in machine learning, computer vision, and human pose estimation, there is rapidly growing interest among computer scientists in sign language computation. In my work, I explore the ethical landscape of this emerging field: Do deaf people want sign language technologies? Who has a seat at the table in decision-making about new technologies? What concerns may emerge about privacy, fairness, and audism? I am also interested in leveraging resources like ASL-LEX to address some of the practical barriers to developing linguistically informed sign language technologies.