ASL-LEX: Mapping the ASL Lexicon
ASL-LEX is a lexical database that catalogues information about signs in American Sign Language (Caselli, Sevcikova Sehyr, Cohen-Goldberg, & Emmorey, 2016). It currently includes information about frequency (how often signs are used in everyday conversation), iconicity (how much signs look like what they mean), and phonology (which handshapes, locations, movements, etc. are used). Unfortunately, many deaf children in the US do not know ASL. Teachers can use ASL-LEX to support vocabulary development in deaf students who are learning ASL (e.g., to develop vocabulary lessons that prioritize commonly used signs). Students can also look up signs based on their form, without knowing a sign’s English translation, and begin to learn about linguistic patterns in the forms of signs. ASL researchers can also use it to design experiments. This project is supported by the National Science Foundation (1625793).
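To illustrate the kind of use described above, here is a minimal sketch of how one might filter database entries by frequency to prioritize commonly used signs in a vocabulary lesson. The records, field names, and rating values below are hypothetical and illustrative only; they are not the actual ASL-LEX schema or data.

```python
# Hypothetical records in the style of ASL-LEX entries: each sign has a
# frequency rating, an iconicity rating, and a phonological feature.
# Field names and values are invented for illustration.
signs = [
    {"gloss": "BOOK",    "frequency": 6.2, "iconicity": 5.1, "handshape": "flat-B"},
    {"gloss": "MOTHER",  "frequency": 6.5, "iconicity": 3.9, "handshape": "5"},
    {"gloss": "PHYSICS", "frequency": 2.1, "iconicity": 2.4, "handshape": "bent-V"},
]

def high_frequency_signs(entries, threshold):
    """Return glosses of signs rated at or above a frequency threshold,
    most frequent first -- e.g., to front-load them in a lesson plan."""
    frequent = [e for e in entries if e["frequency"] >= threshold]
    return [e["gloss"] for e in sorted(frequent, key=lambda e: -e["frequency"])]

print(high_frequency_signs(signs, 5.0))  # → ['MOTHER', 'BOOK']
```

The same filtering pattern extends to the other fields (e.g., selecting signs that share a handshape to highlight a phonological pattern for students).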
ASL Vocabulary Acquisition
Many deaf children have limited access to language early in life: they often do not have signing role models and cannot hear the sounds of spoken language. These children are at risk of incomplete acquisition of their first language; this is called language deprivation. My work explores the trajectory of vocabulary development in deaf children with or without language deprivation, because early vocabulary is a critical building block of language acquisition. The goals are to identify the signs that children learn, to understand the factors that promote vocabulary acquisition (Caselli & Pyers, in press), and to develop assessment tools for identifying children who have limited ASL vocabularies. With these tools in hand, researchers and educators will be better able to develop interventions that mitigate the effects of language deprivation. This project is supported by the National Institute on Deafness and Other Communication Disorders (R21DC016104).
Lexical Access in Sign Language
Most of what we know about how people perceive and produce words comes from studies of spoken language. We ask whether these theories reflect modality-general principles or modality-specific ones. We use a combination of behavioral studies and computational modeling (Caselli & Cohen-Goldberg, 2014) to understand how signers perceive and produce signs. We are currently exploring the long-term effects of language deprivation on how people perceive and produce signs. This project is supported by the National Science Foundation (1625793).