New research is helping scientists around the world understand what drives language change, especially when languages are in their infancy. The results will shed light on how the limitations of the human brain shape language change and deepen our understanding of the complex interaction between languages and the people who use them.
The project is funded by a $344,000 National Science Foundation grant and is led by principal investigator Matthew Dye, an assistant professor and director of the Deaf x Laboratory at Rochester Institute of Technology’s National Technical Institute for the Deaf.
Dye and his research team are examining Nicaraguan Sign Language, which was “born” in the 1970s. Using machine learning and computer vision techniques, the team is analyzing old video recordings of the language and measuring how it has changed over the past 40 years. The recent birth and rapid evolution of Nicaraguan Sign Language has allowed them to study language change from its beginning, on a compressed time scale. They are asking whether languages change so that they become easier to produce, or whether they change in ways that make them easier for others to understand. Initial results challenge the long-held notion that signs move toward the face in order to be easier to understand.
“Languages change over time, such that the way we speak English now is very different than the speech patterns of elder generations and our distant ancestors,” said Dye. “While it is well documented that languages change over time, we’re hoping to answer some fundamental theoretical questions about language change that cannot be addressed by simply analyzing historical samples of spoken languages.”
Dye explains that by using an existing database of Nicaraguan Sign Language, composed of 2D videos of four generations of Nicaraguan signers, his research team will be able to assess the extent to which linguistic changes occur and why. The team will also create computational tools that allow 3D human body poses to be extracted from the 2D videos.
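As a rough illustration of what such a pipeline might look like, the sketch below uses the open-source MediaPipe Pose library to estimate 3D body landmarks from ordinary 2D video and tracks a simple wrist-to-nose distance, the kind of measurement that could quantify whether signs drift toward the face over time. MediaPipe, the video filename, and the specific metric are illustrative assumptions; the release does not specify which tools or measures the team actually uses.

```python
# Illustrative sketch only: MediaPipe Pose stands in for whatever
# pose-estimation tools the research team builds; the video path and
# the wrist-to-nose metric are hypothetical examples.
import cv2
import mediapipe as mp
import numpy as np

mp_pose = mp.solutions.pose

def hand_to_face_distances(video_path):
    """Estimate a 3D pose per frame and return wrist-to-nose distances."""
    distances = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False, model_complexity=1) as pose:
        while True:
            ok, frame_bgr = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
            if results.pose_world_landmarks is None:
                continue  # no signer detected in this frame
            lm = results.pose_world_landmarks.landmark
            nose = lm[mp_pose.PoseLandmark.NOSE]
            wrist = lm[mp_pose.PoseLandmark.RIGHT_WRIST]
            distances.append(np.linalg.norm(np.array(
                [nose.x - wrist.x, nose.y - wrist.y, nose.z - wrist.z])))
    cap.release()
    return distances

# Comparing the average distance across recordings from different signer
# cohorts would give one crude measure of whether signs move toward the face.
print(np.mean(hand_to_face_distances("cohort1_signer.mp4")))
```

Averaging such a metric within each generation of signers, and comparing across the four generations in the database, is one simple way a claim like “signs move toward the face” could be tested quantitatively.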
Ultimately, these tools could aid in developing automated sign-language recognition, promoting accessibility for deaf and hard-of-hearing people, and in building automated systems for recognizing and classifying human gestures. In addition, Dye says that deaf and hard-of-hearing students will participate in the research, helping to increase the diversity of the nation’s scientific workforce.
“We are fortunate that our study enables us to utilize the visual nature of sign language to gain a greater understanding of how all languages may evolve,” adds Dye.
###
Co-principal investigators on the project are Corrine Occhino, research assistant professor at NTID; Andreas Savakis, professor, RIT’s Kate Gleason College of Engineering; and Matt Huenerfauth, professor, RIT’s Golisano College of Computing and Information Sciences. The project is a collaboration with Naomi Caselli, assistant professor, Boston University, and Norm Badler, professor, University of Pennsylvania.
For more information, contact Vienna McGrain at 585-475-4952 or Vienna.Carvalho@rit.edu.