Spoken Language Rules Work In Signed Communication, Too
Language is language, regardless of how you communicate. A new study by Iris Berent, a professor of psychology and linguistics at Northeastern University, demonstrates that the same structural rules govern communication whether it happens through speech or sign.
Basically, people adhere to certain patterns for what’s permissible in language and reject structures that “seem wrong.” By observing that research subjects with no knowledge of sign language mapped the rules of spoken language onto signs they were shown, researchers learned that ingrained rules play a bigger role than previously thought.
Even before this study, sign language was known to have its own grammar, with rules for pronunciation, word order, and usage. Beyond that, American Sign Language (ASL) has a very different vocabulary and set of rules from sign languages used in other countries, and there are different regional accents and dialects within one country, just like in spoken language.
Berent said that her research aimed “to reveal the complex structure of sign language, and in so doing, disabuse the public of this notion [that sign language is not really a language].”
How do you research language without taking the time to make people learn a whole new language?
Berent’s lab approached the problem by focusing on words and signs that had the same basic structure. Then, they extended that structure to meaningless sounds and signs. The researchers showed signs with similar patterns to participants with no knowledge of sign language and asked the participants to rate whether certain patterns seemed to make sense.
The main pattern was doubling: words and signs containing a sound or sign that was repeated, such as the nonsense word slaflaf (with its repeated slaf), as opposed to slafmak.
If these words seem like they might be right at home in Dr. Seuss, you’re not far off the mark. These combinations are nonsensical, and participants in the study recognized that. The exciting thing for the researchers is that their participants recognized and reacted to this type of pattern in both speech and signs.
The subjects were asked to respond to signs the same way they would to words, judging whether they made sense in certain contexts. If a word was given as a name for a single object, people gave lower ratings to words with doubling than to ones without. For example, slaflaf got worse ratings than slafmak. Sure, they both sound like gibberish, but one sounds more likely to be a word. One exception: if subjects were given a word or sign with doubling and told that the doubling signaled plurality, they were more likely to give it a higher rating.
In short: people’s responses to specific forms change based on the linguistic context of those forms.
By finding that people with no knowledge of sign language reacted in the same way to both words and signs with similar patterns, Berent showed that the governing rules for spoken language and sign language aren’t as different as people may think.
Berent’s study shows that sign languages aren’t just based on things like the shapes of objects described in individual words: instead, they rely on abstract rules just like spoken languages do. The idea that the same mechanisms in the brain are at work for both spoken and sign languages is big news for neurologists, psychologists, and linguists alike. In the task of uncovering the mysteries of language, we’re just scratching the surface.