Seeing Signs: Language Experience and Handshape Perception

LANGUAGE: American Sign Language

Thousands of times every day you see signing around you. But what does it mean to "see" a sign? In this article we discuss the difference between "seeing" and "perceiving": you see with your eyes, but you perceive with your mind. In this special issue on "Science and the Senses" we address what happens in our minds when we "see" signs, and how our experiences growing up around sign language shape the way our minds make sense of visual experience. We report the results of an experiment that used synthetic signs produced via animation to investigate whether deaf and hearing signers who learned to sign at different ages perceive handshapes differently. All signers tested categorized the handshapes in a similar manner, but the groups differed in discrimination performance. Early learners of ASL tend to ignore meaningless phonetic variation in handshape tokens more than individuals who learned to sign later in life, potentially allowing faster and more efficient recognition of handshapes.

TRANSLATION

Chapter 1 Taking a Closer Look at a Complex Behavior: What we see when we see a sign 

Why does something that is so easy, like watching sign language, become so complicated when we look at the mechanisms involved? You're chatting with a friend, and the friend tells you she saw a white duck. Let's focus for a moment on that sign sequence, WHITE DUCK. You look at it and suddenly you are thinking about a color. But in the few hundred milliseconds that pass between seeing the sign and imagining the color, what happens in your mind? There are many intervening steps before you understand what you saw. Let's take a closer look at this complex process. There are at least three mental steps before you know what the sign means: Perception, Segmentation, and Lexical Access. Let's consider each of these in a bit more detail.

The first step in comprehending a signed utterance is perception. How does perception work? Let’s return to our example. Suppose you see someone sign WHITE DUCK. A closer look at the sign WHITE may reveal that the phonetic realization of the sign was not entirely canonical. It’s possible, for example, that a variant of the standard handshape was used in which the pinky was raised instead of contacting the thumb with the other fingers. Your eyes are perfectly capable of seeing this type of variation in the handshape. But what does your mind do in this case? Your mind maps the visual input to a perceptual category, essentially eliminating specific visual details while preserving a schematic representation of the handshape that fits prior phonological knowledge. 
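To make this mapping concrete, here is a minimal sketch in Python of perception as nearest-category matching. The two feature dimensions and the prototype values are invented for illustration; they are not measurements of actual handshapes, and real perception operates over far richer input.

    import numpy as np

    # Hypothetical feature vectors: each handshape is reduced to two
    # phonetic dimensions (finger-thumb aperture, pinky extension).
    # The prototype values below are invented for illustration.
    PROTOTYPES = {
        "Flat-O": np.array([0.9, 0.1]),  # fingers contact thumb, pinky down
        "8":      np.array([0.2, 0.8]),  # middle finger bent to thumb, pinky up
    }

    def perceive(token):
        """Map a raw phonetic token to the nearest stored category,
        discarding the token's specific visual details."""
        return min(PROTOTYPES, key=lambda c: np.linalg.norm(token - PROTOTYPES[c]))

    # A non-canonical WHITE with the pinky slightly raised still maps to Flat-O.
    print(perceive(np.array([0.8, 0.35])))  # prints: Flat-O

The point of the sketch is that the output preserves only the category label: the raised-pinky variation is seen by the eyes but discarded by the mind.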

The next step in comprehending a sign sequence is called segmentation. If you see someone sign WHITE DUCK, do you see two separate signs? No; in natural conversation we flow from one sign to the next. There are no pauses between the signs, so we have to decide in our minds where the sign WHITE is ending and where the next sign, DUCK, is starting. The mind analyzes one fluid, continuous motion, all captured by our eyes, and inserts boundaries into the fluid signal so that we're left with the impression of having seen separate signs.
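As a rough illustration of boundary insertion, the sketch below segments a toy hand-velocity trace at local minima, on the assumption that signs tend to be separated by momentary slowdowns. This is only an analogy for the mental process, not a model the study tested, and the trace values are invented.

    import numpy as np

    def segment_boundaries(velocity, threshold=0.15):
        """Insert a boundary wherever hand velocity dips to a local
        minimum below the threshold, a crude stand-in for the mind's
        segmentation of a continuous signal."""
        boundaries = []
        for t in range(1, len(velocity) - 1):
            if (velocity[t] < threshold
                    and velocity[t] <= velocity[t - 1]
                    and velocity[t] <= velocity[t + 1]):
                boundaries.append(t)
        return boundaries

    # Toy velocity trace for a fluid WHITE DUCK sequence (arbitrary units).
    trace = np.array([0.6, 0.8, 0.5, 0.1, 0.5, 0.9, 0.7])
    print(segment_boundaries(trace))  # prints: [3], one boundary between signs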

A final step in comprehension that we'll consider is lexical access. Suppose you are watching someone initiate the sign sequence WHITE DUCK. As soon as you have seen the beginning of the sign WHITE, you start to search through your mental lexicon for all the possible signs it could be. Is it MY? COMPLAIN? PLEASE? WHITE? The sign continues as you consider these possibilities. With more visual input you can eliminate some of the signs under consideration, gradually narrowing the set of lexical competitors until you have just one possible sign left. Generally, signers have already figured out what the sign is before it is even complete. This process of searching the lexicon and considering possible signs prior to recognition is called lexical access.
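The sketch below illustrates this narrowing with a four-sign toy lexicon. The coarse visual cues and the stored forms are invented for illustration; real lexical access operates over much richer phonological detail.

    # Toy lexicon: each sign is stored as a sequence of coarse visual cues.
    # Cue labels and forms are invented for illustration.
    LEXICON = {
        "MY":       ["flat-hand", "chest", "contact"],
        "COMPLAIN": ["claw-hand", "chest", "contact"],
        "PLEASE":   ["flat-hand", "chest", "circle"],
        "WHITE":    ["flat-hand", "chest", "pull-away"],
    }

    def narrow_cohort(cues_seen):
        """Return every sign whose stored form is consistent with the
        visual input seen so far."""
        return [sign for sign, form in LEXICON.items()
                if form[:len(cues_seen)] == cues_seen]

    print(narrow_cohort(["flat-hand"]))
    # prints: ['MY', 'PLEASE', 'WHITE']  (COMPLAIN is eliminated)
    print(narrow_cohort(["flat-hand", "chest", "pull-away"]))
    # prints: ['WHITE']  (only one competitor left before the sign ends)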

So your mind is very busy in those few hundred milliseconds between "seeing" a sign and "understanding" what it means. We have described these three steps as though they occur one after another, but it is possible that they unfold simultaneously. In the rest of this article, we will focus on just the first of these three steps: Perception.

Chapter 2 The mind’s role in perception 

Recall the distinction that we drew between "seeing" with our eyes and "perceiving" with our minds. In this chapter, we'll explore this difference. Consider a continuum of gradually changing handshapes whose endpoints are two contrasting handshapes, such as the Flat-O handshape and the 8-handshape. Signers are perfectly capable of seeing all the minor variations in form between these two endpoints. But when we perceive signs, we don't attend to all that variation. Instead, we impose a category structure, in this case two categories: one for /Flat-O/ and one for /8/. But where do these categories come from?
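Before turning to that question, such a continuum can be pictured as a set of evenly spaced blends between the two endpoints. The sketch below builds one, treating each handshape as a small feature vector; the vectors and the nine-step count are assumptions for illustration, not the actual stimulus parameters.

    import numpy as np

    # Invented endpoint feature vectors (e.g., aperture, pinky extension).
    flat_o = np.array([0.9, 0.1])
    eight  = np.array([0.2, 0.8])

    def continuum(a, b, steps=9):
        """Return `steps` handshape tokens grading evenly from a to b."""
        return [(1 - w) * a + w * b for w in np.linspace(0.0, 1.0, steps)]

    for i, token in enumerate(continuum(flat_o, eight), start=1):
        print(f"step {i}: {np.round(token, 2)}")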

One theory proposes that perceptual categories are built up on the basis of the distribution of phonetic experience acquired across the lifespan (e.g., Pierrehumbert, 2001). Categories are centered around the densest areas of perceptual experience, extending into more sparsely populated areas of perceptual space. While most theories agree that sub-lexical units like handshapes are stored as categories that group a set of possible tokens differing slightly in form, theorists do not agree on the nature of those categories. According to the view presented here, categories have internal structure: some exemplars are more central to the category, and others are more peripheral.

One piece of evidence for the graded structure of categories is called the Perceptual Magnet Effect (Kuhl, 1991). Kuhl found that peripheral exemplars of a phoneme category tend to be perceived as equivalent to more central members of the category. She proposes that the central members, which are more frequent and more prototypical of the category, attract the peripheral members in perceptual space. We don't just see handshapes; our experience with language forms the categories in our minds that drive the way we perceive handshapes, such that peripheral tokens seem to us to be more like central tokens than they actually are. Clearly, then, experience shapes perception. But each of us differs in our range of experience seeing signs. Does this mean that we also each have different perceptual categories? We explored this research question in an empirical study of handshape perception.
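One common way to model the magnet effect is as a warping of perceptual space: each token is pulled toward the prototype, and the pull is strongest for tokens already near it, so distances near the category center shrink. The sketch below implements that idea on a one-dimensional continuum; the Gaussian pull function and its constants are our assumptions, not parameters from Kuhl's study.

    import numpy as np

    def perceived(token, prototype, strength=0.6, width=0.25):
        """Warp a 1-D token toward the prototype, with a pull that
        fades (Gaussian) as the token gets farther from the prototype."""
        pull = strength * np.exp(-((token - prototype) ** 2) / (2 * width ** 2))
        return token + pull * (prototype - token)

    proto = 0.0  # category center on a 1-D handshape continuum
    for x in [0.1, 0.3, 0.6, 1.0]:
        print(f"physical {x:.1f} -> perceived {perceived(x, proto):.2f}")
    # Tokens near the prototype are pulled in strongly (0.1 -> 0.04);
    # distant tokens are barely affected (1.0 -> 1.00).

Because nearby tokens collapse toward the prototype, pairs of tokens near the category center become harder to tell apart, which is exactly the reduced discriminability Kuhl observed.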

Experimental Method. To explore whether signers have similar or different perceptual categories of handshape, we recruited three groups of signers with different background characteristics (see Table 1). One group of signers was deaf and had been exposed to ASL from birth; a second group was deaf and had been exposed to ASL since adolescence; a third group was hearing and had acquired ASL as a second language in early adulthood. The two groups of deaf signers had been using ASL for a similar number of years even though they started learning ASL at different ages.

Participants completed two tasks: Identification and Discrimination. In the identification task, participants were seated in front of a computer. They saw a target sign and had to make a forced choice between two possible responses. For example, they might see the two sign glosses WHITE and LIKE, which label signs differing only in handshape (/Flat-O/ vs. /8/). They were then instructed to select the response that best matched the target sign. Targets consisted of synthetic signs, as shown in Figure 1. The discrimination task involved watching a series of three synthetic signs: a target sign followed by two response options, as in Figure 2. Participants had to decide whether the second or the third sign was the same as the initial target sign.
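For readers who want the trial structure spelled out, here is a minimal simulation of one such match-to-sample discrimination trial. The continuum steps, the noise model, and the fixed position of the matching option are simplifying assumptions, not the experiment's actual design parameters.

    import random

    def perceive_step(step, noise=0.5):
        """A participant's noisy percept of one continuum step."""
        return step + random.gauss(0.0, noise)

    def discrimination_trial(target, same, different):
        """Show target X, then two options; respond with whichever
        option looks closer to X. Returns True if correct (the
        matching option is fixed in the first position here)."""
        x = perceive_step(target)
        a, b = perceive_step(same), perceive_step(different)
        return abs(x - a) <= abs(x - b)

    random.seed(1)
    results = [discrimination_trial(target=4, same=4, different=6)
               for _ in range(1000)]
    print(f"accuracy: {sum(results) / len(results):.2f}")

If the percepts were first warped by a magnet-style pull, as in the earlier sketch, pairs near the category center would become harder to tell apart, which is precisely the kind of group difference the discrimination task is designed to detect.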

Results. Turning to the results, we consider first the identification task. We did not find an effect of experience on the categorization of handshapes in the identification task. The three groups responded similarly on this task despite different histories of exposure to ASL. This indicates that deaf native signers, deaf non-native signers, and hearing second language signers all group tokens of handshapes into similar categories. But what about the internal structure of those categories? Is it also the same? On the discrimination task, we did find differences in performance. Deaf native signers tended to perceive handshape tokens outside the central area of the category as more like category prototypes than they actually were. Deaf non-native signers showed a somewhat mitigated effect of prototypes, but only on those tokens that were still fairly close to the category center, while peripheral tokens were not attracted to the center. Hearing second language signers showed the weakest effects of category structure on perception. Although they categorized handshape tokens in the same way as the other two groups, they continued to distinguish handshapes easily across the central and peripheral regions of the category (cf. Morford et al., 2008).

Chapter 3 Conclusions: Perceptual categories are structured by language exposure 

Individuals of all ages can and do learn to sign fluently. But you may notice that sometimes you are signing with someone who will ask you to repeat yourself. One explanation for these breakdowns in comprehension may be the way perceptual categories influence language processing. Our study has shown that individuals who are exposed to ASL from birth and grow up constantly exposed to sign language develop perceptual categories that are finely tuned to the distribution of handshapes in ASL, and that this experience actually facilitates the detection of the handshapes of ASL according to their functional value in the language. By contrast, individuals who are first exposed to ASL later in life experience fewer benefits of perceptual experience during handshape discrimination. Thus, when seeing a sign that differs only minimally from another sign in handshape, a non-native signer may need to be more attentive to the phonetic realization of the sign to be sure they have recognized it correctly, while native signers recognize signs more efficiently and with more automated processing of the phonetic form. One implication of these research results is that it is critical to encourage early exposure to a signed language. Early exposure impacts the smallest details of how we "see" signs.

Acknowledgements: 

We thank the participants of our research, as well as Sarah Hafer for help in collecting data, Angus Grieve-Smith for the development of synthetic signer Genie, and James MacFarlane and Gabriel Waters for contributions to data analysis. Special thanks to Jo Santiago and Melissa Malzkuhn for AV support. 

This research was supported by NIH Grant R03 DC03865 to Jill P. Morford, and by the National Science Foundation Science of Learning Center Program, under cooperative agreement number SBE-0541953. Any opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of the National Institutes of Health or the National Science Foundation.

REFERENCES

  • Grieve-Smith, A. B. (2002). SignSynth: A Sign Language Synthesis Application Using Web3D and Perl. In I. Wachsmuth & T. Sowa (Eds.), Gesture and Sign Language in Human-Computer Interaction (pp. 37-53). Heidelberg: Springer.
  • Kuhl, P. (1991). Human adults and human infants show a ‘perceptual magnet effect’ for the prototypes of speech categories, monkeys do not. Perception & Psychophysics, 50, 93-107.
  • Morford, J. P., Grieve-Smith, A. B., MacFarlane, J., Staley, J., & Waters, G. S. (2008). Effects of language experience on the perception of American Sign Language. Cognition, 109, 41-53.
  • Pierrehumbert, J. B. (2001). Exemplar dynamics: Word frequency, lenition and contrast. In J. Bybee & P. Hopper (Eds.), Frequency and the Emergence of Linguistic Structure (pp. 137-157). Amsterdam: John Benjamins.