CHAPTER 6 GENERAL DISCUSSION

6.1 Summary of findings

The purpose of the present study is to investigate how viewpoint can be jointly expressed by language and embodied gesture in descriptions of third-person past events in conversational contexts. In addition, following McNeill's (1992) notion that language and gesture are co-expressive of viewpoint, the present study also examines whether gesture collaborates with the accompanying speech in expressing the same or a different viewpoint. In answering these questions, the present study provides a multi-modal account of viewpoint, in which we can see not only the linguistic representations of viewpoints but also the gestural ones. In this study, we have identified three viewpoints that might arise from descriptions of third-person past events in ongoing conversations: speaker, observer, and character viewpoint. In terms of the linguistic representations of viewpoints, speakers can make use of various linguistic structures or paralinguistic devices to represent the three viewpoints. With regard to the gestural representations, five gestural features (gestural space, handedness, stroke duration, frequency, and the involvement of other body parts) are identified as criteria indicative of the three viewpoints.

In addition to the qualitative study, this study presents a quantitative study of the linguistic and gestural viewpoints produced by speakers talking about third-person past events in conversational contexts. The distribution of the linguistic representations of each type of viewpoint suggests that observer viewpoint is the one most often adopted (60.5%, see Table 1). Speakers usually talk about events in the role of an outside-the-event observer by making a plain statement, neither indicating to other co-conversationalists their current status as a speaker in the conversation nor re-enacting the roles of characters in the original event.

Character viewpoint, in contrast, is rarely seen in speech when speakers talk about third-person past events (5.9% in Table 1). This suggests that speakers seldom use language to act as a character in a past event by enacting the character's speech or thoughts.

With respect to the quantitative study of gestural viewpoints, the distribution of each viewpoint suggests that character viewpoint, the one least frequently expressed in language, is the one most often expressed in gesture (52.9%, see Table 9). While a speaker talking about third-person past events often shows concern for the ongoing conversation and reveals his/her here-and-now status as a speaker, suggesting that speaker viewpoint is being expressed in speech, the speaker rarely uses a speech-accompanying gesture to do so (3.4% in Table 9). That is, speakers rarely make use of gesture to indicate that they are engaging in a conversation with other co-conversationalists while at the same time talking about others' past events. Observer viewpoint, the most unmarked way of representing third-person past events, is common in both language and gesture.

From the quantitative analyses of linguistic and gestural viewpoints, the respective distribution of each viewpoint in each modality suggests a division of labor: certain viewpoints tend to be expressed through one channel rather than the other. In addition, the different distributional patterns in each modality also imply that speech-accompanying gestures might not always collaborate with speech in conveying the same viewpoint.

The combined quantitative study of the collaborative expression of linguistic and gestural viewpoints in descriptions of the same event further suggests that 64.7% of all gestures produced in the current data convey viewpoints different from those conveyed in the accompanying speech. Mismatching expressions of viewpoint in language and gesture thus occur more frequently than matching expressions. This finding not only shows how speech and gesture collaborate to represent viewpoints in talk about third-person past events, but also points to the cognitive process underlying both the linguistic and the gestural channel when people communicate. In the following sections, two hypotheses of gesture production, the Lexical Semantics Hypothesis and the Interface Hypothesis, will be brought in to explain the findings of the current study.
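To make the match/mismatch computation concrete, the following is a minimal sketch of how such proportions could be tallied from annotated speech-gesture pairs. The labels and toy data here are invented for illustration; they are not the study's actual annotations or figures.

```python
from collections import Counter

# Hypothetical annotated pairs: each tuple is
# (linguistic viewpoint, gestural viewpoint) for one described event.
# Labels: "S" = speaker, "O" = observer, "C" = character viewpoint.
pairs = [
    ("O", "C"), ("O", "O"), ("S", "C"), ("O", "C"),
    ("C", "C"), ("O", "O"), ("O", "C"), ("S", "O"),
]

# Distribution of viewpoints within each modality.
speech_dist = Counter(lv for lv, _ in pairs)
gesture_dist = Counter(gv for _, gv in pairs)

# Proportion of gestures whose viewpoint differs from the speech viewpoint.
mismatches = sum(1 for lv, gv in pairs if lv != gv)
mismatch_rate = mismatches / len(pairs)

print(speech_dist)    # counts per linguistic viewpoint
print(gesture_dist)   # counts per gestural viewpoint
print(mismatch_rate)
```

In this toy sample, observer viewpoint dominates the speech annotations while character viewpoint dominates the gesture annotations, mirroring the asymmetry reported above, and 5 of the 8 pairs mismatch.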

Theories of gesture production offer different hypotheses in an attempt to explain how gestures are processed and how this process relates to speech production.

Three hypotheses related to this issue (the Free Imagery Hypothesis, the Lexical Semantics Hypothesis, and the Interface Hypothesis) each take a view on how gestures are informationally related to the accompanying speech and on the level at which the content of a gesture is determined.

In discussing the collaborative expression of viewpoints in language and gesture in the current study, that is, how speech-accompanying gesture collaborates with language in expressing viewpoints, we are in fact exploring how language and gesture organize and structure the information concerning third-person past events. In either language or gesture, how speakers organize and structure the information reflects the ways speakers can choose to represent past events, or the perspective from which the events are seen by the speakers, which is indeed how we define the notion of viewpoint. In other words, speakers' different ways of organizing and structuring the information concerning third-person past events in either the linguistic or the gestural channel indicate that different viewpoints are being represented. For example, when the past event concerns a character's speech, structuring it as direct speech or as indirect reported speech in language might yield different linguistic viewpoints. Likewise, different gestural viewpoints might be represented when the information is structured into an embodied gesture with different configurations of gestural features. The discussion of the collaborative expression of viewpoints in language and gesture therefore hinges on how we explain the way language and gesture coordinate with each other in organizing and structuring the information concerning third-person past events. The Lexical Semantics Hypothesis and the Interface Hypothesis provide different explanations of this informational coordination between speech and gesture in expressing viewpoints. The Free Imagery Hypothesis is excluded from the general discussion because its prediction of how gesture coordinates with the accompanying speech in expressing viewpoints is not relevant to the current data.

In the following sections, a comparison of the current study with McNeill's gestural study of viewpoints will first be presented. Then we will consider the theoretical accounts provided by the Lexical Semantics Hypothesis and the Interface Hypothesis. Finally, we will see how the current data provide evidence compatible with one of these hypotheses, thereby supporting one of the theories of gesture production.