

Chapter 3 Research Method

3.2 Speech Materials

3.2.1 Speech text preparation and pilot test

In the scenario described above, there were three different versions of the panelist's answer.

The practice speech in the study was a shorter response lasting 47 seconds; the slower speech lasted two minutes and was delivered at a slower speech rate; the faster speech, also two minutes long, was delivered at a faster pace. The transcripts of the three speeches used in the experiment were adapted from authentic speeches.

As reviewed in Chapter 2, speeches with different speech rates (SRs) were chosen in prior studies as fast speech rate (FSR) and slow speech rate (SSR) speech materials in simultaneous interpreting (SI) experiments.

Since the purpose of the study was to find out whether student interpreters would use different note-taking strategies when coping with different input speech rates, the faster speech in the experiment was originally designed not to be so fast that the subjects could not handle it at all. Thus, for the FSR speech, the researcher intended to set the SR at a level where the subjects could still manage to keep up while remaining aware of the faster speed, so that they could consciously or subconsciously adopt coping strategies without completely surrendering to the speed. The researcher was aware that the descriptions proposed by Setton and Dawrant (2016b)

included professional interpreters, whose interpreting skills and experience may exceed those of student interpreters. However, Boéri and de Manuel Jerez (2011, p. 56) observed that interpreting students could "work quite comfortably at intermediate-advanced stages of both consecutive and simultaneous training with speeches ranging between 120 and 140 wpm, and even more comfortably at the end of their training." Since student interpreters should be capable of handling the "moderate" SR after at least an intermediate level of training, Setton and Dawrant's SR levels should also apply to them. As the subjects of this study were master's students in conference interpreting who had received at least one and a half semesters of training in consecutive interpreting (CI), they represent interpreting students at "intermediate-advanced stages" fairly well. Based on the analysis above, the version-one faster speech in this study was set at 150 wpm, within the range of a "challenging" speed.

After the version-one speech materials were prepared, the researcher conducted a pilot test of the data collection procedure, including both the experiment and the retrospective interview. The purpose of the pilot test was, first, to determine whether the procedures were clear to the participants; second, to determine whether the speech materials were suitable for generating the data needed to answer the research questions; third, to check whether the questions asked during the retrospective interview were unambiguous and comprehensive; and last, to calculate the time needed to complete the data collection.

Three student interpreters, who were in the first, second, and third year of their graduate studies in translation and interpretation respectively, were invited to take part in the pilot test. The third-year student did not meet the inclusion criteria for this study, but the researcher included her in the pilot test in order to obtain feedback from a more advanced student who had already received two years of CI training in a graduate program of translation and interpretation.

As can be seen in Table 3.1, the version-one FSR speech was delivered at 150 wpm, which falls under the category of challenging speech rates for interpreters according to Setton and Dawrant (2016b). In order to produce statistically significant results, the wpm gap between the two speeches was set to be as wide as possible. Therefore, the version-one slower speech was set at 100.5 wpm, almost the slowest SR in the category of easy SRs.
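The delivery rates discussed here are simply word count divided by duration in minutes. As a minimal sketch (the word counts below are illustrative, not the study's actual transcript figures):

```python
def words_per_minute(word_count: int, duration_seconds: float) -> float:
    """Delivery rate in words per minute: words divided by minutes."""
    return word_count / (duration_seconds / 60)

# Illustrative only: a two-minute speech containing 201 words
# corresponds to the version-one slower rate of 100.5 wpm, and
# 300 words in two minutes to the version-one faster rate of 150 wpm.
print(words_per_minute(201, 120))  # 100.5
print(words_per_minute(300, 120))  # 150.0
```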

Table 3.1 The difficulty level of version-one and version-two speech materials

wpm                          100-120          120-140          140-160          >160
difficulty level according
to Setton and Dawrant
(2016b)                      easy             moderate         challenging      difficult
Version one                  SSR (100.5 wpm)                   FSR (150 wpm)
Version two                                   SSR (134.5 wpm)                   FSR (164 wpm)

In the pilot test, the participants responded that they believed the FSR speech was not fast enough for this study. When interpreting the version-one FSR speech, they did not have to implement any note-taking strategies different from those used for the SSR speech, and they felt that the SR difference between the two speeches did not lead to any differences in their output performance. After discussing with the advisor and the pilot test participants, the researcher listed two reasons why the SRs in the version-one speeches had to be modified. First, the content of the two version-one speeches was not difficult to understand; thus, even though the FSR fell under the category of challenging speeds, the overall difficulty level was not that high. However, due to the limited scope of this study, it was not feasible to ask the participants to prepare for a more technical speech before they came to the experiment. In addition, if the speeches were designed to contain more difficult words and sentence structures, it would be hard to determine whether an error or omission in the interpreting output was due to the FSR or to difficulty in understanding the content of the speech. Second, if an interpreter only has to interpret one two-minute fast-paced segment, instead of many fast segments in a row as is often the case in real-life CI settings, the SR is less likely to pose much difficulty, since the interpreter's energy level remains high enough for attention to shift quickly between the Listening and Analysis Effort and the Note-taking Effort. The researcher therefore hypothesized that the difficulty levels proposed by Setton and Dawrant (2016b) apply more to continuous interpreting tasks than to a controlled interpreting experiment such as this one, in which participants would only interpret one or two segments.

Since the version-one faster speech, although within the challenging speech-rate category, was not considered that challenging given this speech material and experiment design, the version-two faster speech was set at a difficult speech rate (see Table 3.1). The version-two faster speech was 164 wpm, slightly over the threshold between challenging and difficult speech rates. As for the slower speech, the "easy" speech rate according to Setton and Dawrant (2016b) did not have to be tested in the experiment, for the following reason. The version-one slower speech, at 100.5 wpm, was clearly considered easy, as shown by the fact that the first two pilot participants found the difficulty levels brought about by the two speech rates very similar. Sawyer (2004, p. 110) proposed three levels of competency in interpreting: "novice," "advanced beginner," and "competent." Students at the novice level are those qualified to start learning in the program; advanced beginners are at an intermediate level after receiving a certain amount of training; and reaching the competent level means the students are competent enough to enter the job market.

The formal and pilot participants in this study were student interpreters who had taken one and a half and three and a half semesters of CI courses in their programs, respectively. This indicates that they were not novices, but advanced beginners in CI note-taking.

Some were even close to the competent level. Therefore, they did not have to be tested on the SR considered easy, which even untrained novices could possibly handle very well. The version-two SSR was therefore set at a "moderate" rate, which was 134.5 wpm in this case. The FSR was set at a rate only slightly higher than the threshold between "challenging" and "difficult" speeches, which is 160 wpm.

After the version-two speech materials were produced, the researcher asked the same speaker who had recorded the first versions to record them. The researcher then asked two student interpreters in the third year of graduate study to listen to and evaluate the speech materials. Both responded that they found the FSR fast enough to impel them to adopt strategies when doing CI, but not so fast that chunks of information would be completely dropped. The second versions were thus adopted as the final speech materials for the formal experiment.

Since the independent variable in the study is SR, the difficulty level of the speech materials must be consistent. Four measures were taken to maintain the consistency of the difficulty level of the speeches (Table 3.2).

Table 3.2 Difficulty assessment of the speech materials

                                Practice speech     Slower speech       Faster speech
Dale-Chall readability          fairly difficult    fairly difficult    fairly difficult
Average words per sentence      17.00               16.69               16.55
Propositions/words              31.06%              33.09%              33.94%

The Flesch-Kincaid reading-ease test and the Dale-Chall readability test were both used to evaluate how difficult the speeches were to understand. The results showed that the three speeches had similar scores in both readability tests. Next, the average number of words per sentence in each speech was calculated, since sentence length is an indicator of difficulty level. Dam (2001) stated that a longer sentence tends to contain more information and is usually syntactically more complex, making it harder to process. Lai (2010) also reported that longer sentences seemed to cause difficulty for sight translators in anticipating upcoming messages, possibly because longer sentences have a higher information density. The results showed that the three speeches all had similar sentence lengths. Last, the proposition density of each speech was calculated. The propositions in each speech were listed according to the instructions proposed by Bovair and Kieras (1985), and for each speech the proposition density was calculated by dividing the number of propositions by the total word count. The three speeches scored similarly on proposition density, with the practice speech having a slightly lower density. Based on the four measurements, the difficulty levels of the three speeches can be considered very similar.