
Error Analysis in Language Teaching

Lado’s (1957) Contrastive Analysis Hypothesis, which became popular in applied linguistics during the 1960s, applied structural linguistics to language teaching and hypothesized that second language learners’ difficulties arose from interference from their native language. It was thought that if the two languages could be compared and analyzed, learners’ errors could be predicted and clarified. However, in the late 1960s and early 1970s, contrastive analysis began to fall out of favor with linguists and researchers in second language acquisition because the theory could not account for, explain, or predict many errors made by second language learners (Richards, Platt, & Platt, 1992).

In response to this problem, Corder (1967), in his seminal paper, “The Significance of Learners’ Errors,” proposed the basis for what would become the field of error analysis. One of the most important contributions of the paper was his differentiation of the terms error and mistake, defining “errors of performance as mistakes [and] reserving the term error to refer to the systematic errors of the learner from which we are able to reconstruct his knowledge of the language…” (Corder, 1967, p. 167). Before this important distinction was made, errors and mistakes were seen as interchangeable and were to be avoided at all costs.

Equally important, Corder delineated the value of learners’ errors. He noted that errors are significant to the teacher, because they give him a benchmark by which to measure students’ progress; to the researcher, because errors provide him with “evidence of how language is learned or acquired”; and to the learner, who can use errors to improve his language ability by testing his hypotheses regarding the new language and then adjusting them accordingly (Corder, 1967, p. 167).

Another important early contribution to the field of error analysis was made by Selinker (1972) in his influential paper entitled “Interlanguage,” where he theorized that there exists a “psychological structure… latent in the brain, [which is] activated when one attempts to learn a second language” (Selinker, 1972, p. 211). Starting from this “latent psychological structure,” Selinker notes that the target language utterances of non-native speakers are generally different from those of native speakers. He therefore posits that separate linguistic systems are at work when adults attempt to produce a second language (p. 214). If true, this has a direct bearing on error analysis in translation, as learners with the same native language should produce similar, predictable errors in the target language.

Selinker further postulates the concept of interlanguage, claiming that approximately 95% of learners speak an imperfect intermediary language with characteristics of both their native language (L1) and the foreign language (L2) they are trying to learn. At any point in the process of language acquisition, learners may fossilize errors, backslide into incorrect renditions when new material is introduced, overgeneralize language rules they have learned, develop their own (incorrect) rules, and even cease learning. Selinker asserts that “Many IL linguistic structures are never really eradicated for most second-language learners…” (Selinker, 1972, p. 221). Assuming this is the case, identifying specific errors for specific groups of students should be immensely beneficial to both teachers and students.

Although error analysis has contributed much to the fields of applied linguistics and second language acquisition, it has also been criticized for its imprecision. In the words of Huang (2002), “…very often classes of errors overlap, and occasionally some errors simply do not lend themselves to a clear-cut categorization. There seems to be no ideal model of classification… All models leak, in one way or another” (p. 29).

Although there is no perfect error typology, researchers must still strive to develop an error classification system for their working language that is both feasible and objective. Various error classification systems for translation have been proposed.

Pym (1992) states that “errors may be attributed to numerous causes… located on numerous levels…[and] the terms often employed to describe such errors…lack commonly agreed distinctions or fixed points of reference” (p. 282). Pym therefore drew a distinction between error types, categorizing them as binary and non-binary. Pym writes, “For binarism, there is only right and wrong; for non-binarism there are at least two right answers and then the wrong ones” (1992, p. 282). This important distinction has been incorporated into the current study, which begins by categorizing student translation errors as either stylistic or linguistic errors.
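As an illustration only, the following minimal Python sketch shows one way an error record could encode both Pym’s binary/non-binary distinction and the stylistic/linguistic split used as the starting point of the current study; all class and field names are hypothetical and do not represent the finalized typology described in Chapter 3.

from dataclasses import dataclass
from enum import Enum

class PymType(Enum):
    BINARY = "binary"          # only one acceptable rendering; everything else is wrong
    NON_BINARY = "non-binary"  # several acceptable renderings in addition to the wrong ones

class ErrorClass(Enum):
    LINGUISTIC = "linguistic"  # e.g., verb tense, article, or preposition errors
    STYLISTIC = "stylistic"    # e.g., grammatical but awkward or unidiomatic renderings

@dataclass
class TranslationError:
    tag: str            # fine-grained label, e.g., "verb tense" (hypothetical)
    pym_type: PymType
    error_class: ErrorClass
    excerpt: str        # the offending span in the student's target text

# Example: a verb tense error is binary (clearly right or wrong) and linguistic.
example = TranslationError("verb tense", PymType.BINARY, ErrorClass.LINGUISTIC,
                           "He go to school yesterday.")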

Notable Taiwanese studies involving English-to-Chinese error analyses of translations, including Chen (1999), Her (1997), Lai (2009), and Liao (2010), typically use Pym’s error taxonomy as a starting point and then fine-tune the error categories according to their needs. Unfortunately, few studies have been done concerning translation error analysis of Chinese-to-English texts, especially in Taiwan and with regard to English language pedagogy. The most relevant ones are outlined below.

Student Translations and Error Analysis

Lim (1995) analyzed 200 Chinese-English translation exercises done by high-school students of ethnic Chinese origin in Singapore. Errors were classified into three categories: (1) syntactic errors, (2) vocabulary errors, and (3) semantic errors. Syntactic errors were found to be the most problematic at 55.84%, and the main problems were misuse (53.98%), omission (25.74%), awkwardness (11.24%), and redundancy (9.04%). Semantic errors accounted for 22.25%, with the main errors consisting of misinterpretation (56.72%) and modified or partially misinterpreted meanings (43.28%). Vocabulary errors ranked last at 21.91%, with nearly all errors being inappropriate word usage (97.28%) (pp. 477-478).

Chen (1999) conducted a preliminary study that examined a variety of student translation errors, including referential meaning, specificational meaning, relational meaning, collocation, diction, variety, mood, style, ambiguity, figurative meaning, tone, and language interference. Chen concludes with suggestions as to how students can improve their translations.

Pan (2011) recommends that student translations pass through two stages: first, the comprehension phase, where students focus on breaking the text into logical parts and identifying subjects and verbs, and then the reformulation phase, where students draw on English structure and vocabulary to produce their translations.

A 2011 study at Beijing University of Posts and Telecommunications analyzed the English translation errors of two groups of students: a control group and an experimental group. Errors were classified into 14 distinct categories. The experimental group was made aware of their errors, while the control group was not. For the control group’s final assignment, the top three types of errors committed were vocabulary misuse (15.52%), verb tense errors (13.79%), and article errors (12.93%). The experimental group’s most common errors were verb tense errors (14.86%), vocabulary misuse (14.19%), and Chinglish (12.84%) (Zhang & Wang, 2011). This study suggests that giving students feedback and making them aware of their errors can improve translation performance and help eliminate at least some common mistakes. It therefore appears that individualized feedback is ideal in the translation classroom.

Wu (2014) analyzed the translations of 68 high school students from New Taipei City. A total of 16 sentences, 8 from the GSAT (大學學科能力測驗) and 8 from the Advanced Subjects Test (入學指定科目考試), were used. Wu found that students encountered problems with the present perfect, article use, the suffix –s, and prepositions. Wu also discovered that some of the challenges affecting high-scoring students were stressful test conditions, lack of collocation knowledge, and the use of word-for-word translation strategies.

Student Compositions and Error Analysis

Although Chinese-to-English translation and English writing may produce slightly different errors, because so few studies have been conducted concerning error analysis in Chinese-to-English student translations, and because Taiwanese students often equate writing in English with translating their Chinese thoughts into English, it is worth examining the many error analysis studies of English compositions written by Taiwanese students.

Chen (1979) studied 80 randomly selected English compositions from a collection of 632. The students in the study were all English majors studying at Kaohsiung Teacher’s College. Chen divided errors into three main categories: local errors (7 subcategories), global errors (7 subcategories), and miscellaneous errors. He found that the most common types of errors were verb errors (22.86%), noun errors (18.20%), determiner errors (11.36%), preposition errors (8.24%), and adjective errors (6.71%).

Seah (1980) analyzed the errors from the compositions of Vancouver Community College students studying at the English Language Training Department. A total of 27 Mandarin- and Cantonese-speaking students participated; 9 each were chosen from the elementary, intermediate, and advanced English classes. Seah categorized errors into four categories and found that verbs (23.3%) were the most common error type, followed by articles (16.4%), prepositions (16.1%), and word order (5.7%).

Chiang (1981) collected 1,589 essays from 732 students enrolled in the English Department at National Taiwan Normal University. Twenty compositions were collected from each year of the day division students, while ten compositions were collected from each year of the evening division students, for a total of 120 compositions. The sampling rate was 7.55% (p. 31). The compositions were then analyzed according to four main error categories: (1) lexical errors (3 subcategories) accounted for 4.76% of total errors, (2) grammatical/syntactic errors (30 subcategories) accounted for 80.64%, (3) semantic errors (15 subcategories) accounted for 13.09%, and (4) miscellaneous errors made up 1.5% of the total number of errors.

Of these error types, verb use errors (lexemes) were the most problematic at 9.55%, followed by article usage at 7.64%, prepositions and particles at 7.10%, verb tense at 6.84%, and nouns (number and countability) at 5.82%.

Horney (1998) analyzed the errors of 80 English compositions written by Taiwanese students in Taipei and Kaohsiung who had achieved a minimum score of 500 on the TOEFL. He divided errors into three main categories: local errors (7 subcategories), global errors (7 subcategories), and other errors (9 subcategories). He discovered that articles accounted for the greatest number of errors at 11%. Interestingly, he also found that students tended to omit “a” and to use “the” when no article was necessary. Moreover, “the” was used instead of “a” at a rate of 94%. Article errors were followed by preposition errors and verb usage errors, at 9% each. Noun errors and pronoun errors were also equal at 5%.

Kao (1999) analyzed the errors of 169 compositions written by 53 students majoring in English in Taipei, Taiwan. Of the participants, 22 studied at Soochow University and 31 were enrolled at Fu Hsing Kang College. Kao divided errors into three main categories: lexical errors (4 subcategories), grammatical errors (26 subcategories), and semantic errors (2 subcategories). The five most common error types, in order of frequency, were verb tense, rhetoric, spelling, punctuation, and number and countability of nouns.

Chen’s 2006 error analysis study of Taiwanese beginning English learner compositions examined the impact a multimedia tutorial would have on learning grammar. For both the control group and the experimental group, Chen found that the top five error types were verb usage, punctuation, lexicon, syntax, and capitalization. Verb usage errors ranked first for both groups: verbs (4.59%), punctuation (3.73%), and lexicon (2.52%) troubled the control group the most, while verbs (5.44%), lexicon (3.61%), and punctuation (3.41%) proved most difficult for the experimental group.

Research conducted in southern Taiwan by Yang (2006) employed error analysis to explore the differences in English compositions among 113 freshman, junior, and senior high school students. There were five main categories: lexical errors (9.5%), grammatical errors (64.8%), noun errors (12%), semantic errors (24.9%), and miscellaneous errors (0.7%). These five main categories were then parsed into 31 subcategories. Since there were only two subcategories for semantic errors, rhetoric errors (13.5%) and stylistic errors (11.4%), these two ranked as the first and second most common error types. Grammatical errors, on the other hand, were divided into 29 different subcategories, resulting in much lower percentages for each error. The top grammatical errors were conjunction errors (7.1%), number and countability of noun errors (6%), spelling errors (5.7%), and tense errors equal with adjective errors (5.5%).

A 2007 study in California analyzed the essays of native Mandarin Chinese speakers. The 27 participants were all graduate students at Azusa Pacific University, and the majority of them were from Taiwan. The study classified errors into 19 different categories. The most significant finding was that the main errors committed, namely conjunctions (13.1%), articles (11.4%), and prepositions (11.2%), were all specifically related to Mandarin Chinese interference (Chou & Bartz, 2007).

Although the manner in which the researcher classifies and defines error categories will ultimately have some bearing on the final outcome of a study, based on the findings above, it seems reasonable to assume that the current study will find that the Taiwanese participants also have difficulties with verb tense, articles, prepositions, vocabulary use, noun declension, and conjunctions. It also seems reasonable to hypothesize that individualized feedback will improve translation scores over the course of the study.

Error Analysis for Testing and Certification

The American Translators Association (ATA) Framework for Standardized Error Marking Explanation of Error Categories lists 23 different errors: addition, ambiguity, capitalization, cohesion, diacritical marks/accents, faithfulness, faux ami, grammar, illegibility, indecision, literalness, mistranslation, misunderstanding, omission, punctuation, register, spelling, style, syntax, terminology, unfinished, usage, and word form/part of speech (ATA, 2015). To score professional translations, the ATA uses an error-deduction method, in which deductions range from 2 to 16 points per error, together with a complicated flowchart.
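The ATA’s actual flowchart and point assignments are not reproduced here; the short Python sketch below merely illustrates the general logic of a deduction-based scoring method, with per-error point values and a passing threshold that are assumed for illustration rather than taken from the ATA framework.

# Illustrative deduction-based scoring; point values and threshold are assumptions,
# not the ATA's official figures.
def total_deductions(errors):
    """errors: list of (category, points) pairs, each worth roughly 2 to 16 points."""
    return sum(points for _, points in errors)

sample_errors = [("terminology", 2), ("mistranslation", 8), ("punctuation", 2)]
deducted = total_deductions(sample_errors)
PASSING_THRESHOLD = 17  # assumed cutoff for illustration only
print("Pass" if deducted < PASSING_THRESHOLD else "Fail", f"({deducted} points deducted)")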

Lai (2011), in “Reliability and Validity of a Scale-based Assessment for Translation Tests,” reports on previously used scales and offers a justification for the scoring scale used by the MOE for the Chinese and English Translation and Interpretation Competency Examinations (中英文翻譯能力檢定考試) from 2007 to 2011. According to Lai, the study on the scoring scale started with research by Liu (2005), who adapted her scale from Carroll (1966). Both scales took the sentence, and not the entire text, as the unit of measurement. Liu employed two 5-point scales (5/5 scales): readability and fidelity. Each sentence was assigned a score out of five for both categories, and then the results were tallied. However, Liu found that the readability scale was not as reliable as the one for fidelity (p. 714). In Lai’s study, Liu’s papers were re-analyzed and scored four different ways: with Liu’s scales, with error analysis, with 5/5 scales, and with 6/4 (accuracy and expression) scales. Lai concluded that “accuracy was the more valid measure of translation ability” and that, for the correlations between accuracy and expression, “both 5/5 scales (0.724) and 6/4 scales (0.751) were higher than Liu’s scales (0.486).” Furthermore, Lai found that the pass rates for the translations used in Liu’s studies were 0% for Liu’s scales, 16.7% for error analysis, 16.7% for 5/5 scales, and 13.3% for 6/4 scales (p. 719). Lai also discovered that the highest inter-rater correlation for both English-Chinese and Chinese-English translations was between raters using the 6/4 scales (p. 721).
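To make the comparison of scales concrete, the following Python sketch shows how sentence-level 5/5 (readability and fidelity) and 6/4 (accuracy and expression) ratings might be tallied and how agreement between two raters could be estimated with a simple correlation; all numbers are invented for illustration and are not taken from Liu’s or Lai’s data.

from statistics import correlation  # available in Python 3.10+

# 5/5 scale: each sentence rated out of 5 for readability and 5 for fidelity.
ratings_5_5 = [(4, 5), (3, 4), (5, 5)]   # (readability, fidelity) per sentence
# 6/4 scale: each sentence rated out of 6 for accuracy and 4 for expression.
ratings_6_4 = [(5, 3), (4, 4), (6, 3)]   # (accuracy, expression) per sentence

total_5_5 = sum(r + f for r, f in ratings_5_5)
total_6_4 = sum(a + e for a, e in ratings_6_4)
print(total_5_5, total_6_4)  # each sentence contributes up to 10 points under either scale

# Inter-rater agreement estimated as the Pearson correlation between two raters'
# paper-level totals (invented sample values).
rater_a = [26, 22, 28, 19, 24]
rater_b = [25, 23, 27, 18, 26]
print(round(correlation(rater_a, rater_b), 3))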

While considerable research has been conducted regarding error analysis with respect to second language acquisition, English writing, and learner corpora, there is comparatively little research on error analysis in Chinese-to-English student translations, especially with regard to Taiwan. Therefore, the purpose of this study is to analyze and identify errors commonly committed by adult Taiwanese students. It is hoped this will improve translation pedagogical methods in Taiwan by allowing teachers to home in on typical problem areas, that it will give students an awareness of their frequent mistakes, thereby helping them to improve their English ability and study habits, and that it will contribute to the study of error analysis in Chinese-to-English student translations and suggest opportunities for further research.

Chapter 3: Research Methodology

Research Design

This study used both quantitative and qualitative methods to analyze the Chinese-to-English translations of 30 adult Taiwanese English language school students in Taipei, Taiwan, who did not have any professional translation experience.

First, students were recruited for the study, and three Chinese and English Translation and Interpretation Competency Examinations (中英文翻譯能力檢定考試) from the Ministry of Education (MOE) were chosen and administered in order of the year they had been held (2007, 2010, and 2011). Then Task 1 (Translation 1) was sent to the study participants, who were given one month to complete it and email it to the researcher. Once the first set of translations had been collected, the researcher used the comment feature of Microsoft Word to provide the participants with direct feedback and corrections. These steps were repeated for Tasks 2 and 3.

As soon as all the data had been collected, the top five and bottom five translators for Tasks 1 and 3 (Translations 1 and 3) were invited to partake in retrospective interviews. After the interviews were over, a voluntary biographical questionnaire was sent to the participants, the majority of whom completed it and returned it to the researcher via Google Forms. Next, an error typology was developed, tested, modified, and finalized. The translations were then hand coded in Microsoft Word using the error tags from the error typology. Once this was complete, a scoring scale was developed, tested, modified, and finalized. Scores for each translation were calculated using the scoring scale. Finally, the error codes and translation scores were statistically analyzed.
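As a rough sketch of the tallying step only, the Python snippet below shows how error tags extracted from the hand-coded Word documents could be aggregated into per-category frequencies; the participant identifiers and tag names are invented, and the study’s actual typology and scoring scale are described later.

from collections import Counter

# Hypothetical error tags extracted from the coded translations (names invented).
coded_translations = {
    "participant_01_task1": ["verb tense", "article", "article", "preposition"],
    "participant_02_task1": ["word choice", "verb tense", "article"],
}

tag_counts = Counter(tag for tags in coded_translations.values() for tag in tags)
total_errors = sum(tag_counts.values())

# Report each error category's share of all coded errors.
for tag, count in tag_counts.most_common():
    print(f"{tag}: {count} ({count / total_errors:.1%})")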
