
2.3.1 The definitions of accuracy, fluency, and complexity

As noted previously, this research aims to improve second language learners' writing ability by adopting PI, TBI, and the eclectic instruction. In order to examine the effectiveness of the three instructions, we need to measure learners' progress in writing ability. Since language development consists of changes in a learner's language system and can only be inferred from samples of learner language, we must first define how to describe learners' writing ability. According to Wolfe-Quintero, Inagaki, and Kim (1998) and Housen and Kuiken (2009), complexity, accuracy, and fluency (CAF) have been used to measure and describe language learners' progress in writing. The study generalized common definitions of CAF from different studies. Accuracy is defined as the ability to produce error-free written text (Housen & Kuiken, 2009). Fluency is often thought of as the speed of producing written language, but it can also mean the ability to produce coherent written text (Housen & Kuiken, 2009; Latif, 2013). Complexity is the most ambiguous of the three components. It commonly involves the ability to use a range of vocabulary and structures in written text. Because it refers to learners' willingness to take risks in using both varied structures and more difficult language, it can be sub-categorized into syntactic and lexical levels (Wolfe-Quintero et al., 1998).

With these definitions in place, the study can connect CAF with the core elements of PI and TBI and discuss in which aspects the two instructions could improve learners' writing ability.

Processing instruction focuses mainly on meta-linguistic knowledge processing. For example, Chi (2011) found that processing instruction can activate learners' awareness and engage their processing mechanisms, so it could be an effective pedagogical instruction for treating minor errors in learners' writing. Thus, PI is assumed to benefit learners' writing ability in accuracy. On the other hand, according to Brown (2007), "in task-based instruction, fluency may have taken on more importance than accuracy in order to keep learners meaningfully engaged in language use" (p. 241). Thus, we suppose TBI would benefit learners' writing ability more in the fluency and complexity aspects. In sum, though both PI and TBI could help learners improve their writing ability, they benefit learners in different aspects of writing. Again, the complementary benefits of the two instructions create an opening for the eclectic instruction.

2.3.2 The ways of measuring accuracy, fluency, and complexity

In order to give complete descriptions of learners' language development, Latif (2013) recommended that researchers and teachers use multiple measures to assess CAF. In this vein, CAF is examined from both macro and micro perspectives in the study. For accuracy, researchers need to first consider how to define errors, because this definition affects the results of studies. Since grammatical and spelling accuracy are essential components of pre-intermediate ESL learners' writing, learners' progress is judged by grammatical correctness from a macro perspective and by spelling and mechanics from a micro perspective. For fluency, Wolfe-Quintero et al. (1998) concluded that the amount of learners' written production gradually increases with language development, so the study views content as an indicator from a macro perspective. In addition, the organization and coherence of learners' writing, such as transitions, is considered from a micro perspective. For complexity, learners' syntactic complexity is observed from a macro perspective, since more skilled learners take risks to use different or more complex structures in their writing; learners' lexical complexity is also examined from a micro perspective.

According to Heaton (1990), there are three kinds of methods for measuring learners' writing: the error count method, the impression method, and the analytic method. In the error count method, raters simply count the number of errors in a writing product and assign a score based on that count. The method does not distinguish among the types of errors writers make, nor does it consider content or the communicative purpose of writing, and thus may not be an ideal option.

In the impression method, raters do not analyze writing products in detail but give scores based on their overall impression, so the method requires multiple raters to ensure reliability. Given these characteristics, it is often employed when raters need to mark a large number of writing products in a short time, and it therefore cannot serve pedagogical purposes well.

In the analytic method, teachers first identify the features they want to examine. They can even assign different weights to different features. Then they rate learners' writing products according to a set of predetermined rubrics. For example, the writing test in the General Scholastic Ability Test (GSAT) focuses on five features when rating examinees' writing products: content, organization, grammar/structure, vocabulary/spelling, and mechanics. The five features are assigned 5, 5, 4, 4, and 2 points respectively, so the total score is 20 points. If an examinee got 5 in content, 4 in organization, 3 in structure, 3 in vocabulary, and 2 in mechanics, his/her total score would be 17/20. Research has shown that the GSAT rubrics possess both validity and reliability, and there are also studies using the GSAT rubrics as a tool to measure participants' progress in writing instruction (Huang, 2006; Yang, 2012).
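The analytic scoring arithmetic described above amounts to a weighted checklist, which can be sketched in a few lines of code. The feature names and maximum points follow the GSAT rubric as reported here; the function and variable names are illustrative only, not part of any cited study.

```python
# A minimal sketch of GSAT-style analytic scoring (feature names and
# maximum points taken from the rubric described in the text; the
# function and dictionary names are our own, illustrative choices).

GSAT_MAX_POINTS = {
    "content": 5,
    "organization": 5,
    "grammar/structure": 4,
    "vocabulary/spelling": 4,
    "mechanics": 2,
}


def analytic_score(ratings: dict) -> int:
    """Sum per-feature ratings, capping each at its rubric maximum."""
    total = 0
    for feature, max_points in GSAT_MAX_POINTS.items():
        rating = ratings.get(feature, 0)
        total += min(rating, max_points)
    return total


# The worked example from the text: 5 + 4 + 3 + 3 + 2 = 17 out of 20.
example = {
    "content": 5,
    "organization": 4,
    "grammar/structure": 3,
    "vocabulary/spelling": 3,
    "mechanics": 2,
}
print(analytic_score(example))  # 17
```

Capping each rating at its rubric maximum simply guards against data-entry mistakes; the substance of the method is the per-feature weighting itself.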

It is more difficult to rate complexity by analytic scoring because of its ambiguous concept. As noted earlier, complexity involves the ability to use a range of vocabulary and structures in written text, so it is commonly sub-categorized into syntactic and lexical levels. From a macro perspective, mean length of sentence is often adopted to measure syntactic complexity, since one can express more sophisticated meanings in longer sentences (Norris & Ortega, 2009). From a micro perspective, the ratio of topic-related words can be adopted to measure lexical complexity, for it shows learners' lexical density (Housen, Kuiken, & Vedder, 2012). However, the current study focused only on syntactic complexity for the following reasons. First, it is difficult to determine whether the words in a piece of writing are topic-related, because the meanings of words shift across proficiency levels. Take the word presentation: it can mean talking to a group of people to give information, but it can also mean the way food is arranged on a plate. Thus, a word may not be topic-related according to the meaning available at a lower proficiency level but could be according to a meaning available at a higher one. Second, there are precedents for using syntactic complexity to measure or evaluate learners' writing ability (Ortega, 2003).
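As a rough illustration, mean length of sentence is simply the total word count divided by the number of sentences in a sample. The punctuation-based sentence splitter below is a simplifying assumption of ours, not a segmentation standard from the CAF literature; studies in this tradition typically segment by hand or by T-units.

```python
# A minimal sketch of mean length of sentence (MLS) as a macro-level
# syntactic complexity measure: total words / number of sentences.
# The naive split on ., !, ? is an illustrative assumption only.

import re


def mean_length_of_sentence(text: str) -> float:
    """Average number of words per sentence in a writing sample."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    words = sum(len(s.split()) for s in sentences)
    return words / len(sentences)


sample = "I like apples. They are sweet and they are cheap."
print(mean_length_of_sentence(sample))  # 5.0 (10 words / 2 sentences)
```

A learner who begins producing longer, clause-embedded sentences would see this number rise across writing samples, which is precisely the developmental signal the macro-level measure is meant to capture.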
