
A CONTINGENT L2 WRITING LESSON AND LEARNER RESPONSE

The following AfL lesson plan was implemented in the Fall 2011 semester as part of an integrated-skills Freshman English course at a university in Taiwan. It was the author’s first attempt at a contingent instruction plan, which redirects effort by spending less time on areas where learners would not benefit and more time on areas that may facilitate AfL. In this plan, the following objectives were targeted in order to meet the AfL principles discussed above.

First, give learners enough guidance and practice to decipher the criteria of good work, by providing exemplars of various gradations and ample time for discussion. As Davison and Leung (2009) assert about teacher-based assessment, “…trustworthiness comes more from the process of expressing disagreements, justifying opinions, and so on than from absolute agreement” (p. 409).

Second, move the major part of teacher instruction from before learner performance to after it. Teaching without learner performance as a reference point is like sounding a bell without a striker. Effective teaching is an act contingent upon learners’ lacks and needs (Black & Wiliam, 2009). The major part of teacher preparation should therefore be situated between the time when learner work is collected and the time when it is returned to learners for revision. This is, in fact, much more challenging for the teacher in that the lesson plan incorporates knowledge of where learners are and where they are going.

Third, devote more class time to feedback discussion that is strategically planned rather than random. Instead of writing feedback for each individual without follow-up elaboration, the teacher organises feedback around the pervasive patterns found in learner performance and brings it to class for face-to-face discussion with the learner group.

Based on the above literature review and student learning history, a mini instructional program catering to principles of AfL was designed. This AfL writing program has an opinion essay as the target genre and expects learners, by the end of the instruction, to be able to write a coherent multi-paragraph essay of at least 300 words discussing the pros and cons of an issue and expressing clear personal opinions with adequate supporting details. Major procedures are described in Table 1. How each step is to be carried out and the rationale behind it are discussed below.

Table 1. An AfL instructional plan for L2 writing revision (columns: Time, Procedures, Main agents)

To tide learners over from their previous EFL writing experiences, the teacher first probed learners to reflect upon their past writing assignments in a whole-class discussion. Points of discussion included the type of writing prompts, the length of time given, the length of writing in terms of the number of words expected of them, how they prepared to write, what they did during writing, the type of feedback they got from teachers, what they did after getting feedback, and what was considered to constitute a good piece of writing. This discussion brought learner awareness to the surface and at the same time provided the instructor with important information for future planning of writing lessons contingent on learner needs.

At the beginning, the target genre (in this case, an opinion essay) and the differences between it and the learners’ past writing genres were introduced. For ease of communication, the TOEIC writing component, and specifically its standard question type eight, an opinion essay, was introduced. To prepare learners for an assessment-sensitive writing lesson, success criteria had to be deliberately communicated by the instructor and clearly felt by the learners. According to O’Donovan et al. (2004), both explicit verbal descriptions of criteria and the more tacit knowledge of what constitutes a good piece of work should be taught through various information channels. To do so, the author/instructor, modelling the writing tasks on the TOEIC opinion essay, first introduced the official scoring criteria and the standards ranging from five (the highest) to zero (the lowest) as released by the examination institution on its website (Educational Testing Service, 2011). The verbal descriptions provided by ETS, encompassing content, organisation, lexical usage, and grammar, were explained to students by the instructor. Another useful resource was a set of five examinee samples covering a range of high to low scores against the official criteria (Trew, 2006). Instead of showing the assigned scores directly along with the samples, the instructor engaged students in a collective rating exercise. A sample piece was shown to students first. They were given a few minutes to read and evaluate the work and assign a score to it individually without discussion.

Once they were ready, the instructor asked for a show of hands and tallied the class results on the blackboard. Learners were then invited to justify their choices and discuss their disagreements. After the discussion, the actual score assigned in the TOEIC preparatory text, together with its explanation, was revealed to students. This whole-class exercise was repeated five times so that learners were exposed to a variety of performance standards under the same writing prompt and the same scoring scale.

The discrepancies among student raters and the collective tendency of the scores were also highlighted to point out to learners the nature of qualitative rating. It was hoped that these introductory procedures would prepare learners well for their own writing by helping them understand the criteria and feel a sense of ownership of their assessment capability.
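For illustration only, the show-of-hands tally described above could be recorded and summarised as in the sketch below; the list of individual scores is hypothetical and is not data from the lesson itself.

```python
from collections import Counter

# Hypothetical show-of-hands tally for one sample essay; the TOEIC scale
# runs from 5 (highest) to 0 (lowest). These scores are illustrative only.
individual_scores = [4, 3, 4, 5, 3, 4, 2, 4, 3, 4, 5, 3]

tally = Counter(individual_scores)
class_size = len(individual_scores)

# Reproduce the blackboard view: how many hands went up for each score band.
for score in range(5, -1, -1):
    count = tally.get(score, 0)
    print(f"score {score}: {count:2d} student(s) ({count / class_size:.0%})")

# The spread of scores is what the instructor used to prompt learners to
# justify their choices before the official score for the sample was revealed.
most_common_score, _ = tally.most_common(1)[0]
print(f"collective tendency: most students chose {most_common_score}")
```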

Before Unit 2, three documents had been prepared for use in class. First, the instructor translated the examiner scale used in the previous week into an instructional rubric that was expected to better serve instructional purposes (Andrade, 2000). In addition to the holistic score, four sub-scores on argument, organisation, lexical use, and grammar were added. Each of the five scale levels was mapped onto a 100-point scale, which learners were more familiar with from their past school experiences, and further divided into three finer levels. Verbal descriptions of the four subcategories were written concisely in the learners’ first language in the hope that learners could understand them easily. The second document was a sample essay written by the course teaching assistant, who was at that time a senior student at the same university and whose English ability was comparatively high among the university’s entire student population. She was asked to write under the same prompt but was not given any instruction. It was hoped that her writing could be given to learners once they had completed their own, to serve both as a high-standard sample and, since it would not be a perfect piece of work, as a reference on how the instructional rubric could be used in assessment for learning. The third document was an assessment table to be used in peer assessment workshops. The table required learners to write down the name of the author, the name of the rater, a holistic score ranging from 15 to 1, four sub-scores ranging from 15 to 1, two positive comments, and one constructive comment on what needed to be improved.
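To make the format of the peer-assessment slip concrete, here is a minimal sketch that models it as a Python data structure; the class name, field names, and validation rules are the author’s prose description rendered as hypothetical code, not materials from the course.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Rubric categories taken from the prose description above.
SUBCATEGORIES = ("argument", "organisation", "lexical use", "grammar")

@dataclass
class PeerAssessmentSlip:
    author: str                     # name of the student whose draft is rated
    rater: str                      # name of the peer doing the rating
    holistic_score: int             # 15 (highest) to 1 (lowest)
    sub_scores: Dict[str, int]      # one 15-to-1 score per rubric category
    positive_comments: List[str] = field(default_factory=list)  # two expected
    constructive_comment: str = ""  # one point that needs to be improved

    def validate(self) -> None:
        scores = [self.holistic_score, *self.sub_scores.values()]
        if any(not 1 <= s <= 15 for s in scores):
            raise ValueError("scores must fall between 1 and 15")
        if set(self.sub_scores) != set(SUBCATEGORIES):
            raise ValueError("one sub-score is required per rubric category")
        if len(self.positive_comments) != 2 or not self.constructive_comment:
            raise ValueError("two positive comments and one constructive comment are required")

# Example use (hypothetical names, scores, and comments):
slip = PeerAssessmentSlip(
    author="Student A", rater="Student B", holistic_score=10,
    sub_scores={"argument": 9, "organisation": 10, "lexical use": 11, "grammar": 10},
    positive_comments=["Clear position", "Good examples in the first point"],
    constructive_comment="Link the sentences in the second point more explicitly",
)
slip.validate()
```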

In the second class meeting, learners first wrote individually for thirty minutes. After the first draft, the instructor introduced and explained the instructional rubric. The TA writing sample was at this point distributed so learners could see how the same writing prompt was responded to differently by a more proficient peer writer. The instructor, after giving a few minutes of silent reading time to the class, then used the sample to demonstrate ways of using the instructional rubric and giving comments.

Turning to the specifics of this TA sample, the teacher illustrated how the writing responded nicely to the prompt in its organisation and argumentation. Among the three points used to argue the writer’s position, it was pointed out to the class that the first point was well supported and developed. In contrast, the second point was a typical example of a lack of meaningful connection between sentences and was consequently less coherent. Other minor sentential errors were also elaborated and explained. Through the teacher’s analysis of the sample work, learners observed a good sample and learned why it was good. They also noticed what was not so good and could be improved. After this demonstration of rating and giving comments, learners were put into groups of three or four for peer review. Each student received three rating table slips, and the first drafts were rotated among group members. Each student was required to read two or three peer drafts and write down rating scores and comments. The student author then collected the rating tables for his/her work from the other group members, read them, clipped the completed forms on top of his/her writing paper, and submitted the work to the teacher.

The instructor’s actual writing lesson had not been planned prior to this moment. The teacher’s time was not spent on grading and commenting on each individual piece of work, since past studies have shown that effort spent in doing so is not so effective.

Rather, student work was reviewed in order to find common and pervasive problem patterns. After teaching points had been identified, learner excerpts were selected and areas in need of improvement were highlighted. These materials were designed into problem sets for the next class session, so as to probe learners to tackle the problems and to set the stage for the teacher’s instruction on revision strategies.

In the third class meeting, the first drafts were returned to the learners. They reviewed the peer comment sheets again and performed self-assessment against the same instructional rubric – a way to refresh their memory of the previous week and to connect the learned lesson to the new one. Following self-assessment, the rest of the first session was an EFL writing revision instruction unit contingent on the learner performance exhibited in the first drafts. An important point is that areas for improvement were always presented to students in the form of a problem set. The teacher allowed ample time for learner groups to ponder the problem and discuss possible answers and solutions. After group discussions, in which learners clarified and consulted one another’s opinions, their ideas were elicited and challenged by the teacher in whole-class discussion. It was at this point that the teacher demonstrated how a seasoned writer tackled the same problem with better, established strategies. The instructor eventually summarised the discussion into a few practical strategies for learners to apply in the next period of revision. Comprehensive coverage of all the problems identified was not the aim of this session. How much learners could take up was more important than how much there was to be taught.

The follow-up revision session was a time when learners worked individually with all resources at hand. Consulting dictionaries, holding small-group discussions, and asking the instructor and TA for help were all encouraged. When the time for revision was coming to an end, learners were asked to perform a final self-assessment on their revised version. To round off this writing and revision experience, they were asked to reflect on and record the strategies they had applied. This writing and revision cycle was expected to be repeated a few more times for further practice, and it was hoped that each round would feed forward into subsequent cycles of writing exercises.

The contingent EFL writing revision lesson illustrated above is believed to refocus an instructor’s effort from laborious and seemingly ineffective individual paper marking to a holistic analysis of the bigger picture of common areas in need of teacher instruction. Instead of trying to correct each and every particular learner mistake, the instructor grouped problems into a few manageable entries and used learner excerpts as a point of departure for tailor-made instruction. Moreover, learners were invited to try problem-solving, group discussion, and articulating their ideas before the teacher offered her strategies. This step of engaging learners provided a chance to activate learner knowledge and awareness and, at the same time, gave the teacher a chance to know what learners actually could and could not do. This is what makes a contingent instructional lesson, one that derives from AfL principles, stand out from other instructional programs. Learner responses to an end-of-term survey are summarised in Table 2. Learners were generally quite positive about this learning experience, as the vast majority rated the materials and tasks as very useful or somewhat useful.

No one rated any of the 16 items as “not at all useful”. Means for each item were calculated and are listed in the rightmost column. In particular, learners seemed to rate instruction and facilitation guided by the teacher much more positively than interactions with their peers. Apart from the items involving the teacher, the TA model essays, revising learners’ own first drafts, and writing the first drafts were rated at around 3.50 out of a possible maximum of 4. This information on learners’ perceptions of the usefulness of the various components in the contingent writing lesson could be a practical reference for refining future courses that aim to promote assessment for learning for a similar learner population.

Table 2. Results of the end-of-term survey

“How useful was each of these items in improving your English writing?”

Item                                              Very useful   Somewhat useful   Not quite useful   Not at all useful   Mean¹
1. TOEIC essay criteria and descriptors                –               –                  –                   –            –
4. The instructional rubrics and score sheets          –               –                  –                   –            –
10. Teacher’s mini-lectures on revision            42 (69%)        19 (31%)            0 (0%)                 0           3.69
11. Examples used in teacher’s mini-lectures       41 (67%)        20 (33%)            0 (0%)                 0           3.67
12. Teacher’s demonstrations of revision           32 (52%)        29 (48%)            0 (0%)                 0           3.52
13. Writing resources introduced by the teacher    29 (48%)        28 (46%)            4 (7%)                 0           3.34
14. Revision checklist                             12 (20%)        38 (62%)           11 (18%)                0           2.84
15. Revising my own drafts                         39 (64%)        18 (30%)            4 (7%)                 0           3.51
16. Selected peer sample essays                    22 (36%)        38 (62%)            1 (2%)                 0           3.33

¹ Calculated by assigning 4, 3, 2, and 1 to each response of “very useful”, “somewhat useful”, “not quite useful”, and “not at all useful” respectively.
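As a quick check on the scoring rule in the table note, the sketch below recomputes the mean for item 10 from the counts shown in Table 2; the total of 61 respondents is inferred from those counts rather than stated explicitly in the text.

```python
# Item 10, "Teacher's mini-lectures on revision", from Table 2.
counts = {"very useful": 42, "somewhat useful": 19,
          "not quite useful": 0, "not at all useful": 0}
weights = {"very useful": 4, "somewhat useful": 3,
           "not quite useful": 2, "not at all useful": 1}

n_respondents = sum(counts.values())                           # 61, inferred from the counts
weighted_sum = sum(weights[k] * c for k, c in counts.items())  # 42*4 + 19*3 = 225
mean = weighted_sum / n_respondents                            # 225 / 61 ≈ 3.69

print(f"item 10 mean = {mean:.2f}")  # matches the 3.69 reported in Table 2
```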

CONCLUSION

The concept of teacher contingency demands a great deal more from teachers, for they need a much greater capacity than those who teach with a pre-determined syllabus and a well-structured textbook. The teacher needs to know his/her student population well, understand their past learning history and beliefs, diagnose their difficulties, induce common patterns from learner work so as to prioritise teaching points, probe learners to think and elaborate, and make real-time decisions to provide useful guidance. This is, however, not such an overwhelming mission. As the sample L2 writing revision lesson has shown, what learners may learn is not so foreign to an experienced teacher. The thrust of such instruction lies in tying instructional content to learners, including learner sample work and learners’ demonstrated capability. It makes the teaching directly relevant to student learning. This is what assessment for learning is trying to achieve.

As quoted at the beginning of this paper, ancient Confucian wisdom addressed how a teacher could best respond to learner inquiries. The responder does not provide all the knowledge he/she has on the subject, since doing so would run the risk of overwhelming and discouraging the inquirer. Like a bell responding to a bell striker, the teacher takes the strength of the strike into consideration, gives just enough so that the learner can take it in, allowing him or her time and leisure to ponder on the response so that the sound may linger and go afar.

ACKNOWLEDGEMENT

This work was supported by the National Science Council, Taiwan. [grant number NSC100-2410-H-004-186-MY2]

REFERENCES

Alexander, R. (2006). Towards dialogic teaching: Rethinking classroom talk (3rd ed.). Cambridge, England: Dialogos.

Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-18.

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for learning: Putting it into practice. Buckingham, England: Open University Press.

Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139-148.

Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5-31.
