CHAPTER FIVE DISCUSSION

This chapter delineates important issues concerning the two instructors' planning, implementation, and evaluation of assessment. Among them are the relationship between the two instructors' beliefs and practices in assessment, the effects of their beliefs on assessment, constraints on the implementation of assessment and their strategies for coping with these constraints, their instruments of classroom assessment and their grading practices, follow-ups after assessment, and the differences between the instructors' beliefs about assessment.

The Instructors' Beliefs and Practices in Assessment

Drawing on the interview data and classroom observations, similarities and discrepancies between the instructors' beliefs and practices in assessment were found.

The following tables illustrate the congruence and incongruence between each instructor's beliefs and practices of assessment.

Table 5-1 Congruence between Instructor A's beliefs and practices

Belief: Expressiveness is more important than linguistic accuracy.
Practices: Emphasized fluency in delivery over accuracy when grading oral presentations; encouraged students to express their thoughts as much as possible in journal writing; corrected only global errors in oral presentations.

Belief: Students should follow a step-by-step sequence and devote regular efforts when they learn English.
Practices: Asked students to write a unified paragraph in the first semester and a unified essay in the second semester; considered students' regular efforts and in-class performance when grading.

Belief: Encouragement leads to higher motivation, and motivation influences learning most.
Practices: Assigned interesting assessment activities; gave much encouragement to students' performance; tried hard to discover students' strengths.

Belief: Language learning requires more practice.
Practices: Assigned group role-plays, journal writing, and outside reading units to promote students' practice in speaking, writing, and reading in Freshman English; assigned weekly journals to promote practice in writing in Guided Writing.

Belief: Language learning requires more English input.
Practices: Encouraged students to visit the language lab and do self-study to earn extra points after class.

Belief: Attending to students' affect helps promote their learning.
Practices: Encouraged students to discover the strengths of their peers' writings; offered encouragement before making comments; used non-threatening measures to correct students' errors.

Belief: Assessment is to promote learning.
Practices: Tested only important points in the textbook in Freshman English; considered students' regular efforts and participation in class when grading; assigned students to do peer proofreading to learn from their peers in Guided Writing; gave much descriptive and concrete feedback on students' writings to let them know where they did well and where they needed improvement.

Belief: Helping students experience success in assessments helps enhance their motivation and confidence in learning.
Practices: Helped students prepare for the mid-term exam; gave chances of make-up; used strategies to deal with students' difficulties in assessments; specified the grading policies before students worked on their writings.

Belief: To ensure fairness in grading, objective scoring is better than subjective scoring.
Practices: Assigned a lower percentage to the subjective scores obtained from in-class performance in the four skills; assigned a higher percentage to the objective scores obtained from quizzes and summative assessments.

Belief: Writing is a process.
Practices: Required revisions of in-class writings; considered students' efforts in revising when grading; considered students' progress in the revisions when grading.

Belief: Attend to the accuracy of writing, not speaking.
Practices: Picked out grammatical errors in students' writings; corrected the drafts of oral reports.

Table 5-2 Congruence between Instructor B's beliefs and practices

Belief: Thoughts are more important than linguistic accuracy.
Practices: Emphasized the content over linguistic accuracy of students' oral performance; encouraged students to express profound meanings by using simple words.

Belief: English learning should move from simplicity to complexity.
Practices: Gave quiz questions with definite answers in the first semester and more open-ended quizzes in the second semester; decreased the use of Chinese as students made progress.

Belief: Motivation and confidence influence students' English learning most.
Practices: Frequently offered students encouragement; attended to students' needs; designed interesting assessment activities.

Belief: Language learning requires more practice.
Practices: Assigned self-introduction, group role-play, journal writing, and outside reading to promote students' practice in speaking, writing, and reading; played videos to help students practice listening in Freshman English; assigned weekly journals to promote practice in writing in Guided Writing.

Belief: Language learning requires more English input.
Practices: Encouraged students to visit the language lab and do self-study to earn extra points after class.

Belief: Assessment is to promote learning.
Practices: Tested only important points in the textbook; reviewed students' common errors through assessment; considered students' regular efforts and participation in class when grading; used analytic scoring of writing to let students know in what aspects they needed improvement or had made progress.

Belief: Helping students experience success in assessments helps enhance their motivation and confidence in learning.
Practices: Used Chinese to explain important points in the textbook to help students understand them completely; used strategies to help students answer questions; gave chances of make-up; considered the five highest scores among all the quizzes.

Belief: Writing is a process.
Practices: Allowed revising; considered students' progress when grading writing.

Belief: Attend to the accuracy of writing, not speaking.
Practices: Corrected errors in grammar and word usage in quizzes; corrected errors in the drafts of self-introductions and group role-plays.

Table 5-3 Incongruence between Instructor A's belief and practice

Belief: Formative assessment is better than summative assessment in promoting students' learning.
Practice: Assigned a higher percentage to summative assessments and a lower percentage to formative ones in her grading policies in order to ensure the fairness of assessment.

Table 5-4 Incongruence between Instructor B's belief and practice

Belief: She rarely reflects on assessments.
Practice: Reflected on assessments after class and after sharing opinions with other instructors; changed her beliefs about assessment under the influence of her one-year life experience abroad and her prior teaching experience.

Effects of Beliefs on Assessment

The data in the tables show that the instructors have four major beliefs about the purpose or nature of assessment: (1) enhancing motivation and confidence, (2) promoting learning, (3) being student-centered, and (4) being procedural.

(1) Enhancing motivation and confidence

This belief influenced both instructors' choice of assessment activities. It also made the instructors take many measures to help students experience success in assessments and attend to students' affect when correcting their errors or dealing with their difficulties. This belief further influenced both instructors' evaluation of assessment.

Both instructors thought that motivation influenced students' learning most. To promote students' motivation for learning, Instructor A thought, a teacher should endeavor to emphasize what students had achieved rather than focus on what they had not accomplished; emphasizing and encouraging what students had achieved helped enhance their motivation for learning. When discussing students' group oral reports, role-plays, and writings, she tried her best to discover students' strengths and encouraged the other students to show their appreciation for their peers' work. Instructor B thought that to promote students' motivation for learning and practicing, encouragement from the teacher was very important. Guided by this belief, she gave much encouragement to students when they performed well in assessments.

The belief that motivation helped promote learning influenced the instructors' choice of assessment activities. In Freshman English, both instructors employed interesting assessment activities that helped enhance students' motivation, including group role-plays, group oral reports, self-introductions, and group competition events. In addition, both instructors attended to students' thoughts over linguistic accuracy in oral presentations to promote their motivation and self-esteem. They both encouraged students to express as much as possible in the oral presentations. This made students less inhibited and more willing to practice speaking.

This belief also made both instructors attend to students' affect. Aside from frequently reinforcing students' good performance with much encouragement, both instructors employed many strategies to reduce students' embarrassment when detecting students' errors or dealing with their difficulties in class. They offered encouragement before giving cognitive feedback. Instructor A used non-threatening measures to correct students' errors, such as asking students to think twice about their answers, repeating the sentence or the answer in the correct form, or providing examples to help students infer the correct answers. Instructor B did not pinpoint or correct students' errors directly; instead, she asked other students to provide correct answers or to repeat the answer in the correct form. If students could not answer the questions, she tried to narrow down the questions or make them more concrete.

Furthermore, both instructors promoted students' motivation and confidence by employing multiple measures to help students experience success in assessment activities. These strategies included making students psychologically prepared for upcoming assessments, helping students prepare for the mid-term or final exam, giving many opportunities for make-up to help students pass the course, and specifying the grading policies before students worked on their writings. Instructor A introduced the format of the mid-term exam and told students how to prepare for it. She even gave a quiz on the outside reading units that were going to be included in the mid-term to help students review the important points of the reading texts. Besides, students could redo their drafts of group oral reports or group role-plays, and Instructor A would pick the several highest scores among the drafts and revisions when assigning students' final scores. Before students started to work on their in-class writings for the GEPT writing test, she specified the requirements and criteria of the test to help students perform better.

Instructor B discussed the format of the quizzes and summative assessments with students before the tests. She used Chinese to re-explain the requirements and criteria of assessment activities to ensure that students clearly understood them. Sometimes she even told students where in the textbook they could find the answers to the questions in the final exam, and she even wrote the answers to the end-of-unit quiz on the blackboard. She provided many opportunities for make-ups. Students could redo the end-of-unit quiz voluntarily if they were not satisfied with their performance, and Instructor B would consider the five highest scores among all the quizzes when she assigned a student's final score. When students competed in answering questions in Freshman English, she directed questions to the groups falling behind in the group competition to help them increase their points, for depressing students was not her purpose in assessment.

This belief also influenced both instructors' evaluation of assessment. Instructor A thought that her composition teacher in college had laid too heavy an emphasis on the accuracy of writing, which put much pressure on her when she wanted to express her thoughts in writing and made her unconfident in writing. This unpleasant experience made Instructor A reflect on the focus of her writing assessment, and she thus shifted the focus of students' journals from accuracy to expressiveness. Instructor B started to reflect on her beliefs about learning after joining the Freshman English Project. She used to think that providing students with knowledge was important, but afterwards she started to think that for low-achievers, motivation and confidence were more important for learning. This change in her beliefs about language learning was reflected in a change in her beliefs about assessment, so she started to assign many interesting assessment activities when teaching. She thought this helped promote students' motivation and confidence in learning English.

(2) Promoting learning

This belief is reflected in the two instructors' design of assessment activities, their grading policies, the way they gave feedback, and their evaluation of assessment. Both instructors thought that the purpose of assessment was to promote learning, and they valued the positive washback effect of assessment. They also thought that learning was a step-by-step, continuous process in which students learned by devoting regular efforts. They thought that in order to promote language learning, students should be given many opportunities to practice the language.

Both instructors designed many assessment activities to promote students' learning. They tested only important concepts from the textbook or handouts in the end-of-unit quizzes in Freshman English, which helped students learn more about the important points in the textbook when preparing for assessment. Both instructors offered much suggestive feedback on students' oral performances, drafts of oral performances, and in-class and take-home writings to let students know their strengths and limitations concretely, thus helping students learn more about language skills and perform better in future assessment activities. Instructor A asked students to do peer proofreading for their in-class writings to help them learn from peers. Instructor B assigned each paragraph in the texts to different groups and asked the students in those groups to teach that paragraph in Freshman English. Students had to consult the dictionary and have group discussions about the texts, which gave them a deeper understanding of the unit; she thought students could learn more in the process. In addition, she provided students with ample opportunities for make-ups to improve their bad grades, for she thought that students could learn more through the process of doing make-up activities. She even used assessments to review students' common errors in Guided Writing. She gave a quiz on agreement, pronouns, and antecedents in Guided Writing in the second semester, for she found that students made a lot of mistakes in these categories.

In addition, both instructors considered students' regular efforts and participation in class when grading, as reflected in their efforts in in-class assessment activities, their participation in classroom discussion, and their attendance, because they thought that language should be learned by devoting regular efforts and that assessments should help promote students' efforts in learning. This belief influenced Instructor B's assignment of percentages to different assessment activities in her grading policy. She set a fair percentage for each category of assessment activities in the two courses according to the time spent on that category, which revealed that Instructor B believed hard work should count in her grading.

Furthermore, both instructors thought that practice contributed to learning. To promote practice, they both implemented oral presentations, group role-plays, the writing of drafts of oral presentations, and outside readings to promote students' practice in the four skills in Freshman English. Besides, both instructors encouraged students to visit the language lab to have more practice in English, which also helped them earn extra points. In Guided Writing, both instructors assigned journals to promote students' practice in writing.

Besides guiding the instructors' design and implementation of assessments, the belief in promoting students' learning further guided the instructors' evaluation of assessment. Instead of correcting errors for students, Instructor A only identified the types of errors in students' in-class writings and asked students to correct the errors by themselves. This was influenced by her reflections after years of teaching experience: her experience told her that correcting the errors in spelling or grammar in students' writings did not prevent them from making repeated errors. Instructor B changed her beliefs about language learning after her one-year experience in England. The experience made her aware that communication was more important than accuracy. This change influenced her teaching, and she started to lay more emphasis on students' thoughts and expression. In addition, Instructor B's teaching experience told her that a voluntary attendance policy resulted in a poor attendance rate, and students might learn less if they did not attend class regularly, so she started to require regular attendance. Besides, she thought language learning should follow a proper sequence from simplicity to complexity, and this belief made her reflect on her use of many interesting assessment activities. She was worried that assigning too many interesting assessment activities would make students have fun but learn little. After reflection, she decided to give more lectures in the second semester because she thought that lectures made students learn more and in greater depth. Furthermore, Instructor B adopted assessment activities that promoted learning and abandoned those that helped little, following the suggestions of her colleagues. For example, self-introduction was adopted, and journal writing was canceled in Freshman English.

To sum up, it is obvious that the two instructors value the positive washback effects of assessment. They consider these effects when they design assessment activities, write quiz questions, set criteria, and give feedback. The belief in promoting the washback effect also guides the decisions on objectives, criteria, grading policies, and the evaluation of assessment.

(3) Being student-centered

The belief in student-centeredness influenced both instructors’ planning, implementation, and evaluation of assessment. This belief was reflected in both instructors’ planning and implementation of assessment in the following aspects: (a) variety of assessments, (b) flexible timetable, and (c) freedom of choice. Besides the three common aspects, one aspect was unique to Instructor A: (d) different standards.

(a) Variety of assessments

Both instructors catered to students' individual needs when designing, implementing, and evaluating assessment. Both instructors assigned other assessment activities to students who had completed exercises at home while the other students were still working on the exercises. Instructor A asked these students to find partners to check their answers to the exercises or to prepare for the end-of-unit quiz. Besides, Instructor A assigned a group that was fully prepared for their role-play to present it before the originally assigned group, for the latter had not yet settled their draft. Instructor B asked students who had finished their assessment tasks to do another exercise in the textbook, and she also asked them to preview the new lesson that was going to be taught. These measures helped students of different English proficiency levels make good use of the class time to learn.

(b) Flexible timetable

Besides, both instructors had flexible timetables for assessments to cater to different students' needs. Instructor A allowed students to submit the revisions of their in-class writings one day later than the scheduled date, for she thought students needed more time to revise their writings in Guided Writing. Instructor B let students who wanted to start their winter vacation earlier take the final examination one week before the scheduled date in Guided Writing.

(c) Freedom of choice

Both instructors let students choose whatever they wanted to write about for journals in Guided Writing. In Guided Writing, Instructor A provided students with three topics for in-class writing to meet the interests of different students, because she thought that giving only one topic might undermine students' writing, for students might lack inspiration for that particular topic. Instructor B gave students the freedom to choose whether to work on their writings in the classroom or at home and to choose the date on which they wanted to hand in their journals. In Freshman English, she considered students' individual differences in English proficiency, so she allowed some of the students to complete their quizzes at home. This was because some students had difficulty completing the quiz in the limited quiz time.

(d) Different standards

This aspect was unique to Instructor A. She varied her standards with students' levels. In Guided Writing, she thought that the students were English minors and had considerable problems with grammar, so she wanted them to start from writing a paragraph rather than an essay.

(4) Being procedural

Both instructors designed assessment activities to promote step-by-step learning. Instructor A thought that the objective of Guided Writing was to write a unified and coherent "paragraph" rather than an "essay", for she thought that students should learn to write a paragraph before attempting the more difficult essay. Instructor B gave questions with definite answers in the first semester and more open-ended quizzes in the second semester, for she thought language should be learned from simplicity to complexity. In addition, as mentioned above, she planned to give more lectures in the second semester of Freshman English because she thought that as students made progress in English, they should learn the texts in greater depth, and she thought this could be achieved through lectures. Furthermore, Instructor B decreased the use of Chinese as students made progress, for she thought that practicing listening to English more often might help improve students' English proficiency. These practices were influenced by the two instructors' beliefs about language learning, in that they thought language should be learned following a sequence from simplicity to complexity.

In sum, both instructors harbored similar beliefs about assessment, and these beliefs were guided by their beliefs about language and language learning, subjects, students, teaching, and curriculum, and by their assessment theories. Besides, the instructors' beliefs about assessment underlay their choices, implementation, and grading policies of assessments, and their evaluation of assessment. These findings confirm those of other researchers, who contend that teachers' beliefs about assessment are guided by their beliefs about the subject, students, assessment principles, and instructional practices (Rueda & Garcia, 1994; Brookhart, 1997; Wu, 1999), and that teachers' beliefs about assessment guide their choice of the format, frequency, and instructional function of assessment (Brookhart, 1997). The finding about both instructors' beliefs about assessment also reveals that there are interconnections among teachers' belief systems. Teachers' beliefs about assessment are related to their beliefs about teaching, learning, subjects, and curriculum, and to their assessment theories. This echoes one major finding of research on teachers' beliefs, namely that beliefs are not independent systems. For example, teachers' beliefs about the subject matter and learning and those about teaching may be closely associated with each other (Pajares, 1992; Calderhead, 1996). In addition, the finding that teachers' beliefs about evaluating assessment influenced those about designing future assessments gains support in the literature. It has been found that different beliefs within a teacher may inform each other or even form a cycle; for example, a teacher's reflections on teaching may influence his or her planning of teaching the next time (Clark & Peterson, 1986). To conclude, the findings about both instructors' beliefs about assessment conform to those in the literature.

Incongruence between Teachers' Beliefs and Practices of Assessment

Generally speaking, both instructors based their assessment practices on their beliefs about assessment. However, mild incongruence was identified for both instructors.

Although Instructor A claimed that formative assessment was more helpful than summative assessment in promoting learning, in practice students' performances in the formative assessment activities (e.g., the four-skill-related assessment activities) constituted only 10 percent of students' final scores in Freshman English, while students' performances in summative assessments (e.g., the mid-term and the final exam) took up 90 percent of their final scores. This decision was made to ensure the fairness of assessment. In other words, the pursuit of fairness outweighed her original belief.

Instructor B claimed that she never reflected on her assessment activities. Actually, she did reflect on them, and such reflections were influenced and guided by her beliefs about language learning, her assessment theories, her own teaching experience, her living experience abroad, and suggestions from other instructors. Her beliefs changed after these reflections. For example, after her one-year experience living in England, she started to think that receiving native-like input and being able to communicate were more important than linguistic accuracy. This changed her beliefs about assessment: she started to emphasize expressiveness when grading students' performance. This shows that teachers may be unaware of their reflections, which points to the need to make teachers aware of their reflections and to help them engage in self-reflection on their beliefs. Kagan (1992) draws on much research and contends that teachers tend to employ their preexisting belief systems to filter and interpret the new information they obtain. However, these preexisting belief systems may often be inappropriate, composed of good and bad models of teaching they observed as students. Kagan (1992) strongly suggests that pre-service and in-service teachers engage in self-reflection on their preexisting beliefs, examining and challenging the appropriateness of these belief systems to avoid their negative effects. In a similar fashion, teachers should also reflect on their beliefs about assessment. It is good that Instructor B reflects on her beliefs and practices of assessment, as this prevents her from sticking to inappropriate beliefs such as attending predominantly to students' linguistic accuracy in speaking.

Other Factors Influencing the Two Instructors' Assessment Practices

Besides the four guiding beliefs, some common factors influenced the instructors' beliefs about assessment, including institutional requirements, the purposes of assessment, beliefs about speaking and writing, and the load of grading.

(1) Institutional requirements

The design and implementation of assessment by both instructors in Freshman English were highly influenced by the Freshman English Project. As mentioned in the previous chapters, instructors involved in the Freshman English Project had to administer assessment activities aiming to measure students' ability in the four skills in class and to assign a score to every student's performance in these four-skill-related assessment activities. The Freshman English Project regulated and scheduled the number of units and outside readings in the semester. The Project also required that instructors employ unified test questions, written cooperatively, in the mid-term and final exams. Besides, the instructors of the low-intermediate level decided together, in their group discussion at the beginning of the semester, that the format of the quiz on the outside readings would be true-false questions. The Freshman English Project thus set a major framework for the instruction and assessment of the fifteen Freshman English classes, thereby having a great influence on both instructors' design and implementation of assessment in class. Within this framework, instructors were only allowed to choose which instructional activities and ways of assessment to include. For example, they could choose to include group oral reports, journal writing, and role-plays in their in-class speaking assessment activities. There were no such constraints on the requirements or grading criteria of either instructor's assessments in Guided Writing; instructors could design and implement whatever assessments they liked in that course.

(2) Purposes of assessment

Although it was all the instructors of the low-intermediate classes in the Freshman English Project who decided together on the format of the outside reading quizzes, both instructors believed that the format of a quiz should vary with its purpose. They both agreed that the purpose of outside reading was to train extensive reading skills, so true-false was a proper format. On the other hand, the purpose of the end-of-unit quiz was to test students' thorough understanding of the important points in the texts, so both instructors chose the format that they thought would fulfill that function. Instructor A chose a combination of a cloze summary of the text, vocabulary in context, sentence completion, and the like. Instructor B chose open-book short-answer questions.

(3) Beliefs about speaking and writing

The two instructors held similar beliefs about speaking and writing. They both emphasized fluency over accuracy and corrected only global errors in students' oral performances. However, they emphasized the accuracy of written work. They both thought that writing was a process through which students made progress. Thus, they gave students much feedback on their writing and encouraged effortful revisions. Instructor A did not assign a letter grade or a score to students' journals and in-class writings in Guided Writing because she thought that the process of writing and revision, and the teacher's feedback on the writings, were more helpful for students' writing than scores. Instructor B asked students to rewrite their take-home writings and used analytic scoring to provide directions for improvement in Guided Writing.

(4) The load of grading

Both instructors took the load of grading into account when they designed assessment activities. Instructor A did not ask every student in a group to submit a writing draft of the group oral report in Freshman English; instead, she asked them to take turns writing drafts and presenting the oral report. This was because she thought she did not have time to read all the drafts. In Guided Writing, she did not use analytic scoring to grade students' in-class writings, for she thought it was too tiring and time-consuming. Instructor B thought the true-false format of the quiz on outside reading was good because it was easy to grade.

Besides the four common factors, there were other factors that influenced each instructor's beliefs about assessment. For Instructor A, these factors included the insufficiency of the textbook and the need for variety in class. Instructor A designed assessment activities to compensate for the insufficiency of the textbook. She stated that the texts in the Freshman English textbook were too simple, so it was difficult to promote students' reading ability through reading them. To make up for this insufficiency, she designed text-based assessment activities such as group oral reports and the writing of drafts for those reports to promote students' practice in speaking and writing. Besides, Instructor A assigned group role-plays because she thought these could add variety to the class, thus enhancing students' motivation.

Other factors that influenced Instructor B's beliefs about assessment included opinions from colleagues and the different purposes of assessment. Opinions from other instructors in the Freshman English Project influenced Instructor B's design of assessment. For example, she adopted the assessment activity of self-introduction following the idea of another instructor in the Freshman English Project. On the other hand, she canceled journal writing in Freshman English, for some instructors in the Freshman English Project said that journals did not promote students' writing ability.

Another factor was the different purposes of assessment. Instructor B set different criteria for students' writing pieces in Guided Writing and Freshman English. One grammatical error in students' take-home writings in Guided Writing caused a deduction of 0.5 points from the final score of a writing piece, whereas in Freshman English the same error in students' writing drafts of role-plays caused a deduction of only 0.3 points from the final score of a draft. This may be explained by the different purposes of the writing assessment activities. In Guided Writing, the accuracy of a writing piece is very important, and Instructor B attends to the grammar, vocabulary, and mechanics of the take-home writings. The writing pieces in Freshman English are the drafts of group role-plays, where the focus is whether the role-play can arouse a good response from the audience, not the accuracy of the drafts. Thus, it is reasonable that errors in accuracy cost fewer points in Freshman English than in Guided Writing.

The finding reveals that when designing and implementing assessment activities, teachers do not rely solely on their beliefs about assessment; they also consider contextual factors. In this study, institutional requirements had considerable influence on the instructors' beliefs about assessment. Similar findings can be found in the literature. Some research points out that factors influencing the choice of assessment activity may include the subject area, the particular classroom circumstances, the purpose teachers want to fulfill, and students' age level (Mavrommatis, 1997). Therefore, it is evident that to gain a deeper understanding of how teachers' beliefs influence their assessment, these factors must be taken into consideration.

Constraints on the Implementation of Assessment and the Coping Strategies

Some of the above-mentioned factors became constraints when the two instructors implemented their beliefs about assessment in the classroom. These constraints included the concern of keeping to the instructional schedule and several contextual factors, such as limited class time, students' failure to accomplish the assigned assessment activities, the availability of instructional resources, the nature of assessment, negative suggestions from colleagues, and the difficulty of grading.

The Instructors' Coping Strategies and Prioritizing Principles for Addressing the Constraints on the Implementation of Assessment

The instructors employed some common strategies to cope with the constraints, including canceling, postponing, and integrating assessment activities. In addition, Instructor B also used the strategy of simplifying.

(1) Canceling

The first strategy that both instructors used was canceling some planned assessment activities. In Freshman English, in one class before the mid-term exam, Instructor A canceled the planned end-of-unit quiz because she wanted to save time to teach another new lesson that was going to be tested in the mid-term exam. In another class, she canceled the planned discussion of some post-reading questions, for she wanted to use the class time to give a quiz on the outside reading units to help students prepare for the quiz on outside reading in the mid-term exam. Instructor B used the same strategy. In Freshman English, she canceled journal writing following the negative suggestion of her colleague and due to the lack of time to grade the journals. The colleague thought that journal writing did not promote students' writing ability, and Instructor B also thought that there was no time for discussing journals in class.

(2) Postponing

The second common coping strategy that both instructors used was postponing. Instructor A postponed an end-of-unit quiz until the mid-term exam, for she wanted to use the class time to finish all the units included in the mid-term exam. In another class, she postponed the sharing of assignments until the next class because only one group of students had finished the assignment in advance. Instructor B also postponed some planned assessment activities. When some students had not finished their quizzes, Instructor B asked them to complete the quizzes during the break, for she wanted to teach a new lesson to keep to the instructional schedule. Sometimes she asked students who had not completed their quizzes to submit them in the next meeting, for she wanted to take advantage of the audio-video equipment in the classroom to play a video. At another time, she planned to check the answers to an assignment, but many students had not completed it, so Instructor B asked those students to complete the assignment and checked the answers in the next meeting.

(3) Integrating

The third common strategy was to integrate quizzes of two skills into one. Instructor A originally planned to give listening quizzes regularly in Freshman English, but the quizzes were implemented only twice due to the tight instructional schedule. To cope with the time constraint, Instructor A integrated the listening quiz into the end-of-unit reading comprehension quiz and the quiz on outside reading: she read the comprehension questions aloud, and students answered orally after listening to them. To cope with the same constraint, Instructor B integrated the listening quiz and the end-of-unit reading comprehension quiz: she read the questions of the end-of-unit quiz, and students copied them down, which served as a listening quiz. After copying down the questions, students wrote their answers on the quiz sheet.

(4) Simplifying

This strategy was unique to Instructor B, who simplified some planned assessment activities. She simplified the compulsory make-up quiz in Freshman English to a voluntary one, for she thought there was not enough class time for a compulsory make-up quiz and she wanted to use the limited class time to teach important content in the textbook. In addition, in Freshman English, Instructor B simplified the listening quiz. She had planned to administer listening quizzes regularly; in practice, she gave the listening quiz only twice at the beginning of the semester, because the instructional schedule was tight and there was no time in class to give the quiz more often. Instructor B also simplified self-introduction in Freshman English. She had planned to listen to two or three tapes of students' self-introductions in every meeting; in practice, the class time was limited, so only a few tapes were played in class during the semester. In addition, she had originally planned to give quizzes on outside reading both before and after the mid-term exam; in practice, the quiz after the mid-term exam was canceled due to the time limitation. Furthermore, Instructor B simplified quizzes on writing. In Guided Writing, she had originally planned to give quizzes on important concepts in the textbook; in practice, no quiz was given in the first semester. She thought there were many important points to teach in the textbook, so there was no time for quizzes. Another reason was that Instructor B thought the purpose of the writing class was to have students write, and quizzes were less important. However, quizzes were given in the second semester, because after a semester of teaching she found that many students made the same errors in some grammatical categories, so she decided to give a quiz to help students review their errors.

Besides, Instructor B simplified her grading practices. In Freshman English, she had originally planned to grade the writing drafts of group role-plays and self-introductions separately; in practice, she assigned a single holistic score representing students' performances in both the writing drafts and the oral performances. Instructor B said that although the Freshman English Project required the instructors involved to assign a percentage to students' performance in each of the four skills, she found that oral performance could only be graded holistically, and it was difficult to attend to the quality of the writing drafts and the oral performances at the same time when grading.

Canceling, postponing, and integrating were the three coping strategies common to both instructors. Aside from these, Instructor B also used the strategy of simplifying.

As implied above, these coping strategies were adopted in the service of two major priorities: the observance of the instructional schedule and the observance of course objectives. The analysis in the previous section reveals that the observance of the instructional schedule was the top priority when both instructors coped with the constraints on the implementation of assessment in Freshman English. Instructor A canceled listening quizzes, a group role-play, and a group discussion of the post-reading questions in order to keep to the instructional schedule. Instructor B simplified the listening quiz and students' self-introductions and canceled journal writing and the quiz on outside reading after the mid-term exam in Freshman English for the same purpose. In their classes, both instructors chose among the strategies of canceling, postponing, integrating, and simplifying when implementing assessment in order to keep to the instructional schedule.

The second priority was the observance of course objectives. In Freshman English, both instructors implemented all the scheduled reading-related assessment activities, except for Instructor B's quiz on outside reading after the mid-term exam, which was canceled due to the time limitation. A possible explanation is that both instructors thought that the objective of Freshman English leaned more toward reading, so they endeavored to implement all the reading-related assessment activities.

The findings have several implications. First, instruction comes before assessment. When the instructional schedule was tight, the two instructors chose to cancel, simplify, or postpone some planned assessment activities rather than sacrifice class time for four-skill-related assessment activities. This reveals that the lack of time was the major factor that made the two instructors deviate from their original plans for assessment. This confirms what Stiggins and Bridgeford (1985) found: one of teachers' major concerns about the use of assessment in the classroom is the lack of time to manage assessments effectively. Coming back to the current study, it is evident that understanding the factors that influence the implementation of instructors' beliefs about assessment, and how instructors cope with these constraints, is very important.

The findings of this study indicate that teachers may use various strategies to cope with the heavy load of classroom assessment. The need to attend to these strategies has been the focus of some research. Brown, Bull, and Pendlebury (1997) point out that universities are faced with the problem of having to assess an increasing number of students. They suggest that a good way to solve this problem is to seek and implement assessment strategies that help instructors save time in the long run. Housell (1997) identifies several major strategies that serve such a function, including reducing assessment (in scale, scope, or formal status), delegating assessment, rescheduling demands, refocusing efforts, capitalizing on IT and other technologies, and reviewing and recasting approaches to assessment (cited in Brown, Bull & Pendlebury, 1997). Strategies for assessment identified in the current study also fit into Housell's (1997) categories, including reducing assessment (in scale, scope, or formal status) and rescheduling demands. Brown, Bull, and Pendlebury (1997) conclude that universities should reflect on their strategies for reducing instructors' heavy load of assessment and try hard to promote the effectiveness of those strategies. This shows that there may exist many other assessment strategies among university instructors, and more research should be done to explore how different assessment strategies can function effectively in the classroom.


Instruments of Assessment and Grading Practices

This section discusses assessment instruments employed by both instructors in classroom assessment and the scope of their grading, namely, factors they take into consideration when they assign a score.

Instruments of Assessment

Both instructors used many objective and subjective instruments to gather information about a student in classroom assessment, and they tried hard to strike a balance between the two kinds of instruments.

Both instructors employed objective and subjective instruments of assessment in class. Objective instruments they used in Freshman English included the listening quizzes, teacher-made end-of-unit reading comprehension quizzes, the unified mid-term and final exams, and the unified proficiency test. Subjective instruments they used included speaking assessment activities such as group discussions of the questions in the textbook, self-introductions, oral reports of the results of group discussions of the post-reading questions, and group role-plays; reading assessment activities such as quizzes on outside reading and assigning groups to teach the texts; and writing assessment activities such as journal writing and the writing drafts of oral presentations. Both instructors also used direct classroom observation of students' in-class performance and in-class comprehension questions to check students' understanding of the text and the task.

In Guided Writing, objective instruments used by the instructors included the exercises and assignments in the textbook, quizzes that helped students review their common errors, and the first part of the mid-term and final exams, namely, the important concepts in the textbook. Subjective instruments the instructors used included in-class and take-home writings, the paragraph writing in the mid-term and final exams, the revisions of the in-class writings, weekly journals, and direct observation of students' discussions of the model paragraphs.

The finding shows a tendency for both instructors to employ subjective as well as objective classroom assessment instruments and to strive to strike a balance between the two. Instructor A assigned a relatively higher percentage to objectively scored assessment activities and a lower percentage to subjectively scored assessment activities in Freshman English. She stated that scores from subjective assessment should represent less than ten percent of a student's final score. When subjective assessment was inevitable, such as the assessment of students' writings, Instructor A took measures to make the assessment more "objective". For example, the first part of the mid-term exam of Guided Writing tested students' comprehension of important concepts in the textbook. This shows that she did not use a single writing piece to determine a student's achievement in writing. Besides, she followed the items in a checklist when grading and discussing students' writings in Guided Writing. Although she did not assign a score to each category, she did follow some guidelines when grading writing, which helps reduce the degree of subjectivity in grading. Instructor B also used a measure to make assessments more objective: she used analytic scoring to grade students' writings, which likewise helps reduce the degree of subjectivity in grading writing.

The finding conforms to that of Cizek et al. (1995), who contend that teachers usually employ a wide range of objective and subjective factors to "maximize the likelihood that students obtain good grades" (cited in McMillan, Myran, & Workman, 2002, p. 205). In the current study, the use of multiple measures to sample students' achievement also contributes to the fairness of assessment, a factor that both instructors consider when designing and implementing assessment. As pointed out in Ory and Ryan (1993), the crucial point for ensuring the fairness of assessment is to provide students with a wide range of tasks to demonstrate their achievement (cited in Gredler, 1999).

Grading Practices

In this study, the instructors considered a hodgepodge of factors in grading. They both considered students' academic and nonacademic performance when grading.

Academic performance included students' performance in the four-skill-related assessment activities and their performance in the mid-term exam, the final exam, and the quizzes in Freshman English. In Guided Writing, both instructors considered the quality of students' written work and students' performance in the quiz and the mid-term and final exams.

Besides students' academic performance, the instructors considered their nonacademic performance, including effort, participation in class, and extra credit work, when grading. Students' effortful behavior referred to regular attendance, their efforts and improvement in make-up activities such as the rewriting of drafts of oral presentations or voluntary make-up quizzes, teaching the texts, and efforts in preparing and performing group role-plays in Freshman English. In Guided Writing, they considered students' efforts in writing journals and in revising their writings, and the improvement shown in their revisions. Participation in class included the profundity of the thoughts students expressed, their initiative in asking questions, and their responsiveness in class discussion. Work for extra credit included visiting the language lab to do self-study and memorizing texts to make up for bad grades.

The findings reveal that the instructors considered students' academic and non-academic performance when grading. This is consistent with McMillan, Myran, and Workman's (2002) finding that aside from academic achievement, effort, participation, and extra credit work are important factors in grading. Another important point revealed in the findings of the current study is the significance of students' efforts in the two instructors' grading. This conforms to Brookhart's (1993) finding that teachers rarely rely on students' achievement alone to determine students' grades. She contends that, formally or informally, teachers include students' efforts in grading, for they are concerned with students' motivation, self-esteem, and the school consequences of the grades they give.

The Gap between Grading Policies and Grading Practices

There were two kinds of gaps between the instructors' grading policies and practices: the grading of students' performance on certain unlisted criteria and the omission of certain planned grading criteria.

(1) The grading of students’ performance in certain unlisted criteria

Instructor A assessed students' performance in categories that were not listed in the grading policies on the syllabi. In Freshman English, Instructor A graded students' in-class performance, including attendance, participation in discussion, and the profundity of the opinions they expressed in class. In Guided Writing, Instructor A graded students' attendance. These categories were not listed in the grading policies on the syllabi of the two courses. In Freshman English, Instructor B did not list students' participation in her grading policy; however, she did grade students' in-class participation, such as the regular efforts they devoted to the class and their initiative in answering questions. She graded this by impression.

(2) The omission of certain planned grading criteria

Both instructors omitted some planned grading categories listed on the syllabi when they actually graded, due to practical factors. For Instructor A, categories listed in the grading policies on the syllabi sometimes served more as measures to promote students' learning than as actual assessment components. In Guided Writing, Instructor A believed that assessment was to promote learning, so she listed activities that were not actually graded, such as "textbook exercises" and "participation in classroom discussion", in the grading policies. This was because she wanted students to know that these activities were important and wanted them to devote more effort to them. In this sense, for Instructor A, assessment served the function of classroom management. As Stiggins and Bridgeford (1985) point out, oral questions serve as a tool of classroom management to maintain students' attention. Like oral questioning, the criteria omitted by Instructor A were actually tools to promote students' active involvement in classroom assessment activities.

In Guided Writing, Instructor A listed journal writing and in-class writings in the grading policies; however, she did not assign these writings a score or a letter grade in practice, and she simply read them through. This was because she thought that scores and letter grades did little to help promote students' writing ability. Another reason was that the contents of the journals were students' diaries, and it was difficult to grade diaries.

In a similar vein, Instructor B originally assigned 7 percent to the writing category in Freshman English. In practice, the two tasks representing this 7 percent were not graded. Only a few tapes of self-introduction were played, so the drafts of self-introduction were not graded, and journal writing was canceled following the negative suggestion of her colleague, so it was not graded either.

This phenomenon reveals that the instructors tended to exclude subjectively scored categories from their grading policies. This may be because it is difficult to set specific criteria for grading students' in-class performances, and instructors often grade them by impression. At the same time, the phenomenon shows that these subjectively scored categories reside at the very center of the instructors' beliefs about assessment. They consider these categories important in grading: even if they do not include them in the grading policies originally, they grade students' performance in these categories in practice.

The gap also reveals that teachers need to make their grading criteria explicit. It is pointed out in the research that grading criteria should be fully communicated to students so that they will know how to meet the teacher's expectations and successfully fulfill the requirements (Phye, 1995). The clarification of criteria is very important in promoting students' learning. As Brookhart (1999) contends, "students prefer, value, and work harder for learning that is assessed by comparisons with clear standards" (p. 72).

The Two Instructors’ Follow-ups after Assessment

Both instructors provided some follow-up instruction after the assessment of students' performance. These follow-ups included sharing students' good work, re-teaching points that confused students, taking remedial measures to help students learn, and offering feedback.

Sharing

Both instructors emphasized the positive washback effect of sharing following the assessment. In Freshman English, both instructors shared well-done exercises or quizzes by a group or a student with the other students. In Guided Writing, both instructors asked students to discuss or share their good writings with their classmates.

Re-teaching

Instructor A spent much time re-teaching writing after assessing students' writing drafts of group oral reports and group role-plays in Freshman English. Re-teaching often followed the sharing of the better students' writings. After the sharing, she pinpointed and corrected errors in students' writings, provided suggestions, and taught them how to improve their writings. Once she used nearly half of the class time to re-teach students how to refine their writings. In Guided Writing, Instructor A also used most of the class time to re-teach writing skills. After students' in-class writing every other week, she spent almost the entire class clarifying students' erroneous grammatical concepts, pinpointing improper style, and offering suggestions for their writings. Instructor B did not re-teach students how to write in Freshman English. In Guided Writing, she re-taught important concepts when students gave wrong answers to exercises.

Taking Remedial Measures

After assessment, both instructors took remedial measures to help students make up for poor performance. In Freshman English, both instructors encouraged students to earn extra points by visiting the language lab to help them pass the course. Instructor A allowed students to revise their writings and improve their scores after she re-taught how to compose a good piece of writing in Freshman English and in Guided Writing. She would pick the better writings when she graded students’ summative performance at the end of the semester. If students did very poorly on the mid-term, Instructor A gave them extra points by offering them opportunities to memorize good passages or sentences in the text so that they would not fail.

Instructor B allowed students to voluntarily redo the end-of-unit reading comprehension quizzes to make up for poor scores in Freshman English. She would take the make-up scores into consideration when deciding on students’ final scores.

After the mid-term exam in Guided Writing, she found that no student had answered a particular question correctly, and she thought this was because students did not understand what the question was asking. Therefore, she re-explained the question and asked students to answer it again. The re-teaching and remedial measures show that the two instructors do not simply assign a score or pick out errors in students’ writings; they value the positive washback effect of assessment. This also shows that they attend to students’ self-esteem and motivation for learning English by providing many opportunities for students to experience success in assessments.

Offering Feedback

Both instructors often offered feedback after assessment. This feedback included encouragement as well as cognitive, suggestive feedback. Both instructors gave much encouragement for students’ good performance in assessment activities.

Instructor A offered encouragement when students wrote good drafts or performed well in group oral reports and group role-plays. For example, when a group performed well in an oral report, she said, “Very good. Very clear.” She also pinpointed the strengths of students’ group oral reports, saying, “Very good. The answer is to the point.” In Guided Writing, she encouraged students’ good performance in peer proofreading by saying, “I found some of you did a good job in peer proofreading. You pinpointed errors in your partners’ writings. Very good.”

Instructor B gave a great deal of encouragement to students’ good performance in the two courses. In Freshman English, she gave students encouragement after their self-introductions, translations of the texts, and group role-plays. For example, after a student’s self-introduction, she said, “It’s very good. Don’t feel embarrassed.” She further pinpointed the strengths of the student’s performance concretely: “You gave us a lot of very interesting details about you, your family, and many other related matters about you. Very good.” In Guided Writing, she gave encouragement to a student for a well-written paragraph. She said, “It gives a very detailed description. Let me show it to your classmates. It’s going to be very helpful to them.”

Besides encouragement, both instructors gave students cognitive, suggestive feedback to point out directions for improvement. Usually, Instructor A’s suggestive feedback preceded or followed her encouragement in the two courses. For example, after a group’s role-play in Freshman English, she said, “The script is quite good and very dramatic. You are supposed to give a very dramatic ending, but it seems that you are not sure when the play should end. It’s a flaw in your play.” The feedback concretely pointed out the drawback in the group’s performance. In Guided Writing, when she gave feedback on a student’s in-class writing, she said, “Well, it’s a good method to develop your main idea by telling a story of your own. However, there is something wrong with the tense.”

Instructor B seldom gave students cognitive feedback; most of the time, she provided encouragement. Only one episode was observed in which she offered suggestions on a piece of writing: “It’s already good enough. If we give one hundred points to the paragraph, and we adopt John’s (a pseudonym) suggestions, it will become more than perfect.”

Within the twenty hours of classroom observation of each instructor’s assessment practices, Instructor A gave encouragement twelve times and cognitive feedback preceded or followed by encouragement thirty-three times. Instructor B provided encouragement twenty-three times and cognitive feedback followed by encouragement only once. Compared with Instructor B, Instructor A laid more emphasis on cognitive feedback. This could be explained by the instructors’ academic backgrounds.


Although both instructors stated that they considered expressiveness and thoughts more important than linguistic accuracy, it was evident that Instructor A laid heavier emphasis on accuracy than Instructor B did. Instructor A majored in linguistics, so it was reasonable that she attended more closely to the language and organization of the writing scripts and the role-plays. Instructor B majored in literature, and she might have emphasized students’ thoughts over accuracy.

Both instructors administered follow-up activities based on the results of assessment, which contributed to the positive washback effect of assessment. As Guskey (2003) contends, to contribute to learning, assessment must be followed by “high-quality, corrective instruction designed to remedy whatever learning errors the assessment identified.” Besides, the two instructors provided students with many opportunities to engage in make-up assessments after the follow-up activities. This consolidated learning by giving students second chances to demonstrate their new competence and knowledge (Guskey, 2003).

The Difference between the Instructors’ Beliefs about Assessment

Generally speaking, both instructors held similar beliefs about designing, implementing, and evaluating assessment. There were, however, minor differences between the two instructors with regard to their beliefs about assessment, as reflected in the percentage of their grading components and the nature of their follow-ups after assessment.

(1) The percentage of their grading components

For Instructor A, assessment is mainly a tool to promote learning. She laid much more emphasis on how much students actually learned than on the scores or letter grades they received. Two aspects of Instructor A’s assessment practices illustrated this proposition. First, although Instructor A spent much time assessing students’ four skills in Freshman English, she assigned only 10 percent of the final grade to the four-skill-related assessment activities because she thought that these activities were more like exercises than actual assessments; she emphasized what students learned in these activities. This finding is inconsistent with the suggestion in the research that, to make assessment fair, the percentages in students’ final scores should reflect the amount of instructional time invested in different instructional foci (McMillan, 1997).

Instructor B decided on the percentages of the grading components for the four-skill-related assessment activities in Freshman English according to the instructional time the class spent on those assessments. This is consistent with some teachers’ belief in “grades as currency” (Brookhart, 1993). The concept denotes that grades are “earned” by students, reflecting the activities they engage in or the effort they invest, and that grades do not solely represent students’ achievement (Brookhart, 1993). In this sense, grades are considered an “academic token economy and function in classroom management as the reward for work done” (Brookhart, 1993, p. 142).

Considering the negative effects of excluding students’ effort and the time invested in the four-skill-related classroom assessments, Instructor A’s percentages of grading components may need revision to ensure the fairness of the assessment. It is reasonable to suggest that teachers examine the agreement between their grading percentages and the time students spend on different grading components after a period of instruction. Adjustments to the grading percentages should be made to reflect the instructional time in order to enhance the fairness of the assessment (McMillan, 1997).

(2) The nature of follow-ups after assessment


Compared with Instructor B, Instructor A spent much more time re-teaching writing in Freshman English. She also attended to the accuracy, style, organization, and content of the writings. This was because Instructor A thought that writing was one of the objectives of Freshman English, and that if she did not teach writing, students would never know how to write a paragraph in English. On the other hand, although Instructor B picked out grammatical errors in students’ writings, she laid more emphasis on content and thoughts than on accuracy in students’ drafts of oral presentations.

In Guided Writing, Instructor A’s follow-ups were mainly re-teaching. She attended to almost every error in accuracy or word usage in students’ writings. Instructor B, however, did not re-teach writing; instead, she discussed the writings with students, and the focus of the discussion was the content rather than the linguistic accuracy of the writings. As with the two instructors’ feedback, this phenomenon may also be explained by their academic backgrounds. Instructor A majored in linguistics, so it is reasonable that she emphasized form in students’ writings; Instructor B majored in literature, so she may have emphasized students’ thoughts over linguistic accuracy. However, further research is needed to clarify and establish the relationship between teachers’ academic backgrounds and their beliefs about assessment and feedback, so that a more holistic understanding of teachers’ beliefs about assessment may be established.

Summary

This chapter discusses the major findings about the two instructors’ beliefs about assessment. First, generally speaking, the two instructors’ assessment practices were guided by their beliefs about assessment, and four major beliefs underlay both instructors’ views of assessment. However, some restricting factors might have hindered the instructors’ assessment activities, including the need to keep to the instructional schedule, time limitations, and students’ failure to accomplish the assigned assessment activities. The instructors employed several strategies to cope with these constraints, including canceling, postponing, integrating, and simplifying assessment activities. When it came to grading, the two instructors employed both subjective and objective assessment tools to assess students’ performance, and they considered both students’ academic and nonacademic performance, such as effort. Besides, after assessment, the two instructors administered follow-up activities such as re-teaching, taking remedial measures, sharing, and offering feedback to help consolidate students’ learning. Furthermore, the two instructors’ beliefs about assessment were very similar, except in two aspects: the percentage of their grading components and the nature of their follow-ups after assessment.
