
CHAPTER THREE
METHODOLOGY

This chapter delineates the methodology of the current study. In order to have a complete understanding of teachers’ beliefs and classroom assessment, qualitative research methods are employed in the study. The three sections in this chapter specify details regarding the participants, instruments of data collection, and methods of data analysis.

Participants

Participants in the study were two instructors in the English department of a prestigious university in northern Taiwan.

The first instructor (Instructor A), who majored in linguistics, was teaching five courses in the semester when the study started: two “Freshman English” classes, two “Basic Aural/Oral Training” classes (one for English minors and one for Social Education majors), and “Guided Writing” for English minors. The second instructor (Instructor B), who majored in literature, was also teaching two “Freshman English” classes, along with “Basic Aural/Oral Training” and “Guided Writing” for English majors and “Pronunciation Practice” for English minors. There were three reasons why these two instructors were chosen. First, they were teaching three identical courses, though to different groups of students, so it was easier and more meaningful to discuss and compare the two instructors’ beliefs about assessment in identical courses. Second, the two instructors were most willing to help and kindly allowed the author to observe their classes. Third, the difference in educational backgrounds between the two instructors might contribute to differences in their beliefs and practices of assessment, and this difference deserved discussion and exploration.

Four classes were observed in this study. Originally, the author planned to observe all three of the identical courses taught by both instructors, namely “Freshman English”, “Guided Writing”, and “Basic Aural/Oral Training”. However, after five weeks of observation, the data gathered from Instructor A’s three classes were beyond what the author could handle. The author therefore decided to exclude “Basic Aural/Oral Training” from the research, because that class followed the textbook closely, which led to comparatively inflexible assessment activities and fixed interactions between the students and the instructor. Accordingly, “Basic Aural/Oral Training” taught by Instructor B was not observed either. A total of four classes were thus observed.

One year before the study began, this university hosted a four-year school-based project, the “Construction and Implementation of College English Language Curriculum Project” (referred to as the “Freshman English Project” in the remainder of this study), sponsored and directed by the Ministry of Education with the goal of enhancing the English proficiency in listening, speaking, reading, and writing of its non-English-major freshmen. Fifteen Freshman English classes were selected to take part in the project. Before the semester started, the students in these classes took a placement test in listening and speaking, based on the GEPT Intermediate Level test (see note 1); these scores, together with the reading and writing scores the students had obtained in the Joint College Entrance Examination, were used to place the fifteen classes in four levels: “High”, “High-Intermediate”, “Low-Intermediate”, and “Elementary”.

1 The GEPT (General English Proficiency Test) is aimed at testing the general English proficiency of students at all school levels and of the general public. It is criterion-referenced and tests all four skills, with five levels: Elementary, Intermediate, High-Intermediate, Advanced, and Superior. Those who pass the Intermediate Level test have basic English communication ability, roughly equal to the proficiency of most high-school graduates.

In order to enhance students’ English proficiency in the four skills, instructors at all levels were expected to employ instructional activities related to the four skills in class and, in addition, to assign each student a score for performance in each of the four skills. Furthermore, the instructors of all levels agreed on a general framework for the grading policy. A student’s final score in Freshman English comprised two parts: an achievement score and a proficiency score. The achievement score, made up of the scores from the four-skill activities and from the mid-term and final exams, represented 70 percent of the final score. This general guideline was decided by the instructors of all levels, but instructors of the same level retained flexibility within it. At the beginning of the semester, instructors teaching the same level met to decide on the instruction schedule, including which units in the textbook would be taught and which would be assigned as outside readings. Within the 70 percent representing achievement, every instructor could freely decide the percentages given to the four-skill activities and to the mid-term and final exams, and could freely add other items to the achievement category. Instructors of the same level also wrote unified mid-term and final exams for their level as a team, and they met periodically during the semester to exchange teaching experiences and opinions. At the end of the semester, students in all fifteen classes took a proficiency test in listening and reading, both to check their improvement over the semester and to serve as a yardstick for comparing students’ performance across levels. This proficiency score accounted for the remaining 30 percent of a student’s final score. Both instructors involved in the current study taught Freshman English classes at the low-intermediate level.
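The 70/30 weighting of the Freshman English final score can be sketched as simple arithmetic. The function below is only an illustration: the 40/30 internal split of the achievement component and the sample scores are hypothetical, since each instructor set those internal weights individually.

```python
# Hypothetical sketch of the Freshman English grading framework:
# the achievement component (four-skill activities + mid-term and final
# exams) counts for 70% of the final score, and the end-of-semester
# proficiency test for 30%. The internal split of the achievement
# component (here 40% activities, 30% exams) is an assumption, not the
# instructors' actual weights, which each instructor decided freely.

def final_score(activity_score, exam_score, proficiency_score,
                activity_weight=0.4, exam_weight=0.3):
    """All scores on a 0-100 scale; activity_weight + exam_weight = 0.7."""
    assert abs(activity_weight + exam_weight - 0.7) < 1e-9
    achievement = activity_weight * activity_score + exam_weight * exam_score
    proficiency = 0.3 * proficiency_score
    return achievement + proficiency

print(final_score(80, 75, 70))  # a hypothetical student's final score
```

Whatever internal weights an instructor chose, the achievement portion was capped at 70 percent, which is why the function checks that the two achievement weights sum to 0.7.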

Compared with Freshman English, the Guided Writing classes observed in this study operated under few constraints. Instructor A’s Guided Writing was for English minors, and these students were grouped randomly rather than by proficiency level. Instructor B’s Guided Writing was for English majors, whose writing proficiency ranged from lower to higher levels. The institution imposed no constraints or requirements on either class.

Among the four classes observed in this study, two were taught by Instructor A: “Freshman English” for students placed in the low-intermediate level and “Guided Writing” for English minors. The other two were taught by Instructor B: “Freshman English” at the low-intermediate level and “Guided Writing” for English majors.

Data Collection

Data collected in the study were from three major sources, including classroom observations, interviews with the instructors, and other related documents.

Classroom Observations

In order to gain a more complete understanding of the instructors’ beliefs and classroom assessment, each class of both instructors was observed five times. This yielded 20 hours of classroom observation of each instructor’s assessment practices, 40 hours in total. The observation of Instructor A’s two classes began after the mid-term exam of the first semester; two weeks after finishing those observations, the author started the classroom observations of Instructor B’s classes. Table 3-1 outlines the schedule of the classroom observations of the two instructors’ assessment practices.


Table 3-1 The schedule for classroom observations

                         Instructor A                              Instructor B
                         Freshman English    Guided Writing        Freshman English    Guided Writing
Length of observation    Oct. 11 ~ Nov. 15,  Oct. 15 ~ Nov. 12,    Nov. 30 ~ Dec. 27,  Nov. 30 ~ Dec. 27,
                         2002                2002                  2002                2002
Amount of observation
per week                 2 hours             2 hours               2 hours             2 hours
Total amount of
observation              10 hours            10 hours              10 hours            10 hours

Both instructors’ assessment practices in their two classes were videotaped, and the parts concerning assessment (e.g., the instructors’ talk revealing their beliefs about assessment and their in-class comprehension checks) were transcribed for further analysis.

Interviews with the Instructors

Interviews with both instructors were the major source of data on the two instructors’ beliefs. There were three formal interviews with each instructor, each lasting about one hour; all six were conducted in the second semester. The first formal interview followed pre-designed, general interview questions (See Appendix A), exploring each instructor’s general beliefs about language learning, assessment, and the designing, implementing, and evaluating of assessment. It was followed by two further interviews concerning Freshman English and Guided Writing respectively (See Appendix B), based on the syllabi and the classroom observations of the two courses. In these two interviews, the author asked both instructors in detail about their beliefs underlying the design of certain assessment activities and the grading policies, as well as their beliefs underlying some assessment episodes, extracted from the transcripts of the classroom observations, that deserved further exploration.

The formal interviews were semi-structured: the interviewer (the author) prepared a list of questions serving as the basic framework for each interview, but if the instructors were expressive and provided information the interviewer had not anticipated, the interviewer let the question-and-answer flow proceed naturally rather than sticking to a fixed set of questions. Besides the formal interviews, there were also several informal interviews, via e-mail or on the phone, whenever the author found questions from previous interviews that required clarification. Table 3-2 shows the focuses of the six formal interviews, and Table 3-3 the focuses of the informal interviews with both instructors.

Table 3-2 Focuses of the formal interviews

Sequence    Focus of the interview                                        Time
Instructor A
1           Beliefs about language, assessment in general, and            March 12, 2003
            designing, implementing, and evaluating assessment
2           Beliefs underlying the Freshman English course                April 11, 2003
3           Beliefs underlying the Guided Writing course                  April 25, 2003
Instructor B
1           Beliefs about language, assessment in general, and            March 18, 2003
            designing, implementing, and evaluating assessment
2           Beliefs underlying the Freshman English course                March 28, 2003
3           Beliefs underlying the Guided Writing course                  April 15, 2003

(7)

Table 3-3 Focuses of the informal interviews

Sequence    Focus of the interview                                        Instrument
Instructor A
1           Requirements of the Freshman English course                   Face-to-face questioning
2           Grading policies of the Freshman English course               Face-to-face questioning
Instructor B
1           Requirements and grading policies of the Freshman             E-mail
            English course
2           Requirements and grading policies of the Freshman             E-mail
            English course
3           Beliefs underlying the Guided Writing course                  E-mail

Interview Questions

Interview questions about both instructors’ general beliefs about language, assessment, and the design, implementation, and evaluation of assessment (See Appendix A) were used in the first formal interview with both instructors. The questions were divided into five categories: “beliefs about language and language learning”, “beliefs about assessment in general”, “beliefs about designing assessment”, “beliefs about implementing assessment”, and “beliefs about evaluating assessment”.

As the literature suggests, how teachers assess students and how they interpret the results of assessment are influenced by their beliefs about language, learning, and the subject matter (Brookhart, 1997; Rueda & Garcia, 1994; Genesee & Upshur, 1998). Thus, to understand teachers’ beliefs and classroom assessment holistically, their beliefs about language, language learning, and the subject matter should be explored. The first category of interview questions, “beliefs about language and language learning”, was designed to help the author trace the factors that mold the two instructors’ beliefs. The second category, “beliefs about assessment in general”, was designed to elicit the instructors’ beliefs about the functions of assessment, grading, and the difficulties they met in assessment.

The other three categories were designed to elicit the instructors’ beliefs about designing, implementing, and evaluating assessment. Clark and Peterson (1986) identify three categories of teachers’ thoughts: teacher planning, teachers’ interactive thoughts and decisions, and teachers’ theories and beliefs. Teacher planning includes teachers’ preactive (before instruction) and postactive (after instruction) thoughts, while teachers’ interactive thoughts refer to their thoughts during instruction. Clark and Peterson (1986) contend that teachers’ thoughts in planning are qualitatively different from those arising when they interact with students. However, the distinction between preactive and postactive thoughts is not clearly drawn, because teachers’ reflections after classroom instruction influence and guide their thinking for future classes; the two kinds of thinking thus inform each other cyclically, obscuring the distinction between them (Clark & Peterson, 1986). Teachers’ preactive, postactive, and interactive thoughts are in turn guided by the third category of teachers’ thought processes, “teachers’ theories and beliefs” (Clark & Peterson, 1986). Drawing on this three-phase classification, the interview questions were designed to explore the instructors’ beliefs before, during, and after assessment, namely, their beliefs about designing, implementing, and evaluating assessment. By incorporating beliefs about language and language learning, beliefs about assessment in general, and the three kinds of beliefs involved in teachers’ assessment, the interview questions may help yield a holistic picture of the instructors’ beliefs and practices in classroom assessment. To make the interviews more valid, the interviewer briefly explained the term “assessment” to the instructors if they did not know its definition.

Other Related Documents

Besides the classroom observations and the interviews with the instructors, the other source of data was related documents from the instructors and the students of the four classes. Documents from the instructors included their syllabi for the Freshman English and Guided Writing courses (See Appendix C); the syllabi served as the basis of the interview questions about the requirements and grading policies of both instructors’ classes.

Documents from the students included their journals and their in-class and take-home writings (See Appendix D). The instructors’ feedback on these pieces of writing helped the author understand the instructors’ beliefs about grading writing. Together, these documents gave the author a more complete understanding of the instructors’ beliefs in general and their beliefs about assessment in particular.

Procedures of Data Collection

The process of data collection started with the observations of the instructors’ teaching practices; the related documents were collected during the classroom observations. After the two-month observation, the parts related to assessment were transcribed as the basis of data analysis. The interviews with the instructors were conducted in March and April 2003; they were audiotaped and then transcribed. The analysis of the classroom observation data started concurrently with the interviews. Finally, the results of the analyses of the classroom observations, the interviews, and the other related documents were compared and discussed together, thereby producing a holistic picture of the instructors’ beliefs and practices of classroom assessment.

Data Analysis

In the current study, the data for analysis included the transcriptions of the interviews with the instructors and of the instructors’ assessment practices, as well as other related documents: the syllabi for Freshman English and Guided Writing, the instructors’ quiz sheets, the checklist for writings, and students’ writing pieces. These were analyzed following the methods of qualitative data analysis proposed by Bogdan and Biklen (1992), who contend that qualitative data analysis involves three distinct activities: discovering, coding, and discounting the data. In these activities, the researcher looks for emerging themes in the data and constructs typologies, codes these themes and typologies into categories, and finally interprets the coded categories in the context where they were gathered.

The current study drew on these methods. First, the author read the data several times to get a rough picture of the instructors’ assessment activities and of possible themes related to assessment. The data were then coded into categories of classroom assessment activities, such as quizzes, discussing the answers to exercises, and dealing with students’ difficulties. Some teaching episodes revealed the instructors’ beliefs about assessment, and these were also labeled, for example as “explaining the format of the mid-term exam”, “reviewing the important words in the lesson”, or “providing strategies to help students solve their difficulty”. Labels conveying similar meanings or concepts were then subsumed under a more general label, such as “helping students experience success in assessment”; in a similar fashion, each new label could be subsumed under a still more general one. This categorizing and labeling made the classroom observation data more organized and easier to synthesize, thereby facilitating the exploration of the instructors’ beliefs about assessment in their teaching practice.
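The label-subsumption step described above can be sketched as a small lookup. This is a minimal illustration, not the study’s actual coding scheme: the mapping reuses the example labels quoted in the text, and any label outside the hierarchy is simply left as it is.

```python
# Minimal sketch of subsuming specific labels under a more general one.
# The labels below are the examples quoted in the text; the hierarchy
# itself is a hypothetical illustration of the coding step, not the
# study's full coding scheme.

label_hierarchy = {
    "helping students experience success in assessment": [
        "explaining the format of the mid-term exam",
        "reviewing the important words in the lesson",
        "providing strategies to help students solve their difficulty",
    ],
}

def general_label(specific_label):
    """Return the more general label that subsumes a specific one."""
    for general, specifics in label_hierarchy.items():
        if specific_label in specifics:
            return general
    return specific_label  # labels not yet subsumed stay as they are

print(general_label("reviewing the important words in the lesson"))
```

Because a general label can itself appear as a specific label under a still more general one, repeated application of such a lookup yields the cyclical, bottom-up categorization the text describes.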

In addition, the teaching practices of each observed class were labeled and listed in chronological order, so that each class had its own scenario of classroom assessment activities. Ordering the instructors’ classroom assessment activities in this way made it possible to detect inconsistencies between the instructors’ plans for assessment and their actual practices. For example, one instructor announced at the beginning of a class that it would start with checking the answers to the exercises; however, it turned out that the students’ assignment was discussed first. Such inconsistencies helped identify contextual factors that constrained the instructors’ assessment practices.

The results of the analyses of the interview transcriptions, the classroom observations, and the other related documents helped produce detailed descriptions of the instructors’ beliefs in general and their beliefs about classroom assessment in particular. In addition, the interview data were compared with the classroom observation data, so that the congruence and incongruence between the instructors’ beliefs about assessment and their assessment practices could be identified and discussed, along with the differences between the two instructors’ beliefs about assessment and the possible reasons for those differences. The current study thus explored both instructors’ beliefs about assessment; the constraints and difficulties the instructors faced in assessment and their coping strategies; the instructors’ follow-ups after assessment; and the differences in assessment between the instructors. Hopefully, the study will yield an in-depth analysis of the beliefs and practices of classroom assessment of university instructors of English.

