
2.3 Previous studies of using AWE systems in EFL contexts

2.3.1 Previous studies on students’ perceptions

The following three studies investigated students’ perceptions of the use of My Access in writing classes by means of questionnaires, and all three showed that most students were not satisfied with the fixed and repetitive feedback provided by the system. Chen & Cheng further pointed out that the instructors’ attitudes toward and familiarity with the system might also affect the effectiveness of the AWE system and students’ attitudes toward its use in class.

2.3.1.1 Yang (2004)

Yang looked into the use of the AWE system My Access in Taiwanese college classroom settings. There were approximately 300 subjects from Freshman English classes, English Composition classes, and a group of students from a self-study program. At the beginning, all subjects received a workshop on how to use the program, but some classes then had instruction on the use of My Access from their teachers while others did not. Therefore, Yang further divided the subjects into five groups: WI and WN (composition classes with or without My Access instruction), EI and EN (English classes with or without My Access instruction), and S (self-study students without instruction).

After using My Access for one or two semesters, questionnaires were administered to both the students and the teachers to further explore their attitudes and perceptions toward using My Access in the classroom. Most subjects had no difficulty in using the program and more than 60% of the students considered it user-friendly.

The results also show that 91% of the subjects who used My Access a few times or more per month found the program helpful to their English writing. The more often students used the program, the more positive their attitudes toward it were.

Most students liked My Access for its revision function (89%), immediate scores and feedback (86%), writing portfolio (83%), instructions for improvement (77%), and grammar suggestions (71%). They also felt their writing improved most in the organization domain (61%) and the focus/meaning domain (52%).

However, opinions differed about the feedback provided by My Access. About half of the subjects said the comments were easy to comprehend and that they would incorporate the feedback into their writing. Nevertheless, only a small percentage of students (13%) considered the scores they received from My Access appropriate, while more than half were uncertain about them. The reasons why students did not trust the scores were that (1) the computer feedback was too general, and (2) there was no clear information for further improvement. Students also pointed out the need for the instructors’ guidance in the writing process in addition to the computer program.

The teachers held positive attitudes toward the system, but they were not sure about its effectiveness in improving students’ writing and stimulating their motivation.

To integrate AWE programs into the EFL writing class, Yang proposed that teachers’ guidance is indispensable and that the idea of “autonomous learning” should be introduced to students. Moreover, instructors should hold sharing activities to encourage cooperative learning among students.

Yang also pointed out possible improvements for the system: detailed guidance or a writing sample before students write their essays, self-study supporting mechanisms, more detailed instruction in the writing feedback, more options for creative writing, and more scoring scales (not just the 6-point scale). In brief, Yang concluded that no present system can replace the role of human teachers and that every program has its advantages and disadvantages. It is the instructors’ job to find the best ways to help students learn in the classroom.

2.3.1.2 Chen & Cheng (2006)

Chen and Cheng explored the use of My Access in college EFL writing classes.

There were 68 students in total from three different classes. The two researchers used questionnaires to investigate students’ attitudes toward the program and its effectiveness in grading essays and providing feedback. They also collected writing samples and conducted group interviews to triangulate the questionnaire results.

Chen & Cheng first discussed students’ reactions to My Access as an essay grader. Surprisingly, except for the immediacy of feedback, most students reacted negatively to the grades provided by the system. This finding is similar to those in Yang’s (2004) study. None of the students considered the scores adequate, and almost half found the feedback given by the system not helpful at all.

The researchers further looked into students’ self-reports to find out the reasons for their dissatisfaction. Firstly, students doubted the fairness of the scores. Some of them found they could trick the program by writing longer passages or using more transitional phrases. The researchers also provided a writing sample lacking coherence to show that there are indeed design flaws in My Access. Secondly, students considered the feedback given by My Access too general and similar each time. They therefore expected their instructors to give more individual and detailed feedback. In addition, once students received an “off-topic” comment, the program gave them no further explanation for improvement, which caused confusion and frustration.

Then Chen & Cheng looked into students’ reactions to the writing tool functions provided by My Access: My Editor, Thesaurus, Word Bank, and Online Portfolio. They found that students generally did not think these functions helped their writing process. Some seldom used the functions, while others found them quite limited.

Seeing that students from the three classes held slightly different attitudes toward the program, Chen & Cheng also looked into the ways the instructors used My Access in their classes. In addition, the instructors’ technological skills and familiarity with the system also influenced students’ reactions to the program.

Chen & Cheng concluded that these programs cannot replace the role of human teachers and that no single computer-based program is without flaws. Thus, it is important for researchers and teachers to evaluate the programs, find out their strengths and weaknesses, and make the best use of them.

2.3.1.3 Chen & Cheng (2008)

The researchers adopted a classroom-based approach to investigate students’ perceptions of AWE and whether the instructors’ attitudes toward it influenced the effectiveness of AWE. The subjects were 68 English majors from three different classes, taught by three different instructors. The AWE system My Access! was used in this research. In addition to questionnaires designed to survey students’ responses, the researchers also interviewed the three instructors to find out the ways they integrated AWE into their writing classes. How the instructors used the AWE scores and feedback to improve students’ work was especially worthy of note. Both Instructors A and B had a two-stage design when implementing the AWE system: they required students to work with My Access! and then use the scores and feedback to improve their drafts. After students got a score of 4 out of 6, they submitted their essays; Instructor A then gave students written feedback, and there was also a peer review. Instructor B allowed students to use the system as much as they wanted. After students submitted their essays, she conducted individual teacher-student conferences to help students improve their work. Both of these instructors appeared to have less confidence in the AWE system and put more emphasis on human feedback.

Instructor C, however, seemed to have more trust in My Access! She did not give students guidelines or feedback during the writing process; instead, she let the system do the scoring and asked students to do online peer review by themselves.

As for the period of using My Access! in class, Classes A and C both used it for 16 weeks, while Class B used it for only six weeks. Instructor B had little trust in the automated scores and feedback, since the vagueness of the feedback only increased her workload: she needed to provide more specific details herself. Besides, Instructor B and her class encountered technical problems, which frustrated them. Instructors A and C did not have problems with My Access!, but Instructor A pointed out that the program imposed some constraints that would limit students’ creativity and idea development; therefore, she gave them more freedom when writing their essays. Instructor C, however, used the program as a tool to measure students’ writing performance and determine the final exam grades. Nevertheless, students complained that they doubted the fairness of the automated scores and feedback. Instructor C then allowed students to submit revisions of their essays for her re-assessment, which showed that she too came to have less confidence in the AWE program because of students’ complaints.


Chen & Cheng summed up that the following four factors influenced the use of the AWE program in class: the teachers’ attitudes toward AWE scores and feedback, their views on human feedback, their familiarity with the AWE program, and their own ideas of teaching writing.

Another finding in this research concerns students’ perceptions of AWE, and the results were similar to those of the previous study published in 2006: about half of the students found the program moderately or slightly helpful, whereas the rest considered it not helpful.

The different ratings among the three classes were noteworthy: compared with the other two classes, 86% of Class A found the program more or less helpful in improving their writing. The researchers attributed this result to the comparatively successful implementation of the program in Class A.

As for students’ responses to the AWE scores and feedback, 83% of the students in Class B showed disagreement. This might result from the instructor’s negative attitude toward them. Some 57% in Class A and 42% in Class C also reported their distrust of the program for the following reasons: its preference for longer essays, its strong emphasis on using transition words, its neglect of coherence and content development, and its discouragement of unconventional ways of writing.

Based on the findings, the researchers concluded that how the AWE program was implemented in class would strongly influence students’ perceptions and its effectiveness. The way automated scores were used, the need for human feedback, students’ language proficiency, and the purpose for learning writing were issues that needed to be taken into consideration. The researchers further emphasized the importance of giving human feedback when implementing an AWE program in learning, since automated feedback is unable to solve students’ individual problems, nor can it attend to coherence and idea development. Besides, the lack of meaning negotiation might frustrate students, since they need a real audience to improve their writing. An AWE program might be competent at providing feedback on forms, but advanced learners would expect meaning-focused responses. The researchers also addressed the need to strike a balance between form and meaning in second language writing instruction. The role of the teacher is therefore essential in AWE writing environments, so that learners and teachers can make full use of AWE programs.