International Journal of Applied Linguistics & English Literature ISSN 2200-3592 (Print), ISSN 2200-3452 (Online) Vol. 2 No. 1; January 2013

Copyright © Australian International Academic Centre, Australia

Can Automated Writing Evaluation Programs Help Students Improve Their English Writing?

Pei-ling Wang

Department of Applied Foreign Languages, National Kaohsiung University of Applied Sciences 415, Chien-Kung Road, Kaohsiung 807, Taiwan

Tel: 8867-3814526-3276 E-mail: peiling@cc.kuas.edu.tw

Received: 01-08-2012  Accepted: 10-09-2012  Published: 01-01-2013
doi:10.7575/ijalel.v.2n.1p.6  URL: http://dx.doi.org/10.7575/ijalel.v.2n.1p.6

Abstract

This study explores the effect of automated writing evaluation (AWE) on Taiwanese students' writing, and whether students' improvement is related to their perception of the program. Instruments included a questionnaire, 735 essays analyzed in Criterion, and a pre/post essay. Two classes totaling 53 college students participated in the study. Descriptive statistics, paired-samples t-tests, Pearson correlations, effect sizes, and regression were used to analyze the data. Results showed that students improved significantly in terms of essay length and the scores awarded by both the machine and the human raters. However, among the five essays, only the first showed a significant level of consistency between student improvement and student attitude, and the correlation declined dramatically after the first essay. In conclusion, this study may be of importance in confirming the usefulness of AWE functions such as recursive revising and instant scoring, as well as in giving teachers a better understanding of how student beliefs about the Criterion program might relate to their writing performance.
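For readers unfamiliar with these procedures, the following minimal sketch (in Python with SciPy) shows how a paired-samples t-test, a Cohen's d effect size, and a Pearson correlation of the kind named above can be computed. It is an illustration only, not the study's actual analysis code; the pre, post, and attitude arrays are hypothetical placeholders rather than data from this study.

# Minimal sketch only: paired-samples t-test, Cohen's d, and Pearson
# correlation, as named in the Abstract. All data below are hypothetical
# placeholders, not values from this study.
import numpy as np
from scipy import stats

pre = np.array([3.0, 3.5, 2.5, 4.0, 3.0])       # hypothetical pre-test essay scores
post = np.array([3.5, 4.0, 3.0, 4.5, 3.5])      # hypothetical post-test essay scores
attitude = np.array([4.2, 3.8, 3.5, 4.5, 3.9])  # hypothetical questionnaire means

t, p = stats.ttest_rel(post, pre)               # paired-samples t-test on pre/post scores
diff = post - pre
d = diff.mean() / diff.std(ddof=1)              # Cohen's d for paired data (Cohen, 1988)
r, p_r = stats.pearsonr(diff, attitude)         # correlation: improvement vs. attitude

print(f"t = {t:.2f} (p = {p:.3f}), d = {d:.2f}, r = {r:.2f} (p = {p_r:.3f})")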

Keywords: AWE, Criterion, writing

1. Introduction

1.1 The Problem

While many studies have shown that students learn to write by writing (e.g., Brown, 2001; Elbow, 1973; Zamel, 1982), the National Commission on Writing (2003) pointed out that American students practice writing much less than they need to. Similarly, Tsai (2010) claimed that many Taiwanese students' English writing skills are poor precisely because they seldom or never practice writing.

Writing is not only a nightmare for students; reading and correcting students' writing is also very time-consuming for teachers. Especially for Asian teachers, who often have more than 50 students in one class, asking students to write more means devoting extended periods of time to assessing and commenting on student work.

With the advent of the Internet, automated writing evaluation (AWE) has received considerable attention. Proponents of AWE maintain that its immediate feedback can make learning more efficient and interesting (Frost, 2008; LinHuang, 2010; Moseley, 2006; Taylor, 1996); additionally, AWE programs give useful advice on organization and objective feedback on revisions (Grimes, 2008; Phillips, 2007).

On the other hand, critics argue that the validity of AWE programs is doubtful. For example, McCurry's (2010) study showed that the machine did not grade broad, open writing tasks as reliably as human raters. Other studies (Chen, 2006; LinHuang, 2010; Wang & Brown, 2007) also found that machines tended to score higher than human graders.

Given that AWE programs are usually very costly, schools need to know how effective a program is before purchasing a license. Unfortunately, there have been few studies on the use of AWE programs, and their results remain conflicting and inconclusive.

1.2 Importance of the Problem

At present, My Access and Criterion are the two most popular AWE programs in Taiwan. However, there has been little research on the outcomes of these two programs in Taiwanese classroom settings. Studies examining Taiwanese students' attitudes toward such programs are scant, and most of them inspected My Access (Chen & Cheng, 2008; Yang, 2004; Yu & Yeh, 2003) rather than Criterion. Moreover, most previous studies (e.g., Frost, 2008; Moseley, 2006; Otoshi, 2005) examined students' writing improvement in only one genre of essay (e.g., persuasive writing); very little attention has been paid to other rhetorical modes such as process, cause/effect, and comparison/contrast essays. Furthermore,


…semester. However, since the participants of this study were English majors, they were taking other English-related courses and had probably been working on other writing assignments while the study was conducted. Therefore, it would be arbitrary to declare that the improvement in student writing was due exclusively to the use of Criterion. The third finding of this study was that there was no significant relationship between the machine score and student attitude toward the program. In fact, except for the first essay, which showed a positive correlation, the correlations for the other essays decreased considerably, which might indicate that the longer students used the Criterion program, the less useful they felt it was. It remains unclear, however, why students formed such a negative evaluation.

The findings of this study lead to a number of implications. First, an AWE program is a good tool to motivate students to devote themselves to the recursive process of drafting and revising. However, since the machine may not truly understand the content of an essay, teachers had better randomly check student writing samples and consult with individual students to clarify any vague or even incorrect machine messages. Next, considering that the machine might reward a wordy but meaningless essay, teachers need to remind students of this drawback of AWE programs. Teachers should also encourage students to value the quality of their essays more highly than the quantity of words or the machine scores. Furthermore, at the beginning of the class, teachers could show their own positive attitude toward the machine and patiently demonstrate the various functions of the program to increase student confidence in its ability, although it is also important to warn students not to trust the machine scoring blindly. Finally, according to Grimes (2008), “if AWE is used persistently and indiscriminately without a competent teacher or mentor and without authentic human audiences, then it is possible that students’ beliefs about the social nature of writing may be distorted, as critics have feared” (p. 197). Writing teachers who plan to incorporate an AWE program into the curriculum need to consider the importance of meaningful communication between the writer and a real reader. Future studies might explore strategies for integrating peer feedback on revision with the application of AWE programs, which would help mitigate the limitations of AWE use in the classroom setting.

Acknowledgements

Part of this paper was presented at the 2011 Symposium on Second Language Writing, National Taiwan Normal University, Taipei, Taiwan, June 9-11, 2011. The author would like to express her gratitude to the audience for their valuable advice.

References

Brown, H. D. (2001). Teaching by Principles. Addison Wesley Longman.

Chen, C. F. E., & Cheng, W. Y. E. (2008). Beyond the design of automated writing evaluation: Pedagogical practices and perceived learning effectiveness in EFL writing classes. Language Learning & Technology, 12(2), 94-112.

Chen, H. J. (2006). Examining the scoring mechanism and feedback quality of My Access. Proceedings of Tamkang University Conference on Second Language Writing. Taipei, Taiwan: Tamkang University.

Chen, H. J., Chiu, T. L., & Liao, P. (2009). Analyzing the grammar feedback of two automated writing evaluation systems: My Access and Criterion. English Teaching and Learning, 33(2), 1-43.

Cheng, W. Y. (2006). The Use of a Web-based Writing Program in College English Writing Classes in Taiwan—A Case Study of MyAccess. Unpublished master's thesis, National Kaohsiung First University of Science and Technology, Taiwan.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Creswell, J. W. (1994). Research design. Thousand Oaks, CA: SAGE Publications.

Dikli, S. (2006). An overview of automated scoring of essays. The Journal of Technology, Learning, and Assessment, 5(1), 1-36. Retrieved from http://escholarship.bc.edu/ojs/index.php/jtla/article/view/1640

Elbow, P. (1973). Writing without Teachers. New York: Oxford University Press.

Flinn, J. (1986). The role of instruction in revising with computers: Forming a construct for good writing (ED 274963).

Frost, K. L. (2008). The effects of automated essay scoring as a high school classroom intervention. Unpublished doctoral dissertation, University of Nevada, Las Vegas, USA.

Grimes, D. C. (2008). Middle school use of automated writing evaluation: A multi-site case study. Unpublished doctoral dissertation, University of California, Irvine, USA.

Grimes, D. C., & Warschauer, M. (2006). Automated essay scoring in the classroom. American Educational Research Association (AERA) Annual Conference, San Francisco, CA, USA.

LinHuang, S. H. (2010). The exploitation of e-writing in an EFL classroom: Potential and challenges. Unpublished master's thesis, I-Shou University, Taiwan.


McCurry, D. (2010). Can machine scoring deal with broad and open writing tests as well as human readers? Assessing Writing, 15, 118-129. http://dx.doi.org/10.1016/j.asw.2010.04.002

Moseley, M. H. (2006). Creating recursive writers in middle school: The effect of a writing program on student revision practices. Unpublished doctoral dissertation, Capella University, USA.

National Commission on Writing. (2003). The Neglected “R”: The Need for a Writing Revolution. New York, NY: College Entrance Examination Board.

Otoshi, J. (2005). An analysis of the use of Criterion in a writing classroom in Japan. The JALT CALL Journal, 1(1), 30-38.

Phillips, S. M. (2007). Automated essay scoring: A literature review. TASA Institute, Society for the Advancement of Excellence in Education, 1-70.

Taylor, J. (1996). Computers: Tools of oppression, tools of liberation. A paper presented at the annual meeting of the Conference on College Composition and Communication, Milwaukee (ED 434350).

Tsai, P. Y. (2010). Students' biggest writing problem - they never write! Retrieved from http://mag.udn.com/mag/campus/storypage.jsp?f_MAIN_ID=13&f_SUB_ID=1259&f_ART_ID=290730

Wang, Y. J. (2011). Exploring the effect of using automated writing evaluation in Taiwanese EFL students' writing. Unpublished master's thesis, I-Shou University, Taiwan.

Wang, J., & Brown, M. S. (2007). Automated essay scoring versus human scoring: A comparative study. The Journal of Technology, Learning, and Assessment, 6(2), 1-28.

Warschauer, M., & Grimes, D. C. (2008). Automated writing assessment in the classroom. Pedagogies, 3, 22-36.

Williamson, M. M., & Pence, P. (1989). Word processing and student writers. In B. K. Britton & S. M. Glynn (Eds.), Computer writing environments: Theory, research, and design (pp. 93-127). Hillsdale, NJ: Lawrence Erlbaum.

Yang, N. D. (2004). Using My Access in EFL writing. Proceedings of the 2004 International Conference and Workshop on TEFL & Applied Linguistics (pp. 550-564). Taipei, Taiwan: Ming Chuan University.

Yu, Y. T., & Yeh, Y. L. (2003). Computerized feedback and bilingual concordance for EFL college students' writing. Proceedings of the 2003 International Conference on English Teaching and Learning in the Republic of China (pp. 35-48). Taipei, Taiwan: Crane.

Zamel, V. (1982). Writing: The process of discovering meaning. TESOL Quarterly, 16, 195-209. http://dx.doi.org/10.2307/3586792
