
On-line peer assessment and the role of the peer feedback:

A study of high school computer course

Sheng-Chau Tseng, Chin-Chung Tsai*

Institute of Education and Center for Teacher Education, National Chiao Tung University, 1001 Ta Hsueh Road, Hsinchu 300, Taiwan

Abstract

The purposes of this study were to explore the effects and the validity of on-line peer assessment in high schools and to analyze the effects of various types of peer feedback on students. The participants, a total of 184 10th graders, developed their individual course projects through on-line peer assessment learning activities in a computer course. The peer assessment activities consisted of three rounds, and each student acted as both an author and a reviewer. Research data as evaluated by peers and experts indicated that students significantly improved their projects through the peer assessment activities. The scores determined by the learning peers were highly correlated with those marked by the experts, indicating that peer assessment in high school could be perceived as a valid assessment method. Moreover, this study also examined the relationships between the types of peer feedback students obtained from peer assessment and the subsequent performance of their projects. We categorized peer feedback into four types: Reinforcing, Didactic, Corrective and Suggestive. It was found that Reinforcing peer feedback was useful in helping students develop better projects; however, Didactic feedback and perhaps Corrective feedback provided by peers might play an unfavorable role in the subsequent improvement of students' projects. Suggestive feedback may be helpful in the beginning of peer assessment activities; however, in the later parts of peer assessment, the effect of this type of feedback on learning might not be significant.

© 2006 Elsevier Ltd. All rights reserved.

Keywords: Interactive learning environments; Secondary education; Learning communities; Improving classroom teaching; Peer assessment

0360-1315/$ - see front matter © 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.compedu.2006.01.007

* Corresponding author. Tel.: +886 3 5731671; fax: +886 3 5738083. E-mail address: cctsai@mail.nctu.edu.tw (C.-C. Tsai).


1. Introduction

The implementation of peer assessment, an alternative way for teachers to assess students, has received much attention in recent years (Rada & Hu, 2002; Woolhouse, 1999) due to its effectiveness for students' learning (Topping, 1998). This assessment and learning strategy, modeled on the journal publication process of an academic society and based on social constructivism (Falchikov & Goldfinch, 2000; Lin, Liu, & Yuan, 2001a), has been used extensively in diverse fields (e.g.,

Falchikov, 1995; Freeman & McKenzie, 2002). In addition to helping students plan their own learning, identify their own strengths and weaknesses, target areas for remedial action, develop metacognitive and professional transferable skills, and enhance their reflective thinking and problem solving abilities during the learning experience (Sluijsmans, Dochy, & Moerkerke, 1999; Smith, Cooper, & Lancaster, 2002; Topping, 1998), peer assessment has also been found to improve students' interpersonal relationships in the classroom (Sluijsmans, Brand-Gruwel, & van Merriënboer, 2002).

The rapid development of Internet technologies in the last decade reflects a dramatic shift in educational practice (Seal & Przasnyksi, 2001; Tsai, 2001a; Warren & Rada, 1999). For example, the Web allows students to explore their learning without the restriction of time and space (Hall & Dalgleish, 1999; Tsai, 2001b). In this learning context, students may learn without an instructor physically present. Therefore, while reviewing recent developments in peer assessment,

Topping (1998) anticipated that computer-aided peer assessment would become an emerging growth area. Tsai, Liu, Lin, and Yuan (2001) also proposed that on-line peer assessment can give peer learners greater freedom of time and space, and may also promote students' attitudes toward peer assessment by utilizing anonymous on-line marking and feedback (Wen & Tsai, 2006). Clearly, using the Internet in aid of peer assessment has profoundly changed the process of peer assessment itself.

Despite its extensive use in many fields, most peer assessment studies have been conducted in higher education (Topping, 1998). Therefore, this paper focused on inspecting the possibility of implementing on-line peer assessment in high schools. In addition, the effects of on-line peer assessment on students' performance were examined. Finally, the importance of peer feedback in the peer assessment process was highlighted in this study; therefore, the role of peer feedback was also investigated.

2. Research about on-line peer assessment

2.1. The usage of on-line peer assessment

Peer assessment has been used extensively in many different fields, such as writing composition, business, science, electrical engineering, medicine, information and social science. While reviewing past studies of peer assessment, Topping (1998) found it to be a reliable and valid method for assessment and teaching. The peer assessment scheme has been modeled after the authentic journal publication process of an academic society. In that process, the editors of the journal provided the authors with anonymous comments and suggestions for further modification, thus making the papers more mature (Roth, 1997; Tsai, Lin, & Yuan, 2002).


Lin et al. (2001a) found that many students did not improve over two rounds in an on-line peer assessment study. Tsai et al. (2002) reported a similar finding. They used a three-round peer assessment model and pointed out that most students improved their work over three rounds, which was an optimal situation. It is likely that many students did not improve over only two rounds, nor did they make significant progress over more than three rounds. Therefore, the present study adopted a three-round peer assessment model. Peer assessors for each student were randomly assigned and fixed by the on-line system before the activity was initiated.

Moreover, Lin et al. (2001a) suggested that web-based peer assessment has at least the following advantages over traditional paper-and-pencil peer assessment:

1. Students evaluate peers’ work through the web (not in a face-to-face presentation), thereby ensuring anonymity and facilitating willingness to critique.

2. Web-based peer assessment allows teachers to monitor students' progress during any period of the assessment process. Teachers can always determine how well an assessor or author performs and can constantly monitor the process, whereas this is nearly impossible in traditional peer assessment when several rounds are involved.

3. Web-based peer assessment can reduce the cost of photocopying projects for peer assessors.

However, the Internet is not accessible to some students at home, which causes difficulty for web-based peer assessment. Nevertheless, it is believed that this difficulty will be resolved as Internet access becomes more prevalent.

2.2. The effects of using on-line peer assessment

Numerous researchers have perceived the usefulness of on-line peer assessment; therefore, some studies have implemented this innovative form of peer assessment. Tsai et al. (2002) used a networked peer assessment system based on the use of a Vee heuristic (Novak & Gowin, 1984), allowing a group of preservice teachers to further develop their instructional activities. Lin, Liu, and Yuan (2001b) designed the Network Peer Assessment System (NetPeas) and showed that it could help university students progress when learning to design project work. Kwok and Ma (1999) developed Group Support Systems (GSS) to support collaborative activities and peer assessment. Rada and Hu (2002) conducted peer assessment on-line, analyzing student–student commenting patterns in three different classes of computer science students. In addition, when Lin et al. (2001b) managed web-based peer-review activities, they observed that students did learn effectively from reading numerous peers' work and feedback. In summary, relevant studies supported the usage of web-based or on-line peer assessment. However, many of these studies were implemented in higher education, and more research concerning the usage of on-line peer assessment at the high school level is required. Consequently, this study implemented on-line peer assessment activities in a high school course.

2.3. The validity of peer assessment

The validity of peer assessment is often a major concern for educators. Topping (1998) reviewed 31 peer assessment studies published from 1980 to 1996. Twenty-five of the 31 reviewed papers reported a high correlation between peer assessment grading and teacher assessment, indicating high validity of peer assessment. Falchikov and Goldfinch (2000) further reviewed 48 quantitative peer assessment studies and found that peer assessment scores more closely resembled teacher assessments when academic products and processes, rather than professional practice, were rated. However, almost all of the studies in Topping (1998) and Falchikov and Goldfinch (2000) were conducted in a traditional paper-and-pencil peer assessment format. Research that specifically addresses the validity of on-line peer assessment is rare. As a result, this study examined the correlation between the grading from on-line peer assessment and teacher assessment.

2.4. The role of peer feedback

Topping (1998) noted that in the process of peer assessment, peer feedback can be more timely and individualized. Its greater immediacy, frequency and volume compensate for the lack of high-quality feedback from a professional staff member. Web-assisted or on-line peer assessment, clearly, can help students gather much more peer feedback than traditional methods of peer assessment. Also, researchers would agree that the quality of peer feedback is critical for the success of peer assessment learning activities. For example, Smith et al. (2002) found that brief feedback, in addition to marking, could increase the transparency of the peer review process and student confidence, and thus enhance learning outcomes. Topping (1998) also asserted that different types of feedback could have various effects on students. Therefore, this study analyzed the feedback students gave during peer assessment and explored the role of peer feedback.

3. Research questions

Unlike much previous peer assessment research, which was conducted mostly in higher education, this study aimed to examine whether on-line peer assessment could help high school students improve their projects. In addition, it investigated the correlation between the grades from students' peer assessment and the evaluation of experts. That is, the validity of peer assessment scores was examined by checking the consistency between the scores marked by peers and those marked by their teachers. Furthermore, the role of different types of peer feedback in students' project performance was examined in order to explore the relationships between peers' feedback and the quality of students' projects.

In sum, the research addressed the following three questions:

1. Did the high school students improve the quality of their projects in a computer course after the three-round peer assessment activity?

2. What were the relationships between the scores made by the learning peers and those determined by experts?

3. What were the relationships between the types of peer feedback the students obtained and the subsequent performance of their projects?


4. Methodology

4.1. Participants

The participants of this study included 184 10th-grade students (16-year-olds) from four different classes in a school in Taiwan. The school is a first-rate school in the local area. Every student enrolled in a computer course in this study was required to design an itinerary project suitable for their classmates' travel, using search engines on the Internet. A total of 184 individual itinerary projects were generated. The students were then asked to comment anonymously on their peers' designs through a three-round on-line peer assessment system. Then, they were asked to revise their own projects by taking into account their peers' comments and suggestions (described later). In other words, every student acted as an author, an assessor and an adapter. In addition, the implementation of the on-line peer assessment system in a computer course also provided students with additional experience of using the Internet, thus concurring with the rationale of the course.

4.2. Project for peer assessment

Each student in this study was asked to design a project for a class tour. The following is an example of such a project. One student designed a one-day trip from the school to the Taipei zoo. The project should show the expense of time and money, such as how much time it would take and how much money it would cost to travel from the school to Taipei and from Taipei to the zoo, the admission fee for the zoo, and locations for lunch and dinner. The student also planned the flow of visiting the zoo for the whole day. Each student proposed an individual project like this for the computer course. The students were taught to use search engines (e.g., www.google.com.tw or tw.yahoo.com) on the Internet to access related information. That is, the students were asked to use the web information searching methods taught in the computer course to complete the projects. After they had gathered sufficient materials from the Web, they were encouraged to organize them or combine them with personal experience or preference before submitting the project through the Internet to the on-line peer assessment system. Then, their peers read, rated and commented on the projects.

4.3. On-line peer assessment model and peer assessment scores

The model used here was adapted from Tsai's model (Tsai et al., 2001, 2002). After the students submitted their projects, 10 peer reviewers were randomly assigned to comment on and rate each project. The reviewers were not changed throughout the peer assessment process, so that they could fully understand the progression of the assessed work. In other words, in each round of the peer assessment process, each student received 10 comments and scores from his/her peer assessors. All of the students acted both as assessors and authors.
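The fixed random assignment of reviewers described above can be sketched as follows. This is a minimal illustration, not the actual system's code; the function and variable names are hypothetical:

```python
import random

def assign_reviewers(student_ids, k=10, seed=42):
    """Assign each author k randomly chosen peer reviewers (no self-review).

    The assignment is fixed for the whole activity: the same reviewers
    follow a project through all three rounds.
    """
    rng = random.Random(seed)  # a fixed seed keeps the assignment stable
    assignment = {}
    for author in student_ids:
        pool = [s for s in student_ids if s != author]  # exclude self
        assignment[author] = rng.sample(pool, k)
    return assignment
```

Note that this simple sketch does not balance the reviewing load; the actual system presumably also ensured that each student reviewed roughly ten projects in turn.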

The networked peer assessment model consists of several steps, illustrated in Fig. 1.

According to Fig. 1, the students first submitted their projects, then experienced three rounds of peer assessment and modified their projects twice. It took about six weeks to complete the proposed peer assessment process.


In the peer assessment process, it might be quite important to have performance evaluated in quantitative terms (Tsai et al., 2001, 2002). These quantitative scores represented students' performance numerically, so statistical information could be collected for research purposes and students could better understand how their projects had progressed. In this study, each student's project was quantitatively evaluated by the peer reviewers on three dimensions adapted from Tsai et al. (2002):

1. Creativity: the extent to which the project reveals the student's originality. For example, if student A simply copied ideas from travel agencies while student B planned some novel tours in the project, then student A received a lower score than student B on the dimension of Creativity.

2. Relevance: the extent to which the content of the project is related to the course purpose (i.e., using web resources); that is, the extent to which the student shows the ability to utilize web resources to enrich or finish the project. For example, if student A integrated and made use of more information from the web than student B, then student A received a higher score than student B on the dimension of Relevance.

3. Feasibility: the extent to which the project could be practically carried out. For example, if student A suggested a tour cost closer to the amount students could afford than student B did, then student A received a higher score than student B on the dimension of Feasibility. Likewise, if the traveling itinerary was planned on a more reasonable schedule, the score would be higher.

The peer assessors gave a score between 1 and 7 (in units of 1 point) on each dimension above to every peer's project. The seven-point scale was adopted rather than a 1–100 scale because the former was easier for students to use when giving scores and it could also avoid arbitrary scoring. That is, with a 1–100 scale, the students might have too much variation in representing peers' performance; hence, arbitrary scoring might occur. The peer assessment scores revealed in each round are also shown in Fig. 1. As described above, each student's project was evaluated by 10 learning peers. Therefore, for each dimension of each peer assessment round, the average score of the 10 peers was calculated to represent the student's performance (as evaluated by peers). One may be interested in the inter-rater (inter-peer) reliability of the peer scores. Using alpha coefficients, this study found that the coefficients for Creativity, Relevance and Feasibility were 0.74, 0.77 and 0.75, respectively, for the first round, 0.80, 0.81 and 0.74 for the second round, and 0.70, 0.73 and 0.71 for the final round. These results suggested that the peer assessment scores in this study had adequate reliability (internal consistency). That is, a score, say 5, was being interpreted in roughly the same way by the participants. In sum, the usage of the 1–7 quantitative scale for peer scores was perceived as appropriate in this study.
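The per-dimension averaging and the alpha (internal consistency) coefficients described above can be sketched as follows. This is a simplified illustration assuming a complete raters-by-projects score matrix; in the actual study each project had its own set of ten raters, and the function names are hypothetical:

```python
import statistics

def mean_peer_scores(score_matrix):
    """Average each project's scores across raters.

    score_matrix[i][j] = score that rater i gave project j (1-7 scale).
    """
    n_projects = len(score_matrix[0])
    return [statistics.mean(row[j] for row in score_matrix)
            for j in range(n_projects)]

def cronbach_alpha(score_matrix):
    """Cronbach's alpha, treating each rater as an 'item'."""
    k = len(score_matrix)  # number of raters
    item_vars = sum(statistics.variance(row) for row in score_matrix)
    totals = [sum(row[j] for row in score_matrix)
              for j in range(len(score_matrix[0]))]
    total_var = statistics.variance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)
```

For instance, three raters scoring four projects as [[4, 5, 6, 3], [4, 6, 6, 4], [5, 5, 7, 3]] yield alpha ≈ 0.93, a level of agreement of the same kind as the 0.70–0.81 coefficients reported above.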

In addition to these quantitative scores, all of the peer assessors were asked to provide qualitative comments to help the project author improve the work. These comments became an important basis for the modification of the course projects. They were viewed as formative feedback for the development of students' projects, and the types of feedback were also analyzed (described later).

4.4. Expert scores

Two experts also evaluated the students' projects; their ratings served as the expert scores. One was a computer science teacher with four years of teaching experience in a high school. The other was in charge of computer information affairs in a junior high school and had five years of teaching experience. The former expert evaluated all of the students' projects, while the latter evaluated thirty of them for the purpose of examining the reliability of scoring. Both of them evaluated the students' projects in all three rounds and then gave a score between 1 and 7 (in units of 1) on the dimensions of creativity, relevance and feasibility, in exactly the same way as the students. The average correlation coefficient between the two experts on each outcome variable was around 0.65 (p < 0.001), indicating that the two experts' scores on each outcome variable were significantly related. In other words, the two experts had similar perspectives in assessing these students' projects; the scores given by the experts are perceived to be of adequate reliability. In order to fully understand how peer assessment would play a role in student learning, expert scores were not revealed during the peer assessment process. In other words, students could only draw on their peers' scores and comments when modifying their projects.

4.5. The analysis of peer feedback

Chi (1996) proposed a framework for categorizing learning feedback. The framework included four types of feedback: Corrective, Reinforcing, Didactic and Suggestive. We used the same framework to classify the peer feedback in this study. As described previously, in addition to quantitative scores, assessors needed to provide qualitative comments and feedback for the authors. Therefore, all of the qualitative feedback given by students was categorized into the aforementioned four types, basically based on Chi (1996). Table 1 presents the description of each type with some examples. For instance, Reinforcing feedback is given by positive or supporting expression, while Didactic feedback is presented in a traditional, lecture-like tone.

When categorizing the peer feedback, it was found that some comments could be classified into more than one type. For example, the comment "Your description is very clear. But somehow it is very similar to the design of a travel agency. I think it will be better if you add some creative ideas to it." was regarded as Reinforcing feedback ("Your description is very clear") as well as Suggestive feedback ("Somehow it is very similar to the design of a travel agency. I think it will be better if you add some creative ideas to it"). In this case, the feedback was counted as both Reinforcing and Suggestive.


Students' peer feedback was classified into the four types in Table 1. In every round of the peer assessment process, the frequency with which each student received each type of peer feedback was counted for analysis. The frequency was then related to his/her subsequent project performance to examine how the type of peer feedback may be correlated with the student's following outcomes.
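The multi-label tallying described above can be sketched as follows. This is a minimal illustration under the assumption that each comment has already been coded into one or more of the four types; the names are hypothetical:

```python
from collections import Counter

FEEDBACK_TYPES = ("Corrective", "Reinforcing", "Didactic", "Suggestive")

def tally_feedback(received_comments):
    """Count, per student, how often each feedback type was received.

    received_comments: iterable of (student_id, set_of_types) pairs.
    A single comment may carry more than one type and is counted once
    under each type it matches (as in the mixed example above).
    """
    counts = {}
    for student, types in received_comments:
        c = counts.setdefault(student, Counter())
        for t in types:
            if t in FEEDBACK_TYPES:
                c[t] += 1
    return counts
```

These per-round, per-type frequencies are what would then be correlated with the next round's project scores.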

The categorization process was conducted by the first author and another junior high school computer teacher. The author analyzed all of the feedback. Thirty students' on-line feedback comments were randomly selected for the latter to perform the same categorization process. The agreement between the two teachers' categorizations was over 95%, indicating sufficient reliability of the analysis of students' peer feedback.
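The inter-coder agreement reported above is simple percent agreement. As a sketch, treating each comment's categorization as a single label for simplicity:

```python
def percent_agreement(labels_a, labels_b):
    """Share of items on which two coders assigned the same category."""
    if len(labels_a) != len(labels_b):
        raise ValueError("coders must label the same items")
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)
```

For example, two coders who agree on 3 of 4 comments reach 0.75; the study's reported figure exceeded 0.95.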

Table 1

Four types of peer feedback analyzed in the study

Corrective feedback
Definition: If a student's preliminary design or information is incorrect, a peer can give feedback to point it out or correct it directly. This kind of feedback can effectively reduce the incorrect design or information involved in the projects.
Examples:
- "The exact fare from school to the amusement park is not 15 dollars, but 20 dollars"
- "It is wrong to go to the museum by taxi. Taxis are not available around there"

Reinforcing feedback
Definition: Reinforcing feedback is given when what the student does is proper or correct. Positive feeling or recognition of the work is expressed. This kind of feedback sometimes occurs in situations where students may be encouraged without explicitly knowing the reasons.
Examples:
- "Though your project looks a little bit small, it's rich in content, especially in time and expense description"
- "Your idea of observing birds is very creative and appealing"
- "You did a good job! We won't get lost with the map attached to it"

Didactic feedback
Definition: In this kind of feedback, a peer may provide lengthy explanations when a student makes errors or provides inadequate information. Lengthy explanations with a lecture tone are taken to direct the student back onto the right track.
Example:
- "It would be unwise to go there again, and there is a lack of creativity in your project. Besides, the schedule is not quite workable; for example, it would be sweaty to barbecue under sunshine at noon. And as to shopping, you didn't mention where to shop and what for shopping. Therefore I think you need to provide ample information for each activity"

Suggestive feedback
Definition: If a student's preliminary design is incomplete rather than incorrect, a peer is likely to give advisory feedback, which is more indirect. The peer may alert the student that there is a problem without telling him exactly what the problem is. Such feedback can take the form of hints, pauses, or a rising intonation in the voice in order to redirect the student's thinking. This kind of feedback is also considered a kind of scaffolding.
Examples:
- "Will it be better to go to Water Amusement Park? Is it safe?"
- "I would suggest you explain the scheduled activities in more detail"


5. Findings

5.1. The effect of peer assessment process on student project

Students' itinerary projects were scored by their peers and two experts between 1 and 7 points on the dimensions of creativity, relevance and feasibility. Table 2 shows that students' average scores in the first round of on-line peer assessment as evaluated by their peers were 3.94, 4.47, and 4.36 on the dimensions of creativity, relevance and feasibility, respectively. The same projects rated by experts showed average scores of 3.45, 3.74, and 3.80 on these three dimensions, respectively. The scores in the third round of peer assessment were 5.07, 5.38 and 5.27 for the three dimensions as assessed by peers, and 4.58, 5.44, and 5.23 as evaluated by experts. The mean scores revealed that, from both peers' and experts' viewpoints, the students' average score increased on each dimension, indicating that the quality of students' projects benefited considerably from on-line peer assessment.

A series of paired t-tests were further used to compare students' score changes as a result of the on-line peer assessment. The results are shown in Fig. 2. For each dimension, whether from peer or expert evaluation, students' performance progressed significantly through the on-line peer assessment process. That is, their scores for each dimension were statistically higher in a later round than in a former round. Students significantly improved their projects through the peer assessment activities.
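The paired t statistic behind these comparisons can be computed as follows. This is the standard textbook formula, not the authors' code (scipy.stats.ttest_rel gives the same value):

```python
import math
import statistics

def paired_t(before, after):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-student differences (before - after).

    A negative t means scores increased from the earlier to the later
    round, matching the negative t values reported in Fig. 2.
    """
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation (n - 1)
    return statistics.mean(diffs) / (sd / math.sqrt(n))
```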

5.2. The correlation between the scores marked by experts and those by the learning peers

Table 3 displays the correlation coefficients between peer scores and expert scores on every outcome variable, such as the creativity score for each round. The results showed that the scores

Table 2

Students’ scores of itinerary design both from peers’ and from experts’ perspectives (n = 184)

Variable        Peer scores                 Expert scores
                Mean   SD    Range          Mean   SD    Range

First round
  Creativity    3.94   0.85  1.60–5.80      3.45   0.93  1–5
  Relevance     4.47   0.84  1.10–6.11      3.74   0.99  1–6
  Feasibility   4.36   0.90  1.10–5.80      3.80   0.97  1–6
Second round
  Creativity    4.53   0.82  1.00–6.40      4.07   0.73  2–5
  Relevance     4.81   0.79  1.00–6.20      4.63   0.91  2–7
  Feasibility   4.74   0.78  1.00–6.20      4.56   0.88  1–6
Third round
  Creativity    5.07   0.68  2.60–6.60      4.58   0.83  2–7
  Relevance     5.38   0.63  2.70–6.50      5.44   0.96  2–7
  Feasibility   5.27   0.63  2.70–6.78      5.23   0.87  3–7


determined by the learning peers were significantly and highly correlated with those marked by the experts (r = 0.49–0.79, p < 0.001), indicating that on-line peer assessment in high school, as shown in this study, could be perceived as a valid assessment method. This finding was consistent with the conclusion drawn from Topping's (1998) review; nevertheless, the present study particularly addressed the validity of on-line peer assessment.
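The validity coefficients above are Pearson correlations between the peer means and the expert scores. As a generic textbook sketch:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Applied to the 184 per-student peer and expert scores for a given round and dimension, this yields the r values reported in Table 3.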

5.3. The relationships between the types of peer feedback and project performance

As described earlier, all peer feedback was classified into four types based on Chi (1996). The feedback gathered in Round 1 might influence students' performance on the Round 2 project. Similarly, the feedback derived from Round 2 of peer assessment might affect the following performance, that is, the Round 3 project. Hence, we analyzed the frequency of each type of peer feedback each student obtained in the first and second rounds, and then explored its correlations

[Fig. 2 residue: the figure showed, for each dimension and for both peer and expert evaluation, the paired t values for the comparisons among the 1st, 2nd and 3rd rounds. Peer evaluation: Creativity, t = -12.01***, -19.39***, -11.62***; Relevance, t = -12.16***, -14.70***, -6.73***; Feasibility, t = -11.37***, -13.97***, -5.65***. Expert evaluation: Creativity, t = -9.57***, -15.48***, -10.53***; Relevance, t = -14.51***, -23.52***, -15.5***; Feasibility, t = -13.16***, -19.56***, -12.84***. Note: *** p < 0.001.]

Fig. 2. Paired t-tests on students' score changes through three rounds of on-line peer assessment.

Table 3

The correlation between expert and peer scores on every outcome variable (n = 184)

Round          Creativity  Relevance  Feasibility

First round    0.60***     0.79***    0.74***
Second round   0.49***     0.65***    0.55***
Third round    0.57***     0.64***    0.51***

*** p < 0.001.

with the students' subsequent performance on the course project. In this part of the analysis, we used only peer scores to represent student performance, as peer scores and expert scores were highly related (shown in Table 3) and expert scores were not revealed during the peer assessment process.

Table 4 presents the relationships between the types of peer feedback in the first round and the performance of students' projects in the second round. The results revealed that Reinforcing feedback, Suggestive feedback and Didactic feedback were highly related to the performance of students' work, but possibly in opposite directions. Reinforcing feedback in the first round was positively correlated with students' scores on the three dimensions in the second round (r = 0.38, 0.49 and 0.38 for Creativity, Relevance, and Feasibility, respectively, p < 0.01). Similarly, Suggestive feedback was positively correlated with their performance in all dimensions (r = 0.18, 0.25 and 0.21 for Creativity, Relevance, and Feasibility, respectively, p < 0.05). However, Didactic feedback in the first round had significantly negative correlations with students' scores in the second round (r = -0.36, -0.38 and -0.30 for Creativity, Relevance, and Feasibility, respectively, p < 0.01). These findings suggested that Reinforcing and Suggestive feedback should be constructive in students' development of their work, while feedback consisting of lengthy explanation in a didactic tone showed negative relationships with students' project performance.

Table 5 shows the relationships between the types of peer feedback students received in the second round and their project performance in the third round. Similar to the results shown in Table 4, Reinforcing feedback played a positive role in the quality of students' projects

Table 4

The relationships between the peer feedback in the first round and the peer scores in the second round

Round 1 feedback type    Round 2 evaluation dimension
                         Creativity   Relevance   Feasibility
Corrective feedback      -0.05        -0.09       -0.15*
Reinforcing feedback     0.38***      0.49***     0.38***
Didactic feedback        -0.36***     -0.38***    -0.30***
Suggestive feedback      0.18*        0.25**      0.21**

* p < 0.05. ** p < 0.01. *** p < 0.001.

Table 5

The relationships between the peer feedback in the second round and the peer scores in the third round

Round 2 feedback type    Round 3 evaluation dimension
                         Creativity   Relevance   Feasibility
Corrective feedback      -0.12        -0.20**     -0.23**
Reinforcing feedback     0.52***      0.55***     0.53***
Didactic feedback        -0.41***     -0.43***    -0.37***
Suggestive feedback      0.01         0.003       0.11

** p < 0.01. *** p < 0.001.


in all dimensions (p < 0.01); nevertheless, Didactic feedback was statistically negatively correlated with their performance (p < 0.01). Moreover, Corrective feedback was also negatively related to students' scores on "Relevance" and "Feasibility" (p < 0.01). Suggestive feedback, in this part of the analysis, was not statistically related to students' performance. These results, in general, supported that Reinforcing peer feedback was useful in helping students develop better projects; however, Didactic feedback and perhaps Corrective feedback provided by peers might play an unfavorable role in the subsequent improvement of students' projects. Suggestive feedback may be helpful in the beginning of peer assessment activities; however, in the later parts of peer assessment, the effect of this type of feedback on learning was not significant.

6. Discussion and conclusion

This study implemented an on-line peer assessment system to help high school students develop itinerary projects for a computer course. The peer assessment process took about six weeks for the three rounds. In each round, each student received comments and feedback from ten of his/her learning peers. As a result, every participant received numerous pieces of feedback for modifying his/her work throughout the peer assessment process. Such frequent interaction between assessors and project authors within a short time was achieved by the on-line system, and would rarely be possible in traditional classrooms. Moreover, the on-line peer assessment system could also ensure a higher degree of anonymity in peer assessment.

Peer assessment tasks can be regarded as learning exercises in which assessment skills are practiced (Sluijsmans et al., 2002). Dochy, Segers, and Sluijsmans (1999) also stated that students have an opportunity to observe their peers throughout the learning process and often have more detailed knowledge of the work of others than do their teachers. In this study, on-line peer assessment significantly enhanced the quality of students' projects, as it provided students with opportunities to learn not only from other peers but also from evaluating other peers' work. In other words, we believe that learning in the peer assessment process comes both from students' adaptation of peers' feedback and from their assessment of peers' projects. We also believe in the importance of peer feedback for peer assessment, and assert that in the process of peer assessment students continuously gain formative feedback from peers; accordingly, we could observe that the students in the present study improved their projects. Furthermore, with the implementation of networked peer assessment, it is believed that the teaching load for instructors could be somewhat reduced. In addition, the on-line peer assessment system successfully provided a small learning society. In the process of evaluating peer work and taking in peer feedback, students gradually revised their original work into work of better quality, thus constructing and refining knowledge through social interactions in a virtual community linked via the Internet.

This study also examined the consistency between peer assessment scores and expert (teacher) scores. The correlations between these scores were significantly high, implying that peer assessment could be perceived as a valid assessment method. It should be emphasized again that teachers' scores were not revealed during the peer assessment process; thus, this consistency could not have come from students ''mimicking'' teachers' scores or assessment orientations. Also, as shown in Section 4.3, the reliability (internal consistency) of peer scores, as indexed by alpha coefficients, was adequately high (ranging from 0.70 to 0.81). The internal consistency was highest in the second round and lowest (though still acceptable) in the final round. Therefore, the improvement of students' projects observed in this study was not a product of ''group think'', as the peer assessment scores at the end of the study did not show higher inter-peer consistency than those of previous rounds.
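The internal-consistency check reported here relies on Cronbach's alpha, treating each peer reviewer as an ''item'' rated across the same set of projects. A minimal sketch of the computation follows; the rater data are hypothetical, not the study's.

```python
# Minimal sketch of Cronbach's alpha over peer raters (hypothetical data).
def cronbach_alpha(ratings):
    """ratings: list of raters, each a list of scores for the same projects."""
    k = len(ratings)            # number of raters ("items")
    n = len(ratings[0])         # number of projects rated

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(variance(r) for r in ratings)
    totals = [sum(r[i] for r in ratings) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Three hypothetical raters scoring the same five projects.
raters = [
    [70, 82, 65, 90, 75],
    [72, 80, 60, 88, 78],
    [68, 85, 62, 92, 74],
]
print(round(cronbach_alpha(raters), 2))  # close to 1 for raters this consistent
```

An alpha in the 0.70–0.81 range, as reported above, is conventionally taken as acceptable internal consistency for this kind of group rating.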

Moreover, this study explored the relationships between the frequencies of the various types of peer feedback that students received and their subsequent performance on the projects. Reinforcing feedback was found to be helpful in promoting the quality of students' projects. In a traditional classroom, students are often inspired to learn when they receive praise or positive reinforcement from teachers. This effect appeared even more salient in the on-line peer assessment environment, as this study found that Reinforcing feedback was very useful in the development of student projects. However, peer feedback in Didactic, and possibly Corrective, forms seemed to be harmful to students' subsequent work. Suggestive feedback was helpful in the initial stage of the peer assessment process, but it did not play an important role in the later stages. Based upon these findings, teachers should encourage peers to offer Reinforcing feedback, and to avoid Didactic or Corrective feedback, during the peer assessment process. In addition, Suggestive peer feedback should be emphasized early in the peer assessment process. Again, this study highlights the importance of peer feedback for learning in peer assessment. The analysis of peer feedback also provides some insights into the learning involved in peer assessment. This study concludes that the Reinforcing and Suggestive feedback provided by peers in peer assessment are particularly useful for students' subsequent learning.

This study described an attempt to utilize an on-line peer assessment system to help high school students improve their learning in a computer course. Educators and researchers are encouraged to implement more on-line peer assessment activities for learning and then to acquire more insights about the effects as well as the concerns of using on-line peer assessment for educational purposes.

Acknowledgement

Funding of this research work was supported by the National Science Council, Taiwan (Grant Nos. NSC 92-2524-S-009-003 and NSC 93-2524-S-009-003), and the Ministry of Education, Taiwan (Grant No. E020-90B858).

References

Chi, M. T. H. (1996). Constructing self-explanations and scaffolded explanations in tutoring. Applied Cognitive Psychology, 10, 33–49.

Dochy, F., Segers, M., & Sluijsmans, D. (1999). The use of self-, peer-, and co-assessment in higher education: a review. Studies in Higher Education, 24, 331–350.

Falchikov, N. (1995). Peer feedback marking: developing peer assessment. Innovations in Education and Training International, 32, 175–187.

Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: a meta-analysis comparing peer and teacher marks. Review of Educational Research, 70, 287–322.


Freeman, M., & McKenzie, J. (2002). SPARK, a confidential web-based template for self and peer assessment of student teamwork: benefits of evaluating across different subjects. British Journal of Educational Technology, 33, 551–569.

Hall, R., & Dalgleish, A. (1999). Undergraduates’ experiences of using the world wide web as an information resource. Innovations in Education and Training International, 36, 334–345.

Kwok, R. C. W., & Ma, J. (1999). Use of a group support system for collaborative assessment. Computers and Education, 32, 109–125.

Lin, S. S. J., Liu, E. Z. F., & Yuan, S. M. (2001a). Web-based peer assessment: feedback for students with various thinking-styles. Journal of Computer Assisted Learning, 17, 420–432.

Lin, S. S. J., Liu, E. Z., & Yuan, S. M. (2001b). Web peer review: the learner as both adapter and reviewer. IEEE Transactions on Education, 44, 246–251.

Novak, J. D., & Gowin, D. B. (1984). Learning how to learn. New York: Cambridge University Press.

Rada, R., & Hu, K. (2002). Patterns in student–student commenting. IEEE Transactions on Education, 45, 262–267.

Roth, W.-M. (1997). From everyday science to science education: how science and technology studies inspired curriculum design and classroom research. Science and Education, 6, 373–396.

Seal, K. C., & Przasnyski, Z. H. (2001). Using the world wide web for teaching improvement. Computers and Education, 36, 33–40.

Sluijsmans, D., Dochy, F., & Moerkerke, G. (1999). Creating a learning environment by using self-, peer- and co-assessment. Learning Environment Research, 1, 293–319.

Sluijsmans, D., Brand-Gruwel, S., & van Merriënboer, J. J. G. (2002). Peer assessment training in teacher education: effects on performance and perceptions. Assessment and Evaluation in Higher Education, 27, 443–454.

Smith, H., Cooper, A., & Lancaster, L. (2002). Improving the quality of undergraduate peer assessment: a case study from psychology. Innovations in Education and Teaching International, 39, 71–81.

Topping, K. J. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68, 249–276.

Tsai, C.-C. (2001a). The interpretation construction design model for teaching science and its applications to Internet-based instruction in Taiwan. International Journal of Education Development, 21, 401–415.

Tsai, C.-C. (2001b). A review and discussion of epistemological commitments, metacognition, and critical thinking with suggestions on their enhancement in Internet-assisted chemistry classrooms. Journal of Chemical Education, 78, 970–974.

Tsai, C.-C., Liu, E. Z. F., Lin, S. S. J., & Yuan, S. M. (2001). A network peer assessment system based on a Vee heuristic. Innovations in Education and Training International, 38, 220–230.

Tsai, C.-C., Lin, S. S. J., & Yuan, S. M. (2002). Developing science activities through a networked peer assessment system. Computers and Education, 38, 241–252.

Warren, K. J., & Rada, R. (1999). Manifestations of quality learning in computer-mediated university courses. Interactive Learning Environments, 7, 57–80.

Wen, L. M. C., & Tsai, C.-C. (2006). University students’ perceptions of and attitudes toward (Online) peer assessment. Higher Education, 51, 27–44.

Woolhouse, M. (1999). Peer assessment: the participants’ perception of two activities on a further education teacher education course. Journal of Further and Higher Education, 23, 211–219.

