

3.5. BeyondShare Evaluation

3.5.1. Participants

An evaluation of BeyondShare was conducted to determine whether the beyond-sharing activities successfully engaged students in meaningful learning and knowledge construction.

Participants were 34 college freshmen enrolled in an introductory computer science class at a research university in northern Taiwan. Students were randomly assigned to three clusters consisting of 12, 9, and 13 participants, with students in each cluster studying one of three learning units, on the topics of function, class, or flow, selected from a C++ textbook. Members of each cluster generated individual concept maps for their assigned unit.

3.5.2. Procedures

I purposefully designed a series of beyond-sharing activities to ensure active learning, positive interdependence, and personal accountability. Using BeyondShare features, I structured cooperative learning by having participants work on a multiple-stage concept-mapping task requiring task interdependence (Table 5). After being grouped according to learning material divisions, students were asked to produce their own concept maps (a task that Novak & Gowin [1984] refer to as “meaningful learning”) for their assigned unit and to share their products with peers who worked on other units. Participants were instructed to evaluate, compare, and give feedback on the cross-unit concept maps.

Participants therefore contributed to their classmates’ tasks by giving feedback while gaining information and knowledge about the other learning units. Based on a meta-plan, participants were asked to link their own maps with the cross-unit maps they selected during the peer assessment stage to form integrated maps. Participants accepted responsibility for contributing to their cross-unit peers’ efforts while competing with same-unit peers.

Participant roles alternated among active and passive sharers, competitors and helpers, assessors and feedback recipients, and active integrators, thereby achieving the successful group-work components defined by Johnson et al. (1998).

As shown in Figure 7, the evaluation procedure consisted of three stages:

1. Preparation. During week eight of the school semester, students were taught concept mapping techniques and given several examples for practice. During week nine they were introduced to BeyondShare and its activities, after which they were randomly assigned to one of the three units.

2. Personal construction. During week ten, participants used their class time to create their individual maps. In an attempt to prevent social loafing or duplication of classmates’ efforts, students were not allowed to view their peers’ maps during this stage.

3. Sharing construction. During week eleven, students were allowed to view the concept maps created by classmates assigned to the other units. They were instructed to select one personal best-fit map from each unit and to establish interlinks across units. Participants were explicitly instructed to base their selections on cohesiveness and coherence and to avoid choosing maps out of friendship or in exchange for favors.

At the end of week eleven, students were asked to complete a questionnaire about the BeyondShare environment and their subjective experiences with and perceptions of the beyond-sharing activities.

Figure 7. Research flow diagram and three effects

3.5.3. Scoring

1. Personal construction peer rating. Concept maps could be selected by peers assigned to other units based on general appearance or a specific task perspective (e.g., the best fit with a student’s own work). The number of votes thus represents the degree of cohesiveness and/or coherence between the concepts and structure of two maps. Personal construction scores accounted for 60% of the total peer rating, reflecting the study goal of emphasizing personal accountability in active learning.

2. Sharing construction peer rating. Based on evaluations of cohesiveness and coherence, this rating (which accounts for 40% of the peer rating total) represents the number of votes earned by an individual student’s favorite maps.

Total peer rating. This is calculated as 0.6 × Personal construction peer rating + 0.4 × Sharing construction peer rating.

The proposed peer rating system mimics the system of scholarly journal citations: the more citations (votes) a work receives, the more likely it is to be of high quality. However, BeyondShare also takes into account the quality of the selected works. In other words, students must take responsibility for their personal best-fit choices because the scores of their selected maps affect their own final scores. This mechanism reduces the odds of students choosing maps created by their close friends regardless of quality.
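To make the weighting concrete, the sketch below applies the 0.6/0.4 split to raw vote counts. The function and variable names are illustrative assumptions; only the two rating components and their weights come from the description above, and any further normalization of vote counts is not specified here.

    # Minimal sketch of the BeyondShare total peer rating, assuming raw vote
    # counts are combined directly with the 0.6/0.4 weights described above.

    def total_peer_rating(own_map_votes: int, selected_map_votes: int) -> float:
        """Combine the two peer-rating components.

        own_map_votes      -- votes a student's own map received from cross-unit
                              peers (personal construction peer rating, weighted 60%)
        selected_map_votes -- votes earned by the cross-unit maps the student
                              selected (sharing construction peer rating, weighted 40%)
        """
        return 0.6 * own_map_votes + 0.4 * selected_map_votes

    # Hypothetical example: a student's own map earned 8 votes, and the two
    # cross-unit maps the student selected earned 5 and 3 votes respectively.
    print(total_peer_rating(own_map_votes=8, selected_map_votes=5 + 3))  # 8.0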

3.5.4. Questionnaire

A questionnaire was created to measure the participants’ subjective perceptions of BeyondShare and the beyond-sharing activities. The first section consisted of six items on interface usability (for instance, clarity of screen design, simplicity and helpfulness of functions, and comparative convenience).

Both time-spent and screen-capture records of construction procedures during the personal and sharing construction stages can serve as measures of active learning. However, it is important to note that active learning can take the form of a few meaningful and effective construction steps being produced quickly, or carefully planned cognitive functions emerging over a long time period. I therefore relied on a combination of learning outcomes and questionnaire responses to estimate how many participants felt that they were engaged in active learning and to gather supporting evidence for their responses.

The nine items in the second section focused on student perceptions regarding personal construction (the first-level beyond-sharing activity) and approaches to active engagement in meaningful learning. The next six items measured whether and how peer assessment (second level) and competition influenced active engagement in knowledge construction. The final six items recorded student perceptions of sharing construction (third level) and approaches to knowledge sharing. Responses were measured along a seven-point Likert-type scale, with 1 indicating strong disagreement and 7 strong agreement.
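For reference, the questionnaire layout described above can be summarized as in the sketch below; the section labels are paraphrases of the text, and the representation as a Python dictionary is simply an illustrative convention.

    # Sketch of the questionnaire structure described above; section labels are
    # paraphrased, and the dictionary layout is an illustrative choice.

    LIKERT_MIN, LIKERT_MAX = 1, 7  # 1 = strong disagreement, 7 = strong agreement

    QUESTIONNAIRE_SECTIONS = {
        "interface_usability": 6,                     # clarity, simplicity, convenience
        "personal_construction_active_learning": 9,   # first-level beyond-sharing activity
        "peer_assessment_and_competition": 6,         # second-level activity
        "sharing_construction_knowledge_sharing": 6,  # third-level activity
    }

    TOTAL_ITEMS = sum(QUESTIONNAIRE_SECTIONS.values())  # 27 items in total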