
Research background and motivation of the study

From history we know that human progress never stops. For instance, we humans have moved from hunting to agriculture to industrialization and now to the information era. We cannot deny that every era has its own form of technology. In the technology era, whether technology promotes students' learning outcomes has been tested and researched repeatedly over the past two decades (e.g., Betz, 1996; Gee, 2003; Gredler, 1996; Kafai, 1996; Malone, 1981; Prensky, 2001; Rieber, 1996; Squire, 2005; Ke, 2009).

Researchers are interested in the effect of utilizing technology in education because people believe that the form of technology affects people's lives and even their thinking patterns. Marc Prensky (2001, 2006) suggested that students now below college level are in fact "digital natives," while most instructors are "digital immigrants." Digital natives differ fundamentally from digital immigrants, even in their thinking patterns. The characteristics of digital natives include: twitch speed, parallel processing, graphics first, random access, connected, active, play, payoff, fantasy, and technology-as-friend. Therefore, Prensky (2001) proposed that learning via digital games is a good way to reach digital natives in their "native language."

Many research results support Prensky's claim (e.g., Alcaniz & Botella, 2013; Banos, Cebolla, Oliver, Núñez Castellar, Van Looy, Szmalec, & de Marez, 2013; Erhel & Jamet, 2013; Hung, Hwang, Lee & Su, 2012; Owston, Wideman, Ronda & Brown, 2009; Sung & Hwang, 2013). Sung and Hwang (2013) reported that a Mindtool-integrated collaborative educational game not only improved students' learning attitudes and motivation but also enhanced their learning achievement and self-efficacy, owing to the knowledge-organizing and sharing facility embedded in the collaborative gaming environment.

Ricci, Salas and Cannon-Bowers (1996) reported that participants assigned to the game condition scored significantly higher on a retention test compared to their pretest performance. Furthermore, participants in the game condition scored significantly higher on the retention test than participants assigned to the text condition.

On the other hand, some studies revealed that digital game-based learning (DGBL) did not achieve better results than traditional instruction (e.g., Panoutsopoulos & Sampson, 2012; Lucht & Heidig, 2013; Gao, Yang & Chen, 2009; Panoutsopoulos & Sampson, 2010; Wrzesien & Alcañiz Raya, 2010). Furio, Gonzalez-Gancedo, Juan, Segui and Rando (2013) found no significant difference between using a digital game and a traditional approach: students achieved similar knowledge improvement either way. Jong, Shang, Lee, Lee, and Law (2006) likewise reported no significant difference in students' learning outcomes between the two approaches. Moreover, McQuiggan, Rowe, Lee and Lester (2008) found that students' learning gains were smaller than those produced by traditional instructional approaches, indicating that the traditional approach outperformed DGBL.

Given these divergent research results, some researchers recognized that literature review is an important instrument for determining whether DGBL is effective. There are two ways to conduct a literature review. One is the descriptive literature review. As an example, Randel, Morris, Wetzel and Whitehill (1992) covered the years 1984 to 1991 and reported that of the 67 articles included, 38 found no differences between computer games and traditional teaching methods, 22 favored games, an additional four with questionable control groups also favored games, and only three favored traditional methods.

However, such a review can only count how many studies on the topic found an effect; it cannot estimate the size of that effect. This approach may suffice when few studies exist, but with many studies it degenerates into vote counting. The weakness of vote counting is that it ignores differences in study quality: sample size, significance level, and sampling method all vary across articles, so it is unreasonable to give each study the same weight.

Another way to conduct a literature review is meta-analysis, which has become an option for overcoming the above limitations and problems. Meta-analysis is a quantitative literature review. Its advantages are that results can be generalized to a larger population; the precision and accuracy of estimates improve as more data are used; inconsistency of results across studies can be quantified and analyzed; moderators can be included to explain variation between studies; and the presence of publication bias can be investigated.
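To make the contrast with vote counting concrete, the basic arithmetic of a fixed-effect meta-analysis can be sketched as follows. This is an illustrative sketch only: the three studies and all their means, standard deviations, and sample sizes are invented for demonstration and do not come from any review cited in this chapter.

```python
import math

def cohens_d(mean_game, mean_ctrl, sd_game, sd_ctrl, n_game, n_ctrl):
    """Standardized mean difference between a game group and a control group."""
    pooled_sd = math.sqrt(((n_game - 1) * sd_game**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_game + n_ctrl - 2))
    return (mean_game - mean_ctrl) / pooled_sd

def d_variance(d, n_game, n_ctrl):
    """Approximate sampling variance of Cohen's d."""
    return (n_game + n_ctrl) / (n_game * n_ctrl) + d**2 / (2 * (n_game + n_ctrl))

def fixed_effect_mean(effects):
    """Inverse-variance weighted mean effect size: larger, more precise
    studies get more weight, unlike vote counting, which weights all
    studies equally."""
    weights = [1.0 / var for _, var in effects]
    return sum(w * d for w, (d, _) in zip(weights, effects)) / sum(weights)

# Hypothetical studies: (mean_game, mean_ctrl, sd_game, sd_ctrl, n_game, n_ctrl)
studies = [
    (78, 72, 10, 11, 60, 60),    # medium study, game favored
    (65, 66, 12, 12, 25, 25),    # small study, slight control advantage
    (80, 70, 15, 14, 100, 100),  # large study, game favored
]

effects = []
for mg, mc, sg, sc, ng, nc in studies:
    d = cohens_d(mg, mc, sg, sc, ng, nc)
    effects.append((d, d_variance(d, ng, nc)))

pooled = fixed_effect_mean(effects)
print(round(pooled, 3))
```

A vote count of these three hypothetical studies would report "two favor games, one favors control"; the weighted synthesis instead yields a single pooled effect size in which the large, precise study dominates and the small contrary study contributes little.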

Hartley (1977) was the first to apply meta-analysis to computer-based instruction, and from 1977 until now the effect of DGBL has continued to interest researchers, showing that it is an important topic worth investigating. There are 19 previous reviews of the effectiveness of DGBL: six qualitative reviews (Kulik, 1981; Bangert-Drowns, 1986; Thomas & Hooper, 1991; Emes, 1997; Ke, 2009; van der Spek & van Oostendorp, 2009) and 13 quantitative reviews (Hartley, 1977; Kulik, Kulik & Cohen, 1980; Burns & Bozeman, 1981; Kulik, Bangert & Williams, 1983; Dekkers & Donatti, 1981; Randel, Morris, Wetzel & Whitehill, 1992; Dempsey, Rasmussen & Lucassen, 1996; Wolfe, 1997; Lee, 1999; Hays, 2005; Vogel et al., 2006; Sitzmann, 2011; Wouters, van Nimwegen, van Oostendorp & van der Spek, 2013).

The reviews of Burns and Bozeman (1981), Emes (1997) and Hays (2005) found virtually no evidence of a relationship between experimental design features and study outcomes, while Vogel et al. (2006) and Wouters, van der Spek and van Oostendorp (2009) found positive effect sizes of interactive simulations and games versus traditional teaching methods for both cognitive gains and attitude. Dekkers and Donatti (1981) showed that DGBL only worked on attitude, whereas Wouters et al. (2013) reported that serious games were more effective in terms of learning and retention but were not more motivating than conventional instruction methods.

These differences may stem from the fact that each of these studies focused on different skills to learn, used computers differently, involved different subjects, and was applied in a different instructional domain. That moderators cause such differences has been confirmed by several reviews. Kulik (1981), reviewing evidence from his own quantitative synthesis of findings and from Hartley (1977), concluded that the effectiveness of computer-based teaching was a function of instructional level, at least in mathematics education. Kulik (1981) therefore suggested that at the lower levels of instruction, learners needed the stimulation and guidance provided by a highly reactive teaching medium, whereas at the upper levels of instruction a highly reactive instructional medium may not only be unnecessary but may even hinder learning.

Dekkers and Donatti (1981) suggested that the digital game characteristics, duration, and sample size of the digital game group were important variables, and Wouters et al. (2013) reported that learners in serious games learned more than those taught with conventional instruction methods when the game was supplemented with other instruction methods, when multiple training sessions were involved, and when players worked in groups. These findings all show that moderators do influence the effectiveness of DGBL.

Besides focusing on different aspects, articles also vary in the effects they measure. Sitzmann (2011) stated that his review focused on the effect of learning; however, the learning he referred to was limited to the cognitive level (declarative knowledge, procedural knowledge, retention and training transfer). Wouters et al. (2013) separated learning gains into two categories: knowledge and motivation. Although Wouters et al. (2013) investigated learning outcomes, they put academic achievement and higher-level thinking in the same category, which may influence the conclusions drawn from the data.

Various differences exist among articles even when they study the same topic. For example, Cheng and Su (2012) developed a game-based learning system to improve self-efficacy in students' learning; Panoutsopoulos and Sampson (2012) provided evidence for the effect of a general-purpose commercial digital game on achievement in the standard curriculum; Hwang, Wu, and Chen (2012) developed a competitive board game for conducting web-based problem-solving activities; and Hummel, Geerts, Slootmaker, Kuipers and Westera (2013) described an empirical study into the feasibility of an online collaboration game that helped teachers-in-training deal with classroom management dilemmas. Given these diverse focuses, it is necessary to arrange the studies into suitable categories and consider their different characteristics. Specifically, what kind of effect is each study evaluating? What kind of moderator does it use? Answering these questions meaningfully depends on meta-analysis; a descriptive literature review cannot handle the subtle differences among articles.