
Affordances of Augmented Reality in Science Learning:

Suggestions for Future Research

Kun-Hung Cheng · Chin-Chung Tsai

Published online: 3 August 2012

© Springer Science+Business Media, LLC 2012

Abstract Augmented reality (AR) is currently considered as having potential for pedagogical applications. However, in science education, research regarding AR-aided learning is in its infancy. To understand how AR could help science learning, this review paper first identifies two major approaches to utilizing AR technology in science education, named image-based AR and location-based AR. These approaches may result in different affordances for science learning. It is then found that students' spatial ability, practical skills, and conceptual understanding are often afforded by image-based AR, while location-based AR usually supports inquiry-based scientific activities. After examining what has been done in science learning with AR supports, several suggestions for future research are proposed. For example, more research is required to explore the learning experience (e.g., motivation or cognitive load) and learner characteristics (e.g., spatial ability or perceived presence) involved in AR. Mixed methods of investigating the learning process (e.g., a content analysis and a sequential analysis) and in-depth examination of user experience beyond usability (e.g., affective variables of esthetic pleasure or emotional fulfillment) should be considered. Combining image-based and location-based AR technology may bring new possibilities for supporting science learning. Theories including mental models, spatial cognition, situated cognition, and social constructivist learning are suggested for the profitable use of future AR research in science education.

Keywords Augmented reality · Science education · Spatial ability · Practical skills · Conceptual understanding · Inquiry-based learning

Introduction

In the past two decades, the applications of augmented reality (AR) have been increasingly receiving attention. Since the 1990s, several special issues on AR have been published by journals such as Communications of the ACM (1993), Presence: Teleoperators and Virtual Environments (1997), Computers and Graphics (1999), and International Journal of Human–Computer Interaction (2003). Moreover, according to the 2011 Horizon Report, AR, with its layering of information over 3D space, creates new experiences of the world. With these new prospects of information access, the prevalent employment of AR has been in marketing, social engagement, or entertainment (Johnson et al. 2011). In addition to these consumer uses, the 2011 Horizon Report also suggested that AR would be adopted within the next 2–3 years to provide new opportunities for teaching, learning, research, or creative inquiry. By examining article publications on Google Scholar, Martin et al. (2011) reported that AR is in its initial stage in terms of publication impact, and they proposed that it will probably have significant influences on education in the future.

K.-H. Cheng (corresponding author)
Digital Content Production Center, National Chiao Tung University, #1001, University Rd., Hsinchu 300, Taiwan
e-mail: kuhu@mail.nctu.edu.tw

K.-H. Cheng
Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei, Taiwan

C.-C. Tsai
Graduate Institute of Digital Learning and Education, National Taiwan University of Science and Technology, #43, Sec. 4, Keelung Rd., Taipei 106, Taiwan
e-mail: cctsai@mail.ntust.edu.tw

DOI 10.1007/s10956-012-9405-9


As far as the development of educational technologies is concerned, investigating how technology assists students' learning is an important issue. Also, in science education, researchers have continued to devote their efforts to exploring technology-aided learning. In Linn's (2003) review, the technology of providing customizable environments (e.g., a function allowing users to graphically organize their concept maps of scientific arguments) and the development of visualization tools to enhance scientific spatial understanding (e.g., earth structures or molecular geometry) were highlighted as trends in science learning. Similarly, a recent review also reported that computer simulations which can visualize invisible phenomena and provide opportunities to manipulate experimental variables have positive learning effects (Rutten et al. 2011). In addition to these advantages for science education, Rutten et al. (2011) indicated a potential issue regarding how users' immersion in computer simulation environments contributes to learning effects. The notion of users' perceived immersion brings us to a consideration of the immersive experiences afforded by AR technology. With its capability of infusing digital information throughout the real world, AR technology could engage learners in an immersive context along with authentic experiences to make scientific investigations, collect data outside the classroom, interact with an avatar, or communicate face-to-face with peers (e.g., Dunleavy et al. 2009). It is therefore suggested that researchers should pay attention to how AR technology could further help science education.

Since AR is considered as having potential for pedagogical applications (Johnson et al. 2011), several studies have probed its effects on science education regarding the issues of conceptual change (Shelton and Stevens 2004), laboratory work (Andújar et al. 2011), inquiry-based learning (Squire and Klopfer 2007), scientific argumentation (Squire and Jan 2007), ecological preservation (Koong Lin et al. 2011), and spatial ability (Martín-Gutiérrez et al. 2010). The results of these studies mostly showed learners' positive attitudes toward AR (e.g., satisfaction or perceived usefulness), and to some extent, indicated improvement in student outcomes.

Due to the fact that educational research regarding AR-aided learning is in its infancy (Martin et al. 2011), this paper argues that it is worth understanding what role AR technology may play in science learning and further identifying future directions for AR-related study. Therefore, the background and characteristics of current AR applications are first discussed in this paper. Because the technology utilized in AR applications has been developed for a period of time and continues evolving, it is necessary to identify the contemporary and emerging features of AR technology. With an overall understanding of the features of this technology, the affordances of AR in science learning are then explored.

Moreover, to pedagogically examine AR-related studies on science education from a broader perspective, the dimensions (i.e., learning concepts, technical features, learner characteristics, interaction experience, learning experience, learning process, and learning outcomes) of the virtual reality (VR)-based learning model proposed by Salzman et al. (1995) are adopted in this paper. Based on these pedagogical dimensions, this paper finally proposes suggestions for future research regarding AR-aided science learning. In summary, based on a review of the literature, the purposes of this paper are as follows:

1. To identify the current features of AR technology in science education.

2. To understand the affordances of AR in science learning.

3. To examine the focal point of the current research on AR-related science learning.

4. To make suggestions for future science research regarding AR-related learning.

Background of Augmented Reality

In the 1960s, the idea of virtual reality (VR) was initially proposed by computer graphics pioneer Ivan Sutherland to construct a synthetic environment through visualization using a head-mounted device (Sutherland 1968). With the growth of VR, in the 1990s, the term augmented reality (AR) was coined by scientists at the aircraft manufacturer Boeing, who were developing an AR system that blended virtual graphics onto a real environment display to help aircraft electricians with cable assembly (Caudell and Mizell 1992). At about the same time during the early 1990s, several introductory applications of AR were published, such as a surgical training program (Bajura et al. 1992) and a laser printer maintenance demonstration (Feiner et al. 1993). Since AR evolved from and shares partial commonalities with VR because of their computer-generated elements, in 1994, Milgram and Kishino presented the concept of a virtuality continuum, as shown in Fig. 1, defining environments consisting solely of physical objects (e.g., a video display of a real-world scene) on the left, and environments consisting solely of virtual objects (e.g., a computer graphic simulation) on the right. Described as the possibilities of the mixture of real-world and virtual-world objects within a single display, mixed reality exists at any point on this continuum and encompasses both augmented reality and augmented virtuality (Fig. 1).

The definition of AR commonly adopted by relevant studies is what Azuma (1997) described as a variation of VR. While VR entirely immerses a user in a synthetic environment, AR allows a user to see the real world with virtual elements superimposed upon it in real time. To avoid limiting AR to specific technologies or required devices such as head-mounted displays (HMD), Azuma (1997) identified three characteristics of AR: (1) it combines real and virtual, (2) it is interactive in real time, and (3) it is registered in 3D. Recently, instead of emphasizing the 3D characteristic, Klopfer (2008) proposed a spectrum describing AR with lightly to heavily virtual information provided to users. While light AR refers to experiencing a lot of physical reality along with limited virtual information access, heavy AR represents massive virtual information input in an augmented environment. Several studies have also developed AR systems without 3D virtual information registered in physical environments (e.g., Dunleavy et al. 2009; Squire and Jan 2007; Squire and Klopfer 2007). In this paper, it is hence considered that a current characteristic of AR may be that it does not necessarily present virtual objects (or information) in 3D.

The initial stage of AR development seemed to rely on HMD-related devices for implementing research. The HMD devices used to combine real-world and computer-generated information are often categorized into two forms: optical see-through and video see-through (Azuma 1997). While the optical system superimposes virtual images on a user's view of the real world, video systems blend computer graphics with camera images that approximate what a user would normally see. Because the see-through HMD is deemed a high-end, expensive, or obtrusive device, sometimes requiring an additional backpack with computer apparatus, AR hardware characterized by simplicity and portability may have greater opportunities for widespread use. That is why mobile AR applications have recently become popular in Google searches. Also, some studies regarding AR-related learning (e.g., Ha et al. 2011) or AR games (e.g., Broll et al. 2008) have suggested a future direction of developing a mobile AR platform.

Features of Augmented Reality

While a variety of AR-related equipment is utilized, there is a need to understand the current features of AR. For example, the enabling technologies of computing hardware (e.g., wearable PC, tablet PC, or smartphone), software architectures (e.g., wireless and 3G networking), and tracking and registration (e.g., GPS) for mobile AR have been summarized in a previous literature review (Papagiannakis et al. 2008). In order to simply categorize the present state-of-the-art developments in AR, the two types of AR application reported in Pence's (2011) study, namely (1) marker-based and (2) markerless AR, could be a generally accepted classification of AR. However, as an emerging AR technology of natural image recognition is being developed beyond artificial marker identification, it is necessary to enrich the definition of marker-based AR. Therefore, this paper attempts to re-coin the two types of AR as (1) image-based and (2) location-based AR, offering a broader characterization of AR applications.

Image-Based AR

Basically, marker-based AR requires specific labels to register the position of 3D objects on the real-world image. As presented in Fig. 2, an AR book with basic equipment such as a webcam and marker labels is one of the typical marker-based applications and has been employed in several studies (e.g., Koong Lin et al. 2011; Martín-Gutiérrez et al. 2010). A marker label in typical marker-based applications is commonly presented as an iconic coded image (refer to Fig. 3). By detecting a marker label on a book through a webcam capture, a virtual element is then generated by the AR software. This virtual element is shown upon the book and displayed on the computer screen, and it can be manipulated by tilting or rotating the book. Moreover, set up with a projector in a traditional classroom, students can operate a card with an AR marker to control the 3D objects on a projected screen (Núñez et al. 2008) or a whiteboard (Kerawalla et al. 2006). Also, it is a trend that marker-based AR applications with mobile devices are being developed, such as the AndAR project initiated by Google in 2011. With mobile devices, the applications of AR would not be restricted to the front of desktop computers.

Fig. 1 Virtuality continuum defining the possibilities of the mixture of real-world and virtual-world objects (modified from Milgram and Kishino's 1994 study)

Fig. 2 The concept of an AR book (modified from Koong Lin et al.'s 2011 study)

Beyond the artificial labels identified in marker-based AR, natural image recognition has been integrated into AR technology. For example, in Ajanki et al.'s (2011) study, augmented information regarding an individual's synopsis could be overlaid on the display by recognizing the image of a human face. Recently, several handheld applications (e.g., the junaio app) have been developed to show commercial promotion information by recognizing graphics. For example, when a restaurant poster designed with actual beverage graphics is detected by a mobile camera, a virtual object (e.g., a 3D beverage model) then pops up on the mobile screen for business purposes. In the present paper, this detection process is referred to as natural graphics recognition. Instead of marker identification, graphics recognition has also been gradually applied in AR because such graphics naturally fit human visual experiences. To summarize, this paper concludes that both artificial marker and natural graphics recognition could be deemed types of image-based AR features.
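To make the image-based recognition pipeline described above more concrete, a minimal sketch is given below. It is not drawn from any of the reviewed systems; it assumes the opencv-contrib-python package (with the legacy aruco API available up to version 4.6), a webcam at device index 0, and a printed fiducial marker roughly 5 cm wide. Detecting the marker and estimating its pose is the step at which an AR engine would anchor a virtual 3D object on the book page.

# Minimal sketch of marker detection and pose estimation for image-based AR.
# Assumes opencv-contrib-python (legacy aruco API, e.g., version 4.6) and numpy.
import cv2
import numpy as np

# Rough camera intrinsics; a real application would calibrate the webcam instead.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
params = cv2.aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)  # webcam capture, as in a typical AR book setup
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)
    if ids is not None:
        # Estimate each marker's 3D pose relative to the camera; this pose is
        # where an AR engine would render the virtual content (e.g., a 3D model).
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, 0.05, camera_matrix, dist_coeffs)  # 0.05 m marker side
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        for rvec, tvec in zip(rvecs, tvecs):
            cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs, rvec, tvec, 0.03)
    cv2.imshow("image-based AR sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

Tilting or rotating the book changes the estimated pose in every frame, which is what lets the rendered object follow the physical page in the manner described by the studies above.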

Location-Based AR

In contrast to image-based AR, markerless AR uses position data obtained from mobile devices, such as from a wireless network or the global positioning system (GPS), to identify a location, and then superimposes computer-generated information (as illustrated in Fig. 4). Several studies have demonstrated location-aware AR educational games with mobile devices. For example, McCall et al. (2011) developed a handheld AR game to immerse users in exploring the history of a city via the assistance of position orienteering and augmented elements (e.g., characters, objects, and buildings) on the scene. Popular mobile apps such as Layar and Wikitude are also designed for discovering augmented information around users (e.g., restaurant information or scenic spots) by detecting their position. Obviously, these AR applications share similarity in their technical features, in that location-based augmented information is shown on the users' mobile screens in real time, and can thus be generalized as a type of location-based AR.
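As a rough illustration of this trigger mechanism (a hypothetical sketch, not taken from any of the reviewed systems; the points of interest, coordinates, and 30 m radius are invented for the example), the following Python fragment computes the great-circle distance between a GPS fix and a set of points of interest and returns the augmented content a mobile client would display when the learner is within range:

# Minimal sketch of the trigger logic behind location-based AR.
# The points of interest and radius are hypothetical; a real system would
# read live GPS fixes from the mobile device.
from math import radians, sin, cos, asin, sqrt

POINTS_OF_INTEREST = [
    # (name, latitude, longitude, augmented content shown on the mobile screen)
    ("River sampling site", 24.7870, 120.9970, "virtual water-quality data overlay"),
    ("Campus weather station", 24.7885, 121.0002, "avatar explaining local climate"),
]
TRIGGER_RADIUS_M = 30.0  # content appears when the learner is within 30 m

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000.0 * asin(sqrt(a))

def content_to_display(user_lat, user_lon):
    """Return the augmented content for every point of interest within range."""
    return [(name, content)
            for name, lat, lon, content in POINTS_OF_INTEREST
            if haversine_m(user_lat, user_lon, lat, lon) <= TRIGGER_RADIUS_M]

# Example: a GPS fix near the first point of interest triggers its overlay.
print(content_to_display(24.7871, 120.9971))

A full AR browser would additionally use the device compass and camera view so that the triggered content is drawn over the live scene rather than merely listed.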

A Comparison of Image-Based and Location-Based AR

To clearly identify the similarities and differences between image-based and location-based AR, Fig. 5 illustrates a comparison. While the recognition of artificial labels or natural graphics is the main feature of image-based AR, GPS or a wireless network is used as the recognition technique to register users' positions and to offer them real-time information in a location-based AR environment. After the process of recognition, both types of AR technology add augmented assets (e.g., text, audio, video, or 3D models) to the physical elements on the users' display. The use of these two categories might enhance understanding of the features of AR applications, regardless of what hardware and software is used. Before discussing the affordances of AR in science learning based on the two categories, a case of AR applications in science education is presented in the following section.

Fig. 3 Example of a marker label in image-based AR

Fig. 4 The concept of location-based AR (modified from the design of the Layar app)


A Case of AR Applications in Science Education

To clearly address the mechanism of AR technology and how it supports science education, a case of an image-based AR learning activity from recent research is presented in this section. In Martín-Gutiérrez et al.'s (2010) study, an image-based AR book system was established for engineering graphics learning. The hardware setting consists of a desktop computer, a webcam, and an AR book with several marker labels on the pages. By detecting an iconic marker on the AR book through the webcam, a 3D virtual geometry object is shown upon the AR book and displayed on the computer screen. The learning task required the participants to identify surfaces and vertices on both orthographic and axonometric views of the 3D object and further practice sketching exercises for evaluating their spatial ability. With the affordances of AR, the participants are allowed to freely tilt or rotate the AR book to manipulate the 3D object when they need to inspect it from different perspectives. This operation of AR empowers learners to interact with the 3D geometry objects without wearing headsets and to immediately complete exercises on the paper book to reflect their spatial concepts.

Affordances of AR in Science Learning

AR technology has been widely utilized due to its possibilities in a variety of fields such as manufacturing, urban design, museum exhibitions, or clinical psychology. In the educational domain, researchers are continually endeavoring to develop AR for learning. To initially understand how AR could help learning in science education, relevant studies were first searched for through the Web of Knowledge and Scopus databases using keywords such as augmented reality and science learning or science education. Basically, studies with either empirical data or topics related to science (e.g., astronomy, chemistry, biology, or engineering) were selected. It should be noted, however, that to depict current AR technology in science education, several studies described as using AR applications which were actually developed with VR-based systems were excluded from this paper. Hence, 12 articles regarding AR-related work were chosen for analysis. As shown in Table 1, the technical features, focus topics, participants, and affordances in science learning of these studies are summarized. Moreover, in order to fully describe what has been investigated in AR-related science learning from a pedagogical perspective, the VR-based learning model (Salzman et al. 1995) was used as a basis for examining the affordances of AR in science learning. That is, the selected articles are discussed according to the dimensions in the model (i.e., technical features, science concepts, learner characteristics, interaction experience, learning experience, learning process, and learning outcomes).

Technical Features and Science Concepts

Through the selected articles on the application of AR, their support for science learning is addressed by examining the associations between technical features (i.e., image-based AR and location-based AR) and science concepts.

Image-Based AR for Spatial Ability, Practical Skills, and Conceptual Understanding

Several image-based AR applications have been designed for science learning. For instance, Martín-Gutiérrez et al. (2010) designed an AR book with marker identification, namely AR-Dehaes, which utilizes iconic markers and offers 3D virtual objects displayed on the screen to help students handle and visualize engineering graphics and further enhance their spatial ability. For inorganic chemistry education, an image-based AR setup in a multimedia classroom could support students in developing spatial intuition regarding the 3D arrangement of crystalline structures (Núñez et al. 2008). Another image-based AR application for geosciences, applied with teachers' instructional guidance in a classroom and employed by Kerawalla et al. (2006), required children to hold an AR tile to manipulate the spatial relationships between the 3D objects of the Earth, Sun, and Moon.

Similarly, in the field of astronomy, Shelton and Stevens (2004) also used an image-based AR system (with HMD) to provide a way to understand the spatial concept of Earth–Sun relationships, as well as to make a conceptual change in astronomical thinking.

Fig. 5 A comparison of image-based and location-based AR


Table 1 A summary of selected studies on AR in science education

No. | Primary author (year of publication) | AR features | Science-related topics | Participants | Affordances in science learning
1 | Shelton and Stevens (2004) | Image-based (HMD) | Astronomy (Earth–Sun relationships) | 15 university students | Conceptual change, spatial ability
2 | Kerawalla et al. (2006) | Image-based (projector, whiteboard, and webcam) | Geoscience (the Earth, Sun, and Moon) | Children aged 9–10 years | Conceptual understanding, spatial ability
3 | Squire and Klopfer (2007) | Location-based (mobile) | Environmental science (role-playing for investigating a simulated chemical spill within a watershed) | High school and university students | Inquiry-based learning
4 | Rosenbaum et al. (2007) | Location-based (mobile) | Medical science (role play for containing a disease outbreak) | Mathematics and science public high school students | Inquiry-based learning
5 | Squire and Jan (2007) | Location-based (mobile) | Environmental science | Students aged 9–16 | Inquiry-based learning
6 | Eursch (2007) | Image-based (HMD and video camera) | Manual tasks in nuclear science | None | Practical skills
7 | Núñez et al. (2008) | Image-based (projector, screen, and webcam) | Inorganic chemistry | 15 university students | Spatial ability
8 | Dunleavy et al. (2009) | Location-based (mobile) | Math, language arts, and scientific literacy | Middle and high school teachers and students | Inquiry-based learning
9 | Martín-Gutiérrez et al. (2010) | Image-based (PC and webcam) | Spatial ability | 24 engineering students in AR group and 25 engineering students in control group | Spatial ability
10 | Andújar et al. (2011) | Image-based (PC and video camera) | Remote laboratory experimentation | 36 university students and 10 teachers | Practical skills
11 | Koong Lin et al. (2011) | Image-based (PC, webcam, and touch screen) | Biology (conservation of fish) | 33 subjects (without demographic information) | Conceptual understanding
12 | O'Shea et al. (2011) | Location-based (mobile) | Math and language arts | Several male groups and female groups | Inquiry-based learning


Moreover, an interactive image-based AR learning system developed by Koong Lin et al. (2011) was devoted to assisting students to learn about the importance of the conservation of fish. In the system operation process, after watching an instructional video regarding conservation issues on the AR book, learners used a fishing rod as a game prop to interact with avatars (e.g., eliminating polluting objects in a river), which were generated when the webcam detected several setup markers. In short, this learning activity was designed to support conceptual understanding in biology. With regard to the enhancement of practical skills, Andújar et al. (2011) developed an image-based AR remote laboratory to demonstrate the laboratory devices to students and enable them to interact with physical elements overlapped with virtual objects in real time. Moreover, to reduce the health threat of radiation when performing manual tasks in a nuclear laboratory, an image-based AR system with an HMD and a remote control camera was set up as a solution to this challenge (Eursch 2007). In addition to increasing operational safety, the AR system could offer operators significant augmented information in their view of the working environment, such as visualized nuclear radiation and warnings about dangerous areas, to help them cope with radioactive materials and further complete the manual tasks in the nuclear laboratory.

In sum, the image-based AR technology utilized in the selected studies allows users to manipulate a plate with a marker to comprehend the 3D structure of augmented virtual objects. These studies have extended this characteristic to enhance learners' scientific spatial ability (Kerawalla et al. 2006; Martín-Gutiérrez et al. 2010; Núñez et al. 2008; Shelton and Stevens 2004), conceptual understanding (Koong Lin et al. 2011), and conceptual change (Shelton and Stevens 2004). Also, with the auxiliary information about physical elements superimposed on a display (e.g., a computer screen or a see-through HMD) provided by image-based AR technology, learners' practical skills (Andújar et al. 2011; Eursch 2007) in science learning are most likely enriched based on their experiences in physical environments such as laboratories.

Location-Based AR and Scientific Inquiry Learning

With the mobility of location-based AR, collaborative inquiry-based activities are employed in five selected science education studies (i.e., Dunleavy et al. 2009; O'Shea et al. 2011; Rosenbaum et al. 2007; Squire and Klopfer 2007; Squire and Jan 2007). For example, Dunleavy et al. (2009) developed Alien Contact!, an AR simulation in which students have to work together as four characters: a chemist, a cryptologist, a computer hacker, and an FBI agent. Interviewing virtual characters, collecting information, and further solving mathematics, language, and scientific literacy puzzles are the tasks to be completed. Gray Anatomy, a follow-up to Alien Contact!, was implemented by O'Shea et al. (2011) according to the modifications suggested by the Alien Contact! project for improving the learning activity (O'Shea et al. 2009). At the beginning of this AR curriculum, a scenario in which a gray whale has stranded itself on a beach is presented. The students work in teams to interview avatars, inspect virtual objects, and try to solve mathematics and language problems to find out when and why the whale beached. Rosenbaum et al. (2007) designed an AR game based on authentic role-playing, namely Outbreak @ The Institute, where students play the roles of doctors, medical technicians, and public health experts to collaboratively contain a disease outbreak across an area of a campus. In Environmental Detectives, an AR simulation game designed by Squire and Klopfer (2007), the students take on the role of environmental engineers investigating a simulated chemical spill within a watershed. Similar to the scenario designed by Squire and Klopfer (2007), Squire and Jan (2007) designed Mad City Mystery, a location-based AR game in which students must come up with a viable explanation for a virtual character's death while playing one of three roles (i.e., a medical doctor, an environmental specialist, or a government official). To sum up, in the scenario of inquiry-based learning with location-based AR, role-playing, gaming, and teamwork are the three major activity designs in these studies.

Commonly, the role-playing and gaming designs in the context of inquiry-based AR learning generate a learning process that encourages students to (1) observe the phenomena in a surrounding environment, (2) ask questions about the phenomena, (3) investigate and interpret data, (4) create hypotheses, plausible explanations, or practical plans, and (5) develop conceptual understandings in a collaborative way. According to Squire and Jan (2007), the designs of collaborative role-playing serve as cognitive scaffolding for the activity. Because an individual does not have sufficient information to complete the task, the teams are forced to read, synthesize, and discuss their findings. While students are involved in Mad City Mystery, Squire and Jan (2007) argue that such an inquiry-based AR learning process can engage them in scientific thinking (e.g., argumentation or literacy) due to the requirements of assessing evidence, developing hypotheses, testing them against evidence, and finally generating theories. In an authentic setting with real environments, students' prior knowledge is triggered, and their understanding of the socially situated nature of scientific practice is also enhanced. Despite the positive influences of inquiry-based AR learning, however, Rosenbaum et al. (2007) found that some misconceptions about disease transmission might be induced in the process of the learning activity. These misconceptions may be attributed to a lack of knowledge about disease transmission and flaws in the game presentation for students, rather than to the technical features of location-based AR.

As described by Squire and Klopfer (2007), "inquiry is a process of balancing and managing resources, combining multiple data sources, and forming and revising hypotheses in situ" (p. 371). Location-based AR allows students to step outside the classroom and provides an opportunity for inquiring into science issues with the aid of virtual information in the real world or with real phenomena. From Rosenbaum et al.'s (2007) view of authenticity, the affordances of handheld computers with portability could be used to structure inquiry-based activities in which students interact with each other and with the real environment around them. Similarly, location-based learning (Squire and Jan 2007) and situated learning (Dunleavy et al. 2009) are considered adequate pedagogical concepts for developing mobile AR to immerse students in authentic scientific inquiry. At present, therefore, the affordances of location-based AR appear to lie mainly in supporting scientific inquiry learning.

To sum up, a tentative finding of this paper is that there are three main aspects of science learning afforded by image-based AR, including spatial ability (Kerawalla et al. 2006; Martín-Gutiérrez et al. 2010; Núñez et al. 2008; Shelton and Stevens 2004), practical skills in the laboratory (Andújar et al. 2011; Eursch 2007), and conceptual understanding (Koong Lin et al. 2011) or conceptual change (Shelton and Stevens 2004). Because image-based AR technology is focused on the presentation of three-dimensional space, its affordances for science learning are mainly related to spatial ability and extend to practical skills or conceptual understanding. On the other hand, the selected studies indicate a trend that applications of location-based AR are likely to support collaborative inquiry-based activities in science learning (Dunleavy et al. 2009; O'Shea et al. 2011; Rosenbaum et al. 2007; Squire and Jan 2007; Squire and Klopfer 2007). Location-based AR technology is not bound to a fixed position and is developed within the context of physical environments; it therefore provides more opportunities to design activities for learners to make inquiries into scientific topics.

Learning Process and Learning Outcomes

Most of the studies regarding the utilization of image-based AR did not clearly explore the students' learning process or examine their learning outcomes, except for those undertaken by Kerawalla et al. (2006), Martín-Gutiérrez et al. (2010), and Shelton and Stevens (2004). Through videotaping, utterance coding, and excerpts from the interactions between teachers and children (e.g., teachers' questions to children), Kerawalla et al. (2006) reported the students' learning process when using image-based AR. The results showed that, in the AR sessions, the children had less opportunity to manipulate the AR tile themselves and to ask questions because the teachers always did the demonstration. Another interesting finding regarding the learning process in Kerawalla et al.'s (2006) study was that the children using the image-based AR system were less engaged than those taught using traditional methods (e.g., role-playing) and resources (e.g., a large print book). They argued that the possible reason was that the children in the AR session were mostly asked to watch and describe the AR animation of the Earth, Sun, and Moon. However, Kerawalla et al. (2006) did not carry out measurements of learning outcomes. On the other hand, Martín-Gutiérrez et al. (2010) conducted a valid comparison of learning outcomes. Through pre- and post-tests, they found that the use of an image-based AR system had a positive impact on the spatial ability of engineering freshmen. Moreover, Shelton and Stevens (2004) used iterative videotape analysis to qualitatively explore university students' learning processes and outcomes regarding conceptual change in an AR exercise. However, of the 15 students who constituted the research sample, only one student's transcript excerpts were reported in their study to indicate how the students' astronomical thinking changed after the AR-related activity.

Compared with image-based AR, the studies utilizing location-based AR (i.e., Squire and Klopfer 2007; O'Shea et al. 2011; Rosenbaum et al. 2007; Squire and Jan 2007; Dunleavy et al. 2009) did thoroughly explore the participants' inquiry-based science learning process with qualitative methods such as interviews, observations, or videotaping analysis. These studies emphasized the students' interactive discourse to describe the process of forming initial problems, constructing group goals, exchanging or negotiating information with group members, planning solutions, and developing shared understandings. Through the discourse, these studies also qualitatively indicated an improvement in students' scientific practice and thinking ability. Another interesting measurement used in Rosenbaum et al.'s (2007) study is that the learning outcomes regarding understanding of a dynamic system in a game context were measured by pre- and post-surveys via diagram drawing. They found that the students did understand the complex causality after participating in the authentic science activity with location-based AR technology.

Learning Experience and Interaction Experience

In general, students in the image-based AR settings showed positive interaction experiences when learning. Through the measurement of a satisfaction questionnaire, Martín-Gutiérrez et al. (2010) found that the students expressed very positive attitudes toward the AR-Dehaes system due to its attractiveness and usefulness. Also, from the aspect of usability evaluation, the acceptance, ease of use, and acknowledgment of the effectiveness of the image-based AR systems were indicated by the users' responses (Andújar et al. 2011; Eursch 2007; Koong Lin et al. 2011; Núñez et al. 2008). However, there are some problems with the operation of these systems. For instance, Koong Lin et al. (2011) found that the students considered the system procedure to be complicated and the operation of the system to be not stable enough (e.g., there were system crashes), and that there was a need for assistance from technical staff. To confront these interaction obstacles, Kerawalla et al. (2006) suggested that a more flexible and controllable image-based AR system (e.g., a function allowing the altering of parameters to flexibly explore the relationships between the Earth, Sun, and Moon) should be developed to avoid the limitation of asking children to just watch and describe the elements. To be more specific, the capability of adding and removing elements and of changing the speed of animations was suggested for incorporation into the AR system.

In the location-based AR settings, students mostly expressed positive learning experiences and were highly motivated. For example, Dunleavy et al. (2009) reported that the students' high engagement resulted from several factors, such as using the handhelds and GPS to learn, collecting data outside the classroom, or the positive interdependence of team members. Rosenbaum et al. (2007) also concluded that the authentic roles, the communication and collaboration, and the game scenario as a dynamic system all facilitate students' motivation to become involved in AR-related learning. However, some instructional challenges unique to location-based AR environments were addressed in Dunleavy et al.'s (2009) study. Firstly, the lack of logistical support might cause instructional management problems (e.g., keeping groups together, answering interface questions, or corralling students out of the street). Secondly, cognitive overload might be caused by the amount of material and the complexity of tasks. Thirdly, the design of AR games is likely to induce unanticipated competition among students (e.g., a race to see who can solve the problems first when two teams are walking side by side). In a follow-up study, O'Shea et al. (2011) found that fewer virtual character/object interactions can reduce cognitive overload, and that a nonlinear gaming path design can reduce the competition among groups. These challenges expressed through students' learning experiences in a location-based AR system should be noted by researchers.

Engagement in location-based AR activity is a common issue. In O'Shea et al.'s (2011) study, it was found that the use of GPS-enabled handhelds, the opportunity to collect data outside the classroom, and the interactions among the game roles within the team dynamic are crucial factors for engaging learners in the AR-related activity. Similar to the concept of engagement, Rosenbaum et al. (2007) found that students showed personal embodiment in the AR simulation game. That is, the students felt themselves physically interacting with the virtual environment. The results indicate an issue of interaction experience with the location-based AR interface. Although other studies addressed similar results regarding immersion in location-based AR environments (Squire and Klopfer 2007; Squire and Jan 2007), Dunleavy et al. (2009) pointed out several interaction problems from students' experiences. They highlighted software issues of GPS errors and hardware challenges of the screen being too bright and the environment being too noisy when walking outside the classroom. These negative interaction experiences might frustrate students' engagement and therefore require more attention.

Learner Characteristics

Among the selected articles, only three studies examined learner characteristics when students were involved in AR-related science learning (O'Shea et al. 2011; Squire and Jan 2007; Squire and Klopfer 2007). For example, the students in O'Shea et al.'s (2011) study were divided into two different groups (i.e., males and females). It was found that there were no gender differences in engagement and collaboration; however, the male groups had more conversations during the process of the activity than the female groups did. As O'Shea et al. (2011) argued, the possible reason is that the male students had more gaming experience and showed more inclination to talk with their team members.

In Squire and Jan's (2007) study, three groups were selected as the research participants, including elementary, middle, and senior high school students, to make a comparison of the effect of different ages on reading comprehension, communication, and scientific argumentation. They found that older students presented more sophisticated reading practices, integrated pieces of evidence, and built coherent arguments to make a hypothesis or conclusion for the AR game task. In contrast, the younger students tended to raise and reject hypotheses when receiving new evidence, and further constructed more fragmented and incoherent narratives.

With two cohorts of university and high school students participating in a location-based AR game, a cross-case discussion by Squire and Klopfer (2007) revealed that the students presented similarity in environmental engineering practices and in identifying problems regardless of age.


However, concerning the pedagogical value of the physical learning experience, they emphasized the role of existing knowledge of the surroundings in confronting and solving problems when making inquiry investigations regarding environmental science. In short, although learner characteristics including gender difference, age difference, and prior knowledge have been mentioned in AR-related science learning research, there are still few studies focusing on the issues of learner characteristics.

Suggestions for Future Research

Based on the dimensions in the VR-based learning model proposed by Salzman et al. (1995), the present paper has explored the affordances of AR in science learning. Furthermore, to clearly conclude this review of what has been done in science learning with AR supports, Fig. 6 has been generated to address several issues and research methods according to the selected studies.

As shown in Fig. 6, the research on image-based AR in science learning mainly emphasizes the evaluation of users' interactive experience, such as their perceived usability of the AR applications. One possible reason is that most studies focus on the design and evaluation of a newly developed system (Andújar et al. 2011; Eursch 2007; Koong Lin et al. 2011; Núñez et al. 2008). By comparison, the learning processes and outcomes of AR-related research have been explored by very few image-based AR studies (Kerawalla et al. 2006; Martín-Gutiérrez et al. 2010; Shelton and Stevens 2004). Among the selected studies regarding image-based AR, the researchers did not focus on the issues of the learning experience or learner characteristics.

On the other hand, in a few location-based AR studies, the learning experience (i.e., motivation and cognitive overload) (Dunleavy et al. 2009; O'Shea et al. 2011; Rosenbaum et al. 2007) and learner characteristics (i.e., age differences and prior knowledge) (O'Shea et al. 2011; Squire and Jan 2007; Squire and Klopfer 2007) have been discussed. Regarding issues of the learners' interactive experience, it has been indicated that perceived immersion (Rosenbaum et al. 2007; Squire and Jan 2007; Squire and Klopfer 2007) and challenges in software and hardware (Dunleavy et al. 2009) when involved in location-based AR activities require attention.

Additionally, with regard to research methods, qualitative analyses (e.g., interviews, observations, videotaping, or discourse analysis) for exploring the learning process are the commonly adopted methods in both image-based and location-based AR research. Besides quantitative pre- and post-surveys (Martín-Gutiérrez et al. 2010), qualitative analyses were also used to investigate the learning outcomes in image-based AR (Kerawalla et al. 2006; Shelton and Stevens 2004) and location-based AR-related learning (Dunleavy et al. 2009; O'Shea et al. 2011; Rosenbaum et al. 2007; Squire and Jan 2007; Squire and Klopfer 2007). However, there are still limited investigations with regard to the state-of-the-art application of AR-related learning. Based on the results revealed in Fig. 6, the present paper offers some suggestions for possible future research directions.

The Need to Explore Learning Experience

According to the articles selected in this paper, the learning experience has scarcely been discussed in AR-related science studies, especially in image-based AR applications. Following the issues of learning experience which have been raised in location-based AR research (Dunleavy et al. 2009; O'Shea et al. 2011; Rosenbaum et al. 2007), investigations of learners' motivation and cognitive load could be incorporated into image-based AR studies in the future. As Dunleavy et al. (2009) have argued, instructional activity design and management are challenges in location-based AR learning. It could be further proposed that these challenges are related to students' learning experience. Taking this a step further, it is suggested that experimental designs for examining students' learning experience (e.g., motivation or cognitive load) under different instructional designs, either in location-based AR or in image-based AR studies, be developed. For instance, designing different scaffolding mechanisms (e.g., instructional prompts or mid-activity reviews) for learning science in AR-related scenarios could be considered in future studies.

More Research about Other Variables of Learner Characteristics

In addition to the variables of gender, age, and prior knowledge probed in location-based AR learning (O'Shea et al. 2011; Squire and Jan 2007; Squire and Klopfer 2007), several other learner characteristics deserve attention. For example, learners' spatial ability in image-based AR learning environments may be an important variable. Although the image-based AR setting could support the enhancement of spatial ability according to the studies reviewed in this paper, learners' existing ability to understand 3D objects or concepts might interfere with their learning experiences, learning process, or even learning performance. For VR-based learning environments, Salzman et al. (1995) have made a similar suggestion. Following this notion, learners' spatial orientation ability should also be a concern when they are immersed in location-based AR scientific scenarios. For example, in the Environmental Detectives activity (Squire and Klopfer 2007), learners with a poor sense of orientation in physical environments might easily get lost even with the aid of maps, and so could not adequately perform the required scientific investigations (e.g., searching for a simulated chemical spill within a watershed).

Moreover, learners' perceived presence in AR-related environments may be an important learner characteristic to consider. "Presence" is defined as a user's mental state of being within a real place or situation when participating in a virtual world, and it has been discussed in several VR-related studies (e.g., Murray et al. 2007; Schuemie et al. 2001; Sylaiou et al. 2010). Extending the concept to AR systems, the sense of presence reflects a user's perception of being immersed in a blended physical/virtual environment as if being in a single world. For example, McCall et al. (2011) assessed learners' perceived presence by self-reported questionnaires when involved in an AR location-aware game for history learning. Also, in a museum simulation system with an exhibition of AR objects, learners' perceived presence was reported to be associated with their satisfaction and enjoyment (Sylaiou et al. 2010). It can hence be expected that perceptions of presence relate to learners' behaviors in AR-related learning.

Mixed Methods of Investigating Learning Process

Several studies regarding image-based (Kerawalla et al. 2006; Shelton and Stevens 2004) and location-based AR (Dunleavy et al. 2009; O'Shea et al. 2011; Rosenbaum et al. 2007; Squire and Jan 2007; Squire and Klopfer 2007) reviewed in this paper have highlighted and investigated participants' scientific learning processes. By probing the learning process through interviews, observations, or videotaping analysis, how students structure their scientific thinking and knowledge in AR-related learning activities could be better understood. Although these qualitative methods have been commonly utilized in AR-related studies, there is a need to apply mixed-method analysis to attain an in-depth understanding of the learning process. For example, a content analysis and a sequential analysis (e.g., Hou 2010) might be adopted to analyze students' behavioral patterns when involved in science learning with AR technology. Furthermore, a sequential analysis of eye movements could be considered in future work (e.g., Tsai et al. 2012). With the aid of eye-tracking technology, researchers could collect data about eye movement sequences to represent learners' attention to AR information and further compare the quantitative data with the results of learning process analysis generated by qualitative methods.
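To illustrate what such a sequential analysis could look like in practice, the sketch below shows a simplified lag-1 sequential analysis in the spirit of the method cited above; the behavior codes and the coded sequence are hypothetical and would, in a real study, come from a content analysis of discourse or videotaped AR activities. Transitions whose adjusted residual (z) exceeds roughly 1.96 would be read as significant behavioral patterns.

# Simplified lag-1 sequential analysis of coded learning behaviors.
# The coded sequence is hypothetical; real codes would come from content
# analysis of discourse or videotapes of AR-supported learning activities.
from collections import Counter
from math import sqrt

codes = ["Observe", "Question", "Investigate", "Hypothesize", "Observe",
         "Investigate", "Hypothesize", "Explain", "Question", "Investigate"]

transitions = Counter(zip(codes[:-1], codes[1:]))  # lag-1 transition counts
n = len(codes) - 1                                 # total number of transitions
row = Counter(a for a, _ in transitions.elements())  # how often each code starts a transition
col = Counter(b for _, b in transitions.elements())  # how often each code ends a transition

print(f"{'From':<12}{'To':<12}{'Obs':>5}{'Exp':>8}{'z':>8}")
for (a, b), obs in sorted(transitions.items()):
    exp = row[a] * col[b] / n  # expected count if successive behaviors were independent
    # Adjusted residual: deviation of the observed count from chance, in SD units.
    denom = sqrt(exp * (1 - row[a] / n) * (1 - col[b] / n))
    z = (obs - exp) / denom if denom > 0 else float("nan")
    print(f"{a:<12}{b:<12}{obs:>5}{exp:>8.2f}{z:>8.2f}")

The same transition-matrix logic could be applied to coded fixation sequences from eye-tracking data, which is what would allow the quantitative comparison with qualitative process analyses suggested above.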

The User Experience beyond Usability

According to the above review, most of the image-based AR studies have focused on the usability evaluation of the interaction experience (Andújar et al. 2011; Eursch 2007; Koong Lin et al. 2011; Núñez et al. 2008). However, beyond the learners' perceived usability, there are other facets of the interaction experience in the usage of AR systems to consider. Since AR technology involves extensive user interaction, it is suggested that an interaction design including usability goals and user experience goals, as proposed by Preece et al. (2002), could be considered in AR-related studies. While the usability goals include several cognitive dimensions (e.g., effectiveness, efficiency, safety, utility, learnability, and memorability) for evaluating a product or a system, the user experience goals concern several affective variables, such as perceived satisfaction, enjoyment, fun, entertainment, helpfulness, motivation, esthetic pleasure, support of creativity, reward, and emotional fulfillment.


Presently, the cognitive issues regarding usability have been widely discussed in the evaluation of relevant e-learning systems. In contrast, learners' affective responses (e.g., esthetic pleasure or emotional fulfillment) to their experiences when interacting with e-learning systems have been relatively ignored. A successful design is not only determined by users' cognitive perceptions but is also influenced by their affective states when interacting with a product (Norman 2004). Affect involves making judgments and changes the way users perceive, decide, and react. In the field of science education, Pintrich et al. (1993) have suggested a direction of considering the role of motivational beliefs in scientific conceptual change beyond the role of cognitive factors. Although the motivational variables (e.g., goals, values, or self-efficacy) addressed by Pintrich et al. (1993) are different from the factors of user experience suggested by Preece et al. (2002), in some respects these ideas do direct attention toward affective issues. The applications of AR in learning could be seen as interactive products. Hence, in addition to the usability issues, it is contended here that attention should be paid to the affective variables of user experience when learners are involved in AR-related learning.

Enhancing the Stability and Interaction of AR Systems

For the future development of AR systems in science learning, stability and interaction should be the two major facets to be highlighted. As previous studies have mentioned (e.g., Dunleavy et al. 2009; Koong Lin et al. 2011), current AR systems are apt to experience operational problems such as system crashes, GPS errors, or hardware challenges. Although technical assistance could be integrated into AR-related activities, how to enhance the stability of such systems is still a challenge. For example, this paper suggests that, in addition to the enhancement of GPS functions, the use of graphics recognition (the technique of image-based AR) to detect the scene around users may be an alternative solution for GPS errors.

Regarding the interaction issues, suggestions for designing more flexible and controllable systems have been made by previous studies (e.g., Kerawalla et al. 2006). Taking this a step further, it is proposed that gesture-based technology might be a solution for enriching learners' interaction experiences when involved in AR-related learning. Since gesture-based computing allows users to touch, swipe, jump, and move as a means of accessing digital information, the 2011 Horizon Report has predicted its adoption within 4–5 years, particularly in education. Hence, it is contended here that gesture-based computing has the potential to be integrated into AR technology for the enhancement of learners' interaction experiences.

The Possibility of Combining Image-Based and Location-Based AR

According to the aforementioned results, while image-based AR may support science learning in terms of spatial ability, practical skills, and conceptual understanding, scientific inquiry activities may be afforded by location-based AR. It is interesting to note that, in the articles reviewed in this paper, image-based and location-based technology have not been conjointly utilized in a single AR system. The reasons may lie in the challenges of combining these two techniques or of designing appropriate instructional activities. To enrich AR-related research in science learning, however, the possibility of combining image-based AR with location-based AR in a scientific activity should be considered. For example, the topics of environmental conservation in Koong Lin et al.'s (2011) and Squire and Klopfer's (2007) studies could be combined and extended. After acquiring an understanding of the conservation of fish (e.g., pollution issues in a river) with an image-based AR book, an environmental inquiry activity (e.g., detecting the polluted elements on a campus which threaten the conservation of fish) could be incorporated into the learning process with the aid of location-based AR technology. It is therefore suggested that students' science learning could be enriched by using the two kinds of AR applications together.

The Theories Guiding AR Research in Science Education

By examining the theoretical or conceptual frameworks used in the selected articles, four theories, including mental models, spatial cognition, situated cognition, and social constructivist learning, can be tentatively identified as profitable bases for future AR research in science education.

Since the unique affordance of AR technology is superimposing computer-generated information on an individual's view of the real world, his/her mental models, which involve internal thinking processes or representations about how something works in external reality (Johnson-Laird 1980), might play a role in AR-related learning. For example, when reading a book, an individual's mental models about paper book reading may be different from his/her mental models about book reading with the aid of AR technology. In other words, one's personal thinking processes about the representations of a paper book are likely to be challenged by the affordances of AR (e.g., the experience of viewing overlaid virtual information on the paper book). The consequent mental models may also relate to personal cognition [e.g., conceptual understanding or change in image-based AR (Kerawalla et al. 2006)], task performance [e.g., practical skills in image-based AR (Andújar et al. 2011)], and reasoning or problem solving [e.g., scientific inquiry in location-based AR (Squire and Klopfer 2007)].

Given that several articles included in this paper address learners' spatial ability (Kerawalla et al. 2006; Martín-Gutiérrez et al. 2010; Núñez et al. 2008; Shelton and Stevens 2004), spatial cognition, which concerns knowledge or beliefs about the spatial properties (e.g., size, shape, location, or direction) of objects and events in the world (Montello 2001), could be an essential theoretical framework to guide AR research in either image-based or location-based settings. Based on the theory of spatial cognition, learners' spatial knowledge structures and processes (e.g., how spatial knowledge is acquired and develops over time) can be an AR research issue in addition to the examination of spatial ability gains. Moreover, the relationship between AR users' spatial cognition and science learning may be a potential topic for investigation.

Most of the selected articles regarding location-based AR learning are germane to the theory of situated cognition, which proposes that an individual's learning is inseparable from authentic activity, context, and culture (Brown et al. 1989). In the selected articles, the location-based AR activities commonly allow students to make scientific inquiries with real-time virtual information in a real context. When involved in these activities, the students are situated in authentic contexts (Dunleavy et al. 2009; O'Shea et al. 2011; Rosenbaum et al. 2007) and positioned in physical environments for science learning (Squire and Jan 2007; Squire and Klopfer 2007). This may indicate that situated cognition can be a useful theoretical perspective for the foundation of AR-related research in science education, especially for location-based AR.

In addition, in the location-based AR activities, student groups with handheld devices were required to conduct scientific investigations in physical environments. During these processes, the students had to communicate with avatars and peers to collaboratively hypothesize, reason, and solve problems. Because a location-based AR activity provides a rich visual and verbal learning environment in which groups coordinate and construct knowledge for one another through social interaction, the theory of social constructivist learning is an appropriate basis on which to found the design of location-based AR activities. Social constructivist learning, initiated by Vygotsky (1978), emphasizes the social and collaborative nature of learning. Since the reviewed articles regarding image-based AR did not provide many opportunities for learners to collaborate, it is further suggested that social constructivist learning could also be considered as a framework for designing image-based AR learning contexts.

To summarize, Fig. 7 illustrates how the four theories are substantially germane to the unique affordances provided by AR technology. It is also suggested that future AR research in science education be guided by these theories.

Limitations

Besides providing an overview of the current features of AR technology, the present paper draws on the selected articles to summarize the affordances of AR and to indicate several directions for future research on AR-related applications in science education. However, because well-implemented studies of AR in science learning are still in their infancy, the small number of selected articles may be a limitation. Moreover, the articles reviewed in this paper were retrieved from scholarly databases rather than general search engines (e.g., Google). The latest technical reports or business demonstrations of AR in science learning were excluded, which might limit the representation of state-of-the-art AR applications. In addition, as the theories guiding AR research were only tentatively identified in this paper, the possibilities of other theoretical frameworks in science education for elaborating the affordances of AR deserve exploration in the future. Despite these limitations, this paper does propose some valuable trends and potential research directions for AR-related science learning from the perspective of contemporary technology.

Acknowledgments Funding of this research work is supported by the National Science Council, Taiwan, under grant numbers NSC 98-2511-S-011-005-MY3 and 99-2511-S-011-005-MY3.

References

Ajanki A, Billinghurst M, Gamper H, Järvenpää T, Kandemir M, Kaski S et al (2011) An augmented reality interface to contextual information. Virtual Real 15(2–3):161–173
Andújar JM, Mejias A, Marquez MA (2011) Augmented reality for the improvement of remote laboratories: an augmented remote laboratory. IEEE Trans Educ 54(3):492–500
Azuma R (1997) A survey of augmented reality. Presence Teleoper Virtual Environ 6:355–385
Bajura M, Fuchs H, Ohbuchi R (1992) Merging virtual objects with the real world: seeing ultrasound imagery within the patient. In: Proceedings of SIGGRAPH '92, ACM Press, New York, pp 203–210
Broll W, Lindt I, Herbst I, Ohlenburg J, Braun AK, Wetzel R (2008) Toward next-gen mobile AR games. IEEE Comput Graph Appl 28(4):40–48
Brown JS, Collins A, Duguid P (1989) Situated cognition and the culture of learning. Edu Res 18(1):32–41
Caudell TP, Mizell DW (1992) Augmented reality: an application of heads-up display technology to manual manufacturing processes. In: Proceedings of Hawaii international conference on system sciences, pp 659–669
Dunleavy M, Dede C, Mitchell R (2009) Affordances and limitations of immersive participatory augmented reality simulations for teaching and learning. J Sci Educ Technol 18(1):7–22
Eursch A (2007) Increased safety for manual tasks in the field of nuclear science using the technology of augmented reality. IEEE Nuclear Science Symposium Conference Record 3:2053–2059
Feiner S, MacIntyre B, Seligmann D (1993) Knowledge-based augmented reality. Commun ACM 36(7):52–62
Ha T, Lee Y, Woo W (2011) Digilog book for temple bell tolling experience based on interactive augmented reality. Virtual Real 15(4):295–309
Hou HT (2010) Exploring the behavioural patterns in project-based learning with online discussion: quantitative content analysis and progressive sequential analysis. Turk Online J Edu Technol 9(3):52–60
Johnson L, Smith R, Willis H, Levine A, Haywood K (2011) The 2011 horizon report. The New Media Consortium, Austin
Johnson-Laird PN (1980) Mental models in cognitive science. Cogn Sci 4:71–115
Kerawalla L, Luckin R, Seljeflot S, Woolard A (2006) "Making it real": exploring the potential of augmented reality for teaching primary school science. Virtual Real 10(3–4):136–174
Klopfer E (2008) Augmented learning: research and design of mobile educational games. MIT Press, Cambridge
Koong Lin HC, Hsieh MC, Wang CH, Sie ZY, Chang SH (2011) Establishment and usability evaluation of an interactive AR learning system on conservation of fish. Turk Online J Edu Technol 10(4):181–187
Linn MC (2003) Technology and science education: starting points, research programs, and trends. Int J Sci Educ 25(6):727–758
Martin S, Diaz G, Sancristobal E, Gil R, Castro M, Peire J (2011) New technology trends in education: seven years of forecasts and convergence. Comput Educ 57(3):1893–1906
Martín-Gutiérrez J, Luís Saorín J, Contero M, Alcañiz M, Pérez-López DC, Ortega M (2010) Design and validation of an augmented book for spatial abilities development in engineering students. Comput Graph 34(1):77–91
McCall R, Wetzel R, Löschner J, Braun A-K (2011) Using presence to evaluate an augmented reality location aware game. Pers Ubiquit Comput 15(1):25–35
Milgram P, Kishino F (1994) A taxonomy of mixed reality visual displays. IEICE Trans Inf Syst E77(12):1321–1329
Montello DR (2001) Spatial cognition. In: Smelser NJ, Baltes PB (eds) International encyclopedia of the social and behavioral sciences. Pergamon Press, Oxford, pp 14771–14775
Murray CD, Fox J, Pettifer S (2007) Absorption, dissociation, locus of control and presence in virtual reality. Comput Hum Behav 23(3):1347–1354
Norman DA (2004) Emotional design: why we love (or hate) everyday things. Basic Books, New York
Núñez M, Quiros R, Núñez I, Carda JB, Camahort E (2008) Collaborative augmented reality for inorganic chemistry education. In: Proceedings of the 5th WSEAS/IASME international conference on engineering education, July 22–24, 2008, Heraklion, pp 271–277
O'Shea P, Mitchell R, Johnston C, Dede C (2009) Lessons learned about designing augmented realities. Int J Gaming Comput Mediat Simul 1(1):1–15
O'Shea P, Dede C, Cherian M (2011) The results of formatively evaluating an augmented reality curriculum based on modified design principles. Int J Gaming Comput Mediat Simul 3(2):57–66
Papagiannakis G, Singh G, Magnenat-Thalmann N (2008) A survey of mobile and wireless technologies for augmented reality systems. Comput Animat Virtual Worlds 19(1):3–22
Pence HE (2011) Smartphones, smart objects, and augmented reality. Ref Libr 52(1):136–145
Pintrich PR, Marx RW, Boyle RA (1993) Beyond cold conceptual change: the role of motivational beliefs and classroom contextual factors in the process of conceptual change. Rev Educ Res 63(2):167–199
Preece J, Rogers Y, Sharp H (2002) Interaction design: beyond human–computer interaction. Wiley, New York
Rosenbaum E, Klopfer E, Perry J (2007) On location learning: authentic applied science with networked augmented realities. J Sci Educ Technol 16(1):31–45
Rutten N, van Joolingen WR, van der Veen JT (2011) The learning effects of computer simulations in science education. Comput Educ 58(1):136–153
Salzman MC, Dede C, Bowen Loftin R, Chen J (1995) The design and evaluation of virtual reality-based learning environments. Presence Teleoper Virtual Environ (special issue on education)
Schuemie MJ, van der Straaten P, Krijn M, van der Mast CAPG (2001) Research on presence in virtual reality: a survey. CyberPsychol Behav 4(2):183–201
Shelton B, Stevens R (2004) Using coordination classes to interpret conceptual change in astronomical thinking. In: Kafai Y, Sandoval W, Enyedy N, Nixon A, Herrera F (eds) Proceedings of the 6th international conference for the learning sciences. Lawrence Erlbaum & Associates, Mahwah, NJ
Squire KD, Jan M (2007) Mad City Mystery: developing scientific argumentation skills with a place-based augmented reality game on handheld computers. J Sci Educ Technol 16(1):5–29
Squire K, Klopfer E (2007) Augmented reality simulations on handheld computers. J Learn Sci 16(3):371–413
Sutherland IE (1968) A head-mounted three dimensional display. Proc AFIPS Conf 33:756–764
Sylaiou S, Mania K, Karoulis A, White M (2010) Exploring the relationship between presence and enjoyment in a virtual museum. Int J Hum Comput Stud 68(5):243–253
Tsai MJ, Hou HT, Lai ML, Liu WY, Yang FY (2012) Visual attention for solving multiple-choice science problem: an eye-tracking analysis. Comput Educ 58(1):375–385
Vygotsky LS (1978) Chapter 6: Interaction between learning and development. In: Cole M (ed) Mind in society: the development of higher psychological processes. Harvard University Press, Cambridge
