國立政治大學資訊管理學系 碩士學位論文
Department of Management Information Systems, National Chengchi University
Master's Thesis

以眼動實證研究探討個人差異於教育輔助平台視覺分析上之影響
The impact of individual differences on visual analytics of an orchestration platform: An empirical study using eye-tracking

指導教授 (Advisor):林怡伶 博士 (Dr. Yi-Ling Lin)
研究生 (Student):李明緯

中華民國 108 年 7 月 (July 2019)

DOI: 10.6814/NCCU201901137

ACKNOWLEDGEMENT

To my family;
To the participants who joined the user study and supported my data collection;
To my advisor, Dr. Yi-Ling Lin, for the guidance and help in completing my research;
To the thesis committee, Dr. Yen-Chun Chou and Dr. I-Chin Wu;
And to the community that is exploring learning analytics.

摘要

本研究著重於探討學習目標導向、視覺化圖表格式(折線圖、柱狀圖、雷達圖)與學習類型(程序性學習及推論學習)對學生在線上複習平台中複習紙本程式考試表現的影響。我們透過使用者研究及眼動儀,探討自行開發的視覺化系統之可行性。此研究總共募集了 34 位曾經至少修習過一堂 Java 程式設計課的受測者,並收集了問卷資料、系統紀錄、眼動追蹤數據等相關資料進行後續分析。我們的實驗透過使用迴歸模型驗證學習目標導向、視覺化圖表格式以及學習類型對於使用者在視覺化分析上認知的影響,進而提出以實證研究分析視覺化學習的可行性。我們的實驗結果顯示具有較高學習目標導向的使用者在視覺化分析的輔助下,相對應會有較高的學習表現與學習認知。然而實驗結果也顯示,雷達圖因為組成較為複雜,會對使用者複習時的效率有負面影響。在學習類型方面,實驗結果顯示在視覺化分析的輔助下,使用者在資訊檢索類型的複習表現較推理發想類型更為優越。

關鍵詞:學習分析、圖表理解、資訊視覺化、學習目標導向、紙本考試、教育科技協作、眼動追蹤

ABSTRACT

We examined the impact of learning goal orientation, visualization format (line, bar and radar chart) and type of learning task (search fact vs. inference generation) upon a viewer's perception of reviewing paper-based exams in an online virtual assessment environment. A lab experiment was conducted with an eye-tracker. System logs, eye-tracking data and questionnaires were collected from 34 students who had taken at least one Java programming course. Our experiments demonstrate the practicality of this empirical approach by using a regression model to validate the effect of learning goal orientation, format and task on user perceptions of visualization analytics. Our results show that viewers with a high degree of learning goal orientation have better learning perception of the visualization material. The radar graph, however, has a negative influence on review performance due to its complicated composition. We also found that, with the assistance of visualization analytics, users perform more efficiently on search-fact tasks than on inference-generation tasks when reviewing programming exams.

Keywords: learning analytics, graph comprehension, information visualization, learning goal orientation, paper-based assessment, classroom orchestration technology, eye tracking

Table of Contents

Chapter 1 INTRODUCTION
  1-1 Background and Motivation
  1-2 Research Questions
  1-3 Research Method
Chapter 2 LITERATURE REVIEW
  2-1 Orchestration in Learning Analytics
  2-2 Dashboards and Visualizations in Learning Analytics
  2-3 Visual Analytics in Learning Environment
  2-4 Eye-tracking Analysis
Chapter 3 RESEARCH MODEL
  3-1 Learning Goal Orientation, Format and Task
  3-2 Learning Comprehension
  3-3 Understanding of Visualization
  3-4 Perceived Learning
Chapter 4 METHODOLOGY
  4-1 Dataset
  4-2 System Development and Interface
  4-3 Search Fact Tasks and Inference Generation Tasks
  4-4 Apparatus
  4-5 Subjects and Experiment Procedure
  4-6 Analysis Method
Chapter 5 DATA AND MEASUREMENTS
  5-1 User Behavior and Perception Data
  5-2 Eye-tracking Data
Chapter 6 MODEL SPECIFICATIONS
  6-1 Log-based User Behavior and Perception Data Analysis
  6-2 Eye-tracking Data - Fixation Analysis
  6-3 Eye-tracking Data - Transition Analysis
Chapter 7 DISCUSSIONS
  7-1 The Influence on User Behavior and Perception
  7-2 Eye Movement and User Behavior
Chapter 8 CONCLUSION
REFERENCE
Appendix A: Visualizations with different format
Appendix B: Learning Goal Orientation Measurement Items

List of Tables

Table 3.1 Proposed constructs
Table 4.1 Sequence of exam number and format pairing
Table 4.2 Selected exam questions and corresponding answer rate
Table 5.1 Description of user perceptions variables
Table 5.2 Description of learning goal orientation, format and task variables
Table 5.3 Default answer of understanding of visualization for each format
Table 5.4 Descriptive statistics of variables (N=204)
Table 5.5 Correlation of variables
Table 5.6 Frequency table of categorical variables
Table 5.7 The descriptive statistics of Total Fixation Duration in seconds
Table 5.8 The descriptive statistics of Fixation Count
Table 5.9 The state diagrams of gaze transitions among AOIs
Table 6.1 Estimated results for learning comprehension
Table 6.2 Estimated results for understanding of visualization
Table 6.3 Estimated results for perceived learning
Table 6.4 Estimated results for total fixation duration time in All page
Table 6.5 Estimated results for total fixation duration time in AOI Visualization
Table 6.6 Estimated results for total fixation duration time in AOI Question
Table 6.7 Estimated results for total fixation count in All page
Table 6.8 Estimated results for total fixation count in AOI Visualization
Table 6.9 Estimated results for total fixation count in AOI Question
Table 6.10 Estimated results for transition rate between AOI Visualization and Question
Table 7.1 Estimated results for learning comprehension in exam 1
Table 7.2 Estimated results for learning comprehension in exam 2
Table 7.3 Estimated results for learning comprehension in exam 3
Table 7.4 Main effects of goal orientation on fixation duration
Table 7.5 Main effects of format on fixation duration
Table 7.6 Main effects of format on fixation count
Table 7.7 Main effects of format on fixation count

List of Figures

Figure 1.1 The exploratory data analysis framework
Figure 4.1 WPGA student interface detail view
Figure 4.2 Experiment Schedule
Figure 4.3 Data processing schema
Figure 4.4 Visualization analytics: individual answer rate comparing to the class average
Figure 4.5 Visualization analytics: individual answer rate comparing to the average
Figure 4.6 Graph visualization: radar graph
Figure 4.7 Graph visualization: bar graph
Figure 4.8 Graph visualization: line graph
Figure 4.9 Visualization interface: search fact question
Figure 4.10 Visualization interface: inference generation question
Figure 4.11 Question interface: inference generation question 2
Figure 4.12 Question interface: inference generation question 3
Figure 4.13 Original exam question interface: exam question and exam solution
Figure 4.14 Experiment Procedure in one iteration
Figure 4.15 User perception questionnaire
Figure 5.1 Predefined AOIs: Visualization, Question, Legend and Title
Figure 7.1 Main effect of goal orientation on fixation duration in All page
Figure 7.2 Main effect of goal orientation on fixation duration in AOI Visualization
Figure 7.3 Main effect of goal orientation on fixation duration in AOI Question
Figure 7.4 Main effect of format on fixation duration in All page
Figure 7.5 Main effect of format on fixation duration in AOI Visualization
Figure 7.6 Goal orientation on fixation count in All page
Figure 7.7 Goal orientation on fixation count in AOI Question
Figure 7.8 Main effect of format on fixation count in All page
Figure 7.9 Main effect of format on fixation count in AOI Visualization

Chapter 1 INTRODUCTION

1-1 Background and Motivation

With the development of the Internet and various other technologies, learning styles and environments have changed over the last decade. Nowadays, learning behavior is not limited to the classroom. Instructors can manage classroom activities from any distance through instructional designs that connect different systems (namely web services). As a result, learners can engage with learning material in multiple ways. In the context of education, we use "orchestration" to refer to this integrated process, and "classroom orchestration" is defined as how a teacher manages multi-layered activities (i.e., offline and online). Classroom orchestration concerns how and what research-based technologies have been adopted and should be integrated within physical classrooms (Dillenbourg, 2013; Dillenbourg & Jermann, 2010). For decades it has been used in different types of learning environments. Several innovative systems have been proposed in classroom orchestration to improve students' learning performance (Brusilovsky, Hsiao, & Folajimi, 2011; Denny, Luxton-Reilly, & Hamer, 2008; Hsiao, Bakalov, Brusilovsky, & König-Ries, 2013; Hsiao & Lin, 2017). Most of these studies present innovative Web-based tools based on the concepts of social navigation as well as open student models. Usage and implementation results in classrooms were also provided to validate the effectiveness of these systems.

While classroom orchestration provides students with an abundance of materials corresponding to various aspects of their learning, the benefit may not be fully realized without proper guidance. Rather than "one-size-fits-all" solutions (such as ordering questions in a fixed sequence), adaptive guidance should be provided given that students typically have different starting knowledge and learn at different paces (Hsiao, Sosnovsky, & Brusilovsky, 2008). To support adaptive guidance, most classroom orchestration systems come with analytic dashboards that help teachers monitor students' engagement and effectiveness toward a specific subject. Visualizations in dashboards not only summarize general performance indicators such as scores, but also visualize advanced indicators such as interactions between students and learning content, time spent, and corresponding resource use in a virtual classroom (Govaerts, Verbert, Duval, & Pardo, 2012; Hsiao, Pandhalkudi Govindarajan, & Lin, 2016; Hsiao & Brusilovsky, 2012; Lu & Hsiao, 2016). These works show that a dashboard with visualizations guides students to suitable learning material and significantly increases the quality of students' learning and their motivation to work with non-mandatory learning content.

Although orchestration technologies have changed the education environment, it is commonly agreed that there is a need to gain insight into students' perceptions of assessments and to discover how they behave while dealing with assessment tasks with different requirements (Papamitsiou & Economides, 2015). Some previous works have attempted to investigate adaptive navigation support for self-assessment questions in larger classes with a broader range of question difficulty. Specifically, a series of works concentrated on the context of paper-based programming exams, particularly given that paper-based exams are still one of the most practical assessments in large programming courses (Hsiao, 2016; Hsiao, Huang, & Murphy, 2017; Hsiao et al., 2008; Paredes, Huang, Murphy, & Hsiao, 2017). These works connected paper-based assessments to an online virtual assessment environment and showed students' performance in the exam in order to provide an adaptive user interface for programming exams.

Just as in these previous works, the present study focuses on the domain of the Java programming language, which is still the language of choice in most introductory programming classes. We implemented an online Java exam reviewing system called Topic Combination Analysis Visualization (TCAV) to provide different kinds of visualization analytics of students' behavior in exams. To be more specific, we extracted topics from the predefined rubrics of each question in a Java paper-based exam and applied adaptive visualizations to support the interpretation of students' exam performance.

This is a study based on a Java programming orchestration platform with visualization analytics. We attempted to enhance the learning awareness of programming learners by providing elaborate visualization results. Students can find patterns between their behavior and performance during the paper-based exam so that they can prepare for the next exam more efficiently. We are interested in whether visualization analytics can benefit students and, if so, which visualization format is most effective. We are also interested in whether individual differences in psychology can influence the impact of the visualization analytics.

1-2 Research Questions

Learning analytics has been defined as the "measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs" (Siemens & Gasevic, 2012). Learning analytics combines research, methods, and techniques from several fields such as orchestrated learning, information visualization, psychology and HCI. Illustrating student academic performance and providing dedicated feedback have been two of the most frequently adopted tasks associated with learning analytics. To investigate learning achievement, several main psychological indicators have been examined in previous studies, including self-efficacy (Wang, Shannon, & Ross, 2010) and locus of control (Joo, Lim, & Kim, 2013). Further, indicators peculiar to the context of learning have been widely applied in recent studies, such as learning engagement (Hu & Hui, 2012) and learning goal orientation (Debicki, Kellermanns, Barnett, Pearson, & Pearson, 2016). These indicators have been used to thoroughly investigate individual differences and perceptions in various learning contexts.

Although establishing lead indicators of academic performance is an essential step for learning analytics, there has been a gap in empirical studies seeking to evaluate the impact and transferability of this initial work across domains and contexts. Despite the large number of learning analytics tools developed with innovative approaches and accompanied by elaborate dashboards, most of them are generally not developed from theoretically established instructional strategies, especially those which utilize trace data and feedback from students (Gašević, Dawson, & Siemens, 2015).

We argue that there is a disheartening lack of necessary empirical research in the field of learning analytics, especially regarding the validity of orchestration technology learning tools. Few studies have empirically examined the influence of individual differences and user perceptions in learning analytics tools. Orchestration technology tools should be developed and investigated with consideration of both psychological indicators and user perceptions. Specifically, we were interested in two factors, learning genres and visualization formats, and we investigated the effects of these two factors on the relationship between individual differences and students' perceptions. As a result, the goals of this study were a) to depict students' performance in the paper-based programming exam with different visualizations, b) to explore the factors that influence students' effectiveness when they view different visualization formats, and c) to examine the effect of students' personality on their learning comprehension of the visualization analytics. This study attempts to answer the following research questions:

RQ1. Do the visualization format, task type and individual differences influence students' comprehension of the visualization analytics on the proposed orchestration technology platform?

RQ2. Do the visualization format and individual differences influence students' understanding of the visualization analytics on the proposed orchestration technology platform?

RQ3. Do the visualization format and individual differences influence students' perceived learning of the proposed orchestration technology platform?

1-3 Research Method

In this study, we introduced WPGA (Paredes et al., 2017) as the exam grading tool for the Java programming course taught at National Chengchi University. During the semester, grading data (i.e., exam questions with answers and corresponding topic rubrics) were collected from WPGA for the development of TCAV, a learning analytics tool for paper-based programming exams. We adopted an exploratory data analysis approach to design the TCAV prototype. The process is summarized in Figure 1.1.

Figure 1.1 The exploratory data analysis framework

TCAV provides visualizations of students' performance in a series of paper-based programming exams. A lab experiment was conducted to collect both objective and subjective data, including questionnaires and eye-tracking data. An eye-tracker was employed to collect real-time data from students under several settings. Individual differences such as personality were collected through questionnaires. These data were used to identify the impact and capability of adopting such orchestration technology. The participants were volunteers who had taken at least one Java programming course prior to this study.

Adaptive information visualization of paper-based exams is a main function of TCAV. Since the visualizations were presented as guidance for reviewing exams, understanding how the visualizations were generated and composed is crucial. Corresponding to the increased prevalence of analytic dashboards and visualizations, graph comprehension has been widely used to evaluate the efficiency of visual analytics interfaces (Ratwani & Boehm-Davis, 2008; Ratwani & Trafton, 2008). For our purposes, we defined learning comprehension, derived from graph comprehension, to capture how well students learned from the proposed visualizations. To measure how well the visualization provided knowledge for students, we used the concept of perceived learning, which was adapted from the concept of perceived usefulness in TAM (Davis, 1989).

In the research field of cognitive science, there are two broad genres of learning, procedural and inferential, which represent different processes of knowledge creation and use. Integrating this into our context, we divided the process of reviewing exam results into two kinds of task types: fact-retrieval tasks and inference-generation tasks. As the task of reviewing exams varies, students' perceptions and learning comprehension of the visualization analytics should be considered when designing the learning analytics tool.

Empirical studies that investigate the impact of individual differences and visualization formats on the effectiveness of an orchestration technology platform are scarce. This study focuses on students' learning comprehension and perceptions of visualization learning analytics tools. Instead of using only subjective survey data, we also collected secondary data from eye-tracking analysis to conduct a learning analysis of students' initiatives toward the programming exam.

The remainder of this paper is organized as follows. Chapter 2 presents the literature review of orchestration technology, focusing on dashboards and visualization analytics; the impacts of individual differences, perception and eye-tracking analysis are also included. Chapter 3 describes the concepts of learning comprehension and understanding of visualization, and proposes hypotheses regarding the use of the proposed visualization analytics tool. Chapter 4 describes the research system and experiment procedure. Chapter 5 describes the collected data and the operationalization of variables. Chapter 6 presents the estimation methodology and the results of our empirical analysis. Chapter 7 discusses the practical and research implications, limitations, and potential directions for future research. Chapter 8 concludes the paper.

Chapter 2 LITERATURE REVIEW

Learning analytics is an emerging approach in the educational context. Its main idea is bridging computer science and the sociology/psychology of learning to ensure that interventions and organizational systems serve the needs of all stakeholders (Siemens & Baker, 2012). With increasing numbers of education technologies for programming language learning, there is an abundance of approaches related to learning analytics in recent studies. The present study proposes personalized visualization analytics for an orchestration technology platform. Beyond the effect of the visualizations alone, we were interested in how individual differences are reflected in the visualization analytics; thus, we investigated the influence of learning goal orientation, one of the commonly used indicators of learning performance. The literature review presents these approaches in four categories: orchestration in learning analytics, dashboard analysis and visualizations, individual differences, and individual perception of learning analytics visualizations.

2-1 Orchestration in Learning Analytics

Classroom orchestration defines how a teacher manages multi-layered activities. It concerns how and what research-based technologies have been adopted and should be applied in physical classrooms (Dillenbourg, 2013). For several decades, classroom orchestration has been used in different types of learning environments. For example, PeerWise offers an innovative approach that enhances standard teaching and learning practice by requiring students to participate in the construction and evaluation of multiple choice questions (MCQs) (Denny et al., 2008). QuizMap combines open student modelling and social-based adaptive navigation support, an approach based on the "collective wisdom" of a student community, to guide students to the right questions as successfully as classic knowledge-based guidance (Brusilovsky et al., 2011). Progressor was likewise an innovative Web-based tool based on the concepts of social navigation and open student modeling that helped students find the most relevant resources in a large collection of parameterized self-assessment questions on Java programming (Hsiao, Bakalov, Brusilovsky, & König-Ries, 2013).

More specifically, a series of previous works dealt with practical paper-based programming exams. Examples include QuizJET (Java Evaluation Toolkit), a system for authoring, delivery, and evaluation of parameterized questions for Java (Hsiao et al., 2008), and EduAnalysis, which tested intelligent semantic indexing for paper-based programming problems by integrating physical classroom learning assessments into online visual learning analytics (Hsiao & Lin, 2017). Programming Grading Assistant (PGA) supports an automatic semantic partial-credit assignment approach by scanning paper-based exam results into a mobile app and providing an interface for teachers to calibrate recognition results (Hsiao, 2016). Web Programming Grading Assistant (WPGA) is a Web-based system that facilitates grading traditional paper-based exams in today's large classes. It connects paper-based assessments to an online virtual assessment environment and gives teachers the flexibility to continue using paper exams without having to learn new content authoring tools (Hsiao, Huang, & Murphy, 2017; Paredes, Huang, Murphy, & Hsiao, 2017).

From the reported literature, it is clear that the integration of a new learning technology into the real classroom is important when implementing classroom orchestration technology. There is a need to design the learning environment as well as the system from the very beginning. It is also very important that the system itself can interact with any other web service, or simply treat the learning environment as web services. Therefore, to provide a better learning environment for programming education, we concentrated on the effect of building an online learning analytics tool for paper-based programming exams.

2-2 Dashboards and Visualizations in Learning Analytics

To support orchestration technology, most classroom learning analytics systems come with analytic dashboards. Dashboards help teachers interpret students' performance in classroom orchestration activities. Students can also evaluate and adjust their learning strategies by reviewing personalized information, such as their own learning progress, on dashboards. LOCO-Analyst was a learning analytics tool aimed at providing educators with feedback on the performance of learning activities taking place in a web-based learning environment; the study showed that educators value the mix of textual and graphical representations of the different kinds of feedback provided by the tool (Ali, Hatala, Gašević, & Jovanović, 2012). Visualizations in a dashboard not only summarize general performance indicators like scores, but also visualize interactions between students and the learning content (Hsiao et al., 2016; Lu & Hsiao, 2016), helping both students and teachers find patterns and contributing to awareness and self-monitoring. The Student Activity Meter emphasized awareness of time spent and corresponding resource use in a virtual classroom (Govaerts et al., 2012). The Temporal Learning Analytics Visualization (TLAV) tool also aims to visualize time spent on activities, but instead focuses on aggregating time according to the correctness of the submitted answer during an assessment procedure (Papamitsiou & Economides, 2015).

According to these previous works, a dashboard with visualizations benefits students through suitable learning material and significantly increases the quality of students' learning and their motivation to work with non-mandatory learning content. Some studies also reported that the selection of appropriate display formats for users could be based on data types (categorical or quantitative), task types (comparison or identification of trends or totals), and user backgrounds (experts or casual users of graphs). For users with different personality types, dashboards are more effective when they offer flexibility, i.e., allow users to switch between alternative presentation formats (Helfman & Goldberg, 2007; Yigitbasioglu & Velcu, 2012). In our work, we implemented different visualization formats in the hope of helping students with different personality types review their Java programming exams. Different task types were also considered a key factor when we designed the learning analytics tool.

2-3 Visual Analytics in Learning Environment

Data visualizations can support an organized and easy-to-understand depiction of primary data. In orchestration technology, a well-designed dashboard is often adopted to visualize mass data and to assist users according to their needs, abilities and preferences. Graphs are a collection of nodes and links: each node represents a single data point and each link represents how two nodes are connected. This way of representing data is well suited to scenarios involving relationships and correlations in data. Graph visualization is the visual representation of the nodes and links of a graph and can be presented as an image, picture or interactive multimedia with different sizes and colors. Further, graph visualization provides useful and efficient ways to understand the data. A better depiction of quantitative information can be derived from a well-designed graph visualization (Freedman, Shah, & Vekiri, 2005). As such, graph visualization is used extensively in different fields for various applications (Cui, Zhou, Qu, Wong, & Li, 2008; Gansner & North, 1999).

As the number of analytic dashboards with graph visualization increases, graph comprehension has also been widely discussed as a way to evaluate the efficiency of visualization analytics interfaces (Ratwani & Boehm-Davis, 2008). When asked to extract information from a graph, users generally have some stored knowledge that they use to comprehend the graph, despite the fact that different graph types represent information differently (Ratwani & Trafton, 2008). Green & Fisher (2010) explored the impact of individual differences in three personality psychometric factors (Locus of Control, Extraversion, and Neuroticism) on interface interaction and learning performance behaviors in both an interactive visualization and a menu-driven web table. The results demonstrated that all three measures predicted completion times and the number of insights participants reported while completing the tasks in each interface. Ziemkiewicz et al. (2011) observed the correlation between locus of control and layout style and conducted a user study with four visualizations that gradually shift from list-based to spatial-based. The results demonstrate that participants with an internal locus of control perform more poorly with spatial-based visualizations, while those with an external locus of control perform well with such visualizations.

The effect of different graph visualization types has also been investigated in recent studies. Ali & Peebles (2013) reported three experiments investigating the ability of undergraduate college students to comprehend 2 × 2 "interaction" graphs from two-way factorial research designs. The results of the three experiments demonstrated the effects (both positive and negative) of Gestalt principles of perceptual organization on graph comprehension. Toker, Conati, Carenini, & Haraty (2013) investigated the impact of four user characteristics (perceptual speed, verbal working memory, visual working memory, and user expertise) on the effectiveness of two common graph visualization formats: bar graphs and radar graphs. The results showed that different characteristics may influence different factors that contribute to the user's overall experience and effectiveness with bar and radar graphs. Research has shown that people differ substantially in their ability to understand graphically presented information. Individuals with high graph literacy usually make more elaborate inferences when viewing graphical displays (Okan, Garcia-Retamero, Cokely, & Maldonado, 2011). Novice graph viewers often neglect the relevance of important elements of graphs and interpret graphs incorrectly (Mazur & Hickam, 1993). In our study, the concept of graph comprehension helps to capture how well students reviewed and learned the Java programming topics from the visualizations. Moreover, when students reviewed the visualization analytics, understanding why certain visualizations were generated and what the embedded information mainly presented was also very important.

The effect of personality traits on user behavior in a learning environment has also been widely studied. In the research field of information management, self-efficacy and perceived ease of use have been used extensively to explain a person's behavior. Self-efficacy is a measure of how strongly individuals believe in their ability to achieve specific goals. Bandura (1993) stated that perceived self-efficacy operates as an important contributor to academic development. Prior research on technology acceptance behavior examined the effects of self-efficacy and enjoyment on ease of use (Venkatesh, 2000). Locus of control, which refers to the degree of an individual's perception of how well they can control the underlying causes of events in their lives, has also been widely used in studies of online learning and distance learning. Previous studies have confirmed that this psychological construct plays an important role in learning achievement, satisfaction and persistence in online learning contexts, because learners' ability to apply proper time management, continuous monitoring and self-evaluation is more important than a teacher's role in such learning tasks (Cascio, Botta, & Anzaldi, 2013; Joo et al., 2013).

Much of the existing research in the visualization field investigates the impact of individual differences through users' cognitive abilities. Conati & Maclaren (2010) find that a user's perceptual speed predicts whether a star graph or heatmap will be most effective for that user. Similarly, Toker et al. (2012) investigate the impact of four user cognitive abilities (perceptual speed, verbal working memory, visual working memory, and user expertise) on the effectiveness of bar graphs and radar graphs and find that certain user characteristics have a significant effect on task efficiency, user preference, and ease of use. However, in the work of Ziemkiewicz & Kosara (2009), the results show that factors of the Big Five personality model (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) can lessen the significant effect of visual metaphor on accuracy for simple visualization tasks. Green et al. (2010) find effects on interface performance from three psychometric measures: locus of control, neuroticism, and extraversion. Ziemkiewicz et al. (2011) further suggest that locus of control can influence an individual's use of a complex visualization system.

Taking a more learning-oriented perspective, with regard to the measurement of academic or learning performance, learning engagement and goal orientation are two variables that have received a great deal of attention in organizational research. Learning engagement is a concept extended from work engagement (Xanthopoulou, Bakker, Demerouti, & Schaufeli, 2009) and can be seen as a behavioral factor that influences the learning outcome. Chen (2017) extended the job demands-resources (JD-R) model proposed by Crawford et al. (2010) to evaluate the positive relationship between learning engagement and learning performance. The results found that learning engagement is positively associated with learning performance, and they also strengthened the solid finding that work engagement improves job performance. Goal orientation is the primary aim of an individual toward developing or validating one's ability in achievement settings (VandeWalle, 1997), and it has been applied in many studies in the IS and HCI domains concerning Web-based distance learning contexts (Chang, 2005; Klein, Noe, & Wang, 2006; Payne, Youngcourt, & Beaubien, 2007). Yi & Hwang (2003) extended the technology acceptance model (TAM) by adding learning goal orientation to the domain of technology acceptance in order to predict the use of Web-based information systems. The results demonstrated that learning goal orientation was positively related to application-specific self-efficacy. Previous studies have shown that learning goal orientation exerts a significant effect on system use over behavioral intention; therefore, we adopt learning goal orientation as a factor of learning performance in this study.

2-4 Eye-tracking Analysis

With the increase in computing power, as well as new data processing methods, data is accumulated more quickly than ever before. The variety of data makes it possible to conduct more comprehensive and user-adaptive analyses. To date, learning analytics has focused on investigating the effects of operations performed by users. Analysis based on tracking data from the interactions between users and educational content has been considered a promising approach for advancing our understanding of the learning process in learning analytics (Gašević et al., 2015). Human eye movement is a sequence of activity in which the viewer focuses on specific information to support mental or physical activities (Malcolm & Henderson, 2009), and it is one of the most common and profitable types of tracking data, widely used in the field of HCI. Li, Pelz, Shi, & Haake (2012) used an eye-tracker to model eye movement behavior in a medical examination context; expert medical practitioners' examination processes were recorded in order to provide guidance to novices. Besides behavior modeling, eye-tracking analysis is also used to evaluate the effect of users' characteristics on graph comprehension. Steichen, Carenini, & Conati (2013) explored users' eye gaze patterns while interacting with bar graphs and line graphs to predict the users' visualization tasks as well as user cognitive abilities, including perceptual speed, visual working memory, and verbal working memory. The results showed that predictions based on eye gaze data are significantly better than a baseline classifier.

Drawing on the reviewed literature, visualization is necessary and beneficial to students. Previous research has shown it to better reflect users' demands when developing a user-adaptive system and to support more timely and effective feedback through monitoring information about learning (Gibson & de Freitas, 2016). However, the visualization formats and the task types, as well as students' individual differences, influence a student's perception of the visualization. In order to understand the relationships between these aspects, we constructed several hypotheses, which are discussed in the next chapter.
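As a concrete note on the eye-movement measures reviewed in Section 2-4: gaze recordings are commonly summarized as fixation durations, fixation counts and gaze transitions within predefined areas of interest (AOIs), which is also how the eye-tracking data in this study are later analyzed (Chapters 5-7). The sketch below is a hypothetical illustration over generic fixation records; the file name, column names and AOI labels are assumptions for illustration, not the actual export format of the eye-tracker used in the experiment.

```python
# Hypothetical sketch: aggregating fixation records into AOI-level metrics.
# Assumes a CSV of fixations with columns participant_id, timestamp, duration_ms
# and aoi (e.g., "Visualization", "Question", "Legend", "Title"); this schema is
# an illustration, not the actual eye-tracker export used in this study.
import pandas as pd

fix = pd.read_csv("fixations.csv").sort_values(["participant_id", "timestamp"])

# Total fixation duration (seconds) and fixation count per participant and AOI.
aoi_metrics = (
    fix.groupby(["participant_id", "aoi"])
       .agg(total_fixation_s=("duration_ms", lambda d: d.sum() / 1000.0),
            fixation_count=("duration_ms", "size"))
       .reset_index()
)

# Gaze transitions between the Visualization and Question AOIs: count pairs of
# consecutive fixations that move between these two regions.
def transition_count(group: pd.DataFrame) -> int:
    seq = group["aoi"].tolist()
    return sum(1 for a, b in zip(seq, seq[1:])
               if {a, b} == {"Visualization", "Question"})

transitions = fix.groupby("participant_id").apply(transition_count)
print(aoi_metrics.head())
print(transitions.head())
```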

(24) Chapter 3 RESEARCH MODEL From the literature review, we have seen an abundance of studies working on building visualization tools with class room orchestration technology. The graph comprehension enables us to get a grasp of how individual perform on different visualization formats, but it does not elaborate on the individuals differences in the various aspects of academic or learning performance. This study adapts the concept of graph comprehension in regard to learning analytics. Multiple factors have been considered for the process of using online visualization learning analytics tool to review. 政 治 大. paper-based programming exams. Learning goal orientation was adopted as a. 立. personality trait to measure the individual differences regarding to learning and. ‧ 國. 學. academic performance. Two types of tasks were also identified: “search-fact” and “inference-generation”. As for the graph visualization formats, we present three types. ‧. of graph visualization formats in our study: line graphs, bar graphs and radar graphs.. y. Nat. sit. In order to further explore the effects on individual perceptions via graph. n. al. er. io. visualization, two types of constructs were proposed: “Learning Comprehension” and. i Un. v. “Understanding of Visualization”. We adapted the concept of graph comprehension. Ch. engchi. (Shah & Freedman, 2011) to proposed the construct of learning comprehension, which refers to an evaluation of how students can integrate their prior knowledge and information embedded in the graph visualization to perform learning. Shah, Mayer, & Hegarty (1999) extended studies of bar versus line graphs work to characterize how Gestalt principles might affect comprehension of common graphs depicted in high school social studies textbooks. The results demonstrated that viewers’ descriptions, if based on the visual pieces, would differ depending on format. Originating from this study, we focused on the user perceptions on different formats and proposed the constructs of understanding of visualizations. To measure how useful the visualization 17. DOI:10.6814/NCCU201901137.

(25) was to provide knowledge for students, we used the concept of perceived learning (Wu, Hiltz, Roxanne, & Bieber, 2010), which was adapted from the concept of perceived usefulness in TAM (Davis, 1989). Table 3.1 summarizes the representative literature for the involved factors. These constructs are proposed to capture how students perceive the proposed orchestration technology platform with different formats, task types and personality. The influences of these factors on a user’s perception and the respective research hypotheses are discussed in the next subsection. Table 3.1 Proposed constructs. Perceptions. 立. (Shah & Freedman, 2011). 學. Understanding of Visualization. (Shah et al., 1999). Perceived Learning. (Wu et al., 2010). Nat. y. ‧. ‧ 國. Learning Comprehension. 治 政 Reference 大 Studies. er. io. sit. 3-1 Learning Goal Orientation, Format and Task The construct of goal orientation has recently received increasing attention due to. al. n. iv n C the increase in web-based distanceh learning of the IS and HCI domains. The goal e n gout chi U. orientation concept was first proposed to compare the orientations of students who approached college to acquire new skills and knowledge verses those who approached. college to obtain high grades (Eison, 1979). Individuals with a high learning goal orientation tend to have a learning motivation to understand something new or to enhance their level of competence. Klein et al. (2006) examined how learning goal orientation relates to motivation to learn and course outcomes. The results suggest that learners high in learning goal orientation have significantly higher motivation to learn within a blended learning condition. In our study, we theorized that students with different personality types would have different behavior when reviewing their exam 18. DOI:10.6814/NCCU201901137.

(26) results. Students who prioritize the pursuit of new knowledge or skills for their own personal development tend to prefer exploring visualization analytics over students who are merely in pursuit of scores. This assumption matches the concept of learning goal orientation. Thus, to measure the effectiveness and learning outcome of students who review the paper-based programming exam in the online visualization learning analytics tool, we adopted learning goal orientation as a factor. Several formats were investigated throughout previous studies of graph comprehension. The study conducted by Shah & Freedman (2011) reported that viewers are more likely to describe the interactions between variables on the X axis and. 治 政 Y axis when viewing line graphs, and they are more likely 大 to describe main effects and 立 the interactions between the variables in the legend and Y axis when viewing bar graphs. ‧ 國. 學. In the study of Toker et al. (2013), radar graphs were chosen to compare with bar graphs. ‧. given that radar graphs are widely used for multivariate data. In this study, we focus on. y. Nat. the visual characteristics of these common graphs and their influence on comprehension.. er. io. sit. We extended the work of Shah and Toker and adopted three types of common graphs for this study: line graphs, bar graphs and radar graphs. Although radar graphs are often. al. n. iv n C considered inferior to bar graphs for seeking tasks (Few, 2005), h common e n g cinformation hi U. there are indications that radar graphs may be just as effective as bar graphs for more complex tasks or integrated information (Toker et al., 2012). This study utilizes two tasks that touching on two broad genres of learning: procedural and inferential. Both genres have broad records in the human behavioral literatures, and represent two very different types of knowledge: creation and use.. Procedural learning is the learning composed of a sequence of iterative tasks (Sun, Merrill, & Peterson, 2001). Inference learning, on the other hand, is the learning that comes to a conclusion or a concept from available data. Several transmutations are used in the process of inference learning, including induction, deduction, generalization and 19. DOI:10.6814/NCCU201901137.

(27) comparison (Michalski, 1993). In the study of Green et al. (2010), both procedural tasks and inferential tasks are applied to explore the impact of individual differences in personality factors on different interfaces. Shah & Freedman’s work (2011) uses both fact-retrieval and inference generation tasks to investigate the graph comprehension on interaction of top-down and bottom-up processes. Through these works, two types of learning tasks are identified in this study: “search-fact” and “inference-generation”.. 3-2 Learning Comprehension In the theory of graph comprehension, the process of comprehension starts as the. 政 治 大 clusters by viewers. Then these visual clusters influence a viewer’s interpretations of 立. visual elements such as nodes, lines, and colors are identified and grouped together into. ‧ 國. 學. the data. Specifically, the display is clustered based on the Gestalt principles of proximity, good continuity, and similarity(Pinker, 1990). Graph comprehension can. ‧. evaluate the effect of individual differences on the information visualization (Okan,. sit. y. Nat. Garcia-Retamero, Cokely, & Maldonado, 2011; Okan, Garcia-Retamero, Galesic, &. al. er. io. Cokely, 2012) and can also be used to depict how does an individual’s prior knowledge. v. n. (topic familiarity and graphical literacy skills) interact with format to influence s. Ch. engchi. i Un. viewer’s interpretations of graph visualization. Shah & Freedman (2011) investigated the effects of format (line vs. bar), viewers’ familiarity of topics, and viewers’ graphical literacy skills on the comprehension of multivariate data presented in graphs. The results showed that high-skilled graph viewers were able to make main effect inferences when viewing bar graphs that supported their ability to make the necessary mental computations, but not when viewing line graphs. Low-skilled graph viewers, however, could not make such inferences, even when viewing bar graphs. It may be useful to present different formats of graph visualization for users with different graphical literacy skills. The study also showed that skill may correspond to greater 20. DOI:10.6814/NCCU201901137.

(28) differentiation between formats in open-ended tasks than fact-retrieval tasks. As for personality trait factors, learning goal orientation was used intensively in education research as an indicator of learning performance or academic achievement. Debicki et al. (2016) developed a model to test potential mediating effect of learning goal orientation, prove performance orientation and avoid performance goal orientation between core self-evaluations and academic performance. The results showed that students with high core self-evaluations and learning goal orientation might utilize their perceived high capability to gain new experiences and increase their knowledge in search for personal development, thus creating a positive relationship with academic performance.. 立. 政 治 大. Therefore, in regards to our study, we propose learning comprehension as a factor. ‧ 國. 學. because of its origination from the concept of graph comprehension. The degree of. ‧. learning comprehension represents how well students review and learn the Java. y. Nat. programming topics from the graph visualizations. If the students are motivated by. er. io. sit. increasing their competence through learning programming rather than just motivated by passing the course, they are willing to explore more topics which they are not. al. n. iv n C familiar with when reviewing examh results in the web-based e n g c h i U learning environment. The. students who are enthusiastic for learning could benefit from the visualization analytics and uncover more knowledge, thus resulting in a better learning performance. Also, students may perform different degrees of comprehension due to the different presented graph visualization formats and the different types of reviewing tasks. Therefore we proposed the following hypotheses. H1a. Learning goal orientation would have a positive influence on the degree of learning comprehension.. 21. DOI:10.6814/NCCU201901137.

(29) H1b. The graph visualization formats would have different influences on the degree of the learning comprehension. H1c. The type of reviewing tasks would have different influences on the degree of the learning comprehension.. 3-3 Understanding of Visualization The study of Toker et al., (2013) suggested that visualization types should be taken into account from the interaction effects found in user’s cognitive abilities. More. 政 治 大 humans typically see objects by grouping similar elements, recognizing patterns and 立 specifically, we looked into Gestalt principles (Koffka, 2013), which described how. ‧ 國. 學. simplifying complex images. Gestalt principles are widely used in data visualization applications (Nesbitt & Friedrich, 2002; Patel et al., 2010) Shah et al. (1999). ‧. characterize how Gestalt principles might affect comprehension of common graphs. In. sit. y. Nat. the bar graph, the proximity principle predicts that for bar graphs a viewer would. al. er. io. encode the grouped sets of bars representing levels of word familiarity (low, medium,. n. and high). In the line graph, the principle of good continuity suggests that individuals. Ch. engchi. i Un. v. would encode three visual clusters formed by the lines representing reading skill (low, medium, and high). The results showed that viewers’ descriptions, if based on the visual clusters, would differ depending on format. We adapted this idea and proposed the constructs for the understanding of visualization, which refers to the level of how well students can interpret the visualization in the generation and meaning of the graph. According to Gestalt principles, the embedded information would be different corresponding to the format of graph. Line graphs are useful for displaying smaller changes in a trend over time according to the law of continuity. Bar graphs are easy to compare sets of data between different groups at a glance according to the law of. 22. DOI:10.6814/NCCU201901137.

Although there is no specific Gestalt law for radar graphs, radar graphs can integrate multiple dimensions in a single view, which may be useful for complex data. We argued that such user perceptions of graphs will differ among line graphs, bar graphs, and radar graphs. We also wanted to investigate within our study whether personality traits induce different effects in terms of understanding of visualization. We assumed that a student with high learning goal orientation would show a greater willingness to interpret the graph visualization comprehensively and, accordingly, would have a better understanding of visualization in our system. Therefore, we proposed the following hypotheses.

H2a. Learning goal orientation would have a positive influence on the degree of understanding of visualization.

H2b. The graph visualization formats would have different influences on the degree of understanding of visualization.

3-4 Perceived Learning

According to Caspi & Blau (2008), perceived learning is "the set of beliefs and feelings one has regarding the learning that has occurred". Perceived learning has been used extensively in educational research, including game-based learning systems (Barzilai & Blau, 2014), asynchronous online courses (Swan, 2001), personal differences (Rovai & Baker, 2005), and visualization-based learning environments (Wang, Wu, Kinshuk, Chen, & Spector, 2013). In our context, perceived learning refers to the students' self-evaluation of their learning experience while using our system, indicating the degree of knowledge gained from the visualization. Therefore, to measure how useful the visualization was in providing knowledge to the students, we proposed the construct of perceived learning (Wu et al., 2010), which was adapted from the concept of perceived usefulness in TAM (Davis, 1989).

We assumed that a student with high learning goal orientation would learn better in our system because of their strong motivation to learn programming in order to increase their competence, which should translate into higher perceived learning. In our study, we also wanted to know whether the formats of graph visualization induce different effects in terms of perceived learning. Thus, we proposed the following hypotheses:

H3a. Learning goal orientation would have a positive influence on the degree of perceived learning.

H3b. The graph visualization formats would have different influences on the degree of perceived learning.

Chapter 4 METHODOLOGY

4-1 Dataset

In this study we used grading data exported from the Web Programming Grading Assistant (WPGA), a system from previous studies (https://cidsewpga.fulton.asu.edu/login/), to conduct an exploratory analysis of students' behavioral patterns toward topics in the programming exam. WPGA is a web-based system that facilitates grading and feedback delivery for paper-based programming assessments. We intended to provide students with a visualization tool that is independent of their existing learning environment. We named it Topic Combination Analysis Visualization (TCAV). TCAV aimed to support learners in exploring cognitive, topic-based, and behavioral insights into students' performance in exams.

WPGA was first introduced to the Java Programming class in the first semester of the 107 academic year. There were a total of three exams during the semester. Before each exam, the instructor can set the grading rubrics related to the current learning material by inputting the involved topics and corresponding scores. Figure 4.1 shows the grading interface of WPGA and the topic rubrics of the questions. At the end of each exam, grading data was exported from WPGA. By analyzing the grading data, we can gain an initial insight into how familiar an individual was with a particular topic, as well as discover potential correlations between topics through peer comparison. Applying exploratory data analysis to the exported records yields highly detailed, per-topic grading data. This grading data can then be used to build various types of user-adaptive visualization, which is the core function of TCAV.
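WPGA's exact export schema is not documented here, but to fix ideas, the following sketch shows one plausible shape for a single exported grading record. All field names are illustrative assumptions rather than the system's actual format.

```python
# A minimal sketch of one exported WPGA grading record (field names are
# assumptions for illustration; WPGA's real export schema is not shown here).
from dataclasses import dataclass

@dataclass
class GradingRecord:
    student_id: str      # anonymized student identifier
    exam_id: int         # 1, 2, or 3 (the three in-class exams)
    question_id: int     # e.g., 9, 10, 21
    topic: str           # e.g., "loops", "control statements", "interfaces"
    score: float         # points awarded for this topic on this question
    max_score: float     # points allocated to this topic in the rubric

    @property
    def correct_rate(self) -> float:
        """Fraction of the rubric points obtained for this topic (0.0 to 1.0)."""
        return self.score / self.max_score if self.max_score else 0.0
```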

Figure 4.1 WPGA student interface detail view

We also conducted a lab study in the second semester of the 107 academic year. Participants were students who had enrolled in the Java Programming class in the Department of Management Information System of National Chengchi University. All participants joined the study voluntarily and acknowledged their right to decline participation via a consent form. The data collection schedule is shown in Figure 4.2.

Figure 4.2 Experiment Schedule

4-2 System Development and Interface

We designed TCAV based on the grading data collected from participating students who attended the 107 Fall Java Programming Language I course in the Department of Management Information System of National Chengchi University.

This course is mandatory for all first-year students of MIS; however, nearly one-third of the class consists of students from other colleges.

The visualizations were created through the following steps (Figure 4.3). First, the instructors determined several related Java programming conceptual topics for each exam according to the lectures and textbooks used in the class. They then assigned several topics and grading rubrics to each question. The topics in the present study cover basic Java programming concepts, including loops, control statements, objects, interfaces, etc. After the students finished each exam, the teaching assistants (TAs) graded the exams online through WPGA based on the rubrics. Personalized visualizations were then generated from the detailed grading data. We calculated the score each student received for each topic of a question, and then built visualization analytics that relate the topics involved in a question to the percentage of correct answers.

We particularly focused on two baselines against which the individual answer rate is compared: the class average and the individual average. The class average is the mean correct answer rate of the whole class, computed for each topic involved in the current question (green area of Figure 4.4). The individual average, on the other hand, is the mean correct answer rate that the individual obtained on the other questions of the current exam, computed for each topic involved in the current question (orange area of Figure 4.5). Students could discover the topics in which they performed poorly compared to the class average; these topics could be crucial fundamental concepts they need to focus on. Students could also discover the topics in which they did not perform poorly relative to their individual average; these topics are likely the concepts drilled by that specific question. Finally, we applied the semantic results to visualizations with different types of graphs, including bar graphs, line graphs, and radar graphs (Figures 4.6-4.8).
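To make the two baselines concrete, the sketch below shows one way the class average and individual average could be computed per topic from records shaped like the GradingRecord sketch in Section 4-1. It is an illustrative reconstruction of the calculation described above, not the code actually used to build TCAV.

```python
from collections import defaultdict
from statistics import mean
from typing import Iterable

def topic_rates(records: Iterable[GradingRecord], exam_id: int,
                question_id: int, student_id: str):
    """Per-topic correct answer rates for one student on one question, plus the
    two baselines described above: the class average (all students, same
    question) and the individual average (same student, other questions of the
    same exam)."""
    records = list(records)  # allow multiple passes over the data
    topics = {r.topic for r in records
              if r.exam_id == exam_id and r.question_id == question_id}

    individual = {}
    class_pool = defaultdict(list)
    other_pool = defaultdict(list)
    for r in records:
        if r.exam_id != exam_id or r.topic not in topics:
            continue
        if r.question_id == question_id:
            class_pool[r.topic].append(r.correct_rate)   # whole class, this question
            if r.student_id == student_id:
                individual[r.topic] = r.correct_rate     # this student, this question
        elif r.student_id == student_id:
            other_pool[r.topic].append(r.correct_rate)   # this student, other questions

    class_avg = {t: mean(v) for t, v in class_pool.items()}
    individual_avg = {t: mean(v) for t, v in other_pool.items()}
    return individual, class_avg, individual_avg
```

For a given student and question, the function returns the student's own per-topic rates together with the two baselines visualized by the green area of Figure 4.4 and the orange area of Figure 4.5.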

Figure 4.3 Data processing schema

Figure 4.4 Visualization analytics: individual answer rate compared to the class average

Figure 4.5 Visualization analytics: individual answer rate compared to the individual average

Figure 4.6 Graph visualization: radar graph

Figure 4.7 Graph visualization: bar graph

Figure 4.8 Graph visualization: line graph

4-3 Search Fact Tasks and Inference Generation Tasks

To answer our research question 1 (RQ1), which explored how the visualization format and task type influence students' learning comprehension, we implemented two interfaces: a "search fact" interface with only a per-topic correct answer rate comparison between the individual and the class average, and an "inference generation" interface with per-topic correct answer rate comparisons between the individual and both the class average and the individual average. Each interface included two main areas: the visualization area and the question area.

Figure 4.9 shows the interface for search fact questions. The upper half of the page contained the graph visualizations, which compared the correct answer rates of the topics involved in the current question between the individual and the class average. The lower half of the page contained the question, a search fact task involving Java programming concepts with checkboxes for the possible answers. We framed the search fact questions with the instruction "retrieve the information provided in the visualizations". Thus, participants needed to go through the visualization analytics and check the topics corresponding to the different search fact questions. There were a total of three search fact questions in this study:

1. Please check all the topics which are involved in this question.

2. Please check the topics in which the individual answer rate is higher than the class average answer rate.

3. Please check the topics in which the individual answer rate is lower than the class average answer rate, and is lower than 60%.

Figure 4.10 shows the interface for inference generation questions. The upper half of the page again contained the graph visualizations. The difference from the search fact interface is that the graph visualizations compared the individual answer rate with both the individual average and the class average, and participants could switch between these visualizations while answering the inference generation questions. The reason for providing two kinds of visualization analytics is that we framed the inference generation questions with the instruction "compare and integrate multiple information sources to generate inferences toward a specific question." The process of inference generation may involve information retrieval, mapping, comparison, classification, etc. To answer the inference generation questions, the participants not only needed to compare the different visualizations but also needed to rely on their prior knowledge of Java programming. Therefore, we designed a total of three inference generation questions for this study:

1. Please check the topics which an individual needs to review and strengthen for this question.

2. Given the chapter contents, please check the chapters which need to be reviewed for this question.

3. Following the previous question, please sort the chapters you checked according to their priority when you review the exam.

The corresponding interfaces for Q2 and Q3 are shown in Figures 4.11 and 4.12.
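Both question types operate on the per-topic rates described in Section 4-2. As a hedged illustration only (not the scoring logic of the actual system), the ground-truth topic sets for search fact questions 2 and 3 above could be derived from those rates as follows, reusing the dictionaries returned by the topic_rates sketch.

```python
def search_fact_answers(individual: dict, class_avg: dict):
    """Ground-truth topic sets for search fact questions 2 and 3.

    Q2: topics where the individual answer rate exceeds the class average.
    Q3: topics where the individual answer rate is below the class average
        and also below 60%.
    """
    q2 = {t for t, rate in individual.items() if rate > class_avg.get(t, 0.0)}
    q3 = {t for t, rate in individual.items()
          if rate < class_avg.get(t, 1.0) and rate < 0.60}
    return q2, q3
```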

Figure 4.9 Visualization interface: search fact question

Figure 4.10 Visualization interface: inference generation question

Figure 4.11 Question interface: inference generation question 2

Figure 4.12 Question interface: inference generation question 3

4-4 Apparatus

A 24-inch computer screen with a resolution of 1280 × 1024 was used to display the system, and a Tobii X2-60 eye tracker with a sampling rate of 60 Hz was used to collect the participants' eye movements.

4-5 Subjects and Experiment Procedure

We conducted a user study at the Department of Management Information System of National Chengchi University. The experiment simulated the process of reviewing a paper-based Java programming exam on TCAV. Owing to the nature of our visualizations and research system, participants could only be recruited from students who had taken at least one Java programming course prior to this research. Each participant received a gift card worth NTD 100 and extra course credit.

In total, 34 student participants (17 females and 17 males) were recruited. Their average age was 21.03 years (min = 19, max = 24, SD = 1.1 years). The participants were informed that the experiment was based on a simulated scenario in which they had finished their exams and were preparing to review the exam results on TCAV.

We adopted a within-subjects design in this user study, so each participant reviewed three exams, taken from the 2018 Fall Java programming course. Each exam was paired with one of the graph visualization formats described in the previous section, and the pairing order for each participant was counterbalanced using a Latin square (Table 4.1). The participants reviewed the exams in order, starting from exam 1 and ending with exam 3, because the difficulty of the exams was incremental, which we believe reflects the normal learning process for students. While the participants were viewing the exam questions, an eye tracker was used to collect their eye movements.

In each exam, the participants' task was to answer the search fact questions and inference generation questions according to the given visualizations. For consistency of the visualizations, we used the same grading data for every participant, so that all participants viewed visualization analytics based on the same grading scores in each exam. For simplicity, the participants were required to review only one question, appointed beforehand, in each exam. To keep the difficulty of the questions consistent across the exams, we selected questions whose class average correct rates were close to one another, in the range of 70% to 80% (Table 4.2).

Table 4.1 Sequence of exam number and format pairing

User ID   Exam1   Exam2   Exam3
test1     Radar   Bar     Line
test2     Radar   Line    Bar
test3     Bar     Radar   Line
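Table 4.1 lists the exam-format pairings for three example user IDs. A common way to counterbalance three formats across three exam positions is a cyclic Latin square, sketched below. This illustrates the general counterbalancing technique and is not necessarily the exact assignment procedure used in the study.

```python
FORMATS = ["Radar", "Bar", "Line"]

def latin_square_assignment(participant_index: int) -> dict:
    """Cyclically rotate the format list so that, within every block of three
    participants, each format appears exactly once in each exam position."""
    shift = participant_index % len(FORMATS)
    rotated = FORMATS[shift:] + FORMATS[:shift]
    return {f"Exam{i + 1}": fmt for i, fmt in enumerate(rotated)}

# Example: the first three participants
for p in range(3):
    print(p, latin_square_assignment(p))
# 0 {'Exam1': 'Radar', 'Exam2': 'Bar', 'Exam3': 'Line'}
# 1 {'Exam1': 'Bar', 'Exam2': 'Line', 'Exam3': 'Radar'}
# 2 {'Exam1': 'Line', 'Exam2': 'Radar', 'Exam3': 'Bar'}
```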

Table 4.2 Selected exam questions and corresponding answer rate

Exam number         Exam1        Exam2         Exam3
Selected question   Question 9   Question 10   Question 21
Class average *     77%          76.6%         76.2%
Number of topics

* Class average correct answer rate in the 2018 Fall Java programming course.

The procedure of the experiment was as follows:

1. The participants were required to fill in a demographic questionnaire.

2. The participants were required to complete a cognitive test measuring their perceptual speed (Ekstrom, French, & Harman, 1976).

3. The participants were required to browse the questions of the three exams on paper. The purpose was to let the participants get familiar with the paper-based Java programming exams and recall basic Java knowledge.

4. An eye tracker was set up and calibrated for each participant to make sure the eye movements were successfully collected.

5. The participants were then introduced to the system interfaces and their task. The experimenter focused on introducing the functions of the system.

6. The participants reviewed questions from exam 1 to exam 3, each of which was paired with one of the three designed visualization formats (see Appendix A). Meanwhile, an eye tracker collected their eye movements. This step took roughly 30 to 45 minutes, and the participants were encouraged to take their time instead of rushing to finish.

(43) exam solution (Figure 4.13). After the participants had reviewed the exam question and solution, they could move to the main interface to check visualization analytics with search fact and inference generation questions of the current question. There would be 3 search fact questions first, followed by 3 inference generation question. The procedure in each iteration was showed in Figure 4.14. 8. After finishing each iteration, the participants would be asked to fill in a questionnaire. The questionnaire was in regard to the participants’ perceptions of the assigned visualizations, specifically its presentation format in specific (Figure 4.15).. 立. 政 治 大. 9. After finishing reviewing all three exams, the participants would answer a post. ‧ 國. 學. questionnaire and a short interview regarding their whole experience of the. ‧. system and experiment.. n. er. io. sit. y. Nat. al. Ch. engchi. i Un. v. Figure 4.13 Original exam question interface: exam question and exam solution. 36. DOI:10.6814/NCCU201901137.
