
Chapter 7 DISCUSSION

The main objective of this study is to investigate the influence of learning goal orientation, visualization format, and type of learning task on students' learning perception and learning performance when using our system in the context of Java programming.

To fulfill the research goal, three research questions and corresponding hypotheses were proposed.

7-1 The Influence on User Behavior and Perception

The first research question aims to investigate the influence of visualization format, type of learning task, and individual differences on learning comprehension, since these factors have been shown to be indicators of graph comprehension (Shah & Freedman, 2011) and learning performance (Debicki et al., 2016). Based on the results of the linear regression analyses, H1a and H1c are supported, indicating that learning goal orientation and task type influence students' learning comprehension. Students with a relatively high learning goal orientation achieved a better degree of learning comprehension. The results also show that, with the assistance of visualizations, students performed the search-fact task better than the inference-generation task in our system. However, H1b is not supported, indicating that visualization format has no significant influence on students' learning comprehension. Similar results are found for the third research question, which investigates the influence of visualization format and individual differences on students' perceived learning.

From the linear regression analyses, H3a is supported, indicating that learning goal orientation influences students' perceived learning. However, H3b is not supported, meaning that visualization format has no significant influence on students' perceived learning. These results show that learning goal orientation is an important factor in learning performance, which is consistent with previous studies.

Students high in learning goal orientation have higher motivation to learn in the blended learning condition, and thus achieve better comprehension and learning outcomes in the programming learning context. They also perceive better learning performance when learning programming in the proposed system. We did not find any significant effect of visualization format on students' learning comprehension or perceived learning, which did not meet our expectation (e.g., that radar graphs would be more effective for reviewing integrated exam information). A possible reason is that we measured learning comprehension through review questions we designed ourselves, which may not have been varied enough for differences between visualization formats to emerge.
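The regression analyses above can be sketched as dummy-coded ordinary least squares with a baseline level for each categorical predictor. The sketch below uses hypothetical toy data (the study's actual responses are not reproduced here) and numpy's least-squares solver as a stand-in for the statistical software used in the thesis; variable names and baseline choices are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy data standing in for the study's variables; the actual
# thesis responses are not reproduced here.
rng = np.random.default_rng(0)
n = 60
goal = rng.choice(["Low", "Middle", "High"], size=n)    # learning goal orientation
fmt = rng.choice(["Bar", "Line", "Radar"], size=n)      # visualization format
task = rng.choice(["SearchFact", "Inference"], size=n)  # type of learning task
comprehension = rng.uniform(0.4, 1.0, size=n)           # correct-answer rate

def dummies(values, baseline):
    """Dummy-code a categorical variable, dropping the baseline level."""
    levels = [lv for lv in sorted(set(values)) if lv != baseline]
    return np.column_stack([(values == lv).astype(float) for lv in levels]), levels

X_goal, goal_levels = dummies(goal, baseline="High")
X_fmt, fmt_levels = dummies(fmt, baseline="Bar")
X_task, task_levels = dummies(task, baseline="Inference")

# Design matrix: intercept plus dummy-coded predictors, as in Tables 7.1-7.3.
X = np.column_stack([np.ones(n), X_goal, X_fmt, X_task])
beta, *_ = np.linalg.lstsq(X, comprehension, rcond=None)

names = (["Constant"]
         + [f"Goal orientation ({lv})" for lv in goal_levels]
         + [f"Format ({lv})" for lv in fmt_levels]
         + [f"Task ({lv})" for lv in task_levels])
for name, b in zip(names, beta):
    print(f"{name:26s} {b: .4f}")
```

Each estimated coefficient is read relative to the dropped baseline level, which is why the tables report, for example, Format (Line) and Format (Radar) but no Format (Bar) row.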

The second research question aims to investigate the influence of the visualization format and individual differences on the understanding of visualization.

H2b is supported, indicating that format does have an influence on students' understanding of visualization. However, H2a is not supported, meaning that learning goal orientation has no significant influence on this understanding. These results show that users interpret the information visualization in different ways depending on which graph format is given. We measured understanding of visualization by asking users multiple-choice questions based on Gestalt principles (e.g., whether the proposed visualization implies the trend of the correct-answer rate). The difference between the bar and line graphs is consistent with the Gestalt laws of proximity and continuity. Also, compared with the bar chart, students showed a relatively worse understanding of the line chart and radar chart.

This indicates that the radar chart and line chart may not be suitable in our context, because our tasks mainly involve comparing an individual's performance with the class average. The results are consistent with a prior study finding that radar graphs are considered inferior to bar graphs on common information-seeking tasks (Few, 2005).

In the interviews, some participants reported that the radar graph was more intuitive because it is widely used to display the core competencies of school courses, while others preferred the bar graph and found the radar graph hard to understand. This user feedback is consistent with the results on understanding of visualization. However, we did not find any significant influence of visualization format on students' learning performance in our system. A possible reason is that each format conveys part of the information the students need or lack. Although there are differences between formats, our questions were not complicated enough for these differences to show; students could retrieve the required information from every format. In sum, each format may be somewhat helpful for students when reviewing exams, so we could not find a significant difference in learning performance between formats.

The results for learning comprehension also show significant differences between the three exams. Even though we controlled for the effects of the three different exams, this indicates that there may be a learning effect across them.

We therefore estimated the same regression model on each exam separately; the results are summarized in Tables 7.1-7.3. A possible reason for the learning effect is that we asked the same search-fact and inference-generation questions in each exam, and participants tried to achieve as high a correct-answer rate as possible. As a result, in the first exam, participants referred to the visualization and answered the questions step by step, but after the first exam they learned to consult the visualization more efficiently when answering the same questions, so the format itself mattered less than in the first iteration. Although the system log supports objective measures of users' learning performance, the results are also influenced by how we designed the questions and the experimental procedure. Hence, eye-tracking data could make up for the limitations of the log analysis.
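Estimating the same model on each exam amounts to subsetting the observations by exam and refitting. The sketch below illustrates this with hypothetical toy data (the thesis data are not reproduced), deriving the Estimate, Std. Error, and t value columns reported in Tables 7.1-7.3 from the standard OLS formulas; it includes only the format predictors for brevity.

```python
import numpy as np

# Hypothetical toy data standing in for the per-exam responses; the actual
# thesis data are not reproduced here.
rng = np.random.default_rng(1)
n = 90
exam = rng.choice([1, 2, 3], size=n)                # which of the three exams
fmt = rng.choice(["Bar", "Line", "Radar"], size=n)  # visualization format
score = rng.uniform(0.4, 1.0, size=n)               # learning-comprehension score

def ols(X, y):
    """Return OLS estimates, standard errors, and t values, mirroring the
    Estimate / Std. Error / t value columns of Tables 7.1-7.3."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = X.shape[0] - X.shape[1]
    sigma2 = resid @ resid / dof                    # residual variance
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, se, beta / se

# Fit the same model separately on each exam's subset of observations.
results = {}
for e in (1, 2, 3):
    m = exam == e
    X = np.column_stack([np.ones(m.sum()),
                         (fmt[m] == "Line").astype(float),
                         (fmt[m] == "Radar").astype(float)])
    results[e] = ols(X, score[m])
    beta = results[e][0]
    print(f"Exam {e}: Constant={beta[0]:.3f}, Line={beta[1]:.3f}, Radar={beta[2]:.3f}")
```

Because each exam's subsample is smaller than the pooled data, the per-exam standard errors are larger, so a coefficient can be significant in the pooled model yet non-significant in an individual exam.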

Table 7.1 Estimated results for learning comprehension in exam 1.

                                       Estimate  Std. Error  t value  Pr(>|t|)
Constant                                0.72079     0.09732    7.406  5.06e-10 ***
Goal orientation (Low)                 -0.09155     0.06920   -1.323  0.1908
Goal orientation (Middle)              -0.03229     0.06012   -0.537  0.5932
Format (Line)                          -0.06695     0.06277   -1.066  0.2905
Format (Radar)                         -0.12884     0.06274   -2.054  0.0444 **
Task (SearchFact)                       0.31893     0.04683    6.810  5.26e-09 ***
Programming-experienced (Experienced)  -0.02056     0.06599   -0.312  0.7565
Gender (Female)                        -0.03388     0.04834   -0.701  0.4861

* Significance (Sig.) at 0.1 level, ** Sig. at 0.05 level, *** Sig. at 0.01 level.

Table 7.2 Estimated results for learning comprehension in exam 2.

                                       Estimate  Std. Error  t value  Pr(>|t|)
Constant                                0.94122     0.07044   13.361  < 2e-16 ***
Goal orientation (Low)                 -0.10939     0.05494   -1.991  0.0510 *
Goal orientation (Middle)              -0.08213     0.05143   -1.597  0.1155
Format (Line)                           0.07466     0.04960    1.505  0.1375
Format (Radar)                          0.08060     0.04941    1.631  0.1081
Task (SearchFact)                       0.03074     0.03932    0.782  0.4374
Programming-experienced (Experienced)  -0.08643     0.05123   -1.687  0.0968 *
Gender (Female)                         0.06617     0.04134    1.601  0.1147

* Significance (Sig.) at 0.1 level, ** Sig. at 0.05 level, *** Sig. at 0.01 level.


Table 7.3 Estimated results for learning comprehension in exam 3.

                                       Estimate  Std. Error  t value  Pr(>|t|)
Constant                                0.947248    0.083509  11.343  < 2e-16 ***
Goal orientation (Low)                 -0.113797    0.061708  -1.844  0.0701 *
Goal orientation (Middle)              -0.013169    0.056197  -0.234  0.8155
Format (Line)                           0.006314    0.058335   0.108  0.9142
Format (Radar)                         -0.041197    0.057088  -0.722  0.4733
Task (SearchFact)                       0.034706    0.043940   0.790  0.4327
Programming-experienced (Experienced)  -0.062621    0.056753  -1.103  0.2743
Gender (Female)                         0.042573    0.046107   0.923  0.3595

* Significance (Sig.) at 0.1 level, ** Sig. at 0.05 level, *** Sig. at 0.01 level.
