
Chapter 3. A blog application: Assessing the effects of interactive blogging on student

3.3. Methods

3.3.1. Participants

Students aged 20 to 26 from two classes, Electronic Commerce and Design of Internet Applications, were surveyed during the fall semester of 2006 (N=71) and the spring semester of 2007 (N=83) regarding the use of blogs as a supplement to traditional classroom lectures. The students were all electronics majors, with a male-to-female ratio greater than 5:1.

All the participants had used computers and the Internet on a day-to-day basis for at least ten years. Viewed in cultural context, the participants, like typical college students in Taiwan, were mostly hesitant to raise questions in classroom settings and preferred to study alone rather than in study groups. They also tended to regard asking questions in class as an interruption to the professor's ongoing lecture, and therefore as impolite.

3.3.2. Setting

Students were required to create their own blogs as part of a regular face-to-face course that met once a week for three hours. After each class meeting, participants were required to go online and write essays on ICT subjects such as IT offshoring and globalization, software business models in the third world, and the future of nonprofit computing.

Two classes participated in the experiments. Members of the Electronic Commerce class participated in the solitary use of blogs, and members of the Design of Internet Applications class participated in the interactive use of blogs. The former was called the "Solitary" Group or S-Group, and the latter the "Interactive" Group or I-Group. The graduate class was assigned to the I-Group, which performed interactive blogging, while the undergraduate class was assigned to the S-Group, which performed solitary blogging. The graduate class consisted of first-year graduate students and undergraduate seniors, while the undergraduate class consisted of undergraduate seniors only. The ratio of the class sizes was about 1:2.

Participants in the I-Group were required to make comments or express thoughts about their peers' blog postings, while participants in the S-Group were not. Blog comments were intended to be student-led, and the teacher would intervene only if problems arose that students could not resolve themselves, such as severe controversies and emotional disputes. A reporting tool allowed willing individuals to report abuse of the system to the lecturer. In the orientation session, students received legal and ethical advice against plagiarism and language abuse, since they would be making comments in written form.

The I-Group was expected to browse the blog postings of their peers and then select three of them on which to make written comments each week. However, students were not expected to read the work of every other student. Commenting participation was worth 1/30 of the final grade, enough to provide an incentive for making comments while minimizing the negative impact of being graded. Grading was based on the quality of the comments, the effort made to compose them, and their practical contextuality. Students in the S-Group could read the blogs of their peers, but were not expressly required to do so. However, the blog system we used was able to track the viewing history of each post in terms of page views, times pages were visited, and visitor addresses. Students in the S-Group were also assigned to summarize in their own blogs what they read on the blogs of others, to further ensure that they read them.

The instructor also created a blog as a central hub through which students of both groups could communicate with each other. The instructor's blog was used for posting course materials in the curriculum, categorizing descriptions of resources, and making announcements to class members. Students of both groups were encouraged to read the instructor's blog before each class meeting to better prepare themselves for class activities.

3.3.3. Platform

The blogs in our study were based on the Blogger platform at http://www.blogger.com, which is now a property of Google. Although it is a commercial operation, it displays no mandatory pop-up advertisements. This quiet atmosphere was one of the reasons for its selection, because we did not want students distracted in the middle of a lab by eye-catching advertising media. Blogger is integrated with the Google search engine, with which most students were already familiar. Blogger provides a set of ready-to-use templates to choose from and allows users to change templates later on; this personalization function increases the sense of ownership. To counter the vandalism that arises in blogs, we adopted Blogger's built-in challenge mechanism, which filters out unwanted posts and comments generated by crawler-based vandal programs.

3.3.4. Measures and data collection

With the blog used as a learning environment during the course, the learning engagement and social networking of the enrolled students were of interest. Thus, at the end of each semester, a questionnaire was used to assess student attitudes in the two groups.

Based on the suggestions of Hinkin (1998) regarding the development of measures for survey questionnaires, we invited education experts to participate in item generation. For each of the three factors we defined for the purpose of our study, online peer interaction, motivation to learn from peers, and academic achievement, a set of five items was designed. Responses were scored on a 5-point Likert scale, where 5 (Strongly Agree) represents the maximum of the scale and 1 (Strongly Disagree) the minimum. The original questionnaire thus included 15 questions. For each set of data collected in the survey, we checked the factor loading of each item individually and kept questions with loadings of 0.7 or higher, to confirm that the independent variables identified a priori were each represented by a single appropriate factor. Each of the three factors began with five questions, but only three remained after checking factor loadings.
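The item-retention rule above can be sketched as a simple filter. The item names and loading values below are hypothetical illustrations, not the study's actual results.

```python
# Hypothetical factor loadings for the five initial items of one factor
# (illustrative values only, not the study's data).
loadings = {"Q1": 0.82, "Q2": 0.55, "Q3": 0.78, "Q4": 0.71, "Q5": 0.64}

# Keep only items loading at 0.7 or higher on their intended factor.
retained = {item: value for item, value in loadings.items() if value >= 0.7}
```

Under this rule, items Q2 and Q5 would be dropped, mirroring how each factor here shrank from five items to three.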

Confirmatory factor analysis also indicated reasonable goodness of fit (CFI>0.9, GFI>0.9, NNFI>0.9, RMSEA<0.05). The factor loading of each remaining item showed convergent validity in the empirical data. A chi-square difference test on each factor further confirmed the discriminant validity of the results collected from the questionnaire. Finally, reliability analysis was used to check the dependability, consistency, and homogeneity of the items in each factor. Cronbach's α was higher than 0.80 for all factors in both semesters, satisfying the general reliability requirement for research instruments (Hatcher 1994). See the Appendix for the questionnaire.
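The reliability check reported above can be sketched with the standard formula for Cronbach's α: α = k/(k−1) · (1 − Σ item variances / variance of totals). The response data below is hypothetical, not the study's dataset.

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for one factor.

    scores: one row per respondent, one column per item in the factor.
    """
    k = len(scores[0])
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point Likert responses for one three-item factor.
responses = [
    [5, 4, 5],
    [4, 4, 4],
    [3, 2, 3],
    [5, 5, 4],
    [2, 3, 2],
]
alpha = cronbach_alpha(responses)
```

A factor whose items move together, as in this toy data, yields α above the 0.80 threshold cited from Hatcher (1994); perfectly correlated items would give α = 1.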

To check for differences between the samples from the two groups, multivariate analysis of variance (MANOVA) was applied to the background data probed by the questionnaire, including years of computer experience to date, daily computer usage, and experience in web authoring.

The MANOVA showed no statistically significant difference between the two groups (F=2.03, p=0.14). Both groups consisted of electronics majors with more than ten years of computer experience. Although the two groups studied separate subjects, both subjects were technical and within the electronics context, the instructor was the same, and the instruction format was similar.

In addition to investigating student attitudes toward interactive blogging through the questionnaire, we were interested in the content of the comments. Using the same method as Hall and Davison (2007), we performed content analysis by digging into student comments on the posts. As suggested by Oravec (2002) and Yu et al. (2009) in their studies of educational blogs, reflection is a useful indicator of the learning effectiveness of blogs. Therefore, we read the comments, classified them, and determined the percentage of comments that could be regarded as "reflective" in nature.

The classification was based on a coding scheme for reflection shown in Table 3.1. The boundary between reflection and non-reflection, as well as between relevance and irrelevance, is somewhat blurred for some comments: some show no reflection at all, some demonstrate sufficient reflection, and others possess only marginal reflection. Basic rules of good practice in coding (Fielding 2008) were adhered to. When problems or ambiguity arose, the context of the original entry was checked and similar cases were compared to resolve the coding issue. Two researchers conducted independent analyses of the same dataset and cross-examined the results. Where any inconsistency appeared, the individual items were retrieved for discussion and recoded by consensus of the two researchers. This degree of care enhanced the reliability of the coded output and confidence in the statistical analysis underlying the research findings.
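The text reports cross-examination and recoding by consensus but no formal agreement statistic. As an illustration only, a chance-corrected agreement measure such as Cohen's kappa, which is not reported in the study, could quantify how well the two coders' independent labels agreed before the consensus step. The label sequences below are hypothetical.

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Chance-corrected agreement between two coders' label sequences."""
    n = len(coder1)
    # Observed proportion of items on which the coders agree.
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Expected agreement by chance, from each coder's label frequencies.
    c1, c2 = Counter(coder1), Counter(coder2)
    expected = sum(c1[cat] * c2[cat] for cat in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes (C / U / R, per Table 3.1), not the study's data.
coder_a = ["R", "R", "U", "C", "R", "U"]
coder_b = ["R", "U", "U", "C", "R", "U"]
kappa = cohens_kappa(coder_a, coder_b)
```

Values near 1 indicate agreement well beyond chance; disagreements, as in the second item here, pull kappa down and would be the items sent back for consensus recoding.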

Table 3.1. Coding scheme for reflection

Dimension    Code  Interpretation   Evidence
Reflection   C     Context-free     Comments made out of the context of the original entry
             U     Non-reflective   Comments made without demonstrating perceivable reflection on the original entry
             R     Reflective       Comments made with substantial reflection

The first semester produced 242 coded comments and the second semester 247, both from the I-Group.