

Chapter 2 Literature Review

2.8 Methodological review of research in evaluating IT in education initiatives

English and Mathematics in secondary schools. The majority of teachers perceived their role to be the transmission of knowledge.

• The teachers indicated that their preferred mode of professional development was workshops and demonstrations. While they wanted to learn how to communicate with students using email, the need to learn how to use the Internet to carry out collaborative project work with other schools was relatively low.

• A need was identified to integrate teachers’ teaching with professional development provisions.

School policies and implementation

• While the school principals’ responses suggested they regarded the development of students’ analytical powers and problem-solving abilities as high policy priorities, actual practice was more concerned with enhancing teachers’ abilities to present information effectively or interestingly. The principals reported that they considered their primary role to be to provide training and professional development opportunities for teachers and to plan resources, rather than becoming involved in the actual use of IT in their schools.

• Secondary schools were found to be more out-reaching to the broader network of other schools and the wider community than primary schools, but mostly schools were still behaving as individual units rather than members of the broader community.

Support and the community

• Between the SITES-M1 study and the Preliminary Study there was a reported shift in teachers’ perception of the main obstacles to using IT, from lack of support and resources in the former to instructional software and teacher competence in the latter.

• Students reported being generally satisfied with the existing support and assistance from schools but indicated clearly that they wanted more provision of computer access.

• More than half the teachers had visited the ITERC or Teachers’ Centres. HKedCity, ITEd Web and TSS were the most frequently used resources. Teachers reported general satisfaction with the IT courses and resource/support services.

• More than 70% of the teacher respondents reported positive experiences in sharing their experiences of IT use in teaching and learning with other teachers, although generally their views about the impact of IT were fairly reserved compared to the students’ perceptions.

The present study will make reference to the methods and outcomes of these prior studies as appropriate in designing the research instruments and process and outcome measures to ensure, as far as possible, consistency and comparability of the findings. In this way, meaningful comparisons can be made across the studies to chart the progress of the implementation of the ITEd initiatives over the five-year period.


These indicators, while useful for monitoring the progress of ITEd initiatives in terms of access, connectivity, teachers’ professional development and classroom use of ICT, are gravely inadequate in answering the most important question about ICT in education – its impact on student learning (Haertel and Means, 2000; Padilla and Zalles, 2001). Indeed, the inadequacy of current practices in evaluating ICT effects has led to a recent surge of interest in the methodological issues relating to technology evaluation. A number of reports and papers have been published which review current practices, examine the critical issues, and explore new directions and methodologies for evaluating the effectiveness of educational technology (for example, Haertel and Means, 2000; Heinecke et al., 1999; Johnston and Barker, 2002; McNabb et al., 1999; Padilla and Zalles, 2001). The major issues and recommendations pertinent to the present study are summarised below.

2.8.1 An expanded definition and measurement of student learning outcomes

Most ITEd initiatives aim not only to promote the development of students’ ICT competence, but also to enable them to apply IT in their learning to become better and more effective learners. Gawith (1994) makes an important distinction between technological literacy and information literacy in developing and measuring students’ IT competence. The former refers to students’ competence in the operation of the technologies, which is a useful but insufficient condition for effective learning. The latter refers to students’ competence in searching, selecting, interpreting and presenting information, which enhances their ability to learn effectively across a range of subject areas. While the two concepts are necessarily related, they involve different kinds of understanding and cognitive processes, and must therefore be conceptualised and measured independently in any evaluation concerning the use of ICT in education.

Furthermore, to demonstrate that the use of ICT can improve learning effectiveness, it is not sufficient just to show that students’ ICT knowledge and skills have increased. There must be evidence that students’ learning in other subjects has also improved as a result of the increasing use of ICT. But the key question is: How is student learning defined? Heinecke et al. (1999) point out that an appropriate measure of learning outcome based upon a definition of learning as “the retention of basic skills and content information” would be very different from another which defines the goal of education as “the production of students who can engage in critical, higher order, problem-based enquiry.”

Past evaluations have relied heavily on norm-referenced standardised achievement tests, students’ self-reports, and/or ratings by significant others (for example, teachers or parents) as outcome measures. Haertel and Means (2000, p. 2) have commented that the use of standardised achievement tests as a student learning outcome measure is problematic: “While standardised academic tests may be effective measures of basic skills… they generally do not tap higher-level problem-solving skills and the kinds of understandings that many technology-based innovations are designed to enhance.”

The over-reliance of existing studies on self-reported data and ratings by significant others has also been criticised because there are often discrepancies between what people report and what they actually do (Padilla and Zalles, 2001). Thus, the reliability and validity of assessing learning outcomes through self-report or ratings by others are often suspect (Johnston and Barker, 2002).
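The self-report validity concern can be made concrete with a simple check. The sketch below (all data and variable names are hypothetical, purely for illustration) correlates teachers’ self-reported weekly ICT use against use recorded in system logs; a weak correlation would flag precisely the discrepancy between reported and actual behaviour noted above.

```python
# Illustrative sketch (hypothetical data): comparing self-reported weekly
# hours of ICT use with hours recovered from system logs. A low Pearson r
# signals the self-report validity problem discussed in the text.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical hours of ICT use per week for six teachers
self_reported = [5, 8, 3, 10, 6, 7]
observed_logs = [2, 4, 3, 5, 2, 3]

r = pearson_r(self_reported, observed_logs)
print(f"self-report vs. observed: r = {r:.2f}")
```

In an actual evaluation the “observed” column would come from an independent source (classroom observation, usage logs), which is exactly the triangulation strategy recommended later in this section.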

The implication is that there is a strong need to develop a new and expanded definition of student learning outcomes, and to explore ways for measuring students’ learning gains in higher-level cognitive processes. A number of recommendations have been made for expanding the definition and measurement of student learning outcomes, including:

• Using multiple measures of student learning instead of a single measure,
• Performance assessment in extended authentic tasks,

• Mechanisms for students to demonstrate their information and higher-level cognitive skills (e.g., portfolios or learning records),

• Direct observation of participants’ actions within learning contexts,

• Students’ motivation, self-efficacy, and attitudes toward school and learning,
• Students’ attendance, disciplinary referral, and/or drop-out rates,

• Triangulation of self-reported data and/or ratings by significant others with data collected from other sources.

2.8.2 A combination of methodologies and approaches

The impact of ICT cannot be divorced from the teaching and learning processes, which are embedded within complex systems (the schools). Thus, the evaluation models and methods chosen for evaluating ICT must be able to capture and reflect this complexity (Heinecke et al., 1999). Most authors on technology evaluation agree that no single evaluation methodology is adequate for addressing the multi-faceted nature of ICT innovations. Most recommend adopting a combination of methodologies and approaches, including both quantitative and qualitative measures in the evaluation.

A wide variety of methods can be used to gather information concerning the implementation, context, and outcome of ICT innovations. Some examples are:

• Performance assessments,
• Surveys,

• Observations,
• Interviews,
• Focus groups,

• Student and/or teacher logs,
• Diaries,

• Reflective journals,
• Document analysis.

While it is true that different data gathering procedures are sensitive to different sources of bias (Padilla and Zalles, 2001), combining the various methodologies can increase the richness, accuracy, and reliability of the data (Haertel and Means, 2000, p. 6) and maximise the validity of the data through triangulation (Popham, 1988).
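The triangulation logic can be sketched as a simple decision rule. In the example below (source names, student identifiers and judgements are all hypothetical) a finding is treated as corroborated only when at least two independent methods converge on it:

```python
# Illustrative sketch (hypothetical data): a minimal triangulation rule
# across three independent data sources. Each source records whether it
# judged a student "ICT-competent" (True/False).

sources = {
    "survey":           {"s1": True,  "s2": True,  "s3": False, "s4": True},
    "performance_task": {"s1": True,  "s2": False, "s3": False, "s4": True},
    "teacher_rating":   {"s1": True,  "s2": True,  "s3": True,  "s4": False},
}

def triangulate(sources, threshold=2):
    """Accept a judgement only if at least `threshold` sources agree."""
    students = next(iter(sources.values())).keys()
    result = {}
    for s in students:
        votes = sum(src[s] for src in sources.values())
        result[s] = votes >= threshold
    return result

print(triangulate(sources))
```

Raising the threshold trades coverage for confidence: requiring all three sources to agree yields fewer, but more strongly corroborated, findings.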

2.8.3 Contextualised evaluation

Research on the effectiveness of educational technology has indicated clearly the important influence of context on both the implementation of the innovation and its impact (Bodilly and Mitchell, 1997; see also the review in Section 2.6 above). Coley (1997) argues that the impact of technology is multi-faceted and cannot be fully understood without considering the interactions among students, teachers, and technology. There is therefore a “need for better and more comprehensive measures of the implementation of technology innovations and the context and contexts in which they are expected to function” (Haertel and Means, 2000). Haertel and Means suggest that in any educational technology evaluation, the following contextual factors must be included in the investigation:

• Vision of the innovation and its perceived value,
• Physical facilities available,

• The availability of resources,

• The climate toward technology, learning, and educational reform that exists in the classroom,
• Degree of support from leaders regarding technology innovation,

• School board policies that shape technology use,

• Demographic characteristics of the classroom, school, or community organisation, as well as students’ homes.

The inclusion of important contextual variables in the evaluation will enable us to know not only whether ICT innovations have any impact, but more importantly, when and under what conditions the innovations will have the impact. This understanding is crucial for promoting an appropriate and effective use of ICT in education for enhancing student learning.


2.8.4 Tying data to standards

One of the issues in the evaluation of IT in education initiatives is that many of the instruments used are ‘home-grown’. The advantage is that the assessments are tailored to the specific goals of the particular project, but the disadvantage is that the ability to generalise or compare across initiatives is limited (Johnston and Barker, 2002). Furthermore, the results obtained from such instruments are difficult to interpret because of the lack of an objective referencing point for comparison.

Padilla and Zalles (2001, p. 32) argue that a good evaluation approach is to tie “the data collection activities to state or other technology standards so that linkages could be made with evaluation of progress on achieving these standards.” This approach provides a framework for assessing student, teacher, and administrator competencies against clearly defined criteria, and enables a more meaningful interpretation of the results across time or projects.
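A minimal sketch of this idea follows (the cut-off scores and band labels are hypothetical, not drawn from any official standard): raw assessment scores are mapped onto proficiency bands defined against an external standard, so that results become interpretable against fixed criteria and comparable across projects and over time, rather than only norm-referenced within one sample.

```python
# Illustrative sketch (hypothetical bands): criterion-referenced reporting
# maps raw scores onto proficiency levels defined by an external standard.

BANDS = [            # (minimum score, label), in descending order
    (85, "exceeds standard"),
    (60, "meets standard"),
    (0,  "below standard"),
]

def to_band(score):
    """Return the first band whose cut-off the score reaches."""
    for cut, label in BANDS:
        if score >= cut:
            return label
    raise ValueError(f"score out of range: {score}")

# Hypothetical teacher-competency scores from an ITEd assessment
scores = {"teacher_A": 91, "teacher_B": 72, "teacher_C": 40}
report = {who: to_band(s) for who, s in scores.items()}
print(report)
```

Because the bands are fixed by the standard rather than by the sample, a “meets standard” result means the same thing in every project and every year the standard is in force.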

To conclude, a good research design for evaluating IT in education initiatives should:

• Focus on impacts on student learning in terms of both technological literacy and information literacy,

• Adopt a clear and shared definition of student learning outcomes,

• Identify and employ an array of methods to more accurately capture student learning outcomes,
• Use multiple methods for collecting multiple data from a variety of stakeholders, and triangulate the data from different sources,

• Employ both quantitative and qualitative methodologies,
• Include the important contextual factors in the investigation,
• Link the data collected to clearly defined standards and criteria.

This chapter has provided an overview of recent international literature on factors that contribute to the effective integration of IT into teaching and learning and the consequent impacts upon students’ learning outcomes. It has also described previous Hong Kong studies in order to set the context of ITEd in Hong Kong prior to this Study. The information provided in this review has been used as a basis for the theoretical framework for this Study and the methodology to be described in Chapters 3 and 4.

Chapter 3 Conceptual Framework, Research Questions