
The following section discusses and summarizes the major findings of this study, which concern the types of reading skills measured in the SAET and DRET, the similarities and differences between the skills measured in the two exams, the examinees’ performance on various types of skills, and comparisons of high and low achievers’ performances on different item types.

Reading Skills Measured in the SAET & DRET

A total of 167 reading comprehension test items were analyzed by using the revised Nuttall’s taxonomy as the coding scheme.

In terms of skill types, the findings revealed that seven types of reading skills were tested in the 2002 to 2007 SAET: “Word Inference from Context,” “Recognizing Cohesive Devices,” “Recognizing and Interpreting Details,” “Recognizing Functional Value,” “Recognizing Presuppositions Underlying the Text,” “Recognizing Implications and Making Inferences,” and “Recognizing and Understanding the Main Idea.” Items that measured the examinees’ abilities to recognize text organization were not found in the 2002-2007 SAET. As to the DRET, the findings showed that all eight types of reading skills emerged: “Word Inference from Context,” “Recognizing Cohesive Devices,” “Recognizing and Interpreting Details,” “Recognizing Functional Value,” “Recognizing Text Organization,” “Recognizing Presuppositions Underlying the Text,” “Recognizing Implications and Making Inferences,” and “Recognizing and Understanding the Main Idea.” In other words, every category of reading skill was identified on both reading comprehension tests except “Recognizing Text Organization,” of which only one item was found, in the 2002 DRET.

It is worth noting that “Recognizing Text Organization” was the least tested skill, with no items in the SAET and only one item in the DRET. A possible explanation is that the articles in the reading comprehension tests are passages or excerpts taken from longer texts, which makes it hard for test writers to use them to write items measuring “Recognizing Text Organization.”

Similarities and Differences between the Reading Skills Tested in the SAET and DRET

The similarities and differences between the SAET and DRET lay in the frequency, occurrence, and distribution of reading skill item types. In terms of frequency, the results showed that in the SAET, 73.1% of the test items measured local reading skills while only 26.9% were devoted to testing global reading skills. As for the DRET, 67.6% aimed at measuring bottom-up skills while only 32.4% were dedicated to global skills. The two exams thus shared a similar pattern: both measured more local skills. Given the goals of the SAET and the DRET as previously mentioned, it is reasonable to expect more global skills in the DRET. However, the results showed that both exams devoted around 70% of their items to testing local skills, and the results are therefore not in accord with the testing objectives that the CEEC set for the exams.

In addition, the results showed that both examinations revealed a similar pattern: the most frequently tested items were those on “Recognizing and Interpreting Details” (accounting for 61.3% of the SAET items and 55.4% of the DRET items), with far fewer items devoted to measuring other skills. This finding is in accord with previous item analyses of the SAET and DRET reading comprehension sections (e.g., Hsu, 2005; Lan, 2007; Lu, 2002), all of which consistently found that items measuring the examinees’ ability to recognize and interpret detailed information are the major type. This indicates that most of the question types were lower-order questions, whereas higher-order question types were less emphasized. A possible explanation is that it is more difficult to write plausible distractors for multiple-choice questions when higher-order skills are tested.

In the SAET, two types of items occurred every year: local items on “Word Inference from Context” and “Recognizing and Interpreting Details.” An examination of the SAET and DRET found that items on “Recognizing and Interpreting Details” constituted the majority each year, ranging from 53.3% to 73.3% in the SAET and from 45.5% to 72.7% in the DRET. Each year, around half of the test items were devoted to measuring the examinees’ ability to locate specific details. Thus, it seems reasonable to conclude that both exams emphasized the development of this skill in reading. However, the concentration on assessing more local understanding in the SAET and the DRET could well lead EFL teachers and learners to believe that lower-order reading skills are more important than higher-order ones. From what was found in the present study, the reading skills measured in the SAET and DRET are text-based rather than reader-based. The reason why local reading skills were overemphasized in both the SAET and DRET may need further exploration in future studies.

Examinees’ Performances on Various Item Types

To find out how the examinees generally performed on reading comprehension questions measuring various reading skills in the SAET and DRET, a two-way ANOVA was run for each exam. The patterns revealed by the two analyses were broadly similar. For the SAET, the ANOVA found no significant effect of item type on the examinees’ average passing rates.

In the SAET, “Recognizing Cohesive Devices” had the highest passing rate (see Table 21). However, only one item was categorized as this type, so it is difficult to discern any pattern in the difficulty of this item type. All the items in the SAET received high passing rates, with an average of 56.64%. Nevertheless, it was difficult to identify which types of questions were consistently best or worst performed across the six years. The examinees’ performance varied on different types of skills each year, which indicates that the difficulty level of each item type was inconsistent from year to year. For example, as noted before, items on “Recognizing and Understanding the Main Idea” had the highest passing rates in the 2002, 2003, and 2007 SAET but the lowest in the 2004 SAET. Another distinct example of the unrelatedness of item type and difficulty level was found in items on “Recognizing Implications and Making Inferences.” As shown in Table 11 in Chapter Four, items of this type ranked the lowest in the 2002 (P=45), 2003 (P=41), and 2007 (P=41) SAET, while they were best performed in the 2005 (P=66) and 2006 (P=76) SAET.

Similar to the ANOVA results for the SAET, the ranking of the passing rates of different item types in the DRET was not consistent across the years. The findings showed no significant effect for the factor of year (F=.381, p>0.05) and no interaction between item type and year (F=.941, p>0.05). However, a significant effect was found for the factor of item type (F=2.534, p<0.05), which indicated that the passing rates varied according to the type of reading skill measured by each item. Even so, an examination of the passing rates of different item types across the 2002 to 2007 DRET shows that they were not consistent from year to year. For example, “Word Inference from Context” had a passing rate of 54 in 2002, 48 in 2003, and 50.5 in 2005, but dropped to only 31 in the 2006 DRET. Besides, not all types of skills were measured every year, making it impossible to determine which types of skills were the most or the least difficult. Moreover, because different skills appeared in different years, their passing rates cannot be directly compared.

It is noteworthy that the average passing rates of items in the SAET are higher than those in the DRET, except that the passing rates of items on “Recognizing Functional Value” in the DRET were slightly higher than those in the SAET. This difference in passing rates between the SAET and the DRET might be attributable to the fact that the DRET aims to distinguish the more proficient students (CEEC); thus, the DRET is expected to be more difficult than the SAET.
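The passing rate used throughout this comparison is simply the percentage of examinees who answered an item correctly. A minimal sketch of the calculation (the response data below are invented for illustration; the actual SAET/DRET statistics come from the CEEC):

```python
# Passing rate: percentage of examinees who answered an item correctly.
# The response data are hypothetical, not the actual CEEC statistics.

def passing_rate(responses):
    """Return the percentage of correct (True) responses to one item."""
    return 100 * sum(responses) / len(responses)

# Hypothetical item: 7 of 10 examinees answered correctly.
item_responses = [True, True, False, True, True,
                  False, True, True, False, True]
print(passing_rate(item_responses))  # → 70.0
```

A yearly figure such as “P=54” in the paragraph above is this percentage computed over all examinees for the items of one skill type in one year.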

As Matthews (1990) argued, it is easier for readers to reach global understanding than local understanding, since more redundant information is available for readers to grasp the gist. The passing rates on “Recognizing and Understanding the Main Idea” in the DRET seem to be compatible with Matthews’s argument, since the test takers performed much better on “Recognizing and Understanding the Main Idea” items than on “Recognizing and Interpreting Details” items. This could imply that the examinees in general were better at understanding the text as a whole, but had difficulties understanding specific detailed information.

Comparisons of High and Low Achievers’ Performances on Different Item Types

Regarding the reading skills that best discriminated between the high and low achievers in the 2002 to 2007 SAET, local reading skills were the best discriminators in three of the years and global reading skills in the other three. In 2002, 2004, and 2005, local reading skills had the highest discrimination indexes: for example, “Recognizing Cohesive Devices” (D=66) in the 2002 SAET, and “Word Inference from Context” in the 2004 (D=64) and 2005 SAET. This indicates that local items were easy for high achievers but difficult for low achievers in these years. In contrast, in 2003, 2006, and 2007, global reading skills had the highest discrimination indexes: for example, “Recognizing Presuppositions Underlying the Text” (D=69) in 2003, “Recognizing and Understanding the Main Idea” (D=59), and “Recognizing Implications and Making Inferences” (D=64). This indicates that global items were easy for high achievers but difficult for low achievers in those years.

Similar to the results for the SAET, most of the skills that best discriminated between the high and low achievers in the DRET were local skills as well: for example, “Recognizing Cohesive Devices” in 2003 (D=64) and “Recognizing and Interpreting Details” in 2005, 2006, and 2007. However, in the 2002 DRET, the skill that best discriminated the high and low achievers was a global skill, “Recognizing Text Organization” (D=72). In the 2004 DRET, a local skill and a global skill tied as the best discriminators: “Recognizing Cohesive Devices” (D=48.5) and “Recognizing Functional Value” (D=48.5). In sum, in both the SAET and DRET, most of the skills that best discriminated the high and low achievers were local skills. This suggests that in both exams, questions testing local skills were easy for the high achievers but difficult for the low achievers, whereas items on global skills were difficult for high and low achievers alike.

In general, in the SAET from 2002 to 2007, the discrimination indexes of all item types reached the ideal discrimination index of 30 established by Jeng (1999), and most were far above the minimum desirable index. This indicates that all of the items in the SAET were easy for high achievers but difficult for low achievers; that is, all of the items truly discriminated between high and low achievers. Also, most of the items had rather good discriminatory power, since their discrimination indexes were far above the ideal index. However, the findings for the DRET from 2002 to 2007 showed a different pattern. In the DRET, the discrimination indexes of most items were above the ideal discrimination index of 30, but some items failed to reach the minimum desirable index, such as the global items on “Recognizing Implications and Making Inferences” (D=22 in 2002 and D=22 in 2004) and “Recognizing Presuppositions Underlying the Text” (D=12 in 2006 and D=28 in 2007). As aforementioned, most of the skills that best discriminated the high and low achievers in the DRET were local skills. Thus, the results indicate that these two types of global items were probably too difficult for most examinees and did not appropriately distinguish the high achievers from the low achievers. This finding suggests that the CEEC needs to be more careful when designing items that assess students’ abilities to make inferences and to understand the presuppositions underlying texts.
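The discrimination index reported above is the difference, in percentage points, between the high-achieving group’s and the low-achieving group’s passing rates on an item. A minimal sketch of the calculation (the group sizes and scores are hypothetical, and the threshold of 30 follows Jeng (1999) as cited above; how the CEEC actually forms the high and low groups is not specified here):

```python
# Discrimination index: D = P_high - P_low, where P_high and P_low are the
# passing rates (as percentages) of the high- and low-achieving groups.
# The counts below are invented for illustration.

def discrimination_index(high_correct, high_n, low_correct, low_n):
    """Return D in percentage points for one item."""
    p_high = 100 * high_correct / high_n
    p_low = 100 * low_correct / low_n
    return p_high - p_low

# Hypothetical item: 45/50 of the high group and 12/50 of the low group pass.
d = discrimination_index(45, 50, 12, 50)
print(d)        # → 66.0
print(d >= 30)  # meets the index of 30 cited from Jeng (1999) → True
```

On this definition, an item like the D=12 example above is one that nearly everyone, strong or weak, answers at a similar rate, which is why it fails to separate the two groups.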

Conclusions

In this section, we first present a summary of the major findings. Then, pedagogical implications based on the results are discussed, followed by limitations and suggestions for future research.

Summary of the Major Findings

The present study was conducted to explore the reading skills measured by the reading comprehension test items of the SAET and DRET administered from 2002 to 2007. Furthermore, the examinees’ performances on each skill type were analyzed to identify their strengths and weaknesses in taking reading comprehension tests. The main findings related to the research questions are summarized as follows.

First, the major findings of the study revealed that a total of eight types of reading skills were identified in the SAET and DRET from 2002 to 2007: “Word Inference from Context,” “Recognizing Cohesive Devices,” “Recognizing and Interpreting Details,” “Recognizing Functional Value,” “Recognizing Text Organization,” “Recognizing Presuppositions Underlying the Text,” “Recognizing Implications and Making Inferences,” and “Recognizing and Understanding the Main Idea.” Seven of these eight types were identified on both reading comprehension tests; the exception was “Recognizing Text Organization,” for which only one item was found, in the 2002 DRET.

Second, items on “Recognizing and Interpreting Details,” a more local skill, were the major types in the SAET and DRET throughout the six years. This indicated that local reading skills instead of global reading skills were favored in both exams.

Third, the major differences lay in the frequency, occurrence, and distribution of items measuring different types of reading skills. In the SAET, two types of skills appeared each year: items on “Word Inference from Context” and items on “Recognizing and Interpreting Details.” As to the DRET, only items on “Recognizing and Interpreting Details” were tested every year. This again indicates that both exams favored local reading skills over global ones.

Finally, in both exams, most item types reached the ideal discrimination index of 30 established by Jeng (1999), and most even had discrimination indexes far above the minimum desirable index. However, two item types in the DRET failed to reach the minimum desirable index: items on “Recognizing Implications and Making Inferences” (D=22 in 2002 and D=22 in 2004) and items on “Recognizing Presuppositions Underlying the Text” (D=12 in 2006 and D=28 in 2007).

Pedagogical Implications

The findings of the current research, which explored the reading skills measured in the SAET and DRET, have some pedagogical implications for reading instruction and testing in senior high schools. One implication drawn from the results is that knowing what reading skills are measured on these two exams can help teachers understand what skills are required when taking the SAET and DRET and enhance their students’ reading ability by teaching them how to use these eight reading skills: (1) “Word Inference from Context,” (2) “Recognizing Cohesive Devices,” (3) “Recognizing and Interpreting Details,” (4) “Recognizing Functional Value,” (5) “Recognizing Text Organization,” (6) “Recognizing Presuppositions Underlying the Text,” (7) “Recognizing Implications and Making Inferences,” and (8) “Recognizing and Understanding the Main Idea.” In addition, by knowing what item types the examinees performed poorly on, teachers can help their students practice those skills more.

In the DRET, the examinees performed the worst on “Recognizing Presuppositions Underlying the Text,” with passing rates lower than the standards set by Jeng (1999). Teachers are therefore highly recommended to improve students’ abilities to recognize a writer’s purpose, attitude, mood, and tone while reading texts. Teachers need to help students identify related details, main ideas, and cause-effect relationships in order to make appropriate inferences and generalizations (Dillner & Olson, 1982). For example, to raise students’ awareness, as Dillner and Olson (1982) suggested, teachers may have them compare two articles written by different writers on the same topic in order to distinguish each writer’s purpose. As Nuttall (1996) stated, presupposition is bound up with inference. It is also important to note that “Recognizing Implications and Making Inferences” obtained low passing rates in the DRET as well. Making inferences requires readers to use their knowledge; it is often regarded as an advanced skill and tends to be overlooked (Nuttall, 1996). Teachers are strongly recommended to train students to select and relate relevant facts stated in texts in order to synthesize unstated meanings.

Teachers can also use questions of inference to train students to consider what is implied but not explicitly stated in texts (Nuttall, 1996). Moreover, teachers need to encourage their students to make use of syntactic, logical, and cultural clues to understand the meaning of unknown elements in texts (Grellet, 1982).

Additionally, the examinees performed much worse on the DRET than on the SAET for five of the item types, with discrimination-index gaps ranging from 10 to 27. These skills were “Recognizing Cohesive Devices,” “Recognizing and Interpreting Details,” “Recognizing Presuppositions Underlying the Text,” “Recognizing Implications and Making Inferences,” and “Recognizing and Understanding the Main Idea.” Of these five skills, “Recognizing Cohesive Devices” had the largest gap (27). In a study of college freshmen’s comprehension and application of text cohesion, Huang (1993) argued that most college students were poor at detecting cohesion in English texts. Thus, to better prepare students for the DRET, teachers should help them recognize cohesive devices and practice this skill while reading.

Another important implication derived from the current research is that “Recognizing Text Organization” should be tested more often, since only one item was labeled as this type in the DRET and none was found in the SAET. As stipulated in the curriculum guidelines for senior high school English instruction, students are required to be able to understand the organization of texts. If the SAET and DRET do not test what teachers are trying to teach, it is very likely that students will not pay attention to what teachers teach, and eventually teachers will stop teaching it (Nuttall, 1996). As a result, the present study calls on the CEEC to include more items that measure students’ abilities to recognize and understand textual organization in the SAET and DRET.

In addition, as revealed in the findings, in both the SAET and DRET, items on local skills best discriminated between the high and low achievers. Global items seemed too difficult for all examinees, and a few global items even had discrimination indexes lower than the ideal index established by Jeng (1999). Hence, it is suggested that teachers teach students global skills and help them practice those skills. Lastly, as the findings revealed, both the SAET and DRET favored items on local skills. This could lead teachers and students to believe that local items are more important than global items. Thus, it is suggested that test writers write more items measuring global skills in the SAET and DRET so that a washback effect would urge teachers to familiarize students with global skills.

Limitations and Suggestions for Future Research

This study explored the reading skills tested in the SAET and DRET. In this section, the limitations of the present study are presented. First, item analysis in the present study can only help to predict the reading skills each item attempted to measure. No experimental measures were taken to probe into the examinees’ minds to investigate their reading processes and to see whether the examinees in fact applied the reading skills when answering a question. To do that would require a think-aloud