國立臺灣師範大學英語學系碩士論文
Master's Thesis
Department of English
National Taiwan Normal University

國中基測與會考英文閱讀測驗試題之比較:以布魯姆認知分類(修訂版)析之
A Comparison of English Reading Comprehension Questions on the BCT and the CAP Using Revised Bloom's Taxonomy

指導教授:陳秋蘭  Advisor: Dr. Chiou-lan Chern
研究生:李宜璟  Katrina Yi-ching Li

中華民國一百零六年八月
August 2017

CHINESE ABSTRACT

This study used the revised Bloom's taxonomy to analyze the cognitive categories of the English reading comprehension items on the Basic Competence Test for Junior High School Students (2011 to 2013) and the Comprehensive Assessment Program for Junior High School Students (2014 to 2016), and examined how frequently each cognitive category appeared and how examinees performed on the items. The major findings are summarized as follows:

Among the categories in Bloom's taxonomy, Remember, Understand, Apply, and Analyze appeared most often in the BCT and the CAP, whereas Evaluate and Create never appeared, possibly because these two categories are difficult to assess with multiple-choice items. Understand accounted for the majority of the items, and such items were usually easy; most Understand items ranged from easy to average in difficulty, though a few were difficult. The numbers of Apply and Analyze items were kept within a limited range, and most of these items were difficult, with a few exceptions. Over the past six years, the two categories together accounted for 10.5% to 36% of all items, and most of these items were difficult while only a few were easy. Compared with the BCT, the number of Apply items increased markedly in the CAP, which indicates the growing importance of Apply items and echoes the CAP's aim of relating test items to students' daily lives and activating their learning. Likewise, the CAP contained more Apply and Analyze items than the BCT, showing that the CAP covered more items of higher cognitive levels.

Based on these results, it is suggested that teachers design more challenging Understand items as well as easier Apply and Analyze items to help students prepare for the CAP.

Keywords: Comprehensive Assessment Program (CAP), English reading comprehension test, the revised Bloom's taxonomy, PISA, PIRLS, senior high school entrance exams

ABSTRACT

This research aimed to analyze the cognitive categories of the revised Bloom's taxonomy in the English reading comprehension test items of the BCT (Basic Competence Test) and the CAP (Comprehensive Assessment Program) for junior high school students from 2011 to 2016, and to explore examinees' performance at each cognitive level. The major results are as follows:

The study found that Remember, Understand, Apply, and Analyze appeared in both exams, while Evaluate and Create did not appear, possibly because Evaluate and Create are not easily measured with multiple-choice items. Items of Understand were the majority, and they tended to be easy or average in difficulty; only a few of them were difficult. As for items of Apply and Analyze combined, though they accounted for only 20% of the test items, they tended to be difficult. When items on the two exams were compared, it was found that the number of Apply items increased faster in the CAP than in the BCT, which indicated the rising importance of this category and matched the CAP's goal of relating to students' living experiences and activating their learning. Also, the CAP tended to contain more items of Apply and Analyze, which showed that the CAP included more items of higher cognitive levels than the BCT did. It is suggested that teachers design more difficult items of Understand and easier items of Apply and Analyze to prepare students for the CAP in the future.

Keywords: CAP, English reading comprehension test, the revised Bloom's taxonomy, PISA, PIRLS, senior high school entrance exams

ACKNOWLEDGEMENTS

First of all, I would like to thank my advisor, Dr. Chern, who played the most important role in my thesis. Dr. Chern's gentle guidance fostered my interest in academic exploration. Her patience helped me grow over time and become more mature and confident. Our monthly meetings were among the moments I looked forward to most in the past two years. From her, I learned what a great teacher looks like, one who wins her students' respect and hearts and deeply influences them.

Besides, I want to thank all my professors and classmates in the TESOL program. Every professor I met at NTNU had an impact on me. Among them, I would like to thank Professors Hsin-nan Yeh and Wu-chang Chang, whose insightful comments helped make my thesis more persuasive. As for my classmates, our supportive friendship, company, and care for one another made my days at NTNU memorable and sweet.

Also, I am grateful to all my colleagues at Da-dun Junior High School. Without their support, I would not have had the chance to study at NTNU. I thank them for sharing their experiences of pursuing a master's degree, which helped me choose my thesis topic and arrange my time, and for the encouragement that pushed me to seize the time to finish my thesis.

Last but not least, I want to thank my family and my boyfriend, Bo-wen Bao. I am grateful for my family's support, love, and care from the beginning to the end of this journey. Their encouragement and reminders disciplined me to accomplish the task on time. Most importantly, I want to thank my boyfriend, Bo-wen Bao, who accompanied me along the way. His support and love were the strength that motivated me to complete my thesis.

TABLE OF CONTENTS

CHINESE ABSTRACT
ENGLISH ABSTRACT
ACKNOWLEDGEMENTS
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURE

CHAPTER ONE INTRODUCTION
    Background and Motivation
    Purposes of the Study
    Research Questions
    Significance of the Study
    Organization of the Thesis

CHAPTER TWO LITERATURE REVIEW
    Bloom's Taxonomy
        The Original Bloom's Taxonomy
        Discussion of the Original Bloom's Taxonomy
        The Revised Bloom's Taxonomy
        Differences between the Two Versions
        Application of the Revised Bloom's Taxonomy
    PISA and PIRLS
    History of Senior High School Entrance Exams in Taiwan
    Studies Related to Entrance Exams in Taiwan

    Summary

CHAPTER THREE METHODOLOGY
    Data Source
    Data Analysis and Coding Procedure
    Coding Examples
    Coding Results
    Discussion of Interrater Reliability
    Summary

CHAPTER FOUR RESULTS
    Analysis of Cognitive Types in BCT and CAP
    Cognitive Types Measured in BCT from 2011 to 2013
    Cognitive Types Measured in CAP from 2014 to 2016
    Similarities and Differences between the Cognitive Types Measured in the BCT and CAP
    Examinees' Performance on Each Cognitive Category
        Difficult Items of Understand
        Easy Items of Apply
        Easy Items of Analyze
    Summary

CHAPTER FIVE DISCUSSION AND CONCLUSION
    Major Findings
    Conclusions
    Pedagogical Implications

    Limitations of the Research and Suggestions for Future Study

REFERENCES
APPENDIX 1: Sample Coding Sheet
APPENDIX 2: Coding Results
APPENDIX 3: Rubrics

LIST OF TABLES

Table 1. Cognitive Process in the Original Bloom's Taxonomy
Table 2. Revised Bloom's Taxonomy
Table 3. Main and Sub-category of the Cognitive Process Domain
Table 4. Comparison between the Original Bloom's Taxonomy (1956) and the Revised Bloom's Taxonomy (2005)
Table 5. PIRLS Framework
Table 6. Comparison between BCT and CAP
Table 7. Reading Passages and Comprehension Test Items Analyzed
Table 8. Interrater Reliability
Table 9. Cognitive Types in BCT and CAP
Table 10. Cognitive Types Measured in BCT from 2011 to 2013
Table 11. Cognitive Types Measured in CAP from 2014 to 2016
Table 12. Analysis of Cognitive Types and Items' Difficulty Levels in 2012 BCT
Table 13. Analysis of Cognitive Types and Items' Difficulty Levels in 2013 BCT
Table 14. Analysis of Cognitive Types and Items' Difficulty Levels in 2014 CAP
Table 15. Analysis of Cognitive Types and Items' Difficulty Levels in 2015 CAP
Table 16. Analysis of Cognitive Types and Items' Difficulty Levels in 2016 CAP
Table 17. Item Analysis of Question 52 in 2014 CAP
Table 18. Item Analysis of Question 54 in 2014 CAP
Table 19. Item Analysis of Question 34 in 2016 CAP
Table 20. Item Analysis of Question 26 in 2015 CAP
Table 21. Item Analysis of Question 35 in 2015 CAP
Table 22. Item Analysis of Question 22 in 2016 CAP

LIST OF FIGURE

Figure 1. Relationship between the Reading Framework and the Aspect Subscales of the PISA

CHAPTER ONE
INTRODUCTION

Background and purposes of the study are provided in this chapter. The first part introduces the background of the research, and the second part presents the purposes of the study and the research questions. The significance of the study and the organization of the thesis are presented in the last two parts.

Background and Motivation

Senior high school entrance exams are very important for junior high school students and teachers in Taiwan because they affect students' future studies and influence teachers' teaching focus. However, a common phenomenon persists in regular English classrooms in Taiwan. In daily English classes, teachers still have students spend a great deal of time doing mechanical drills and grammar practice. They focus more on language forms than on contextual meanings, let alone training in higher-order thinking, whether under the traditional senior high school entrance exam, the Basic Competence Test for Junior High School Students (BCT), or the current Comprehensive Assessment Program (CAP). Another fact is that many students complain that the CAP is more difficult than the BCT and that mechanical drills do not seem to equip them with enough English ability to cope with the CAP. Besides, based on the K-12 Curriculum Guidelines¹, the cultivation of individual and logical thinking abilities will be emphasized in future English curricula and classes in junior high schools. Hence, it becomes important to use an appropriate framework to probe into the CAP and analyze its cognitive levels in order to examine whether it includes higher thinking levels. If the CAP's testing trends can be analyzed and its cognitive levels examined, teachers can find the direction of classroom instruction and train students' thinking abilities, which not only helps students get higher scores on the CAP but also meets the spirit of the K-12 Curriculum Guidelines.

¹ Related data on the K-12 Curriculum Guidelines were retrieved from the English Education Resource Center website (http://english.tyhs.edu.tw/~english/download/k12%20Curriculum%20Guidelines%2020160314).

Before this study, related studies analyzing English reading comprehension test items were done by Lan (2007) and Chen, H. C. (2009). In Lan's study, she adopted the revised Bloom's taxonomy and identified two main findings. To begin with, Remember, Understand, Apply, and Analyze in the revised Bloom's taxonomy were identified in test items. Moreover, Remember and Understand were the two main categories identified in test items; the categories of Apply and Analyze were not tested much. In Chen, H. C.'s study, she used Nuttall's taxonomy as the framework and identified three points. First of all, "Word Inference from Context," "Recognizing and Interpreting Details," "Recognizing Implication and Making Inferences," and "Recognizing and Understanding the Main Idea" were the reading skills tested in the Scholastic Achievement English Test (SAET) and the Department Required English Test (DRET). Secondly, questions concerning "Recognizing and Interpreting Details" appeared most frequently in both the SAET and the DRET. Finally, local reading skills like bottom-up skills made advanced learners stand out. Though these studies examined university entrance exams closely, few studies concerning the CAP can be found. Hence, it is necessary to examine the CAP with a suitable framework such as the revised Bloom's taxonomy.

When it comes to the CAP, the history of senior high school entrance exams should be introduced first. The traditional senior high school entrance exam was implemented from 1959 to 2000. Because it was held only once a year, students were under great pressure: losing that single chance to perform well meant missing their ideal schools. To relieve students' pressure and give them more opportunities, in 2001 the traditional senior high school entrance exam was replaced by the Basic Competence Test for Junior High School Students (BCT), which was held twice a year.

Besides being held twice a year, the BCT had the following features. First, it was a norm-referenced test, which identified students' performance relative to others in a group. Second, the test adopted scale scores, which were derived from the statistical distribution of the number of correct answers: those who answered more items correctly obtained higher scale scores, and answering harder test items correctly also resulted in higher scale scores. However, there existed a wide grade gap among advanced learners, so any slight loss of points could result in a big difference in senior high school choices, which caused great tension among students, parents, and teachers.

To relieve the pressure brought by the scale scores, in 2014 the Comprehensive Assessment Program for Junior High School Students (CAP) took the place of the BCT and adopted a criterion-referenced standard. Being a criterion-referenced test, the CAP evaluates students' learning outcomes according to pre-determined standards. Instead of giving scores, it assigns categories such as A++, A+, A, B++, B+, B, and C to reduce the tension of over-competition caused by minute differences in scores. In addition, according to the Research Center for Psychological and Educational Testing (RCPET), the CAP is claimed to be of appropriate difficulty and item discrimination, and harder than the BCT, which addresses the BCT's problem of being too easy for students.

The CAP sets different standards for students of levels A, B, and C. Students who are able to integrate complex syntactic and morphological knowledge and understand multiple and long passages might be classified as belonging to level A. Moreover, those who are able to identify main ideas, conclusions, and authors' ideas of different texts might also be identified as belonging to level A (CAP, 2017). As for students who can understand basic morphological and contextual meanings, comprehend complex texts, point out texts' main ideas, and make correct inferences based on the texts, they might be categorized as belonging to level B.

For students of level C, those who are able to comprehend basic conversations, simple texts, and sentences are regarded as belonging to level C. Besides, students who can make simple inferences from text clues might also be viewed as belonging to level C. Based on the above description of students' abilities, the CAP is found to focus on students' abilities to understand various texts, but there is an urgent need for a framework to analyze the test types. The revised Bloom's taxonomy, which has been used in curriculum and test analysis, can probably fill this gap and provide the framework of the cognitive domain needed for the assessment analysis. This study uses the revised Bloom's taxonomy, whose cognitive domain includes Remember, Understand, Apply, Analyze, Evaluate, and Create, to analyze the English reading comprehension test items on the CAP.

Bloom's taxonomy has undergone changes. In 1956, Bloom designed a taxonomy of educational goals for educators to evaluate teaching materials and test results (Halawi, McCarthy & Pires, 2009). The original Bloom's taxonomy was intended to classify learners' thinking levels (Akinde, 2015). Its cognitive domain contained six categories: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. Because these nominal terms could not fully depict the active cognitive processing of readers, Anderson et al. (2001) proposed the revised Bloom's taxonomy. In terminology, the categories were changed from nominal to verbal forms: Knowledge became Remember, Comprehension became Understand, Application became Apply, and Analysis became Analyze (Bezuidenhout & Alt, 2011). The revised Bloom's taxonomy provides a broader framework for assessment (Airasian & Miranda, 2002). Since it has been used in curriculum and test analysis, it can provide the framework of the cognitive domain needed for the assessment analysis. Therefore, it is the framework used in this study to analyze the English test items in the BCT and the CAP.

Purposes of the Study

The purpose of the study is to analyze the reading items in the BCT and the CAP to identify the cognitive categories tested. The revised Bloom's taxonomy is used as the coding framework.

Research Questions

The purposes of the study can be divided into two parts. First, it is designed to identify the cognitive levels of the English reading comprehension tests in the BCT and the CAP from 2011 to 2016. Second, learners' performance on different questions is closely examined to help teachers plan their instructional activities. In sum, the study intends to explore the following two questions:

1. What cognitive types in the revised Bloom's taxonomy were measured in the BCT and the CAP English reading comprehension test items between 2011 and 2016?
2. How did learners perform on reading comprehension items of different cognitive levels on the BCT and the CAP English tests?

Significance of the Study

The significance of the study is two-fold. First of all, the study examines the design of the nationwide tests, the BCT and the CAP, in terms of Bloom's different cognitive levels over the past six years. It aims to compare the changes in the exams to identify the possible cognitive types involved. In short, this research aims to promote a better understanding of the cognitive levels measured in the reading comprehension test items in the BCT and the CAP and then provide further implications for reading instruction in junior high school English classes.

Organization of the Thesis

The structure of the thesis is as follows. Chapter Two reviews the literature concerning the original and the revised Bloom's taxonomy, PISA, PIRLS, the history of senior high school entrance exams, and studies related to entrance exams in Taiwan. Chapter Three describes the methodology, including the data source, data analysis, coding procedure, coding examples, coding results, and a discussion of interrater reliability. Chapter Four presents the research results and examinees' performance on each cognitive category. Finally, Chapter Five discusses the findings and conclusions and elaborates on pedagogical implications, research limitations, and suggestions for further studies.

CHAPTER TWO
LITERATURE REVIEW

This chapter contains four parts. The first part reviews the original Bloom's taxonomy. The next part focuses on the revised Bloom's taxonomy. After that, reviews of the international assessments PISA and PIRLS are presented. The last part covers the history of senior high school entrance exams in Taiwan and studies related to them.

The Original Bloom's Taxonomy

In 1956, Bloom, Engelhart, Furst, Hill, and Krathwohl designed "Bloom's Taxonomy of Educational Objectives," which was the original Bloom's taxonomy. They designed the taxonomy of educational goals for educators to evaluate teaching materials and test results (Halawi, McCarthy & Pires, 2009). The goals of education were divided into two parts: "knowledge" and "intellectual abilities and skills." "Knowledge" contained one category, and "intellectual abilities and skills" were divided into five categories. Therefore, there were six categories in total: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation.

The original Bloom's taxonomy, a multi-tiered model, classifies learners' cognitive thinking levels (Akinde, 2015). The framework aims to describe educational goals according to learners' behaviours, which indicate the achieved educational objectives (Andrich, 2002). As for the cognitive domain, Bloom's original taxonomy aimed to provide systematic cognitive categories (Halawi, McCarthy & Pires, 2009). These cognitive levels are depicted as a stairway which helps learners rise to higher thinking levels. The six categories of the original Bloom's taxonomy are aligned hierarchically, from the basic cognitive processing to the advanced evaluation, and from the concrete knowledge form to the abstract thinking level.

From Table 1, the Knowledge level refers to retrieving an idea or item with the help of recall; that is, readers locate and extract certain information. The Comprehension level means readers can understand a concept or theory through an example or paraphrase and grasp the meaning of the passage. The Application level has to do with the practical use of a concept, or with problem solving based on what readers have learned, under other circumstances. The Analysis level stresses the interrelationship of variables and requires the identification of causes and effects; readers examine passages in detail and distinguish the connections between parts. The Synthesis level requires the integration of different concepts to create a new one; in other words, readers combine parts to make a brand-new idea. The Evaluation level pays attention to value or reflection against current standards, so readers judge and assess pieces of information. The original Bloom's taxonomy, which is summarized in Table 1, plays an important role in educational assessment (Andrich, 2002).

Table 1. Cognitive Process in the Original Bloom's Taxonomy (adapted from Hanna, 2007)

Knowledge: retrieving of information or facts; knowledge of time and occurrences
Comprehension: understanding of facts and building up meanings from texts; comprehending meanings; explaining information; inferring and predicting results
Application: using facts in different situations; using required skills under different circumstances; problem-solving by using needed strategies; recognizing meanings
Analysis: identifying the relationship between various parts; organizing different parts; identifying components; generalizing from original ideas
Synthesis: using old concepts to invent new ones; predicting and finding results; drawing related information
Evaluation: discriminating between facts and valuing them; judging values, concepts or theories; proving values of facts; deciding based on reasons

Discussion of the Original Bloom's Taxonomy

The original taxonomy needed modification. First, since there was a need to add learning objectives to the taxonomy, knowledge had to be transformed to contain more contexts, and more detailed cognitive levels had to be designed for deeper cognitive processing (Adams, 2015). Second, a single "Knowledge" category could not fully depict what instructors teach. Therefore, the knowledge category was taken out and further categorized as factual knowledge, conceptual knowledge, procedural knowledge, and metacognitive knowledge. Third, the old version could not describe learning behaviors in detail. Since new learning theories tend to stress the importance of active, cognitive, and constructive learning, using nouns rather than verbs could not indicate the active learning process (Chen, F. X., 2009). Therefore, the original Bloom's categories were transformed into verbs: Remember, Understand, Apply, Analyze, Evaluate, and Create in Anderson and Krathwohl's (2009) model.

The Revised Bloom's Taxonomy

In 2001, Anderson and Krathwohl redesigned the original taxonomy and produced the revised Bloom's taxonomy. The new taxonomy has undergone changes in terminology, hierarchy, and structure. In terminology, the categories vary from noun to verb forms to reflect their active processing. For instance, Knowledge has become Remember, Comprehension has become Understand, Application has become Apply, and Analysis has become Analyze (Bezuidenhout & Alt, 2011). The revised Bloom's taxonomy separates the structure into two parts: the knowledge domain and the cognitive processing domain. On the one hand, there are four sub-categories under the knowledge domain: Factual Knowledge, Conceptual Knowledge, Procedural Knowledge, and Metacognitive Knowledge. On the other hand, the cognitive processing domain adopts six verb forms as its sub-categories: Remember, Understand, Apply, Analyze, Evaluate, and Create.

The Knowledge Domain

Four sub-types belong to the knowledge domain. First, factual knowledge denotes the facts that students may acquire to attain a discipline or to pick up problem-solving skills. Second, conceptual knowledge deals with thinking processes and the interrelationships among different ideas in humans' minds. Third, procedural knowledge indicates methods of doing something or ways of acquiring certain techniques. In general, factual and conceptual knowledge denote knowledge of "what," and procedural knowledge represents knowledge of "how." Finally, metacognitive and self-regulatory processes, which are regarded as knowledge of cognitive strategies, help students monitor and control their noticing abilities and learning paces. The metacognitive and self-regulatory processes are revealed in numerous tasks such as planning and generating (Pintrich, 2002).

From Table 2, we can see that the cognitive process dimension deals with active processing, such as remembering, understanding, applying, analyzing, evaluating, and creating. The knowledge dimension refers to the content that readers need to process, which may include factual knowledge, conceptual knowledge, procedural knowledge, and metacognitive knowledge.

Table 2. The Revised Bloom's Taxonomy (adapted from Anderson et al., 2001)

Knowledge Dimension by Cognitive Process Dimension (Remember, Understand, Apply, Analyze, Evaluate, Create):
A. Factual: recall facts, comprehend facts, apply facts, analyze facts, evaluate facts, create facts
B. Conceptual/Principle: recall concepts, comprehend concepts, apply concepts
C. Procedural: recall procedures, comprehend procedures, apply procedures
D. Metacognitive: recall metacognitive skills, comprehend metacognitive skills, apply metacognitive skills, analyze metacognitive skills, evaluate metacognitive skills, create metacognitive skills
The table also marks a progression from knowledge to skill to ability across the cognitive process dimension.

In Table 3, the main categories and subcategories of the cognitive process domain in the revised Bloom's taxonomy are shown. Further explanations are provided below.

The Cognitive Process Domain

Table 3. Main and Sub-category of the Cognitive Process Domain (adapted from Jideani, 2012)

Remember: Recall, Recognize, Retrieve, Identify
Understand: Interpret, Exemplify, Summarize, Infer, Compare
Apply: Execute, Implement, Carry out, Use
Analyze: Organize, Differentiate, Attribute, Deconstruct, Integrate
Evaluate: Critique, Check, Judge, Detect, Coordinate
Create: Generate, Plan, Produce, Hypothesize, Construct

Remember. Remember involves retrieving knowledge from long-term memory. It is the most basic cognitive process level. Its subcategories include recognizing and recalling. Recognizing, also called identifying, involves locating knowledge in long-term memory and comparing it with current information. Recalling, also called retrieving, involves recapturing knowledge from memory.

Understand. Understand refers to illustrating or constructing meaning by giving examples or generalizations. Its subcategories include interpreting, exemplifying, classifying, summarizing, inferring, comparing, and explaining.

Apply. Apply implies carrying out a procedure in a certain situation. Its subcategories consist of executing and implementing. Executing, also called carrying out, happens when students adopt a method to do a task. Implementing, also called using, takes place when learners use more methods and ways to deal with a task.

Analyze. Analyze means dividing the whole into pieces and getting to know the interaction and relationship among them and how they affect the whole structure.

Its subcategories include differentiating, organizing, and attributing. Differentiating, also called distinguishing, takes place when students pick out relevant messages. Organizing, also called structuring, refers to how elements are put together. Attributing, also called deconstructing, happens when students can pinpoint the underlying threads, such as viewpoints or values, in the materials.

Evaluate. Evaluate involves using criteria to give a critique or make a judgement. The criteria may be based on effectiveness, efficiency, or consistency. Its subcategories include checking and critiquing. Checking, also called coordinating or detecting, means detecting the internal consistency within a product or process. Critiquing, also called judging, involves judging the consistency of a product.

Create. Create calls for putting different elements together in a process and then generating a new product. If a teacher sets a goal of Create, students are expected to produce a brand-new product. Its subcategories include generating, planning, and producing. Generating, also called hypothesizing, requires learners to provide an alternative option for a hidden problem to see if a hypothesis works out. Planning, also called designing, demands that students make plans by dividing a plan into smaller steps to accomplish a task. Producing, also called constructing, means asking learners to either invent a product or execute a plan to solve problems.

Differences Between the Original and the Revised Taxonomy

The revised taxonomy reveals a dual viewpoint of learning and cognition and gives rise to clearer assessment and a stronger connection between assessment and instruction (Airasian & Miranda, 2002). Anderson (1999) listed the major differences between the original and the revised Bloom's taxonomy as follows.

(1) The revised Bloom's taxonomy presents two dimensions instead of one. The two dimensions act as a hierarchy of more complicated mental thinking or cognitive processing behaviors which represent learning outcomes.

(2) There is a wider expansion of the knowledge dimension. First, knowing is different from knowledge: knowing is a person's unique and personal sense or belief, whereas knowledge consists of common and consensual beliefs of the public. Second, besides academic knowledge, cultural/social and motivational/strategic knowledge are added to the original knowledge categories.

(3) Cognitive processes are conceptualized in the revised taxonomy. Cognitive processes concern how knowledge is obtained and constructed in one's mind. The differences lie in the relationships among the cognitive processes, the generalizability of cognitive processes, the contextualized nature of cognitive processes, and the role of cognitive processes in problem solving (Anderson, 1999).

(4) The revised taxonomy shows more complexity and a clearer hierarchical structure. In the original taxonomy, Knowledge is the least complex process whereas Evaluation is the most complicated. In the revised taxonomy, it becomes more evident that there is increasing complexity and a cumulative hierarchical structure.

(5) Cognitive processes are contextualized in the revised version. The revised taxonomy points out a wider range of contextual factors involved in cognitive processing; for example, knowledge, attitudes, abilities, motivation, and conditions are taken into account.

(6) Problem-solving plays a role in the revised version. Intellectual abilities and skills are related to problem-solving abilities, and metacognition in problem-solving is also essential in cognitive processes.

(7) Regarding testing and other forms of assessment, the revised taxonomy provides implications for curriculum, instruction, testing, and policy.

(8) The new structure of assessment may be presented in different dimensional tools. The scoring or evaluation rules and procedures may be embedded in a scoring key, a scoring rubric, rating scales, checklists, or a computer algorithm (Anderson, 1999).

(9) The revised taxonomy brings out diversity in educational assessment. It offers a diverse but appropriate criterion-referenced tool for testing and assessment (Anderson, 1999).

Table 4. Comparison between the Original Bloom's Taxonomy (1956) and the Revised Bloom's Taxonomy (2005)

The Original Bloom's Taxonomy (1956):
Knowledge: define, duplicate, label, list, memorize, name, order, recognize, recall, reproduce, state
Comprehension: classify, describe, discuss, explain, express, identify, indicate, locate, recognize, report, restate, review, select, translate
Application: apply, choose, demonstrate, employ, illustrate, interpret, practice, schedule, sketch, solve, use, write
Analysis: analyze, appraise, calculate, categorize, compare, criticize, discriminate, distinguish, examine, experiment, explain
Synthesis: rearrange, assemble, collect, compose, create, design, develop, formulate, manage, organize, plan, propose, set up
Evaluation: appraise, argue, assess, choose, compare, defend, estimate, explain, judge, predict, rate, score, select, support, value, evaluate

The Revised Bloom's Taxonomy:
Remember: retrieve knowledge from long-term memory, recognize, recall, locate, identify
Understand: construct meaning, clarify, paraphrase, represent, translate, illustrate, provide examples, classify, categorize, summarize, infer a logical conclusion (such as from examples given), predict, match similar ideas, explain, compare/contrast, construct models (e.g., cause-effect)
Apply: carry out or use a procedure in a given situation; carry out (apply to a familiar task) or use (apply to an unfamiliar task)
Analyze: break into constituent parts, determine how parts relate, differentiate between relevant and irrelevant, distinguish, focus, select, organize, outline, find coherence (e.g., for bias or point of view)
Evaluate: judge based on criteria, check, detect inconsistencies or fallacies, judge, critique
Create: combine elements to form a coherent whole, reorganize elements into new patterns/structures, generate, hypothesize, design, plan, construct, produce for a specific purpose

(25) Application of Bloom’s Taxonomy Using the revised taxonomy gives us a tool to pay attention to certain specific cognitive behaviours by a set of learning plans. The plan was to translate the educational objectives immediately in terms of the behaviours that would provide the manifest evidence that the objective was achieved (Andrich, 2002). Researchers and test writers often believe that reading skills can be taught by instruction and tested out by assessment. However, researchers found little consensus of terminologies in commonly-seen taxonomies. Therefore, Bloom’s taxonomy comes to fill in the gap to solve the confusion of defining different reading skills (Beatty, 1975). In 2001, Anderson and Krathwohl (2001) revised the original Taxonomy (1956) and divided the framework into two dimensions: the knowledge dimension and the cognitive process dimension. Experts have been using both the original and revised taxonomies specifically in assessment and test evaluation. Due to its potential impacts on curriculum and instruction, the taxonomy table has been used for the analysis of state-wide assessments (Airasian & Miranda, 2002). The taxonomy can be adopted to make necessary adjustments in curriculum and instruction to improve the effectiveness of the entire educational system (Airasian & Miranda, 2002). The taxonomy provides us with a way to look not only at the cognitive depth of an activity but also at how those depths interact with different types of knowledge (Green, 2010). The taxonomy provides a powerful tool for checking whether the problems we encounter are aligned with our teaching techniques and our goals (Green, 2010). That is, it offers a framework for us to see if we give our students the right metacognitive skills or demonstrate the appropriate thinking cognitive processes for the problem-solving. As for studies concerning the revised Bloom’s taxonomy, Lan (2007) studied the cognitive processing categories in the revised Bloom’s taxonomy on the reading 17.

She identified three main findings. First, Remember, Understand, Apply, and Analyze in the revised Bloom's taxonomy appeared in the test items. Second, Remember and Understand were the two main categories identified in the test items. The Apply category appeared most frequently in the SAET, while in the DRET, readers' inferring abilities (a type belonging to Understand) were tested. Third, the Understand category was important in the DRET, and test takers did poorly on items that required inferring detailed information. Finally, she asserted that teachers should give more instruction on the four categories in the revised Bloom's taxonomy, Remember, Understand, Apply, and Analyze, and especially on the inferring ability under the Understand category, to prepare their students for the university entrance exams.

International Assessment: PISA

One of the well-known international assessments is the Program for International Student Assessment (PISA). It assesses the reading, mathematics, science, and problem-solving abilities of 15-year-old students. It started in 2000 and is held every three years; the Organisation for Economic Co-operation and Development (OECD) is the organization responsible for the test. The latest PISA was held in 2015. The test aims to evaluate 15-year-old students' abilities to use what they have learned, which reflects the effectiveness of compulsory education in different nations. According to PISA 2000, reading literacy is defined as "understanding, using and reflecting on written texts, in order to achieve one's goals, to develop one's knowledge and potential, and to participate in society" (OECD, 2013, p. 61). PISA proposes the following framework.

Figure 1. Relationship between the Reading Framework and the Aspect Subscales of the PISA (from the PISA 2009 Assessment Framework). The framework divides reading literacy into using content primarily within the text (access and retrieve: retrieve information; integrate and interpret: form a broad understanding, develop an interpretation) and drawing primarily upon outside knowledge (reflect and evaluate: reflect on and evaluate the content of the text, reflect on and evaluate the form of the text).

As seen in Figure 1, the reading framework of PISA can be divided into two parts: using content primarily from within the text and drawing primarily upon outside knowledge. In the first part, readers access, retrieve, integrate, and interpret information. Retrieving information is similar to Remember in the revised Bloom's taxonomy: generally speaking, readers locate reading details to get specific information from the texts. Next, readers integrate pieces of information from the text to form a global understanding. Then, readers reflect on and examine the reading texts. These processes are metacognitive in nature, and they require readers to check their comprehension again and fine-tune their skills. When this framework is compared with the revised Bloom's taxonomy, forming a broad understanding is similar to Understand, and developing an interpretation is similar to Analyze. In the second part, readers reflect on, examine, and evaluate both the content and the form of the reading passages. Reflecting on and evaluating the content and form of the text is like Evaluate in the revised Bloom's taxonomy. In this way, we can see that PISA matches the cognitive levels in the revised Bloom's taxonomy to some extent.

(28) revised Bloom’s taxonomy to some extent. Another International Assessment: PIRLS Another famous international assessment for teenagers is Progress International Reading Literacy Study (PIRLS). This exam assesses the reading abilities of the 10-year-old readers in different countries. International Association for the Evaluation of Educational Achievement (IEA) is the organization responsible for the test. The test was first introduced in 2001 and is held every 5 years. The latest exam was held in 2016. It aims to evaluate trends of reading and digs into young readers’ reading experiences at home or at school. There are two main purposes of PIRLS: reading for literary experience and reading to acquire and use information. In PIRLS’ framework, processing can be divided into two parts: direct processing and explaining processing. In the direct processing, readers may either retrieve specific ideas or make inference. When this framework is compared with Bloom’s revised taxonomy, retrieving information is similar to Remember in the revised Bloom’s taxonomy and making inference is similar to Understand and Analyze. Interpreting is also similar to Understand, but examining is similar to Evaluate in the revised Bloom’s taxonomy due to the judgment. Hence, we can find that in the cognitive reading process, PIRLS may test students’ remembering, understanding, and evaluating abilities which also exist in the revised Bloom’s taxonomy. See Table 5 for a summary of this framework.. 20.

Table 5. PIRLS Framework

Direct processing
    Retrieve specific ideas: Readers may locate certain information or find the topic sentence or main ideas.
    Make inferences: Readers may describe the relationships between characters, conclude the main points of articles, or find out what pronouns refer to.
Explaining processing
    Interpret and integrate information: Readers may compare and contrast obtained information, infer from the plots, and explain the applicability of text messages in reality.
    Examine and evaluate text features: Readers may think critically based on text messages, find out the author's viewpoint, and evaluate texts.

History of Senior High School Entrance Exams in Taiwan

After reviewing two well-known international exams for young learners, we turn to senior high school entrance exams in Taiwan. The senior high school entrance exam is important to junior high school students and teachers in Taiwan because it affects students' future studies and teachers' teaching directions. The traditional senior high school entrance exam was in existence from 1959 to 2000. Because it was held once a year, students were under great pressure if they lost their only chance to get into their ideal schools. To relieve students' pressure and give them more opportunities, the traditional senior high school entrance exam was replaced by the Basic Competence Test for Junior High School Students (BCT) in 2001, which was held twice a year. Besides being held twice a year, the BCT was a norm-referenced test, which identified students' relative performance in a group, and a scale score was used. Yet one point could make a big difference, and the test might not be able to distinguish advanced students. This caused much criticism; hence, the BCT was replaced by the CAP.

In 2014, the CAP took the place of the BCT and has since been the current senior high school entrance exam in Taiwan. In the K-12 compulsory education objectives, the CAP claims to achieve four purposes. First, it helps to relieve students' learning pressure and activate their learning.

Second, it evaluates students' proficiency to see whether they reach certain learning levels. Third, it serves as a measure of learning outcomes to support counseling for suitable placement. Fourth, it provides academic information for teachers to teach students in accordance with their aptitudes. Students' CAP grades may serve as an indicator for further counseling toward senior high schools, vocational schools, or junior colleges.

Besides, the CAP has the following features. First, it is a criterion-referenced test, which evaluates students' learning results according to these standards: A++, A+, A, B++, B+, B, and C. With their CAP grades, students may go to appropriate senior high schools, vocational schools, or junior colleges. Second, the assessment is of appropriate difficulty and item discrimination. The CAP aims to evaluate students' mastery of knowledge and acts as an indicator of the development gap between cities and the countryside for further adjustment.

According to the Research Center for Psychological and Educational Testing (RCPET), the CAP stresses multiple reading abilities, listed as follows. First, advanced students need to be able to integrate complex syntactic and morphological knowledge and understand multiple and long passages. Second, they must be able to point out the main ideas, conclusions, and authors' ideas of different texts. Third, they should be able to integrate different information, such as text structures, explanations, or examples, to make further inferences or comments. The CAP focuses on students' abilities of understanding, but there is a lack of a framework to analyze what abilities the test items in the BCT and the CAP measured. Therefore, a framework that helps teachers analyze test items is important.

From the RCPET official website (see Table 6), we find that the BCT and the CAP have two things in common. First, both serve as a standard for admission to a new school. Second, their participants are both ninth graders in junior high schools.

However, there are seven points that differentiate them. First, in the CAP, the test score is just one of the indicators for admission to a new school, whereas in the BCT it was the only standard. Second, the CAP can serve as an indicator for further remedial instruction in new schools, a function the BCT did not have. Third, the CAP is held only once a year, while the BCT was usually held twice a year. Fourth, English listening is tested in the CAP but was not tested in the BCT. Fifth, the CAP is criterion-referenced, so students may either pass or fail the exam, while the BCT was norm-referenced, so a student's academic performance was shown by his or her relative position among all ninth graders taking the exam. Sixth, the CAP is of moderate difficulty, which is not easy for half of the test takers, whereas the BCT was easier than the CAP. Seventh, the CAP uses the following grading standards: A++, A+, A, B++, B+, B, and C, while the BCT only provided scale scores.

Table 6. Comparison between BCT and CAP (from RCPET)²

Function:
    BCT: 1. The BCT served as the standard for entering a senior high school, vocational school, or junior college. 2. Exam scores were used for the application for a school.
    CAP: 1. The CAP helps teachers, parents, and students know students' learning results. 2. Exam scores are one of the indicators for the application for a school. 3. The CAP's results serve as an indicator of whether students need remedial instruction in new schools.
Participant:
    BCT: All ninth graders
    CAP: All ninth graders
Exam Time:
    BCT: Twice a year from 2001 to 2011; once in 2012
    CAP: Once a year, in May, from 2014
Subject:
    BCT: Chinese, English, math, social science, natural science, and Chinese composition
    CAP: Chinese, English (including listening), math (including written calculation), social science, natural science, and Chinese composition
Item Type:
    BCT: Multiple choice
    CAP: Multiple and non-multiple choice
Test Difficulty:
    BCT: Moderate, but easy
    CAP: Moderate difficulty
Criterion:
    BCT: Norm-referenced
    CAP: Criterion-referenced
Category:
    BCT: None
    CAP: Proficient: A++, A+, A; Basic: B++, B+, B; Below basic: C

² The comparison between the BCT and the CAP is from the RCPET official website (http://cap.ntnu.edu.tw/background.html).

Studies Related to Entrance Exams in Taiwan

Yang (2007) analyzed the English reading comprehension tests in the BCT from 2002 to 2004. He used Mo's (1987) and Johnson's (2004) reading strategy categorizations, probed into different question types in the BCT, and conducted an experimental study. He identified question types in the BCT, including getting the main ideas, finding details, deciding the contextual meanings, obtaining implications, drawing inferences, and getting conclusions. Besides, he also found that about 50% of the questions were about getting the details and that literal skills were more common than critical skills.

Finally, he found that the instruction of reading strategies was useful and helpful. Though Yang (2007) used a different framework, his study uncovered the question types in the BCT and the importance of literal skills and reading instruction.

Since the CAP took the place of the BCT, it is also worth comparing the CAP with the BCT. Liu (2016) analyzed and compared BCT and CAP English test trends from 2012 to 2015. She analyzed the multiple-choice questions in the BCT and the CAP and studied the test trends from the 2012-2013 BCT to the 2014-2015 CAP. She pointed out that readers' strategies for decoding and comprehending text meanings were more important than word knowledge. She also found that the abilities to understand the context and to understand long reading passages were vital in the CAP.

Another related study was Wang's (2015) investigation of the washback effect of the CAP on ninth graders. She found that gender, experiences of taking other English tests such as the GEPT and TOEIC, and parents' social and economic status and abilities had an impact on students' performance on the CAP. Her study revealed other factors involved in students' academic results on the CAP.

Still another study used picture books to foster ninth graders' reading comprehension abilities for the CAP (Chen, L. H., 2016). Chen used reading skills and picture books to develop students' comprehension abilities. Her instructional intervention went through stages such as overviewing and predicting the texts, understanding vocabulary, examining predictions, and conducting extended activities. Her instruction improved students' abilities to use context to comprehend vocabulary, to get important clues from context, to compare various information and viewpoints, and to use illustrations for comprehension. Finally, she pointed out the existence of individual differences in reading skills. Chen's study revealed that common reading strategies helped readers with their reading comprehension.

The above studies related to testing in Taiwan revealed four main points. First, decoding and comprehending from context are important reading skills. Second, gender, test-taking experiences, and parents' social and economic status can affect students' performance on the CAP. Third, stimuli such as picture books might enhance students' reading abilities. Fourth, individual differences still exist when it comes to the need for reading skills.

One study using the revised Bloom's taxonomy was done by Lan (2007). In her study, she adopted qualitative and quantitative methods to analyze the cognitive levels and knowledge types of the English reading comprehension test items in the Scholastic Achievement English Test (SAET) and the Department Required English Test (DRET). She found that Remember, Understand, Apply, and Analyze were measured; among them, Remember and Understand were the majority. In addition, she found that inferring abilities distinguished advanced learners from bottom scorers. Therefore, she suggested that students develop the above four skills, inferring abilities in particular, to pass the two entrance exams.

Another study, which used Nuttall's taxonomy to examine university entrance exams in Taiwan, was done by Chen, H. C. (2009). In her study, both qualitative and quantitative methods were adopted, and she identified three points. First, "Word Inference from Context," "Recognizing and Interpreting Details," "Recognizing Implication and Making Inferences," and "Recognizing and Understanding the Main Idea" were the reading skills tested in the Scholastic Achievement English Test (SAET) and the Department Required English Test (DRET). Second, questions concerning "Recognizing and Interpreting Details" appeared most frequently in both the SAET and the DRET. Third, local reading skills like bottom-up skills made advanced learners stand out.

Having reviewed the studies related to entrance exams in Taiwan, a framework that aims to analyze test items is important.

Hence, Bloom's taxonomy, which has been used for curriculum and testing assessment, may be a helpful tool for analyzing the CAP.

Summary

This chapter has reviewed the literature on the original and the revised Bloom's taxonomy, international assessments such as PISA and PIRLS, the history of senior high school entrance exams in Taiwan, and studies related to entrance exams in Taiwan. Although Liu (2016) analyzed and compared the BCT and CAP English test trends from 2012 to 2015, and Lan (2007) adopted the revised Bloom's taxonomy to analyze university entrance exams, there are few studies applying the revised Bloom's taxonomy to analyze the English reading comprehension tests of the CAP. Due to the lack of related studies, the present study was conducted to fill this gap.

CHAPTER THREE
METHODOLOGY

In this study, the reading comprehension items of the BCT from 2011 to 2013 and those of the CAP from 2014 to 2016 were collected and analyzed using the revised Bloom's taxonomy. In this chapter, the data source, data analysis, coding procedure, coding examples, coding results, and interrater reliability are discussed in detail.

Data Source

The reading passages included for analysis in this study were from the BCT from 2011 to 2013 and the CAP from 2014 to 2016. For the BCT, each test from 2011 to 2013 contained 10 passages, with 19, 19, and 21 test items respectively. For the CAP, 9 passages with 21 test items were collected from the 2014 test, 8 passages with 22 test items from the 2015 test, and 7 passages with 19 test items from the 2016 test (see Table 7). Generally speaking, each BCT test contained 10 reading passages and each passage included 2 to 4 test items, while each CAP test contained 7 to 9 reading passages, each with 2 to 4 test items. All reading test items were analyzed according to the cognitive levels in the revised Bloom's taxonomy. Examinees' passing rates were collected from the Research Center for Psychological and Educational Testing (RCPET).

Table 7. Reading Passages and Comprehension Test Items Analyzed

Exam  Year  Number of Reading Passages  Number of Test Items
BCT   2011  10                          19
BCT   2012  10                          19
BCT   2013  10                          21
CAP   2014  9                           21
CAP   2015  8                           22
CAP   2016  7                           19
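To make the tallying and agreement check described in the Data Analysis and Coding Procedure section below more concrete, the following is a minimal sketch in Python. The item numbers and category labels are invented for illustration (only the category names come from the revised Bloom's taxonomy), and it shows how category frequencies and simple interrater agreement figures, such as percentage agreement and Cohen's kappa, might be computed; it is not the study's actual coding script.

```python
from collections import Counter

# Hypothetical codings of the same items by two raters (invented for illustration).
# Each entry: item id -> cognitive category in the revised Bloom's taxonomy.
rater_a = {1: "Understand", 2: "Remember", 3: "Understand", 4: "Apply", 5: "Analyze"}
rater_b = {1: "Understand", 2: "Remember", 3: "Apply", 4: "Apply", 5: "Analyze"}

def category_frequencies(coding):
    """Count how often each cognitive category was assigned."""
    return Counter(coding.values())

def percent_agreement(a, b):
    """Proportion of items on which the two raters assigned the same category."""
    items = sorted(set(a) & set(b))
    agreed = sum(1 for i in items if a[i] == b[i])
    return agreed / len(items)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    items = sorted(set(a) & set(b))
    n = len(items)
    po = percent_agreement(a, b)
    # Expected chance agreement from each rater's marginal category proportions.
    freq_a = category_frequencies({i: a[i] for i in items})
    freq_b = category_frequencies({i: b[i] for i in items})
    categories = set(freq_a) | set(freq_b)
    pe = sum((freq_a.get(c, 0) / n) * (freq_b.get(c, 0) / n) for c in categories)
    return (po - pe) / (1 - pe)

if __name__ == "__main__":
    print("Category frequencies (rater A):", dict(category_frequencies(rater_a)))
    print("Percent agreement:", percent_agreement(rater_a, rater_b))
    print("Cohen's kappa:", round(cohens_kappa(rater_a, rater_b), 3))
```

In the study itself, disagreements were resolved through discussion until the raters reached a consensus, so a figure such as kappa would complement rather than replace that procedure.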

Data Analysis and Coding Procedure

The purposes of the analysis are twofold. First, it aims to categorize all reading comprehension test items based on the revised Bloom's taxonomy and to count the test questions that match each of the six levels in the revised taxonomy. Second, it examines the difficulty levels of the items in the English tests of the BCT and the CAP.

There were two phases of the data analysis: the coding process, and the analysis of difficulty levels and passing rates. In the coding process, two experienced junior high school English teachers and the researcher served as the raters. The revised Bloom's taxonomy, the sample coding sheet (Appendix 1), and the reading comprehension passages from 2011 to 2016 were given to the raters for coding. A meeting was held for the raters to confirm that they agreed on the revised Bloom's classification of the test items. After the coding was done, interrater reliability was checked by comparing the raters' codings. If their coding results were not consistently categorized, the coders read the principles of classification again and discussed until they reached a consensus. Afterward, the coded data were compared with the test items' difficulty level data from the RCPET, and the items' passing rates were provided to indicate test takers' performance.

Coding Examples

Coding examples were taken from Lan's (2009) and Li's (2009) studies.³

³ Examples of Evaluate and Create were provided from Li's (2009) study: A content analysis of the ability of high-order-thinking in social studies workbooks in primary school: taking the cognitive process dimension in a revision of Bloom's taxonomy of educational objectives as analysis framework.

1. Example of "Remember" (from Lan, Q42 of 2004 SAET)

A sense of humor is just one of the many things shared by Alfred and Anthony Melillo, 64-year-old twin brothers from East Haven who made history in February 2002. On Christmas Eve, 1992, Anthony had a heart transplant from a 21-year-old donor. Two days before Valentine's Day in 2002, Alfred received a 19-year-old heart, marking the first time on record that twin adults each received heart transplants. "I'm 15 minutes older than him, but now I'm younger because of my heart and I'm not going to respect him," Alfred said with a grin, pointing to his brother while talking to a roomful of reporters, who laughed frequently at their jokes. While the twins knew that genetics might have played a role in their condition, they recognized that their eating habits might have also contributed to their heart problems.

Coding Examples

The coding examples below were taken from Lan's (2009) and Li's (2009) studies. (The examples of Evaluate and Create are from Li (2009): A content analysis of the ability of high-order thinking in social studies workbooks in primary school: taking the cognitive process dimension in a revision of Bloom's taxonomy of educational objectives as analysis framework.)

1. Example of "Remember" (From Lan, Q42 of 2004 SAET)

A sense of humor is just one of the many things shared by Alfred and Anthony Melillo, 64-year-old twin brothers from East Haven who made history in February 2002. On Christmas Eve, 1992, Anthony had a heart transplant from a 21-year-old donor. Two days before Valentine's Day in 2002, Alfred received a 19-year-old heart, marking the first time on record that twin adults each received heart transplants. "I'm 15 minutes older than him, but now I'm younger because of my heart and I'm not going to respect him," Alfred said with a grin, pointing to his brother while talking to a roomful of reporters, who laughed frequently at their jokes. While the twins knew that genetics might have played a role in their condition, they recognized that their eating habits might have also contributed to their heart problems.

What did Alfred and Anthony have in common?
(A) Lifespan.
(B) Career goals.
*(C) A sense of humor.
(D) Love for bicycling.

Explanation: The first sentence states directly that "a sense of humor is just one of the many things shared by Alfred and Anthony." Because the answer is stated explicitly, without any change or word transformation, the item belongs to Remember.

2. Example of "Understand" (From Lan, Q47 of 2004 SAET)

The British Museum has not signed the declaration, but says it fully supports it. Over the recent years, it has faced growing pressure to hand back the Elgin Marbles, sculptures taken from the Parthenon in Athens, Greece, in the 19th century. But the British Museum has said that the Museum is the best possible place for them. "They must remain here if the museum is to continue to achieve its aim, which is to show the world to the world," said the director of the museum.

What does "the world" mean in "show the world to the world"?
(A) The global village.
(B) The leading museums.
*(C) The ancient civilization.
(D) The international public.

Explanation: From this paragraph, we know that the British Museum was under pressure to return its sculptures to their countries of origin. Yet it asserted that keeping the sculptures in the British Museum was the best choice, so that the public could see these ancient cultures and civilizations. In other words, the museum claimed to show the world, meaning the ancient civilization, to the world. Since the answer must be inferred from a paraphrase rather than located verbatim, the item belongs to Understand.

3. Example of "Apply" (From Lan, Q41 of 2002 SAET)

They set out from Japan on May 17, 2001. They had rowed nearly 5,500 miles when their boat was hit by a fishing ship on September 17, 2001.

How long had Tim and Dom been at sea when their boat was hit by a fishing boat?
(A) One month.
(B) Two months.
(C) Three months.
*(D) Four months.

Explanation: The passage states that they started their journey on May 17, 2001 and that their boat was hit on September 17, 2001. If students apply this written information and do the calculation, they will know that Tim and Dom had been at sea for four months. Because the item requires using the given information in a simple procedure, it belongs to Apply.

4. Example of "Analyze" (From Lan, Q50 of 2006 DRET)

Native Americans could not understand the white man's war on the wolf. The Lakota, Blackfeet, and Shoshone, among other tribes, considered the wolf their spiritual brother. They respected the animals' endurance and hunting ability, and warriors prayed to hunt like them. They draped themselves in wolf skins and paws, hoping they could acquire the wolf's hunting skills of stealth, courage, and stamina. Plains Indians wore wolf-skin disguises on raiding parties. Elite Comanche warriors were called wolves.

The white settlers' war on the wolf raged on. Western ranchers continued to claim that thousands of cattle were killed every year by wolves. In 1884, Montana created its first wolf bounty: one dollar for every dead wolf, which increased to eight dollars in 1893. Over a period of thirty-five years, more than eighty thousand wolf carcasses were submitted for bounty payments in Montana. Moreover, the government even provided free poison. Finally, in 1914, ranchers persuaded the United States Congress to provide funds to exterminate wolves on public lands.

The last wolves in the American West died hard. No place was safe, not even the nation's first national park, Yellowstone. The park was created in 1872, and from its very beginning, poisoned carcasses were set out to kill wolves. Nearly 140 wolves were killed by park rangers in Yellowstone from 1914 to 1926. In October 1926, two wolf cubs were trapped near a bison carcass. They were the last animals killed in the park's wolf control programs. Ranchers had won the war against the wolf. Only in the northern woods of Wisconsin, Minnesota, and Michigan could the howl of native gray wolves be heard. The vast lands of the American West fell silent. The country had lost its greatest predator.

This passage was most likely written by someone who _____________.
(A) liked hunting wild animals
(B) made laws against the gray wolf
*(C) advocated the protection of the gray wolf
(D) appreciated the gray wolf's hunting skills

Explanation: Items of Analyze involve analyzing the author's viewpoints or values. Readers must read beyond the lines to uncover the author's implied meaning and recognize his or her true stance, which is abstract and implicit. From the closing sentences of the passage, we know that the wolves in America were nearly wiped out and could be heard in only a few places; the silence of the vast lands seems to mourn the loss of the wolves. The hidden message the author tries to convey is that human beings should stand up to keep the wolves from extinction. Because the item requires readers to recognize the author's viewpoint and values in the passage, it belongs to the Analyze category.

5. Example of "Evaluate" (From Li, 2009)

If you were against the set-up of landfills, under what circumstances would you change your opinion, or not?

Explanation: This item requires students to evaluate the pros and cons of setting up landfills, which involves a higher-order thinking process. Students need to think about and judge the advantages and disadvantages of building landfills, and may even come up with possible and doable suggestions.

6. Example of "Create" (From Li, 2009)

A Medical Record of the Earth
From the perspective of the ______, I've diagnosed the Earth's environmental problems.
(1) Possible reasons: ________________________________________________
(2) Potential impacts: _______________________________________________
(3) Solutions to the problems: ________________________________________

Explanation: This is an example of Create. Students need to choose a perspective and identify some current environmental problems of the Earth, such as plastic bags, air pollution, global warming, acid rain, decreasing biodiversity, and so on. They may brainstorm and create their own answers based on the issue.

Coding Results

In this section, the coding results and interrater reliability are presented and discussed. The researcher and two other raters coded 121 English reading comprehension test items, 59 on the BCT and 62 on the CAP, using the revised Bloom's taxonomy as the coding rubric.

Discussion of Interrater Reliability

After the first round of coding, the interrater reliability was computed as the percentage of items that each pair of coders placed in the same category, as shown in Table 8. (A short computational sketch of how these figures can be reproduced is given at the end of this chapter.)

Table 8
Interrater Reliability (Total Number of Items: 121)

                         Coder 1 vs. Coder 2    Coder 1 vs. Coder 3    Coder 2 vs. Coder 3
Agreement                119                    102                    104
Disagreement             2                      19                     17
Interrater reliability   119/121 = 98%          102/121 = 84%          104/121 = 86%

Table 8 shows that Coder 1 and Coder 2 had the highest interrater reliability (98%), with only two items in disagreement. Coder 1 and Coder 3 had the lowest interrater reliability (84%), with 19 items in disagreement. The interrater reliability between Coder 2 and Coder 3 was 86%, with 17 items in disagreement. Overall, after the first coding meeting, 85% of the items were coded consistently by all three raters. When the raters disagreed on certain items, the majority decision was adopted; in addition, another expert was invited to code the inconsistent items. The three raters then held a second coding meeting to clarify the definitions of some items before finalizing the results and the coding rubrics (see Appendixes 2 & 3).

Summary

In this chapter, the method adopted in this study was reported, including the data source, data analysis, coding procedures, coding examples, coding results, and interrater reliability. The study results will be introduced in the next chapter.
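To close the chapter, the sketch below illustrates how pairwise percent-agreement figures like those in Table 8 can be reproduced. It is only a minimal illustration, not the procedure actually used in the study; the item labels and category codes are hypothetical, and only Python's standard library is used.

```python
from itertools import combinations

# Hypothetical codings: for each coder, the Bloom category assigned to each item.
# The actual study coded 121 items; only a few made-up records are shown here.
codings = {
    "Coder 1": {"2011-Q28": "Understand", "2011-Q29": "Remember", "2014-Q35": "Apply"},
    "Coder 2": {"2011-Q28": "Understand", "2011-Q29": "Remember", "2014-Q35": "Apply"},
    "Coder 3": {"2011-Q28": "Understand", "2011-Q29": "Analyze", "2014-Q35": "Apply"},
}

def percent_agreement(a: dict, b: dict) -> float:
    """Share of commonly coded items that two coders placed in the same category."""
    items = a.keys() & b.keys()
    agreed = sum(1 for item in items if a[item] == b[item])
    return agreed / len(items)

# Report agreement for every pair of coders, as in Table 8.
for (name_a, codes_a), (name_b, codes_b) in combinations(codings.items(), 2):
    print(f"{name_a} vs. {name_b}: {percent_agreement(codes_a, codes_b):.0%} agreement")
```

Applied to the full set of 121 coded items, the same computation gives the 119/121, 102/121, and 104/121 ratios reported in Table 8.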
