
Computer-aided Question Generation

How can personalized questions of different question types be generated? In this chapter, we describe in turn the constraints on vocabulary questions, grammar questions, and reading comprehension questions.

Table 1 summarizes how the question difficulty is defined and how distractors are selected, and Figure 2 shows four questions (also called items) generated from a document.

Table 1 Design of personalized questions with different question types: vocabulary, grammar, and reading comprehension questions.

                      Vocabulary              Grammar          Reading comprehension
How to select         … nouns or proper
                      nouns) in the
                      given article
Distractor selection  word difficulty         disambiguation   non-anaphora
                      part-of-speech                           not pronoun
                      word length                              number
                      Levenshtein distance                     gender

Document

Halloween, which falls on October 31, is one of the most unusual and fun

holidays in the United States. It is also one of the scariest! It is associated with ghosts,

skeletons, witches, and other scary images. …Many of the original Halloween

traditions have developed today into fun activities for children. The most popular

one is "trick or treat." On Halloween night, children dress up in costumes and go to

visit their neighbors. When someone answers the door, the children cry out, "trick or

treat!" What this means is, "Give us a treat, or we'll play a trick on you!"… This

tradition comes from an old Irish story about a man named Jack who was very stingy.

He was so stingy that he could not enter heaven when he died. But he also could not

enter hell, because he had once played a trick on the devil. All he could do was walk

the earth as a ghost, carrying a lantern…

Quiz

1. In the sentence "It is __________ with ghosts, skeletons, witches, and other

scary images.", the blank can be:

(1) distributed (2) associated (3) contributed (4) illustrated

2. In the sentence, "Many of the original Halloween traditions __________ today

into fun activities for children.", the blank can be filled in:

(1) have developed (2) have developing (3) is developed (4) develop

3. The word “he” in the sentence “All he could do was walk the earth as a ghost,

carrying a lantern” refers to:

(1) ghost (2) devil (3) witch (4) Jack

4. Which of the following statements is TRUE?

(1) On Halloween night, neighbors dress up in costumes and go to visit their children.

(2) What this means is, "Give us a trick, or we'll play a treat on you!"

(3) But the devil also could not enter hell, because he had once played a trick on the

witch.

(4) Jack was so stingy that he could not enter heaven when he died.

Figure 2 A paragraph and example generated questions: the bolded words represent stems, the bold italics are answers, and the other plausible choices in the questions are called distractors.

3.1 Vocabulary question generation

The difficulty of a vocabulary question is determined by the difficulty of the correct answer. We assume that if a student selects the correct answer, he/she probably understood the question stem and distinguished the correct answer from the distractors. Here, the difficulty of a word refers to word acquisition, the temporal process by which learners learn the meaning, understanding, and usage of new words. For most English as a foreign language learners, the acquisition grade distributions of different words can be inferred from textbooks or from a word list made by experts, because such learners acquire the foreign language from the materials they study, not from the environment they live in. In this study, word difficulty is determined by a word list made by an education organization. We adopted a word list from the College Entrance Examination Center (CEEC) of Taiwan (http://www.ceec.edu.tw/research/paper_doc/ce37/5.pdf). It contains 6,480 English words divided into six levels, which represent the grade in which a word should be taught, serving as the word acquisition grade distributions. For each word in the given text, we identify its difficulty by referencing its level in the word list. Given the vocabulary proficiency level of a student, words in the document with the same difficulty level are selected as the basis for forming test questions.
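The selection step above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the graded word list excerpt and the function name `candidate_words` are hypothetical.

```python
# Hypothetical excerpt of a six-level CEEC-style graded word list:
# each word maps to the grade (1-6) at which it should be taught.
GRADED_WORDLIST = {
    "associate": 4, "tradition": 2, "costume": 4, "lantern": 5, "ghost": 1,
}

def candidate_words(document_tokens, student_level):
    """Return words from the document whose difficulty equals the student's level."""
    return [w for w in document_tokens
            if GRADED_WORDLIST.get(w) == student_level]

print(candidate_words(["ghost", "associate", "costume", "moon"], 4))
# -> ['associate', 'costume']
```

Words missing from the list (like "moon" above) are simply skipped, since their acquisition grade is unknown.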

For distractor selection, we also consult the same graded word list as the source of distractor candidates. Distractors are selected according to the following criteria: word difficulty, part-of-speech (POS), word length, and Levenshtein distance.

• Word difficulty: Distractors are selected with equal difficulty for two reasons. One is personalization: a student receives generated questions whose difficulty matches his/her proficiency level. The other is familiarity: choices must be familiar to students; otherwise, the correct answer may be selected simply because it is the only choice the student knows.

• Part-of-speech (POS): Distractors have the same POS as the answer, because this makes the target sentence grammatical but semantically inconsistent with the context of the target sentence. In this way, students are tested on lexical knowledge and comprehension instead of syntax. We use the Stanford POS Tagger (Toutanova, Klein, Manning, & Singer, 2003) to identify words as nouns, verbs, adjectives, or adverbs.

• Word length and Levenshtein distance: Distractors are ranked by the smallest word-length difference between a distractor and the correct answer, and by the Levenshtein distance for changing the prefix or suffix of a distractor into the correct answer. According to Perfetti and Hart (2002), even high-skilled students are easily confused when words share phonological forms with other homophones. We try to capture this grapheme-phoneme similarity by considering word length and Levenshtein distance.
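The ranking criterion above can be sketched in a few lines. This is an illustrative sketch, not the authors' code; the candidate list is hypothetical, and it assumes candidates have already been filtered to the same difficulty level and POS as the answer.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def rank_distractors(answer, candidates, k=3):
    """Order same-level, same-POS candidates by word-length difference,
    breaking ties by Levenshtein distance to the answer; keep the top k."""
    key = lambda w: (abs(len(w) - len(answer)), levenshtein(w, answer))
    return sorted(candidates, key=key)[:k]

print(rank_distractors("associated",
                       ["distributed", "illustrated", "contributed", "go"]))
```

A short word like "go" is ranked last because its length difference dominates, which mirrors the intent of preferring distractors that look similar to the answer.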

The first question in Figure 2 is a vocabulary question. Given a student at knowledge level four, difficulty level four words, e.g. "associate", are identified by the graded word list. The sentence containing the word, "It is associated with ghosts, skeletons, witches, and other scary images", is then extracted to form a question, with "associate" as the correct answer. We also consult the same word list to select distractors that have the same difficulty (level 4) and part-of-speech (verb), and the smallest word-length difference (9) and Levenshtein distances (distributed: 6, illustrated: 7, contributed: 7).

3.2 Grammar question generation

The difficulty of a grammar question, like that of a vocabulary question, is determined by the difficulty of the grammar pattern of the correct answer. Unfortunately, unlike the aforementioned word list, there is no predefined grammar difficulty measure available. In addition, second language learners usually learn grammatical structures simultaneously and incrementally, while native speakers have learned all grammar rules before formal education. Second language learning materials are dominated by a well-thought-out learning plan. Thus, we assigned the difficulty of a grammar pattern based on the grade level of the textbook, which represents the grammar acquisition grade distributions.

The difficulty of a grammar pattern is determined by identifying the grade level of the textbook in which it most frequently appears, representing the grammar acquisition grade distributions. We manually predefined 44 grammar patterns from a grammar textbook for Taiwan high school students and automatically calculated the rate of occurrence of the grammar patterns in a set of English textbooks. First, we used the Stanford Parser (Klein & Manning, 2002) to produce constituent structure trees of sentences. Next, Tregex (Levy & Andrew, 2006), a tool for matching patterns in trees, was used to recognize instances of the target grammar patterns in the set of textbooks. Finally, we counted the frequencies of the syntactic grammar patterns in the corpus. This corpus contains 342 articles written by different authors and collected from five different publishers (The National Institute for Compilation and Translation, Far East Book Company, Lungteng Cultural Company, San Min Book Company, and Nan-I Publishing Company).
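Once the per-grade frequencies are counted, assigning a pattern's difficulty reduces to picking the grade with the highest count. The sketch below illustrates this final step with made-up counts; the counts and the function name `pattern_level` are hypothetical, not figures from the corpus described above.

```python
# Hypothetical occurrence counts per textbook grade for two of the
# 44 predefined grammar patterns (pattern -> {grade: count}).
pattern_counts = {
    "PerfectTense":     {1: 52, 2: 31, 3: 12},
    "PastPerfectTense": {1: 3,  2: 40, 3: 18},
}

def pattern_level(pattern):
    """Assign a pattern the grade level in which it appears most frequently."""
    counts = pattern_counts[pattern]
    return max(counts, key=counts.get)

print(pattern_level("PerfectTense"))      # -> 1
print(pattern_level("PastPerfectTense"))  # -> 2
```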

For generating grammar distractors, we also consult the same grammar textbook and manually predefine distractor templates. These templates need to ensure that the choices are unambiguous, because sometimes more than one grammar pattern could be a correct answer in a sentence. For example, in the stem of the second question in Figure 2, the distractor "develop" is consistent with the syntax of the target sentence regardless of the global context. Thus, we referred to the grammar textbook and an expert when designing the distractor templates for each grammar pattern (examples shown in Table 2).

Table 2 Distractor templates, derived from a grammar textbook and an expert in order to ensure that distractors are unambiguous.

level  function name     answer          distractor 1     distractor 2     distractor 3
1      PerfectTense      has grown       have growing     have been grown  had grown
1      OnetheOther       one…the other   one…another      one…other        one…the others
2      TooAdjectiveTo    too happy to    too happy that   too happiest to  none of the above
2      soThat            so heavy        so heavier       so heaviest      none of the above
2      PastPerfectTense  had taken       had had taken    have taken       had been taken
3      prepVing          in helping      in being help    in helped        in being helping
4      GerundasObject    avoid taking    avoid to taking  avoid to take    avoid to took

The second question in Figure 2 is a grammar question. The target testing purpose of the second question is "present perfect tense", which is taught in the first grade. The original sentence is "Many of the original Halloween traditions have developed today into fun activities for children". The parse structure of the original sentence is shown in Figure 3. The grammar pattern of this parse structure can be automatically identified by the Tregex patterns: /S.?/ < (VP < (/VB.?/ << have|has|haven't|hasn't)): /S.?/ < (VP < (VP < VBN)). When a grammar pattern is recognized (the green part of the parse tree in Figure 3), the difficulty degree of the grammar question is assigned based on the matched grammar pattern.

Figure 3 The parse structure of the sentence "Many of the original Halloween traditions have developed today into fun activities for children".
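Tregex matches patterns over full parse trees; as a lighter-weight sketch of the same idea, the present perfect pattern (an auxiliary have/has followed by a past participle) can be spotted in a POS-tagged sentence. This is an illustrative stand-in, not the Tregex matching used in this work; the tags follow the Penn Treebank convention, and the tagged sentence below is written by hand rather than produced by a parser.

```python
def has_present_perfect(tagged):
    """True if an auxiliary have/has is later followed by a VBN participle."""
    for i, (word, tag) in enumerate(tagged):
        if word.lower() in ("have", "has") and tag in ("VBP", "VBZ"):
            if any(t == "VBN" for _, t in tagged[i + 1:]):
                return True
    return False

# Hand-tagged fragment of the example sentence from Figure 2.
sent = [("Many", "JJ"), ("traditions", "NNS"), ("have", "VBP"),
        ("developed", "VBN"), ("today", "NN"), ("into", "IN"),
        ("fun", "JJ"), ("activities", "NNS")]
print(has_present_perfect(sent))  # -> True
```

Unlike Tregex, this flat check cannot tell whether the participle is inside the same verb phrase, which is exactly why the paper matches patterns on constituent trees instead.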

3.3 Comprehension question generation

The difficulty of the reading comprehension questions is based on the reading

level of the reading materials themselves. We assume that an examinee correctly

an-swers a reading comprehension question because he/she could understand the whole

story. The difficulty level of an article is correlated with the interaction between the

lexical, syntactic and semantic relations of the text and the reader's cognitive aptitudes.

The reading level estimation of a given document in recent years has increased

noticea-bly. Most past literature was designated for first language learners, but the learning

timeline and processing between first language learners and second language learners is

different. In this study, we adopt the measure of reading difficulty estimation [6]

de-signed for English as second language learners to identify the difficulty of reading

ma-terials, as a difficulty measure for the reading comprehension questions.

Reading comprehension relies on a highly complicated set of cognitive processes (Nation & Angell, 2006). In these processes, anaphora resolution, the construction-integration model, and building a coherent knowledge representation are key (Kintsch, 1998). Thus, in this work, we focus on the relations between sentences to generate two kinds of meaningful reading questions based on noun phrase coreference resolution. Similar to Mitkov and Ha (2003), who extracted nouns and noun phrases as important terminology in reading material, we also focus on the interaction of noun phrases as the test purpose. The purpose of noun phrase coreference resolution is to determine whether two expressions refer to the same entity in the real world. An example is excerpted from Figure 2 (This tradition…on the devil5). It is easy to see that Jack2 means man1 because of the semantic relationship between the sentences. The following he3 and he4 are more difficult to judge as referring to Jack2 or devil5 when examinees do not clearly understand the meaning of the context in the document. This information is used in this work to generate reading comprehension questions, in order to examine whether learners really understand the relationships between nouns in the given context.

There are two question types generated for reading comprehension: an independent referential question for a single-concept test purpose and an overall referential question for an overall comprehension test purpose. When a noun phrase is selected as the target word in the question stem, it should have an anaphoric relation with another noun phrase. In the first type, a noun phrase (a pronoun, a common noun, or a proper noun) is selected as the target word in the question stem, a noun phrase (a common noun or a proper noun) with the same anaphoric relation is chosen as the correct answer, and other noun phrases (common nouns or proper nouns) are selected as the distractors. In the second type, the same question generation technique is applied at the sentence level. We regenerate new sentences as choices by replacing a noun (a pronoun, a common noun, or a proper noun) with an anaphoric noun (a common noun or a proper noun) to form the correct answer, and by substituting a noun with a non-anaphoric noun to form distractors.

The distractors should satisfy the following constraints:

• Non-anaphoric relation: Distractors should have non-anaphoric relations with the target word. The anaphoric and non-anaphoric relations can be identified by the Stanford Coreference system (Raghunathan, Lee, Rangarajan, Chambers, Surdeanu, Jurafsky, & Manning, 2010).

• Not a pronoun: A pronoun is a replacement for a noun and is dependent on an antecedent (a common noun or a proper noun). Thus, distractors should be common nouns or proper nouns in order to have a clear test purpose.

• Number: Distractors should have the same number attribute (singular, plural, or unknown) in order to keep the sentence grammatical. For example, "devil" in Figure 2 is singular, so the number attribute of a distractor should be the same. If not, an unacceptable distractor (a plural noun or a collective noun) could violate subject-verb agreement. The number attributes were given by the Stanford Coreference system (Raghunathan, Lee, Rangarajan, Chambers, Surdeanu, Jurafsky, & Manning, 2010), based on a dictionary, POS tags, and a Named Entity Recognizer (NER) tool.

• Gender: Distractors should have the same gender attribute (male, female, neutral, or unknown) in order to keep the sentence semantically valid. For example, "Jack" in Figure 2 is male, so the gender attribute of a distractor should be "male" or "neutral" rather than "female"; otherwise, students could answer the question directly instead of reading the passage. The gender attributes were assigned by the Stanford Coreference system (Raghunathan, Lee, Rangarajan, Chambers, Surdeanu, Jurafsky, & Manning, 2010), which draws on static lexicons.
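The four constraints above amount to a filter over candidate mentions. The sketch below illustrates them with hypothetical mention records shaped like simplified coreference output (`chain` for coreference chain id, plus POS, number, and gender attributes); it is not the Stanford Coreference system's actual data format.

```python
def valid_distractors(answer, mentions):
    """Keep mentions satisfying the four distractor constraints."""
    return [m for m in mentions
            if m["chain"] != answer["chain"]         # non-anaphoric relation
            and m["pos"] != "PRP"                    # not a pronoun
            and m["number"] == answer["number"]      # number agreement
            and m["gender"] in (answer["gender"], "neutral")]  # gender fit

# Hypothetical mentions from the Figure 2 passage.
answer = {"text": "Jack", "chain": 1, "pos": "NNP",
          "number": "singular", "gender": "male"}
mentions = [
    {"text": "he",    "chain": 1, "pos": "PRP", "number": "singular", "gender": "male"},
    {"text": "ghost", "chain": 2, "pos": "NN",  "number": "singular", "gender": "neutral"},
    {"text": "devil", "chain": 3, "pos": "NN",  "number": "singular", "gender": "neutral"},
    {"text": "witch", "chain": 4, "pos": "NN",  "number": "singular", "gender": "neutral"},
]
print([m["text"] for m in valid_distractors(answer, mentions)])
# -> ['ghost', 'devil', 'witch']
```

The surviving mentions match the distractors of the third question in Figure 2: "he" is rejected because it is a pronoun in the answer's own coreference chain.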

The third question in Figure 2 is an independent referential question, which assesses one's understanding of the concept of an entity involved in the sentences. The word "he" in the original sentence "All he could … a lantern" refers to "Jack"; the distractors "ghost", "devil", and "witch" have non-anaphoric relations, are not pronouns, and are "singular" and "neutral". The fourth question in Figure 2 is the overall referential question, which contains more than one concept that needs to be understood. The correct answer is from the sentence "He was so stingy … died," where the word "He" is replaced with "Jack" because they have a referential relation. One of the distractors is from "But he also could not … devil," where the word "he" refers to "Jack" instead of "devil", but we replace it with a non-anaphoric noun to form a distractor. This approach further examines the connection of concepts in the given learning material.
