Personalized E-Learning System by Using Item Response Theory

N/A
N/A
Protected

Academic year: 2021

Share "Personalized E-Learning System by Using Item Response Theory"

Copied!
8
0
0

加載中.... (立即查看全文)

全文

Hahn-Ming Lee(1), Chih-Ming Chen(2) and Ya-Hui Chen(1)
(1) Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
hmlee@mail.ntust.edu.tw, m8915001@mail.ntust.edu.tw
(2) Institute of Information Science, Academia Sinica, Taipei, Taiwan
cmchen@iis.sinica.edu.tw

Abstract

Personalized service is an important issue on the Web, especially in Web-based learning. In general, most personalized systems consider learners' preferences, interests, and browsing behaviors to provide personalized services, but they neglect learners' abilities as an important factor in implementing a personalized mechanism. Moreover, the many hyperlink structures in Web-based learning systems impose a heavy information burden on learners. Hence, in Web-based learning, disorientation (getting lost in hyperspace), cognitive overload, the lack of an adaptive mechanism, information overload, and the adaptation of course material difficulty are the main research issues. In this study, a personalized e-learning system based on Item Response Theory (PEL-IRT), which considers both the difficulties of course materials and learners' abilities, is proposed to provide personalized learning. The item characteristic function with a single difficulty parameter is used to model a course material, and maximum likelihood estimation (MLE) is applied to estimate learners' abilities from their explicit feedback. We also propose a collaborative voting approach that appropriately adjusts the difficulties of the course materials. Experimental results show that Item Response Theory applied to Web learning can achieve personalized learning and help learners learn more effectively and efficiently.
1. Introduction

In recent years, surveys of network behavior by Yam [1] show that most users apply search engines to find their desired information, and learning via the Web is a growing trend. Learning via electronic appliances over the Internet is called e-learning [2], also known as distance learning, on-line learning (training), or Web-based learning; it helps learners learn by themselves through the Internet. According to analyses by International Data Corporation (IDC) [3], the worldwide corporate e-learning market will exceed US$24 billion by 2004. E-learning has become a trend because it provides convenient, efficient learning environments and practical utilities anytime and anywhere. Many universities [4], corporations [5], and educational organizations [6] have developed distance learning platforms that provide course materials for Web-based learning; these platforms are also often used for on-line employee training in business [5]. An important issue in Web-based learning is how to provide a personalized mechanism that helps learners learn more efficiently, and many researchers have devoted effort to personalization mechanisms for Web-based learning in recent years [7][8][9]. Nowadays, most personalized systems [8][10][11] consider learners'/users' preferences, interests, and browsing behaviors to provide personalized services, but neglect learners'/users' abilities as an important factor in implementing a personalized mechanism. Some researchers have emphasized that personalization should consider the different levels of learners'/users' knowledge, especially in learning [7]; that is, each person may have a different ability depending on his/her major fields and subjects. Therefore, if learners' abilities are taken into account, the performance of personalized learning might be efficiently improved. Item Response Theory is a robust theory in the educational measurement domain. It has been applied

to Computerized Adaptive Testing (CAT) [12][13][14][15][16] to select the most appropriate items for examinees based on their individual abilities. Computerized adaptive tests not only efficiently shorten testing time and reduce the number of test items, but also precisely estimate learners' abilities by using Item Response Theory. At present, the concept of CAT has replaced traditional measurement instruments (which are typically fixed-length, fixed-content, paper-and-pencil tests) in several real-world applications, such as TOEFL [17], GRE [18], and GMAT [19]. Based on the preceding analysis, a personalized e-learning system based on Item Response Theory [12][13][14][15][16], termed PEL-IRT, is proposed to provide personalized e-learning services on the Web. In our approach, the abilities of learners and the difficulties of course materials are simultaneously taken into account to implement the personalized mechanism. PEL-IRT dynamically estimates learners' abilities by collecting their explicit feedback after they study the recommended course materials, and, based on these estimates, recommends appropriate course materials to the learners. Experimental results show that our personalized e-learning system can recommend the most appropriate course materials to learners based on their abilities and help them learn more effectively.

2. Personalized Course Recommendation System

In this section, we describe our system architecture and the personalized mechanism implemented with Item Response Theory (IRT). We first give an overview of the system architecture in Section 2.1; Sections 2.2 and 2.3 then describe the system's components in detail.

2.1 System Architecture

In our study, a personalized e-learning system based on Item Response Theory (PEL-IRT) is proposed to provide adaptive learning.
Figure 1 depicts our system architecture, which can be divided into two main parts according to the system operation procedures, i.e. a front-end part and a back-end part. The front-end part manages communication with learners and records learners' behavior, while the back-end part analyzes learners' abilities and selects appropriate courses according to the estimated abilities. In Figure 1, the interface agent belongs to the front-end part: it identifies learners' statuses, transfers learners' queries to the system, and returns the suggested course materials to learners, providing a friendly human-machine interactive interface. The Item Response Theory (IRT) agent manages the back-end operation and is divided into two separate agents, i.e. the feedback agent and the courses recommendation agent. The feedback agent collects learners' feedback information, updates learners' abilities, and adjusts the difficulties of course materials; the courses recommendation agent selects the most appropriate course materials for learners based on their abilities.

Figure 1. System Architecture

Furthermore, our system provides a searching and browsing interface to help learners retrieve course materials in a specified course unit. Learners can browse course materials even if they have not logged into our system, but the personalized service is currently provided only to registered learners. In our system, each course material is classified into exactly one predefined course unit; thus, learners must first select an interesting course unit, or use keywords to search for the course materials they need.
When a new learner visits our system, the system assigns course materials of moderate difficulty. Once the learner clicks course materials and replies to the predefined questionnaires, personalized learning services start. The feedback agent estimates the learner's ability and adjusts the difficulties of the course materials based on the learner's explicit feedback; the courses recommendation agent then uses the learner's new ability to select appropriate course materials. The information function [13][20] described in Section 2.3.2 is applied to select appropriate course materials based on the new ability. While learners click the recommended course materials, the IRT agent repeats the recommendation action until learners give other query terms or log out of the system. Additionally, PEL-IRT includes three databases: the user account database, the user profile database, and the courses database. To identify learners' statuses, the user account database records learners' e-mail addresses, while the user profile database stores all browsing information about learners: their query terms, click behaviors, questionnaire responses, estimated abilities, and the difficulties of the clicked course materials. The courses database contains course materials with different

difficulties, the number of clicks at each difficulty level, and each material's course category, course unit, course title, and brief description. Furthermore, the difficulties of course materials are jointly determined by experts and by the learners' collaborative voting approach [21] detailed in Section 2.3.1.1. The learning process of our proposed system is depicted in Figure 2. First, the system identifies the learner's status: if the learner logs into the system for the first time, the system provides the original (non-personalized) course material list based on the learner's query terms. After the learner visits some course materials and responds to the given questionnaires, our system estimates the learner's ability, adjusts the difficulty of the selected course material, and recommends appropriate course materials until the learner logs out. A detailed description is given in the next section.

Figure 2. The Learning Process of PEL-IRT

2.2 Interface Agent

The interface agent provides a friendly interface for interacting with learners and serves as an information channel to the IRT agent. It includes mechanisms for account management, authorization, and query searching. When learners visit the system, they can select interesting course categories and units from the courses database, and may give appropriate keywords to search for interesting course materials. Learners visiting the system for the first time must register to receive personalized services. At the beginning, our system recommends course materials based only on the query term and the selected course category and unit.
After learners successfully log into our system and browse some interesting course materials, they must reply to some assigned questionnaires, described later. These replies are sent to the IRT agent to infer the learners' new abilities and to suggest appropriate course materials.

2.3 IRT Agent

After learners respond to the given questionnaires, their responses are sent to the IRT agent, which contains the feedback agent and the courses recommendation agent, as shown in Figure 1. The feedback agent records learners' feedback information and sends the responses onward in order to evaluate learners' new abilities. Once the new abilities are evaluated, the courses recommendation agent suggests appropriate course materials to learners. In the next subsections, we first describe the feedback agent and then illustrate the courses recommendation agent in detail.

2.3.1 Feedback Agent

The feedback agent records learners' responses, analyzes learners' abilities, and tunes the difficulties of course materials. It communicates with both the interface agent and the courses recommendation agent, and it performs three main operations: collecting learners' feedback information, reevaluating learners' abilities based on that feedback, and updating course difficulties in the courses database. The detailed flowchart of the feedback agent is shown in Figure 3. The information collected from the interface agent includes learners' e-mail addresses, the ids of the clicked courses, and the learners' responses to the questionnaires. In the PEL-IRT system, the difficulties of course materials are tuned by the collaborative voting approach [21], and learners' new abilities are reevaluated by the maximum likelihood estimation method described in Section 2.3.1.2 [13][20][22]. The updated information is sent to the user profile database and the courses database, respectively.
Since learners' abilities are reevaluated according to their explicit feedback, they can be adjusted dynamically. Meanwhile, the new ability values are also sent to the courses recommendation agent as an index for ranking course materials in the courses database based on the values of the information function [20]. Next, we describe how to adjust the difficulties of course materials and how to estimate learners' abilities.

Figure 3. Operation Flowchart of Feedback Agent

2.3.1.1 Adjusting the Difficulties of Course Materials

In order to recommend appropriate course materials to learners based on personalized

requirements, Item Response Theory with a single difficulty parameter is used to model a course material. Our system considers both the difficulties of course materials and learners' abilities because both affect learners' interests and learning results. Course materials that are too hard or too easy make learners lose interest: too-hard materials frustrate learners, while too-easy materials cannot satisfy learners' requirements and may waste their time. Thus, providing appropriately difficult course materials is an important issue for Web-based learning systems. In most such systems, the difficulties of course materials are determined solely by course experts; this is not entirely appropriate, because the difficulty judged by experts may not match the difficulty perceived by learners, who are not experts. To satisfy real needs, our system automatically adjusts the difficulties of course materials with the collaborative voting approach [21]: course experts first initialize the difficulties based on their domain knowledge, and the difficulties are then automatically adjusted according to learners' feedback. Experimental analyses show that the difficulties of course materials gradually approach reasonable and stable values after a large number of learners use the system. Furthermore, our system automatically filters out noisy feedback if learners give abnormal responses. In what follows, we describe the procedure for adjusting the difficulties of course materials. Our learners' collaborative voting approach adopts the 5-point Likert scale proposed by Likert in 1932 [23]; in past research, the Likert scale has been used in attitude surveys.
The Likert scale defines scaled answers from "strongly disagree" to "strongly agree" according to a person's degree of agreement with a question. The most common scale runs from 1 to 5: 1 stands for "strongly disagree", 2 for "disagree", 3 for "not sure", 4 for "agree", and 5 for "strongly agree". Based on the Likert scale, our system defines the scales as: -2 for "very easy", -1 for "easy", 0 for "moderate", 1 for "hard", and 2 for "very hard". When a user logs into our system, the system records the learner's browsing behavior. After a learner browses a course material suggested by the system, two questions must be answered. One is "How do you think about the difficulty of this course material?"; the other is "Can you understand the content of the course material?". The first question offers five choices: "very hard", "hard", "moderate", "easy", or "very easy". We use a 5-point Likert scale because too many options would confuse learners, while too few could not distinguish the various difficulties of course materials. The second question has two crisp options, "yes" or "no", because our method needs a yes/no pattern to evaluate learners' abilities. Furthermore, the tuned difficulty of a course material is a weighted linear combination of the difficulty defined by course experts and the difficulty voted by learners. To describe the proposed method, three definitions for our collaborative voting approach are given as follows:

Definition 3.1: Difficulty levels of course material. Assume that D = {D_1, D_2, ..., D_i, ..., D_5} is the set of difficulty levels of a course material, which includes five different levels.
D_1 represents very easy, quantified as -2; D_2 represents easy, quantified as -1; D_3 represents moderate, quantified as 0; D_4 represents hard, quantified as 1; and D_5 represents very hard, quantified as 2.

Definition 3.2: The average difficulty of the j-th course material based on learners' collaborative voting.

b_j(voting) = Σ_{i=1}^{5} (n_ij / N_j) D_i,   (1)

where b_j(voting) is the average difficulty of the j-th course material after learners' collaborative voting, n_ij is the number of learners whose feedback responses fall into the i-th difficulty level for the j-th course material, and N_j = Σ_{i=1}^{5} n_ij is the total number of learners who rated the j-th course material.

Definition 3.3: The tuned difficulty of course material.

b_j(tuned) = w × b_j(initial) + (1 - w) × b_j(voting),   (2)

where b_j(tuned) is the tuned difficulty of the j-th course material based on learners' collaborative voting, b_j(initial) is the initial difficulty of the j-th course material set by course experts, and w is an adjustable weight.

Finally, our system uses Equation (2) to automatically tune the difficulties of course materials in the courses database. Equation (2) is a linear combination of the difficulty defined by course experts and the difficulty derived from learners' collaborative voting. Moreover, the time complexity of tuning a course material's difficulty is constant, because our system preserves all previous voting results.
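As a concrete illustration, the collaborative-voting update of Equations (1) and (2) can be sketched in Python; the function and variable names below are illustrative only, not part of the paper's implementation (which uses PHP):

```python
# Likert-style difficulty levels D_1..D_5, quantified as in Definition 3.1.
LEVELS = {"very easy": -2, "easy": -1, "moderate": 0, "hard": 1, "very hard": 2}

def voted_difficulty(vote_counts):
    """Equation (1): average difficulty from learners' collaborative voting.

    vote_counts maps a level name to n_ij, the number of learners who chose
    that level for course material j.
    """
    total = sum(vote_counts.values())           # N_j
    return sum(LEVELS[level] * n / total        # sum_i (n_ij / N_j) * D_i
               for level, n in vote_counts.items())

def tuned_difficulty(b_initial, vote_counts, w=0.5):
    """Equation (2): blend the expert-assigned difficulty with the voted one."""
    return w * b_initial + (1 - w) * voted_difficulty(vote_counts)
```

Keeping running totals of `vote_counts` is what makes each update constant-time, as noted above.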

2.3.1.2 The Estimation of Learners' Abilities

Before discussing how to estimate learners' abilities, we first describe some assumptions of Item Response Theory. Assume that a randomly chosen learner responds to a set of n items with response pattern (U_1, U_2, ..., U_j, ..., U_n), where U_j is either 1 or 0 for the j-th course material. In our study, U_j = 1 means the learner can completely understand the selected course material; conversely, U_j = 0 means the learner cannot. By the assumption of local independence [20], the joint probability of observing the response pattern is the product of the probabilities of observing each response:

P(U_1, U_2, ..., U_n | θ) = P(U_1 | θ) P(U_2 | θ) ... P(U_n | θ),

which may be expressed more compactly as

P(U_1, U_2, ..., U_n | θ) = Π_{j=1}^{n} P(U_j | θ).

Since U_j is either 1 or 0, this can be written as

P(U_1, U_2, ..., U_n | θ) = Π_{j=1}^{n} P(U_j | θ)^{U_j} [1 - P(U_j | θ)]^{1-U_j},

or, more simply,

P(U_1, U_2, ..., U_n | θ) = Π_{j=1}^{n} P_j^{U_j} Q_j^{1-U_j},   (3)

where P_j = P(U_j | θ) and Q_j = 1 - P_j. Equation (3) expresses the joint probability of a response pattern. Once the response pattern is observed, U_j = u_j, the probabilistic interpretation is no longer appropriate; the expression is then called the likelihood function and is denoted L(u_1, u_2, ..., u_n | θ), where u_j is the observed response to the j-th item. The formula for estimating learners' abilities based on the tuned difficulties of the course materials is therefore

L(u_1, u_2, ..., u_n | θ) = Π_{j=1}^{n} P_j(θ)^{u_j} Q_j(θ)^{1-u_j},   (4)

where

P_j(θ) = e^{θ - b_j(tuned)} / (1 + e^{θ - b_j(tuned)}),  and  Q_j(θ) = 1 - P_j(θ).

Here P_j(θ) is the probability that a learner with ability level θ can completely understand the j-th course material, Q_j(θ) is the probability that the learner cannot, and U_j is the yes/no answer obtained from the learner's feedback on the j-th course material: U_j = 1 if the answer is yes, otherwise U_j = 0. Since P_j(θ) and Q_j(θ) are functions of the learner's ability θ and the course material's difficulty parameter, the likelihood function is also a function of these parameters. The learner's ability is estimated as the θ that maximizes the likelihood function [20]. Maximum likelihood estimation needs two inputs: the tuned difficulties of the course materials obtained from the collaborative voting approach, and the learners' yes/no responses to the given questionnaires. Restated, learners must give a crisp yes or no response after browsing a course material. In our system, learners' abilities are limited to the range -3 to 3: a learner with ability θ = -3 is regarded as the poorest, θ = 0 as moderate, and θ = 3 as the best. When a learner logs into the system for the first time, the system recommends course materials of moderate difficulty; it then adaptively adjusts the learner's ability according to the learner's feedback. If the learner completely understands the content of a suggested course material, the ability estimated via Equation (4) rises; otherwise, it falls.
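Under the one-parameter logistic model of Equation (4), the maximum likelihood step can be sketched as follows. This is a minimal illustration only: the grid search over θ in [-3, 3] and all names are assumptions of this sketch, not the paper's actual estimation procedure.

```python
import math

def p_correct(theta, b):
    """P_j(theta): probability that a learner with ability theta completely
    understands a course material with tuned difficulty b (Equation (4))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(difficulties, responses, grid_step=0.01):
    """Estimate ability as the theta in [-3, 3] maximizing the log-likelihood.

    difficulties: tuned difficulties b_j(tuned) of the visited materials
    responses:    u_j in {0, 1}, the "no"/"yes" answers to the understanding question
    """
    best_theta, best_ll = -3.0, -math.inf
    theta = -3.0
    while theta <= 3.0:
        ll = 0.0
        for b, u in zip(difficulties, responses):
            p = p_correct(theta, b)
            ll += math.log(p) if u == 1 else math.log(1.0 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
        theta += grid_step
    return best_theta
```

Note that an all-"yes" or all-"no" response pattern drives the estimate to the boundary (+3 or -3), which is consistent with the ability range the system imposes.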
Our system then sends the learners' new abilities to the courses recommendation agent, which ranks appropriate course materials in the courses database according to the new ability. The next subsection introduces the courses recommendation agent.

2.3.2 Courses Recommendation Agent

After the feedback agent reevaluates learners' abilities, the courses recommendation agent recommends course materials using the newly estimated abilities. The relationship between the courses recommendation agent and the feedback agent is shown in Figure 3. In this study, the information function [20], shown in Equation (5), is applied to recommend appropriate course materials:

I_j(θ) = (1.7)^2 / ( [1 + e^{-1.7(θ - b_j(tuned))}] [1 + e^{1.7(θ - b_j(tuned))}] ),   (5)

where θ stands for the learner's new ability estimated after the n preceding course materials, P_j(θ) is the probability of a correct response to the j-th course material for a learner with ability θ, and b_j(tuned) is the tuned difficulty of the j-th course material. The courses recommendation agent thus recommends a series of course materials to a learner with ability θ in descending order of the information function's value; the course material with the maximum information value for that ability receives the highest recommendation priority.

3. Experiments

In order to evaluate the performance of our proposed personalized e-learning system for course material recommendation, we developed a system prototype that provides personalized e-learning services.

3.1 Experimental Environment

Our system prototype runs on Windows 2000 with the IIS 5.0 Web server; the front-end scripting language is PHP 4.0 and the database is Microsoft SQL Server 2000. At present, our system contains only a small number of course materials collected from the Internet. For the neural network course category in the courses database, we have predefined 3 course units and collected 43 course materials so far. A course unit in our system is a collection of highly relevant course materials, such as the "Back-propagation" unit in the neural network category. These course materials were classified into predefined course units in advance, and each has a corresponding difficulty initialized by experts.

3.2 Experimental Results and Analysis

Next, we use the "Perceptron" unit of the "Neural Network" (NN) category to illustrate the experimental results. There are 20 course materials in "Perceptron"; 18 learners logged into our system, producing 195 records in the user profile database. All learners are Master's students, and 13 of them have taken a neural network course.
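To make the ranking step of Section 2.3.2 concrete, the information function of Equation (5) and the resulting ranking can be sketched in Python. The `recommend` helper and its dictionary layout are illustrative assumptions of this sketch, not the paper's implementation:

```python
import math

def information(theta, b_tuned):
    """Equation (5): item information for a learner with ability theta and a
    course material with tuned difficulty b_tuned (scaling constant 1.7)."""
    x = 1.7 * (theta - b_tuned)
    return 1.7 ** 2 / ((1.0 + math.exp(-x)) * (1.0 + math.exp(x)))

def recommend(theta, materials, top_k=3):
    """Rank course materials by information value, highest first.

    materials maps a material id to its tuned difficulty (illustrative layout).
    """
    ranked = sorted(materials,
                    key=lambda m: information(theta, materials[m]),
                    reverse=True)
    return ranked[:top_k]
```

The information function peaks when the material's difficulty equals the learner's ability, which is why ranking by it recommends materials matched to the learner.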
3.2.1 Difficulties' Adjustment of Course Materials

In our system, the difficulties of course materials are dynamically tuned by the proposed collaborative voting approach after learners give feedback responses. We use three course materials, Course A, Course B, and Course C, to illustrate the adjustment; in this work, Courses A, B, and C are difficult, moderate, and easy course materials, respectively. We normalize the values of the difficulties and learners' abilities to the range -1 to 1. Figure 4 shows the tuned difficulty curves of the three course materials. The tuning margin is large at the beginning because the initial difficulty of a course material cannot fit learners' abilities. We can observe that the curves gradually approach a stable status as the number of clicks increases, i.e., the difficulties of course materials can be correctly determined by a large amount of learners' voting.

Figure 4. The tuned curves of the difficulties of course materials

3.2.2 The Adaptation of Learners' Abilities

Learners' abilities are dynamically evaluated according to their responses after they visit the recommended course materials: if learners understand the content of a recommended course material, their abilities are promoted; otherwise, their abilities are lowered. We select three learners with various learning abilities to illustrate the experimental results.
Figure 5 shows the adaptation curves of the three learners' abilities. We can observe that learners' abilities are tuned drastically at the beginning; as learners study appropriate course materials during the learning process, their abilities gradually approach a stable status. Moreover, Figure 6 shows the relationship between the difficulties of the clicked course materials and the adaptation of learner A's ability. In this figure, we assume the learner answered "yes" to the question "Can you understand the content of the recommended course material?" for all twenty clicked course materials. We find that understanding a recommended course material of higher difficulty contributes a larger upward adjustment of the learner's ability, whereas understanding a course material of lower difficulty contributes a smaller one. Figure 7 shows the relationship of the learners'

abilities with the difficulties of the recommended course materials. We find that the difficulties of the recommended course materials are highly relevant to the learners' abilities. This result shows that our system can indeed recommend appropriate course materials according to different learners' abilities.

Figure 5. The adaptation of learners' abilities

Figure 6. The relationship between the difficulties of clicked course materials and the adjustment of learner A's ability

Figure 7. The relationship between learner A's ability and the difficulties of recommended course materials

3.3 Satisfaction Evaluation

Finally, we analyze learners' feedback information to evaluate whether the recommended course materials satisfy most learners' requirements from two different views, i.e. the learners' and the course materials' viewpoints. The results are illustrated in Table 1. We collect the learners' responses to the question "Can you understand the course material?". From the learners' viewpoint, the average degree of understanding of the recommended course materials is 0.8, indicating that learners' comprehension of the recommended course materials is high. From the course materials' viewpoint, the average degree to which a recommended course material can be comprehended by learners is 0.84, indicating that most recommended course materials can be comprehended by learners. Moreover, from the two viewpoints, the average difficulty of the course materials recommended by our system is 1.764 and 1.596, respectively, which is close to 2; that is, most learners consider the recommended course materials to be of moderate difficulty. This phenomenon shows that our system can recommend appropriate course materials and indeed satisfies most learners' personalized requirements.

4. Conclusion

In this study, we propose a personalized e-learning system based on Item Response Theory, termed PEL-IRT, which estimates learners' abilities online in order to recommend appropriate course materials. It provides personalized Web-based learning according to the course materials learners have visited and the learners' responses. Moreover, the difficulties of course materials are automatically adjusted by the proposed collaborative voting approach. Experimental results show that the proposed system can immediately provide personalized course material recommendations based on learners' abilities and speed up learning; furthermore, learners only need to answer simple questionnaires to receive personalized services.

Table 1. The learners' satisfaction evaluation

(a) Can you understand the content of the course materials? (1: yes, 0: no)

Viewpoint | The learners' comprehension degree for the course materials recommended by our system
Learners' viewpoint | 0.8
Course materials' viewpoint | 0.84

(8) (b) How do you think about the difficulty of the course material? (0: very easy, 1: easy, 2: moderate, 3: hard, 4: very hard)) Viewpoint. The average difficulty of the first recommended course material. Learners’ viewpoints. 1.764. Course materials’ viewpoints. 1.596. References [1] [2] [3] [4] [5] [6] [7]. [8]. [9] [10]. [11]. [12]. [13] [14]. The investigation of Yam, Available at http://survey.yam.com. Barker, K., “E-learning in Three Step,” School business affairs, 2002, Available at http://www.asbointl.org. International Data Corporation, Available at http://www.idc.com. E-learning in Nation Tsing Hua University, Available at http://elearn.cc.nthu.edu.tw. E-learning in Cisco, Available at http://www.cisco.com. Distance Learning Resources Network (DLRN), Available at http://www.dlrn.org. Brusilovsky P., “Adaptive and Intelligent Technologies for Web-based Education,” In Rollinger, C.; Peylo, C. (eds.), Special Issue on Intelligent Systems and Teleteaching, Künstliche Intelligenz, vol. 4, pp. 19-25, 1999. Kao T. T., Personalization Courseware Navigation Using Grey Ranking Analysis, Master’s thesis, Department of Electronic Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, 2001. Khan B. H., Web Based Instruction, Englewood Cliffs, New Jersey: Educational Technology Publications, 1997. Balabanovic M. and Shoham Y., “Fab: Content-based, Collaborative Recommendation,” Communications of the ACM, vol. 40, no. 3, pp. 66-72, March 1997. Fu X., Budzik, J. and Hammond J. K., “Mining Navigation History for Recommendation,” Proceedings of the 2000 International Conference on Intelligent User Interfaces, pp. 106-112, 2000. Baker F. B., “The Basics of Item Response Theory,” ERIC Clearinghouse on Assessment and Evaluation, University of Maryland, College Park, MD, Available at http://ericae.net/irt/baker. Hambleton R. K., Item Response Theory: Principles and Applications, Boston, Kluwer-Nijhoff Publisher, 1985. 
Horward W., Computerized Adaptive Testing: A Primer, Hillsdale, New Jersey: Lawerence Erwrence Erlbaum Associates, 1990.. [15] Hulin C. L., Drasgow F. and Parsons C. K., Item Response Theory: Application to Psychological Measurement, Homewood, IL: Dow Jones Irwin, 1983. [16] Hsu T. C. and Sadock S. F., Computer-assisted Test Construction: A State of Art, TME report 88. Princeton, New Jersey, Eric on Test, Measurement, and Evaluation, Educational Testing Service, 1985. [17] OEFL, Avaliable at http://www.toefl.org. [18] GRE, Avaiable at http://www.gre.org. [19] GMAT, Available at http://www.gmat.org. [20] Hambleton R. K., Swaminathan H. and Rogers H. J., Fundamentals of Item Response Theory, Newbury Park: Sage, 1991. [21] Lin Y. T., Tseng S. S. and Jiang M. F., “Voting Based Collaborative Platform for Internet Content Selection,” The 4th Global Chinese Conference on Computers in Education Proceeding, Singapore, 2000. [22] Walope R. E., Myers R. H. and Myers, S. L., Probability and Statistics for Engineers and Scientists, sixth edition, Prentice Hall, New Jersey, 1998. [23] Likert R., A Technique for the Measurement of Attitudes, New York: Archives of Psychology, 1932..
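PEL-IRT models each course material with an item characteristic function that has a single difficulty parameter and estimates a learner's ability by maximum likelihood from the learner's yes/no feedback. The paper does not reproduce its estimation procedure in code, so the following is a minimal sketch assuming the one-parameter logistic (Rasch) form of that function; the Newton-Raphson iteration, the iteration count, and the clamp to [-3, 3] are illustrative assumptions, not the system's exact implementation.

```python
import math

def rasch_prob(theta, b):
    """P(understood) under a one-parameter (single-difficulty) logistic model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, difficulties, iters=25, bound=3.0):
    """Newton-Raphson MLE of a learner's ability theta.

    responses    -- list of 0/1 answers ("understood the material" feedback)
    difficulties -- matching list of item difficulty parameters b
    """
    theta = 0.0
    for _ in range(iters):
        probs = [rasch_prob(theta, b) for b in difficulties]
        grad = sum(u - p for u, p in zip(responses, probs))   # dL/dtheta
        info = sum(p * (1.0 - p) for p in probs)              # Fisher information
        if info < 1e-9:
            break
        theta += grad / info
        # The MLE diverges on all-correct or all-wrong patterns; clamp it.
        theta = max(-bound, min(bound, theta))
    return theta
```

A learner who understands three of five materials with difficulties spread around zero is estimated slightly above average ability, matching the intuition behind Figures 5 and 6.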

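The collaborative voting approach adjusts a material's difficulty from learners' questionnaire votes on the 0-to-4 scale of Table 1(b). The paper's exact voting formula is not reproduced here; as a rough sketch, one could blend the current difficulty parameter with newly mapped votes using a pseudo-count prior. The linear vote-to-difficulty mapping and the prior_weight value below are assumptions for illustration only.

```python
def update_difficulty(current_b, votes, prior_weight=10):
    """Blend a material's current difficulty with new learner votes.

    current_b    -- current difficulty parameter on a [-3, 3] scale
    votes        -- learner votes on the questionnaire scale
                    (0: very easy .. 4: very hard)
    prior_weight -- pseudo-count giving inertia to the current value
    """
    if not votes:
        return current_b
    # Map each 0..4 vote linearly onto the [-3, 3] difficulty scale,
    # so a "moderate" vote of 2 maps to 0.
    mapped = [(v - 2) * 1.5 for v in votes]
    total = prior_weight * current_b + sum(mapped)
    return total / (prior_weight + len(votes))
```

With this shape of update, a burst of "very hard" votes pulls the difficulty upward gradually rather than overwriting it, which is the usual motivation for collaborative adjustment of item parameters.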