
Minimum Weight Branching

4.4.1 Minimum Branching Overview

In the previous section we discussed our goal of finding an efficient learning path. This is achieved through the application of the minimum weight branching algorithm. In an e-learning environment, we can apply minimum weight branching to find the least difficult learning path.

Minimum weight branching [27] works in the following way:

A natural analog of a spanning tree in a directed graph is a branching (also called an arborescence). For a directed graph G and a vertex r, a branching rooted at r is an acyclic subgraph of G in which each vertex except r has exactly one outgoing edge, and there is a directed path from every vertex to r. This is sometimes called an in-branching. By replacing "outgoing" with "incoming" in the above definition, we get an out-branching. An optimal branching of an edge-weighted graph is a branching of minimum total weight. Unlike the minimum spanning tree problem, an optimal branching cannot be computed by a greedy algorithm. Edmonds gave a polynomial-time algorithm to find an optimal branching.

Suppose the following: let G = (V, E) be an arbitrary directed graph, let r be the root of G, and let w(e) be the weight of edge e. Consider the problem of computing an optimal branching rooted at r. In the following discussion, assume that all edges of the graph have nonnegative weight. As input, we are given a directed graph, the weight of every edge, and the position of the root node. The goal is to find a tree structure of minimum total edge weight.
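Edmonds' procedure can be sketched as follows. This is a minimal Chu-Liu/Edmonds implementation written for illustration (it is our sketch, not code from this thesis): it repeatedly picks the cheapest incoming edge of every non-root vertex, contracts any cycle among the chosen edges, and repeats on the contracted graph. It computes an out-branching (edges directed away from r); the in-branching used in this chapter is obtained by reversing every edge before calling it.

```python
INF = float("inf")

def min_weight_branching(n, root, edges):
    """Weight of a minimum-weight out-branching rooted at `root`.

    Chu-Liu/Edmonds, O(V*E).  `edges` is a list of (u, v, w) triples
    for a directed edge u -> v of weight w; vertices are 0..n-1.
    Returns None when some vertex cannot be reached from the root.
    """
    total = 0
    while True:
        # 1. cheapest incoming edge for every non-root vertex
        in_w = [INF] * n
        in_u = [-1] * n
        for u, v, w in edges:
            if u != v and v != root and w < in_w[v]:
                in_w[v], in_u[v] = w, u
        if any(v != root and in_u[v] == -1 for v in range(n)):
            return None
        total += sum(in_w[v] for v in range(n) if v != root)
        # 2. look for cycles among the chosen edges
        cid = [-1] * n          # cycle id of each vertex
        seen = [-1] * n
        ncyc = 0
        for v in range(n):
            x = v
            while x != root and seen[x] != v and cid[x] == -1:
                seen[x] = v
                x = in_u[x]
            if x != root and cid[x] == -1:   # a new cycle through x
                cid[x] = ncyc
                y = in_u[x]
                while y != x:
                    cid[y] = ncyc
                    y = in_u[y]
                ncyc += 1
        if ncyc == 0:
            return total        # chosen edges already form a branching
        # 3. contract each cycle into a single super-vertex and repeat,
        #    reducing every edge by the minimum it competes against
        for v in range(n):
            if cid[v] == -1:
                cid[v] = ncyc
                ncyc += 1
        edges = [(cid[u], cid[v], w - (in_w[v] if v != root else 0))
                 for u, v, w in edges if cid[u] != cid[v]]
        n, root = ncyc, cid[root]
```

For instance, with root 0 and edges (0,1,10), (1,2,1), (2,1,1), (0,2,10), (1,3,4), the cheapest incoming edges form the cycle 1-2, which the algorithm contracts before returning the optimal weight 15.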

4.4.2 Example

Suppose we would like to find the least difficult path for a learner to complete a curriculum. In this example, we use eight nodes (courses).

The root is at node 8 and we would like to find the learning sequence path using minimum weight branching. Figure 12 shows the relationships between courseware units. Each node represents one course. The direction of the arrows depicts the order relations between courses. Each arrow is labeled with a difficulty level. In this example, our goal is to find a learning sequence path from each node to node 8. Suppose a learner is currently at node 1; he can choose node 2, node 3, or node 4 as the next course. This process continues until the learner reaches node 8 and all possible paths to node 8 are exhausted.

In total, there are 8 possible paths with varying difficulties to completing the curriculum. To find the best sequence (i.e. the path with the least difficulty), we apply the minimum weight branching algorithm.

Figure 12: Relationship of courseware unit

Figure 12 depicts the initial relationships between each courseware unit.

After applying the minimum weight branching algorithm to find the best learning path, we obtain our result, depicted in Figure 13.

Figure 13: Branching of the relations

As a result, the best learning path for the learner at node 1 is node 1 -> node 4 -> node 7 -> node 8, and the difficulty is 9 (3+3+3).

Here we state all the learning paths and their difficulties.

• The learning path from node 1 is to take another 3 courses (nodes 4, 7, 8), and the difficulty is 9 (3+3+3).

• The learning path from node 2 is to take another 3 courses (nodes 4, 7, 8), and the difficulty is 11 (5+3+3).

• The learning path from node 3 is to take another 2 courses (nodes 6, 8), and the difficulty is 4 (1+3).

• The learning path from node 4 is to take another 2 courses (nodes 7, 8), and the difficulty is 6 (3+3).

• The learning path from node 5 is to take another 1 course (node 8), and the difficulty is 2.

• The learning path from node 6 is to take another 1 course (node 8), and the difficulty is 3.

• The learning path from node 7 is to take another 1 course (node 8), and the difficulty is 3.

Depending on where a learner is in the physical network, the minimum weight branching algorithm can be used effectively to find the best learning path to node 8.

In the next section, we discuss the process of using data allocation to allocate the courseware units in the physical network environment (i.e. the process of mapping the logical view to the physical view). We also discuss the use of a conditional Multi-access Probability algorithm to compute the resultant probability.

4.5 Using data allocation to allocate courseware units in the physical network and compute the probability

In the previous section we discussed using minimum weight branching to find the learning paths in a graph. In this section, we propose using a data allocation scheme to place the courseware units in a physical setting, such as the physical network environment, in a defined order. As we mentioned previously in section 4.2, under the capability indicator structure, each courseware unit is related to other units. Thus, after using the minimum weight branching algorithm to determine the learning sequence, we use data allocation to assign the courseware units to the physical network environment (discussed previously in chapter 3).

Having done this, we then apply a conditional multi-access probability algorithm to obtain the resultant probability.

4.5.1 Conditional Multi-access Probability

The proposed conditional Multi-access Probability algorithm is a modified version of the original Multi-access Probability algorithm (discussed in chapter 3). We use the modified algorithm because the courseware units are defined in a specific order.

After a request is sent by a user, the material provider may need to send several resources back. The access time of these resources may not be equal.

The following is the definition of the multi-access in ordered sequence reliability for courseware unit Si:

CMP(Si) = Pr(S1) * Pr(S2 | S1) * Pr(S3 | S1, S2) * ... * Pr(Si | S1, S2, ..., Si-1)

We use Conditional Multi-access Probability to model the problem.

Initially, given the course relations (obtained from the capability indicators), the difficulties, the reliability of each link, and the physical network topology, our goal is to allocate these courses in a specific order that decreases difficulty and increases access reliability.
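The chained CMP computation can be sketched in code. As a simplifying assumption of ours (not the thesis's full network model), we read each conditional term as the probability that every physical link the next access needs, and that an earlier access has not already exercised, is working:

```python
def conditional_multi_access_probability(course_links, link_reliability):
    """CMP for an ordered sequence of course accesses (a sketch).

    course_links: list of sets; course_links[k] holds the physical links
    the k-th access in the sequence uses.  link_reliability maps each
    link to the probability that it is up.  A link already exercised by
    an earlier access is known to work, so under this simplification it
    contributes to only one factor of the product.
    """
    used = set()
    cmp_value = 1.0
    for links in course_links:
        for link in links - used:      # only the newly required links
            cmp_value *= link_reliability[link]
        used |= links
    return cmp_value
```

For example, with every link at reliability 0.9 and three accesses using links {a}, {a, b}, and {b, c}, the product is 0.9 * 0.9 * 0.9 = 0.729, since links a and b each count only once.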

4.5.3 Example

In this example, we explain the entire process of finding the learning sequence (using the minimum weight branching algorithm), allocating them into the physical network (using the data allocation scheme) and computing the probability (using the conditional Multi-access Probability algorithm).

Suppose we are given the following courseware unit relations and the physical network topology as shown in Figure 14. Suppose the user is at site 5 and begins with course 3. Assume each link probability (in the physical network topology) is 0.9. We would like to find the best learning path for the learner.

Figure 14: Course relations and physical network topology

We summarize this process below:

• Step 1: Apply the minimum weight branching algorithm to generate the learning path (depicted in Figure 15).

• Step 2: Find the learning path for the learner (in this example the learning path is course 3 -> course 6 -> course 8).

• Step 3: Allocate the learner's courses to the physical network using the data allocation scheme.

• Step 4: Compute the CMP (conditional Multi-access Probability) to find the resultant probability.

Figure 15: Branching of physical network topology

Figure 16: Data allocation tree for mesh physical network

After applying the data allocation scheme (step 3), we find that:

• course 3 is at site 5

• course 6 is at site 4, and

• course 8 is at site 2

This is depicted in Figure 16 and Figure 17.

Figure 17: A simple mesh-connected network

Using CMP (Conditional Multi-access Probability) to compute the probability, we get the following answer:

CMP(User) = Pr(course 2) * Pr(course 4 | course 2) * Pr(course 8 | course 2, course 4) = 0.5904899878335

4.5.4 Simulations

In this sub-section we run some simulations to present the results for all the possible probabilities supposing the learner is at different sites.

From Figure 16, simulation 1 supposes the learner is at site 1, simulation 2 supposes the learner is at site 2, and so on.

Simulation 1: CMP(User) = Pr(2) * Pr(4|2) * Pr(6|2, 4) = 0.9304899878335

Simulation 2: CMP(User) = Pr(2) * Pr(4|2) * Pr(8|2, 4) = 0.9304899878335

Simulation 3: CMP(User) = Pr(2) * Pr(6|2) = 0.8095491254

Simulation 4: CMP(User) = Pr(2) * Pr(6|2) = 0.8095491254

Simulation 5: CMP(User) = Pr(8) = 0.899395994

Simulation 6: CMP(User) = Pr(8) = 0.899395994

Simulation 7: CMP(User) = Pr(8) = 0.899395994

In Figure 18 and Figure 19 we graph the simulation results. In Figure 18, the x-axis represents the number of courses taken by a learner and the y-axis represents the successful probability. In Figure 19, the x-axis represents the number of courses taken by a learner and the y-axis represents the difficulty of the courses.

Figure 18: Successful probability

Figure 19: Course Difficulty

In Figure 18, we note that as the number of courses increases, the probability decreases, meaning there is a low probability of success when learning several courses. In Figure 19, we note that as the number of courses a learner studies increases, the difficulty increases.

In the next two sections (section 4.6 and section 4.7), we present the quiz paper problem and the time restriction problem as two examples that effectively demonstrate the applicability of our approach.

4.6 The Quiz Paper problem

The quiz paper problem is about a learner achieving a particular goal using statistics from past quiz papers. The idea is that learners can know what they need to learn based on past quiz statistics. For example, Figure 20 depicts a database of examination papers. This database contains several questions from past examination papers. Each question has a capability indicator and a distribution. The distribution refers to the frequency with which that particular question has been used in a quiz or test (e.g. question 1 has a capability indicator of 9-a-01 and a distribution of 14.5%, question 2 has a capability indicator of 9-a-02 and a distribution of 28.6%, and so on).

Figure 20: Database of examination papers

In section 4.6.1 we state our assumptions, and in section 4.6.2 we present an example.

4.6.1 Assumptions

The goal of this example is for a learner to achieve a predefined score.

4.6.2 Example

Suppose a student needs to achieve 60% of the grade on the quiz paper, and the only information about the examination is the history of quiz paper statistics. In this example, suppose we have 4 nodes (questions with capability indicators), and assume the following capability indicators and distributions:

• Question 1 with a capability index of 9-a-02 and a distribution of 28.6%

• Question 2 with a capability index of 9-a-03 and a distribution of 14.5%

• Question 3 with a capability index of 9-a-05 and a distribution of 28.6%

• Question 4 with a capability index of 9-a-08 and a distribution of 28.6%

Figure 21 shows the distribution of past statistics and presents it in a directed graph.

Figure 21: Course distribution

We then undertake the following steps:

• We input the grade to be achieved, e.g. 60% of the grade on the quiz paper.

• We have the distributions for these questions according to the past history of examinations, as well as the capability indicators.

Figure 22: Results of the quiz paper

To determine the learning path, we consider two factors: the difficulty level associated with each question, and the distribution of each question. The difficulty level associated with a particular question takes precedence over the distribution percentage of the question in determining the learning path.

Figure 22 demonstrates the step-by-step discovery of the learning path. In step 1, we obtain a distribution of 28.6%, and then use the difficulty levels to determine which question (question 2 or question 3) to study next. The difficulty of question 2 is lower than that of question 3, so in step 2 we study question 2 (which has a distribution of 14.5%). In step 3, the difficulty of question 4 is higher than that of question 3, so we study question 3 (which has a distribution of 28.6%). At this point we have achieved 60% (in fact we have achieved 71.7%), thus achieving the goal set out earlier.
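The greedy rule above (follow the least difficult outgoing edge, accumulating distribution until the target is met) can be sketched as follows. The question names, difficulty values, and edge lists below are illustrative assumptions, not data from the thesis:

```python
def plan_quiz_study(start, target, distribution, next_options):
    """Greedy study plan for the quiz-paper problem (a sketch).

    distribution: dict question -> expected score share (e.g. 0.286).
    next_options: dict question -> list of (next_question, difficulty)
    edges in the directed graph.  From the current question we always
    take the least difficult outgoing edge, accumulating score share
    until the target grade is reached or no edges remain.
    """
    path, achieved = [start], distribution[start]
    current = start
    while achieved < target and next_options.get(current):
        # difficulty takes precedence over distribution
        current, _ = min(next_options[current], key=lambda edge: edge[1])
        path.append(current)
        achieved += distribution[current]
    return path, achieved
```

With hypothetical difficulties that make question 2 cheaper than question 3 from question 1, and question 3 cheaper than question 4 from question 2, this reproduces the walk in the text: q1 -> q2 -> q3 with an accumulated share of 71.7%.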

4.7 The Time restricted problem

The time restricted problem deals with learners who have time constraints. In this example, our goal is to meet their requirements within the time constraint. We map the TLCS (Time Limited Candidate Selector) algorithm [28] to distribute time over the learning path. We find time constraints particularly important because time factors can play an important role in exam preparation.

4.7.1 Assumptions

We distribute the given time (requested by the learner) across the required courses (after we have applied the minimum weight branching algorithm to find the learning sequence). Two important parameters in this problem are "learning curves" and "pre-requests" (which we discuss further in section 4.7.2 and section 4.7.4 respectively). In the time restricted problem, we are given the following parameters:

• the time constraints

• the difficulty levels

• the learning curve, and

• the course relations

4.7.2 Learning curves

The concept of a learning curve is that the more time we spend learning a course, the higher the score should be; Figure 23 illustrates this concept. The x-axis is time and the y-axis is the performance the learner achieves; 100% is the highest possible score.

Figure 23: Learning curve

4.7.3 Example

Suppose we have eight courses (Table 5) depicted in the directed graph shown in Figure 24. Each course has an examination date, listed in the table. We assume that the learning curve of each course is one month (meaning that the learner must spend one or more months studying the course in order to pass). The learner starts with calculus (course 1), and the given time constraint is the examination due date. Our goal is to reach (pass the exam of) course 8, with the time constraint distributed across the learning path and a starting time of January 1.

Table 5: The courseware units

Course 1: Calculus (exam in April)
Course 2: Linear Algebra (exam in March)
Course 3: Discrete Mathematics (exam in April)
Course 4: Differential Equation (exam in June)
Course 5: Probability Theory (exam in June)
Course 6: Vector Analysis (exam in July)
Course 7: Concrete Mathematics (exam in July)
Course 8: Pattern Recognition (exam in November)

Figure 24: Relations of the courseware unit

First, we apply the minimum weight branching algorithm to the directed graph (in Figure 24) to get the learning paths shown in Figure 25.

Figure 25: Find the branching

Our result is four learning paths:

• Path 1: Course 1 -> Course 4 -> Course 7 -> Course 8

• Path 2: Course 2 -> Course 4 -> Course 7 -> Course 8

• Path 3: Course 3 -> Course 6 -> Course 8

• Path 4: Course 5 -> Course 8

Because the learner begins at course 1 (calculus), his learning path is path 1 (course 1 -> course 4 -> course 7 -> course 8). We map the TLCS algorithm to sequence the four courses.

1. Sequence the courses in list E.

2. If no courses in list E are late, stop; otherwise, identify the first late course, k.

3. Identify the latest position in the list in which course k would not be late. Place course k in that position if it does not make any of the courses before k late; otherwise place it in late list L. Revise the sequence in list E and return to step 2.

According to the TLCS algorithm, in step 1, the sequence in E is path 1 (course 1 -> course 4 -> course 7 -> course 8). In step 2, we check whether any course is late (e.g. the course 1 exam is in April and we need 1 month to study, so the time is sufficient). There is no course that is late.

At this point, we distribute our time constraint to these four courses.

First we allocate the whole of January to course 1 (we assume the learning curve requires at least one month) and test in April. We then allocate one month (the whole of February) to course 4 and test in June, then one month (the whole of March) to course 7 and test in July, and finally one month (the whole of April) to course 8 and test in November. Under this arrangement, the learner can prepare each course sufficiently before its test.
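The time-distribution step above can be sketched in code. This is our simplified sketch, not the full TLCS algorithm: it assigns consecutive month-long slots along the path and flags any course whose study slot would not finish before its exam month (the full TLCS would then reposition such a course or move it to late list L):

```python
def schedule_courses(path, start_month):
    """Assign consecutive study slots along a learning path (a sketch).

    path: list of (course, study_months, exam_month) with months
    numbered from January = 1.  We assume a course is on time when its
    study finishes strictly before its exam month.  Returns the on-time
    plan as (course, first_month, last_month) triples, plus a list of
    late courses that would need TLCS repositioning.
    """
    month = start_month
    plan, late = [], []
    for course, need, exam in path:
        finish = month + need - 1          # last month of study
        (late if finish >= exam else plan).append((course, month, finish))
        month = finish + 1
    return plan, late
```

Applied to path 1 with the exam months of Table 5 and a one-month learning curve per course, every course fits: calculus gets January (exam in April), and pattern recognition gets April (exam in November), matching the arrangement described above.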

4.7.4 Pre-requests

In the time restricted problem, we discussed distributing time across courses. However, in some cases, we might encounter "pre-request" courses. The concept of a "pre-request" course is that when a learner completes such a course, the learning curve changes. Courses that apply this concept are called "pre-request" courses.

Figure 26 shows four learning curves; we note that after completing a pre-request course, the learning curve changes from learning curve 1 to learning curve 2, and so on.

Figure 26: Score percentage

Example

Suppose the learning curve is typically 2 months for each course and the pre-request course is course 3. After finishing course 3, the learning curve for each course becomes 1 month under the "pre-request" concept.

Suppose the learner begins at course 1 and the time constraint is 7 months. The directed graph for this is depicted in Figure 27. We approach this problem using two methods.

Figure 27: Course relations

Method 1:

In this method, we attempt to find the learning path under normal conditions.

First, we use the minimum weight branching algorithm to obtain the learning path in figure 28.

Figure 28

The learning paths are as follows:

Path 1: Course 1 -> Course 4 -> Course 7 -> Course 8
Path 2: Course 2 -> Course 4 -> Course 7 -> Course 8
Path 3: Course 3 -> Course 6 -> Course 8
Path 4: Course 5 -> Course 8

Since the learner starts from course 1, we use path 1 and distribute the 7 months to these courses (course 1 -> course 4 -> course 7 -> course 8), noting that we need a minimum of 2 months per course:

• course 1: 2 months

• course 4: 2 months

• course 7: 2 months

• course 8: 2 months

We note that the total is 8 months, which exceeds our time constraint of 7 months. We find that, under normal conditions, this method does not achieve our goal.

In the next method, we attempt to find the learning path in the directed graph that contains “pre-request" courses.

Method 2:

We suppose that course 3 is a pre-request course. If the learner starts from course 1, the next logical course is the pre-request course (i.e. course 3). From there, we continue to follow the learning path in Figure 28 to course 8. Thus, our path is course 1 -> course 3 -> course 6 -> course 8. We show the time distribution below:

• course 1: 2 months

• course 3: 2 months

• course 6: 1 month (it becomes one month because of pre-request course 3)

• course 8: 1 month

With this method, the total time spent is 6 months, achieving our goal within the time constraint of 7 months.
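The two totals above follow from a simple rule: every course costs the normal learning curve until the pre-request course is completed, after which later courses cost the shorter one. A minimal sketch of that rule, with course names and month values taken from this example only:

```python
def path_months(path, prereq, normal=2, boosted=1):
    """Total study months along a learning path with the "pre-request"
    effect (a sketch).  Once the pre-request course is completed, every
    later course on the path needs only the shorter (boosted) learning
    curve: 2 months normally and 1 month afterwards in this example.
    """
    total, boosted_now = 0, False
    for course in path:
        total += boosted if boosted_now else normal
        if course == prereq:       # learning curve changes from here on
            boosted_now = True
    return total
```

Method 1's path c1 -> c4 -> c7 -> c8 never touches pre-request course c3, so it costs 2 + 2 + 2 + 2 = 8 months; method 2's path c1 -> c3 -> c6 -> c8 costs 2 + 2 + 1 + 1 = 6 months.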

The results are shown below in figure 29.

Figure 29: Result learning path

Having demonstrated the two methods above, we can conclude that if a directed graph contains a pre-request course, method 1 (our original algorithm) is not suitable. We therefore modify our algorithm in method 2 by changing the learning path to suit the pre-request course.

Summary

In this chapter, we discussed our model and how it maps the logical view to the physical view. First, we obtained a directed graph containing difficulty levels, time constraints, capability indicators, course relations and later, learning curves. Using the minimum weight branching algorithm, we were able to find the learning path of the user that took these parameters into account and modeled them in the physical environment.

We demonstrated the applicability of this approach through several examples.

In chapter 5 we will demonstrate the applicability of our model to some physical networks, such as the ARPA network, the Pacific Basin network, and so forth.

Chapter 5

Illustrative examples and simulation results

In this chapter, we use several examples to examine course access reliability from the logical view to the physical view. We demonstrate the applicability of this approach in four different physical networks: the ARPA network, the Pacific Basin network, the TANet network, and the three-cube network. We use the logical view depicted in Figure 24 as an example and apply it to the physical networks mentioned above. Figure 25 is redrawn as Figure 30; we map the courseware units into each physical network, compute the difficulties of each learning path, and compute their successful probability in the physical network.

Figure 30: Learning paths

All the possible paths for each course in Figure 30:

Path for course 1: Course 1 -> Course 4 -> Course 7 -> Course 8
Path for course 2: Course 2 -> Course 4 -> Course 7 -> Course 8
Path for course 3: Course 3 -> Course 6 -> Course 8
Path for course 4: Course 7 -> Course 8
Path for course 5: Course 5 -> Course 8
Path for course 6: Course 6 -> Course 8
Path for course 7: Course 7 -> Course 8

ARPA network:

Consider the physical network topology in Figure 31, which consists of 21 nodes and 26 links; the user is at node 11, and the data are stored at various locations.

Figure 31: ARPA network

The allocation tree of the ARPA network is depicted below:

Figure 32: Allocation of ARPA network

We wish to determine the probability that a user can finish a course successfully in the ARPA network environment.

From Figure 32, suppose we run simulations from all courses; for example, simulation 1 starts at course 1, simulation 2 starts at course 2, and so on. Our results are as follows:

Simulation 1

• Simulation 1 needs to take another 3 courses (c4, c7 and c8 in figure

