
Chapter 6: System Implementation

6.2 The Implementation of LPM System

Figure 6.3: The interface of LPM

In this section, we describe the LPM system in detail. Figure 6.3 shows the interface of LPM. It includes three parts: the input data, seven components, and the command buttons. The details of each component are described in the following subsections.

• Input Data

There are three input fields: DBURL, TableName, and minimal support. DBURL denotes the connection path the system uses to access the database, and TableName denotes the name of the input table of web logs. Finally, the minimal support control allows the user to set a support value between 0 and 1.
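The input handling can be sketched as follows; the class and field names here are illustrative assumptions, not taken from the actual system, and the example DBURL is only a plausible JDBC form:

```java
// Hypothetical sketch of the three LPM inputs and their validation.
public class LpmInputs {
    final String dbUrl;      // e.g. "jdbc:odbc:weblogdb" (assumed form)
    final String tableName;  // name of the table holding the raw web logs
    final double minSupport; // must lie in [0, 1]

    LpmInputs(String dbUrl, String tableName, double minSupport) {
        if (minSupport < 0.0 || minSupport > 1.0)
            throw new IllegalArgumentException("minimal support must be in [0, 1]");
        this.dbUrl = dbUrl;
        this.tableName = tableName;
        this.minSupport = minSupport;
    }
}
```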

• Preprocess Function

This function preprocesses the input data. In our system, the original web log is the input, and it is transformed into another representation format using the symbols 'A', 'B', etc. The output is shown in Figure 6.4.

Figure 6.4: The output of preprocess phase
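A minimal sketch of such a symbol mapping, assuming each distinct page is assigned a letter in first-seen order (the class name and encoding policy are illustrative assumptions, not the system's actual rule):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: assign each distinct page a symbol 'A', 'B', ... and
// rewrite a learner's page sequence as a string of those symbols.
public class LogPreprocessor {
    private final Map<String, Character> symbols = new LinkedHashMap<>();

    public String encode(String[] pages) {
        StringBuilder sb = new StringBuilder();
        for (String page : pages) {
            Character c = symbols.get(page);
            if (c == null) {
                c = (char) ('A' + symbols.size()); // next unused letter
                symbols.put(page, c);
            }
            sb.append(c);
        }
        return sb.toString();
    }
}
```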

• Sequential Pattern Function

After the original web logs are preprocessed into our representation format, the "SP" button extracts the maximal frequent learning patterns from the learning portfolio. The GSP algorithm is implemented and applied to mine the maximal frequent patterns; the output is shown in Figure 6.5.

Figure 6.5: The result of applying GSP algorithm
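The core of GSP's support counting, deciding whether a candidate pattern occurs as a (not necessarily contiguous) subsequence of a learner's sequence, can be sketched as follows. This is a simplified illustration of the counting step only, not the full candidate-generation algorithm:

```java
// Sketch of GSP's support-counting core: a candidate is frequent when it is
// a subsequence of at least minSupport * |sequences| of the learner sequences.
public class GspSupport {
    // true if 'pattern' is a (not necessarily contiguous) subsequence of 'seq'
    static boolean isSubsequence(String pattern, String seq) {
        int i = 0;
        for (int j = 0; j < seq.length() && i < pattern.length(); j++)
            if (seq.charAt(j) == pattern.charAt(i)) i++;
        return i == pattern.length();
    }

    // fraction of sequences that contain 'pattern' as a subsequence
    static double support(String pattern, String[] sequences) {
        int count = 0;
        for (String s : sequences)
            if (isSubsequence(pattern, s)) count++;
        return (double) count / sequences.length;
    }
}
```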

• Independent Frequent Set Function

IF denotes "Independent Frequent Set". This function modifies the GSP algorithm, and the output of this step is taken as the input of the feature transformation process.

• Feature Transformation Function

In this step, based upon the maximal learning patterns, the original learning sequence of every learner is mapped into a bit vector, where each bit is set to 1 if the corresponding mined maximal learning pattern is a subsequence of the original sequence, and 0 otherwise.

A new table, called "FeatureTransformTable", is created to store this information, with each mined learning pattern as one dimension. We can then use these bit vectors to group the learners into several clusters. Figure 6.6 shows the result of the feature transformation process.

Figure 6.6: The result of Feature Transform of each learner
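The mapping described above can be sketched as follows; the names are illustrative, and the subsequence test mirrors the one used during pattern mining:

```java
// Sketch: map a learner's symbol sequence to a bit vector over the
// mined maximal patterns (1 = pattern is a subsequence of the sequence).
public class FeatureTransform {
    static boolean isSubsequence(String pattern, String seq) {
        int i = 0;
        for (int j = 0; j < seq.length() && i < pattern.length(); j++)
            if (seq.charAt(j) == pattern.charAt(i)) i++;
        return i == pattern.length();
    }

    static int[] toBitVector(String seq, String[] maximalPatterns) {
        int[] bits = new int[maximalPatterns.length];
        for (int i = 0; i < maximalPatterns.length; i++)
            bits[i] = isSubsequence(maximalPatterns[i], seq) ? 1 : 0;
        return bits;
    }
}
```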

• Clustering Function

The ISODATA [16] algorithm is implemented and executed to group learners into several clusters over several iterations. The bit vector in the Cluster Centroid field denotes the representative learning pattern set of a cluster, which will later be used to generate the sequencing rules of SCORM. Figure 6.7 shows the output of this process.


Figure 6.7: The result of Clustering
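The assignment step that ISODATA shares with k-means can be sketched as follows. Note that this is only an illustrative fragment: full ISODATA additionally splits and merges clusters between iterations, which is omitted here.

```java
// Sketch of the per-iteration assignment step: each learner's bit vector
// is assigned to the nearest centroid by squared Euclidean distance.
public class IsodataAssign {
    static int nearestCentroid(double[] v, double[][] centroids) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int c = 0; c < centroids.length; c++) {
            double d = 0;
            for (int i = 0; i < v.length; i++) {
                double diff = v[i] - centroids[c][i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = c; }
        }
        return best;
    }
}
```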

• Decision Tree Function

The ID3 algorithm is implemented and executed by clicking the Decision Tree button, and the output of this process is shown in Figure 6.8.

Figure 6.8: The result of Decision Tree Construction
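The attribute-selection heart of ID3 is the entropy computation over class frequencies; a minimal sketch (the class name is illustrative):

```java
// Sketch: Shannon entropy (base 2) over the class distribution of a node,
// the quantity ID3 minimizes (via information gain) when picking attributes.
public class Id3Entropy {
    static double entropy(int[] classCounts) {
        int total = 0;
        for (int c : classCounts) total += c;
        double h = 0.0;
        for (int c : classCounts) {
            if (c == 0) continue;
            double p = (double) c / total;
            h -= p * (Math.log(p) / Math.log(2)); // log base 2
        }
        return h;
    }
}
```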

• Content Package Function

Finally, based upon the cluster centroid of each cluster, which corresponds to several learning patterns serving as sequencing rules under Sequencing and Navigation (SN) of SCORM 2004, a personalized activity tree is generated for each cluster. To implement our PATC algorithm, the SCOs imported into the SCORM RTE or another SCORM-compliant LMS need to be reorganized and reused in order to generate a content package. Indeed, we never want to create a new content package from scratch; instead, we reuse existing materials. Therefore, we should know the structure of a SCORM content package, as Figure 6.9 shows.

Figure 6.9: SCORM 2004 Content package Concept diagram

A SCORM Content Package contains two major components [2]:

• A special XML document, called the manifest file (imsmanifest.xml), describing the content structure and associated resources of the package. A manifest is required to be present at the root of the content package.

• The physical files making up the content package.

Therefore, we implement our PATC in several steps, as described below:

(1) Getting the information of each learning material

As mentioned above, the granularity of a learning material in our system is the SCO. Fortunately, the SCORM RTE stores all information about SCOs, such as CourseID, ItemIdentifier, path, etc., even when different SCOs belong to different courses imported into the RTE system by different authors. In other words, we achieve reusability by reorganizing different SCOs into one content package. Figure 6.10 shows the detailed information about SCOs in the SCORM RTE, from which we can get the CourseID and item identifier of each SCO.

Figure 6.10: SCO’s information in SCORM’s ItemInfo Table
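A hedged JDBC sketch of this lookup; the ItemInfo column names below are assumptions read off Figure 6.10, not the RTE's documented schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Sketch: fetch each SCO's CourseID, item identifier, and file path
// from the RTE's ItemInfo table (column names are assumptions).
public class ScoCatalog {
    public static final String SQL =
        "SELECT CourseID, ItemIdentifier, Path FROM ItemInfo WHERE CourseID = ?";

    static List<String[]> fetch(Connection conn, String courseId) throws SQLException {
        List<String[]> rows = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setString(1, courseId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next())
                    rows.add(new String[]{ rs.getString(1), rs.getString(2), rs.getString(3) });
            }
        }
        return rows;
    }
}
```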

(2) Parsing physical files of each SCO

Although we know the CourseID and identifier of each SCO, the physical files that make up the SCO are still unknown. To get this information, the "imsmanifest.xml" file, which is the main file in every imported course, is parsed. Since it is XML, an existing parser, Xerces [1], is used. In this way, all information about each SCO's physical files is obtained.
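This parsing can be sketched with the standard JAXP DOM API, for which Xerces is a common underlying implementation. The `file`/`href` names follow the IMS Content Packaging manifest format; namespace handling is omitted for brevity:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

// Sketch: collect the physical-file hrefs listed under the manifest's resources.
public class ManifestParser {
    static List<String> resourceHrefs(InputStream in) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(in);
        List<String> hrefs = new ArrayList<>();
        NodeList files = doc.getElementsByTagName("file");
        for (int i = 0; i < files.getLength(); i++) {
            String href = ((Element) files.item(i)).getAttribute("href");
            if (!href.isEmpty()) hrefs.add(href);
        }
        return hrefs;
    }
}
```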

(3) Generating the main file: “imsmanifest.xml”

In this process, the main component of our activity tree, "imsmanifest.xml", is generated. The W3C (World Wide Web Consortium) defines the Document Object Model (DOM), which allows programs and scripts to dynamically access and update the content and structure of documents. Accordingly, an XML file is created using the DOM structure, and nodes carrying the SCOs' information, including metadata, organizations, and resources, are added. In addition, we also add the sequencing rules defined by PATC in Chapter 3 to the "imsmanifest.xml" file.
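A minimal sketch of building such a manifest skeleton with the W3C DOM API; only the three top-level elements are shown, and SCO nodes and sequencing rules would be appended in the same way:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import java.io.StringWriter;

// Sketch: create a bare imsmanifest skeleton via DOM and serialize it.
public class ManifestBuilder {
    static String build() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element manifest = doc.createElement("manifest");
        doc.appendChild(manifest);
        manifest.appendChild(doc.createElement("metadata"));
        manifest.appendChild(doc.createElement("organizations"));
        manifest.appendChild(doc.createElement("resources"));

        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }
}
```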

(4) Zipping them into one content package

Finally, the Java Zip package is used to create a content package from all the information mentioned above.
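The packaging step can be sketched with the standard `java.util.zip` API; the file layout is simplified here (flat entry names), whereas a real package preserves the directory structure around imsmanifest.xml:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Sketch: write the manifest and each physical file into one .zip package.
public class Packager {
    static void pack(File zipFile, File[] files) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zipFile))) {
            for (File f : files) {
                zos.putNextEntry(new ZipEntry(f.getName()));
                try (FileInputStream in = new FileInputStream(f)) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) > 0) zos.write(buf, 0, n);
                }
                zos.closeEntry();
            }
        }
    }
}
```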

After generating the personalized activity tree, this zip file is imported into SCORM RTE v1.3 beta 3 to verify whether it is compliant with the SCORM standard.

Figure 6.11 shows the result: our final content package works in the RTE.

Figure 6.11: Personalized Activity Tree imported in SCORM RTE
