
To demonstrate the performance of the proposed top-down approach to algorithmic music composition, we have implemented a music composition system on the World Wide Web (http://avatar.cs.nccu.edu.tw/~stevechiu/cms/experiment2). Our music composition system was implemented in Java along with jMusic [42] and Weka [51]. Both jMusic and Weka are open source packages. jMusic is a Java library written for musicians. It is designed to assist the compositional process by providing an environment for musical exploration, musical analysis and computer music education. jMusic supports a music data structure based upon note/sound events, and provides methods for organizing, manipulating and analyzing musical data. Weka is a collection of libraries for data mining tasks. It contains tools for data pre-processing, classification, regression, clustering, association rules, and visualization. In our implementation, jMusic is utilized to extract MIDI messages, maintain the music data structure and output MIDI messages. The chord assignment algorithm in the melody style analysis component is also developed with jMusic. Weka is used to implement the music style mining component.
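As a minimal sketch of how jMusic can handle the MIDI input/output and music data structure tasks described above, the following example reads a MIDI file into jMusic's Score structure, traverses its notes, and writes the result back out. The file names are illustrative assumptions; this is not the system's actual code.

    import jm.music.data.Note;
    import jm.music.data.Part;
    import jm.music.data.Phrase;
    import jm.music.data.Score;
    import jm.util.Read;
    import jm.util.Write;

    public class MidiIOExample {
        public static void main(String[] args) {
            // Read a MIDI file into jMusic's Score/Part/Phrase/Note structure.
            Score score = new Score("input");
            Read.midi(score, "input.mid");   // hypothetical input file

            // Traverse the music data structure, e.g. to extract pitch information.
            for (Part part : score.getPartArray()) {
                for (Phrase phrase : part.getPhraseArray()) {
                    for (Note note : phrase.getNoteArray()) {
                        System.out.println("pitch=" + note.getPitch()
                                + " duration=" + note.getRhythmValue());
                    }
                }
            }

            // Write the (possibly transformed) score back out as MIDI messages.
            Write.midi(score, "output.mid"); // hypothetical output file
        }
    }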

Little attention in the research literature has been paid to the problem of evaluating the music generated by algorithmic composition systems. This stems from the fact that evaluation of aesthetic value in works of art often comes down to individual subjective opinion. The majority of music composition systems proposed in the research literature evaluate performance only by presenting examples of the composed works. Some studies performed qualitative analysis by asking subjects about their preference for the generated music. Only a few studies have conducted quantitative performance experiments by asking subjects to discriminate system-generated music from human-composed music.

However, to the best of our knowledge, no research on algorithmic composition has performed a comparative performance analysis among different systems. This is partly due to the limited availability of implementations of other music composition systems.

To evaluate the effectiveness and efficiency of the proposed music generation approach, three experiments were performed. The first experiment is a test designed to discriminate system-generated music from human-composed music. The second experiment tests whether the music style of the generated music is similar to that of the given music objects. The third experiment, designed to evaluate the efficiency of the proposed approach, measures the elapsed time of music generation. Finally, a case study is given to demonstrate an example generated by our system.

6.1 Turing-like Test

It is difficult to evaluate the effectiveness of a computer music composition system because the evaluation of effectiveness in works of art often comes down to subjective opinion. In 2001, Pearce addressed this problem and proposed a method for evaluating computer music composition systems [37]. The present study adopts this method to design the experiments.

Table 1: The results of the discrimination test.

(SD: standard deviation, DF: degrees of freedom, t: t statistic)

                                 Mean    SD      DF    t      P-value
    All subjects                 0.522   0.115   35    1.16   0.253
    All subjects except experts  0.503   0.106   31    0.166  0.869

The proposed system can be considered successful if the subjects cannot distinguish the system-generated music from the human-composed music. There were 36 subjects, including four well-trained music experts. The prepared dataset consisted of 10 machine-generated music objects and 10 human-composed music objects. The latter comprised “Beyer 8”, “Beyer 11”, “Beyer 35”, “Beyer 51”, “Through All Night”, “Beautiful May”, “Listen to Angel Singing”, “Melody”, “Moonlight”, and “Up to Roof.” These music objects are all piano pieces containing melody and accompaniment. The 20 music objects were randomly ordered and presented to the subjects, who were asked to listen to each music object and determine whether it was system-generated or human-composed. The proportion of correctly discriminated music objects was calculated for each subject (the mean in Table 1 is the average of these accuracies).

The significance test was performed using a one-sample t-test against 0.5 (the expected accuracy if subjects discriminated at random).
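For concreteness, the statistics in Table 1 can be reproduced, up to rounding of the reported mean and standard deviation, with a standard one-sample t-test. The sketch below assumes the Apache Commons Math library for the t distribution; this library is not a stated dependency of the system and is used here only for illustration.

    import org.apache.commons.math3.distribution.TDistribution;

    public class DiscriminationTTest {
        public static void main(String[] args) {
            // Values for all subjects, taken from Table 1.
            double mean = 0.522;  // mean discrimination accuracy
            double sd   = 0.115;  // standard deviation
            int n       = 36;     // number of subjects
            double mu0  = 0.5;    // expected accuracy under random guessing

            // One-sample t statistic: t = (mean - mu0) / (sd / sqrt(n)).
            double t = (mean - mu0) / (sd / Math.sqrt(n));  // about 1.15 (1.16 in Table 1)

            // Two-tailed p-value from the t distribution with n - 1 degrees of freedom.
            TDistribution dist = new TDistribution(n - 1);
            double p = 2.0 * (1.0 - dist.cumulativeProbability(Math.abs(t)));  // about 0.26

            System.out.printf("t = %.2f, p = %.3f%n", t, p);
        }
    }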

The results of the experiment are shown in Table 1. Since neither t statistic reaches significance, the results indicate that it is difficult to discriminate the system-generated music objects from the human-composed ones. The accuracy of all subjects (including the experts) is slightly higher than that of the non-experts alone, presumably because the experts possess extensive musical backgrounds.

6.2 Effectiveness Evaluation for Styled Composition

In the second experiment, an attempt was made to evaluate whether or not the music style of the system-generated music is similar to that of the given music. The proposed system was made available to the subjects on the World Wide Web at http://avatar.cs.nccu.edu.tw/~stevechiu/cms/experiment2. For each round of music generation, subjects were asked to give a score from 0 to 3 denoting the degree to which they felt the generated music was dissimilar or similar to the given music. Each subject repeated this process three times. A total of 31 subjects performed the test, with a resulting mean score of 1.405 and a standard deviation of 0.779.

6.3 Efficiency Evaluation of Music Generation

To evaluate the response time of the developed music composition system, the third experiment was conducted on an IBM desktop computer with a 2.4 GHz Intel(R) Pentium(R) quad-core processor and 4 GB of main memory, running the Linux 2.6 operating system.

The database contained 39 music objects collected from the Internet, with an average of 145.2 notes per music object. The analysis component, especially the motif mining, takes most of the execution time of the whole process. Since the analysis component is executed offline rather than online, Figure 15 reports only the elapsed time of the online steps, i.e., learning from the selected music examples and generating a new music object, as a function of the number of selected music examples. It can be seen that even as the number of selected music examples increases, the elapsed time of the online processing remains below 10 milliseconds.
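The online elapsed time reported in Figure 15 can be obtained with a simple wall-clock timer around the online steps. The sketch below is only illustrative: composeMusic is a hypothetical placeholder for the system's online learning and generation routine, which is not a published API.

    public class TimingExperiment {
        // Hypothetical placeholder for the online learning and generation step.
        static void composeMusic(int numSelectedExamples) {
            // ... learn from the selected music examples and generate a new music object ...
        }

        public static void main(String[] args) {
            // Measure elapsed time as a function of the number of selected music examples.
            for (int numExamples = 1; numExamples <= 10; numExamples++) {
                long start = System.nanoTime();
                composeMusic(numExamples);
                double elapsedMs = (System.nanoTime() - start) / 1_000_000.0;
                System.out.println(numExamples + " examples: " + elapsedMs + " ms");
            }
        }
    }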

Figure 15: Elapsed time of the online process of our system.

6.4 Case Study

The results of the proposed approach are illustrated with an example. Using six music objects as input, “Beyer 55,” “Grandfather’s Clock,” “Little Bee,” “Little Star,” “My Family,” and “Ode to Joy,” the obtained result is in AABA form. The resulting phrase arrangements are 1-1-2 in Section A and 2-2 in Section B. At the melody style mining step, the following patterns were discovered: {{C}, {G}, {C, G}}, {(G, C), (C, G)}, {<C, G>, <G, C>, <C, G, C>}. The chord generation component then generated the chord progression <C, C, G, C> for Section A and <C, G, G, C> for Section B. The motif selection model chose the motif “sol-mi-mi” for generating the first phrase, and the melody generation component developed this motif into the melody of the second phrase in Section A. The resulting musical composition is shown in Figure 16.
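To make the case study concrete, the following jMusic sketch builds the chosen “sol-mi-mi” motif as a Phrase and writes it out as MIDI. The octave (around middle C), the rhythm values, and the output file name are illustrative assumptions and are not taken from the piece in Figure 16.

    import jm.JMC;
    import jm.music.data.Note;
    import jm.music.data.Part;
    import jm.music.data.Phrase;
    import jm.music.data.Score;
    import jm.util.Write;

    public class MotifExample implements JMC {
        public static void main(String[] args) {
            // The "sol-mi-mi" motif: G4, E4, E4 (octave and rhythms are assumptions).
            Phrase motif = new Phrase();
            motif.addNote(new Note(G4, CROTCHET));
            motif.addNote(new Note(E4, QUAVER));
            motif.addNote(new Note(E4, QUAVER));

            // Wrap the phrase in jMusic's Part/Score structure and write it as MIDI.
            Part part = new Part("Melody", PIANO);
            part.addPhrase(motif);
            Score score = new Score("Motif example");
            score.addPart(part);
            Write.midi(score, "motif.mid");  // hypothetical output file
        }
    }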

Figure 16: An example of composed music using the proposed approach.
