
References


[1] Alonso, M., David, B., and Richard, G., “Tempo and beat estimation of musical signals,” Journal of the Acoustical Society of America, 2004.

[2] Adams, B., Dorai, C. and Venkatesh, S., “Towards automatic extraction of expressive elements from motion pictures: Tempo,” IEEE International Conference on Multimedia and Expo, volume II, New York City, USA, July 2000, pp. 641–645.

[3] Aucouturier, J.J., and Pachet, F., “Representing musical genre: a state of the art,” Journal of New Music Research, Volume 32, Issue 1, pages 83–93, March 2003.

[4] Baumann, S., and Klüter, A., “Super-convenience for Non-musicians: Querying MP3 and the Semantic Web,” Proceedings of the International Symposium on Music Information Retrieval, Paris, France, 2002.

[5] Calder, A.J., Burton, A. M., Miller, P., Young, A. W., and Akamatsu, S., “A principal component analysis of facial expressions,” Vision Research 41, 1179–1208, 2001.

[6] Cosi, P., De Poli, G., and Lauzzana, G., “Auditory Modelling and Self-Organizing Neural Networks for Timbre Classification,” Journal of New Music Research, 23, pp. 71–98, 1994.

[7] Dixon, S., “A lightweight multi-agent musical beat tracking system,” in PRICAI 2000 Topics in Artificial Intelligence: 6th Pacific Rim International Conference on Artificial Intelligence, pages 778–788, Berlin: Springer, 2000.

[8] Dixon, S., “An empirical comparison of tempo trackers,” Proceedings of the 8th Brazilian Symposium on Computer Music, 2001.

[9] Dixon, S., “On the analysis of musical expression in audio signals,” SPIE, 2003.

[10] Dellaert, F., Polzin, T. and Waibel, A., “Recognizing Emotion In Speech,” Proc. ICSLP '96.

[11] Farnsworth, P.R., The social psychology of music, The Dryden Press, 1958.

[12] Feng, Y., Zhuang, Y., and Pan, Y., “Music retrieval by detecting mood via computational media aesthetics,” in Proceedings of the IEEE/WIC International Conference on Web Intelligence (WI’03), pp. 235–241, Oct 2003.

[13] Freeman, J. A., and Skapura, D. M., Neural networks: algorithms, applications, and programming techniques, Addison-Wesley, Reading, Massachusetts, 1992.

[14] Foote, J. and Uchihashi, S., “The beat spectrum: a new approach to rhythm analysis,” IEEE International Conference on Multimedia and Expo (ICME 2001), 2001.

[15] Gunn, S. R., “Support vector machines for classification and regression,” Technical Report, University of Southampton, 1998.

[16] Goto, M. and Muraoka, Y., “A real-time beat tracking system for audio signals,” in Proceedings of the International Computer Music Conference, Computer Music Association, San Francisco, CA, 1995.

[17] Goto, M. and Muraoka, Y., “An audio-based real-time beat tracking system and its applications,” in Proceedings of the International Computer Music Conference, Computer Music Association, San Francisco, CA, 1998.

[18] Hevner, Kate, “Experimental studies of the elements of expression in music,” American Journal of Psychology, Vol. 48, No. 2, pp. 246-268, Apr., 1936.

[19] Huron, D., “Perceptual and cognitive applications in music information retrieval,” International Symposium on Music Information Retrieval, 2000.

[20] Huron, D., “The ramp archetype and the maintenance of passive auditory attention,” Music Perception, 10(1), 83–92, 1989.

[21] Huron, D., and Kinney, D., “Relation of pitch height to perception of dominance/submissiveness in musical passages,” SRI International’s STAR Laboratory, 2003.

[22] İzmirli, Ö., “Template based key finding from audio,” Proceedings of the International Computer Music Conference (ICMC 2005), Barcelona, Spain, 2005.

[23] İzmirli, Ö., “Tonal similarity from audio using a template based attractor model,” ISMIR, 2005.

[24] İzmirli, Ö., “An algorithm for audio key finding,” 1st Annual Music Information Retrieval Evaluation eXchange (MIREX 2005), 2005.

[25] ISO/IEC 11172-3, “Information Technology: Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s, part 3: audio,” 1993.

[26] Juslin, P.N., “Communication of emotion in music performance: A review and a theoretical framework,” in P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 309–337), New York: Oxford University Press, 2001.

[27] Juslin, P.N., Friberg, A., Schoonderwaldt, E., and Bresin, R., “Automatic Real-Time Extraction of Musical Expression,” in Proceedings of the International Computer Music Conference, 2002.

[28] Juslin, P.N., and Laukka, P., “Improving emotional communication in music performance through cognitive feedback,” Musicae Scientiae, Vol. 4, No. 2, pp. 151–183, 2000.

[29] van de Laar, B., “Emotion detection in music, a survey,” 4th Twente Student Conference on IT, Enschede, 30 January 2006.

[30] Juslin, P.N., “Cue utilization in communication of emotion in music performance: Relating performance to perception,” J. Experimental Psychology, 26, 2000, pp. 1797–1813.

[31] Krumhansl, Carol L., “Music: A Link Between Cognition and Emotion,” American Psychological Society, 2002.


[32] Katayose, H., Imai, M., and Inokuchi, S., “Sentiment extraction in music,” Proceedings of the 9th Int. Conf. on Pattern Recognition, Vol. 2, pp. 1083–1087, 1988.

[33] Kim, H.G., Moreau, N., and Sikora, T., MPEG-7 Audio and Beyond: audio content indexing and retrieval, John Wiley and Sons, Ltd, 2005.

[34] Kohonen, T., Self-Organizing Maps, 3rd Edition, Springer, 2001.

[35] Kohonen, T., Self-Organization and Associative Memory, Springer-Verlag, 1984.

[36] Krumhansl, C. L., Cognitive Foundations of Musical Pitch, Oxford University Press, New York, 1990.

[37] Lippmann, R. P., “An Introduction to Computing with Neural Nets,” IEEE ASSP Magazine, 1987.

[38] Liu, C.C., Hsu, J.L., and Chen, A. L. P., “Efficient Repeating Pattern Finding in Music Databases,” Proceedings of the ACM 7th International Conference on Information and Knowledge Management, 1998.

[39] Liu, D., Lu, L., and Zhang, H.J., “Automatic mood detection from acoustic music data,” Proceedings of the International Symposium on Music Information Retrieval, Baltimore, MD, USA, 2003.

[40] Liu, D., Lu, L., and Zhang, H.J., “Automatic mood detection and tracking of music audio signals,” IEEE Transactions on Audio, Speech and Language Processing, vol. 14, no.1, January 2006.

[41] Li, T., and Ogihara, M., “Detecting emotion in music,” in International Symposium on Music Information Retrieval, 2003.

[42] Li, T., and Ogihara, M., “Content-based music similarity search and emotion detection,” ICASSP, 2004.

[43] Large, E.W., “Beat tracking with a nonlinear oscillator,” in Proceedings of the IJCAI’95 Workshop on Artificial Intelligence and Music, 1995.

[44] Laroche, J., “Estimating Tempo, Swing and Beat Locations in Audio Recordings,” in Proc. Int. Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), IEEE, pp. 135–139, Mohonk, NY, 2001.

[45] Logan, B., “Mel Frequency Cepstral Coefficients for Music Modeling,” In International Symposium on Music Information Retrieval, 2000.

[46] Lippmann, R. P., “An Introduction to Computing with Neural Nets,” IEEE ASSP Magazine, April 1987, pp. 4–22.

[47] Meyer, L.B., Emotion and meaning in music, University of Chicago Press, Chicago, 1956.

[48] Neighbour, O.W., “Schoenberg, Arnold,” in The New Grove Dictionary of Music and Musicians, ed. S. Sadie and J. Tyrrell (London: Macmillan, 2001), xxii, 577–604.

[49] Polzin, T.S. and Waibel, A.H., “Detecting Emotions in Speech,” in Proceedings of Cooperative Multimodal Communication, 1998.


[50] Petrushin, V.A., “Emotion in speech: Recognition and application to call centers,” in Proceedings of the Artificial Neural Networks In Engineering ‘99, 1999.

[51] Russell, J.A., “Core affect and the psychological construction of emotion,” Psychological Review, Vol. 110, No. 1, pp. 145–172, Jan 2003.

[52] Russell, J.A., “A circumplex model of affect,” J. Personality Social Psychol., 39, 1161–1178, 1980.

[53] Russell, J.A. and Bullock, M., “Multidimensional scaling of emotional facial expressions: similarity from preschoolers to adults,” J. Personality Social Psychol. 48, 1290–1298, 1985.

[54] Rosenthal, D., “Emulation of human rhythm perception,” Computer Music Journal, vol. 16, no. 1, pp. 64–76, 1992.

[55] Schapire, R.E., and Singer, Y., “BoosTexter: A boosting-based system for text categorization,” Machine Learning, vol. 39, no. 2/3, pp. 135–168, 2000.

[56] Skowronek, J., et al., “Ground truth for automatic music mood classification,” in International Symposium on Music Information Retrieval, 2006.

[57] Scaringella, N., Zoia, G. and Mlynek, D., “Automatic genre classification of music content: a survey,” IEEE Signal Processing Magazine, Volume 23, Issue 2, pp. 133–141, Mar 2006.

[58] Seppänen, J., “Tatum grid analysis of musical signals,” in Proc. Int. Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), IEEE, Mohonk, NY, 2001, pp. 131–135.

[59] Scheirer, E.D., “Tempo and beat analysis of acoustic musical signals,” Journal of the Acoustical Society of America, vol. 103, no. 1, January 1998.

[60] Thayer, R. E., The biopsychology of mood and arousal. Oxford University Press, 1989.

[61] Tellegen, A., “Structures of mood and personality and their relevance to assessing anxiety, with an emphasis on self-report,” In A. H. Tuma and J. D. Maser (Eds.), Anxiety and the anxiety disorders (pp. 681-706). Hillsdale, NJ: Erlbaum., 1985.

[62] Tato, R., et al., “Emotional space improves emotion recognition,” ICSLP, 2002.

[63] Tzanetakis, G., and Cook, P., “Music genre classification of audio signals,” IEEE Trans. Speech Audio Processing, 10(5), 293–302, 2002.

[64] Tzanetakis, G., and Cook, P., “Human perception and computer extraction of musical beat strength,” in Proc. of the 5th Int. Conference on Digital Audio Effects (DAFx-02), Hamburg, Germany, September 26–28, 2002.

[65] Temperley, D., “What’s Key for Key? The Krumhansl-Schmuckler Key-Finding Algorithm Reconsidered,” Music Perception, 1999.

[66] Temperley, D., The Cognition of Basic Musical Structures, Cambridge, MA: MIT Press, 2001.

[67] Temperley, D., “A Bayesian key-finding model,” MIREX Symbolic Key-Finding entry, 2005, [web site], [2006 Jul 04].

[68] Vesanto, J., Himberg, J., Alhoniemi, E., and Parhankangas, J., Self-Organizing Map in Matlab: the SOM Toolbox, Helsinki, Finland: Helsinki University of Technology, 2000.

[69] Watson, D., and Tellegen, A., “Toward a consensual structure of mood,” Psychol. Bull., 98, 219–235, 1985.

[70] Watson, D., Clark, L.A., and Tellegen, A., “Development and validation of brief measures of Positive and Negative Affect,” Journal of Personality and Social Psychology, 1988.

[71] Watson, D., “Strangers' ratings of the five robust personality factors: Evidence of a surprising convergence with self-report,” Journal of Personality and Social Psychology, 57, 120–128, 1989.

[72] Watson, D., Clark, L.A., McIntyre, C.W., and Hamaker, S., “Affect, personality, and social activity,” Journal of Personality and Social Psychology, 63, 1011–1025, 1992.

[73] Watson, D., and Clark, L.A., “Affects separable and inseparable: On the hierarchical arrangement of the negative affects,” Journal of Personality and Social Psychology, 62, 489–505, 1992.

[74] Watson, D., Tellegen, A., and Clark, L.A., “On the dimensional and hierarchical structure of affect,” Psychological Science, Vol. 10, No. 4, July 1999.

[75] Watson, D., Mood and Temperament. Guilford Press, New York, NY, USA, 2000.

[76] Watson, D., Tellegen, A., and Clark, L.A., “Cross-cultural convergence in the structure of mood: A Japanese replication and a comparison with U.S. findings,” Journal of Personality and Social Psychology, 47, pp. 127–144, 1984.

[77] Watson, D., and Clark, L.A., “The PANAS-X: manual for the positive and negative affect schedule – expanded form,” The University of Iowa, 1994.

[78] Wieczorkowska, A., et al., “Extracting Emotions from Music Data,” International Symposium on Methodologies for Intelligent Systems, pp. 456–465, 2005.

[79] Wei, C.Y., Dimitrova, N., and Chang, S.F., “Color-mood analysis of films based on syntactic and psychological models,” IEEE International Conference on Multimedia and Expo (ICME 2004), 2004.

[80] Yang, D., and Lee, W.S., “Disambiguating music emotion using software agents,” in International Symposium on Music Information Retrieval, 2004.

[81] Zhang, Y.B., and Zhou, J., “A study on content-based music classification,” ISSPA, vol. 2, 2003.

[82] Zhang, T., and Kuo, C.C.J., “Audio content analysis for online audiovisual data segmentation and classification,” IEEE Transactions on Speech and Audio Processing, vol. 9, pp. 441–457, May 2001.

[83] 馮觀富, Psychology 22: The Psychology of Emotion (心理學22—情緒心理學), 心理出版社.


[84] 陳若涵, 許肇凌, 張智星, and 羅鳳珠, “Music emotion analysis and recognition based on musical content” (以音樂內容為基礎的情緒分析與辨識), 2006 Workshop on Computer Music and Audio Technology, 2006.

[85] 李宏儒, 鄭雯妮, and 張智星, “A humming-based composition system founded on statistical methods and music theory” (以統計方法與音樂理論為基礎之哼唱譜曲系統), 7th Conference on Artificial Intelligence and Applications, Taichung, 2002.

[86] 董信宗 and 沈錳坤, “Automatic computer-music accompaniment based on musical style” (基植於音樂風格的電腦音樂自動伴奏), 5th Workshop on Digital Archives Technology, 2006.

[87] 吳金池, “A study on speaker identification systems” (語者辨識系統之研究), Master’s thesis, Graduate Institute of Electrical Engineering, National Central University, 2002.

[88] 林勝儀 (trans.), 平野昭 and 門馬直美, Masterpiece Commentary by Composer, Collector’s Edition 3: Beethoven (作曲家別名曲解說珍藏版3—貝多芬), 美樂出版社.

[89] 林勝儀 (trans.), 寺西基之 and 野村光一, Masterpiece Commentary by Composer, Collector’s Edition 4: Chopin (作曲家別名曲解說珍藏版4—蕭邦), 美樂出版社.

[90] 林勝儀 (trans.), 音樂之友社, New Standard Music Dictionary (新訂標準音樂辭典), 美樂出版社.

[91] 江慶涵 and 劉志俊, “Automatic chord recognition in MP3 digital music” (MP3數位音樂中的自動和弦辨識), Workshop on Digital Life Technologies, 2006.


Appendix A: Definitions of musical terms from the New Grove Dictionary of Music Online (2007)

http://www.grovemusic.com/index.html

Key (調性)

In tonal music (see TONALITY), the abstract arrangement of musical phenomena such as melodies, harmonies and cadences around a referential or tonic pitch class. While the French ton and the German Tonart stress the importance of the tonic, the English term has a broader meaning: as a metaphorical ‘key’, the tonic ‘unlocks’ or clarifies the arrangement of pitch relations that underlies the music. A tonic thus unifies and coordinates the musical phenomena within its reach: in the key of C major, for example, there is an essential ‘C-ness’ to the music.

The idea that a piece or a passage lies ‘in’ a given key may reflect a cultural inclination to conceptualize key as a musical container. A key in this sense involves certain melodic tendencies and harmonic relations that maintain the tonic as the centre of attention; the tonic controls melodic contours in both smaller and larger musical contexts, determines the immediate succession of harmonies, and coordinates the overall succession of medial cadences and modulations in a piece.

Also crucial to the concept of key is the idea that there are two basic modal genera, major and minor, each with different musical characteristics arising largely from the disposition of tones and semitones within their respective scales. Since each tonic governs both a major and a minor mode, there are (given equal temperament and enharmonic equivalence) a total of 24 keys, two for each of the 12 semitones within the chromatic octave.

All 24 possibilities were first arranged in a CIRCLE OF FIFTHS in Heinichen's Der General-Bass (Dresden, 1728; see illustration), though Heinichen’s circle had been anticipated by NIKOLAY DILETSKY. Each pair of major and minor modes has the same diatonic collection and key signature, while the collections of adjacent, 5th-related pairs differ by one sharp or flat. As a model for harmonic succession, however, the circle is imperfect, for there are a number of crucial harmonic relations in tonal music that do not conform to this arrangement. Moreover, other representations of the total aggregate were common. In the first volume of Das wohltemperierte Clavier (1722), Bach wrote a separate prelude and fugue for each major and minor mode, which he arranged in ascending semitones within the chromatic octave.
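As an editorial aside to the quoted entry: the 24 keys and their circle-of-fifths ordering can be enumerated mechanically, since a perfect fifth spans 7 semitones modulo 12 and each relative minor lies 9 semitones above its major tonic. A minimal Python sketch; the note spellings are one arbitrary choice among enharmonic equivalents:

```python
# Enumerate the 24 major/minor keys in circle-of-fifths order.
# Spellings are illustrative only; enharmonic equivalents
# (e.g. C# vs Db) are collapsed to one name per pitch class.
NAMES = ["C", "Db", "D", "Eb", "E", "F",
         "F#", "G", "Ab", "A", "Bb", "B"]

def circle_of_fifths(start=0):
    """Yield (major, relative minor) pairs around the circle."""
    pc = start
    for _ in range(12):
        # The relative minor shares the major key's diatonic
        # collection and lies 9 semitones above its tonic
        # (e.g. C major / A minor).
        yield NAMES[pc] + " major", NAMES[(pc + 9) % 12] + " minor"
        pc = (pc + 7) % 12  # up a perfect fifth

for major, minor in circle_of_fifths():
    print(major, "/", minor)
```

Adjacent pairs printed by this loop differ by one sharp or flat in their key signatures, which is exactly the property of Heinichen's arrangement described above.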

The modes are further interrelated as follows: the ‘parallel’ major and minor (e.g. C major and C minor) share the same tonic but have different diatonic collections, while the ‘relative’ major and minor (C major and A minor) share the same diatonic collection but have different tonics. Within a given diatonic collection, all pitch classes (and the harmonies rooted in them) are subordinate to the tonic, some more so than others. Moreover, a key is not limited to the pitch classes within its particular diatonic collection. In certain circumstances (melodic chromaticism, mixture, tonicization, modulation), the music can use pitch classes from outside its tonic major or minor scale without weakening its sense of orientation towards the tonic.

Keys are often said to possess characteristics associated with various extra-musical emotional states. While there has never been a consensus on these associations, the material basis for these attributions was at one time quite real: because of inequalities in actual temperament, each mode acquired a unique intonation and thus its own distinctive ‘tone’, and the sense that each mode had its own musical characteristics was strong enough to persist even in circumstances in which equal temperament was abstractly assumed. Though highly specific with respect to different repertories and listeners, these expressive qualities fall into two basic categories, which conform to the basic difference – often asserted as an opposition – between major and minor: major is heard to be brighter and more cheerful than minor, which in comparison is darker and sadder.

By BRIAN HYER

Tempo (速度)

Literally, the ‘time’ of a musical composition, but more commonly used to describe musical speed or pacing. Tempo may be indicated in a variety of ways. Most familiar are metronomic designations that link a particular durational unit (usually the beat unit of the notated metre) with a particular duration in clock time (e.g. crotchet = 80 beats/minute). Also familiar are conventionalized descriptions of speed and gestural character – andante, allegro, langsam, etc. (see TEMPO AND EXPRESSION MARKS). There are also looser associations between metric notations and tempo, a vestige of earlier mensural practice, where, for example, 3/2 is a sign of relatively slow tempo, 3/4 of moderate tempo and 3/8 of relatively quick tempo.

Similarly, we retain a sense of the distinction between the half-circle C (common time) and the crossed half-circle ¢ (alla breve), with the latter theoretically twice as fast (see ALLA BREVE, NOTATION, §III, 3–6, and PROPORTIONAL NOTATION).
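As an editorial aside: the metronomic designation mentioned above (e.g. crotchet = 80 beats/minute) fixes clock-time durations by simple arithmetic, one beat lasting 60/BPM seconds. A minimal Python sketch of that mapping:

```python
def beat_seconds(bpm):
    """Clock-time duration of one beat at a given metronome mark."""
    return 60.0 / bpm

def note_seconds(bpm, beats):
    """Duration of a note spanning `beats` beat units, e.g. 0.5 for
    a quaver when the crotchet carries the beat."""
    return beats * beat_seconds(bpm)

# Crotchet = 80 beats/minute, as in the entry's example:
print(beat_seconds(80))       # 0.75 s per crotchet
print(note_seconds(80, 0.5))  # 0.375 s per quaver
print(note_seconds(80, 2.0))  # 1.5 s per minim
```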

While tempo necessarily involves a determination of the appropriate durations for the various rhythmic units given in score, there is more to tempo than simply indexing crotchets and quavers to some amount of clock time. Epstein observed that ‘tempo is a consequence of the sum of all factors within a piece – the overall sense of a work’s themes, rhythms, articulations, “breathing”, motion, harmonic progressions, tonal movement, contrapuntal activity. … Tempo … is a reduction of this complex Gestalt into the element of speed per se, a speed that allows the overall, integrated bundle of musical elements to flow with a rightful sense’ (Shaping Time: Music, the Brain, and Performance, New York, 1995, p.99). A true sense of tempo, then, is a product of more than the successive note-to-note articulations; it involves the perception of motion within rhythmic groups and across entire phrases. Finding the ‘right’ tempo within and between sections of a piece is one of the subtlest and most difficult tasks facing the performer.

Changes in surface durations do not necessarily give rise to a change of tempo, as the augmentation or diminution of durational values may have little effect on the rate of the perceived pulse. Bona fide tempo changes may of course occur, either abruptly or gradually (via accelerando or ritardando) over the course of a composition, often rather dramatically.

But it is worth noting that even within passages that seem to be in stable tempo, the beat rate is not mechanically constant, save in performances that involve electronic or mechanical means of articulating beats and rhythms. Rather, in normal performances tempo systematically fluctuates within the bar and the phrase.

Tempo is intertwined with our sense of pulse and metre, for without a regular series of pulses it is difficult to imagine any sense of tempo whatsoever. In a metric context, our sense of tempo is what allows us to distinguish subdivisions from beats and beats from downbeats (see RHYTHM, §1, 4). The entire metric hierarchy, from the shortest subdivisions to the broadest levels of hypermetre, plays a pivotal role in establishing the ‘complex Gestalt’ of tempo.

JUSTIN LONDON

Dynamics (力度)

The Capirola Codex of 1517 (fol.49v) contains the singular direction ‘tocca pian piano’.

The performance indications piano and forte are occasionally found in music composed around 1600, to indicate both echo effects (as in Bonelli's Primo libro delle villanelle) and alternation between choirs (as in Giovanni Gabrieli's Sonata pian e forte). In Banchieri's madrigal comedy Pazzia senile the notated dynamics distinguish the characters from each other, and in the same composer's Barca di Venetia per Padova street criers and drinkers sing forte while the melodies of the fishermen are piano. In 17th-century notation, piano and Echo are often used synonymously. In genuine echo compositions it was usually sufficient to replace dynamic marks by appropriate headings; when solo voices or the chorus were to perform an echo effect, it was indicated by the words in Ecco or piano, or even more frequently by proposta – riposta.

Messe di voce (Domenico Mazzocchi used the sign C for the effect of increasing and decreasing sound) and diminuendos on long single notes were part of vocal and instrumental performing practice from the early 17th century. Caccini's principles of ‘muovere de l'affetto’ and ‘cantare con affetto’ would hardly have been conceivable without purposeful dynamics; those principles were adopted first by Monteverdi, Fantini and Castello, and later by Thomas Mace. In the foreword to Le nuove musiche (1601–2, p.63), Caccini described the esclamazione, which may already have been in use for decades, as ‘really nothing but allowing some reinforcement of the voice.’ In contrast to the messa di voce, the esclamazione involved letting the volume die down and immediately increase again. The dynamics developed in Italy in connection with affects and echoes stimulated German, French and English music and its notation. Italian dynamic markings were used throughout Europe, often alongside terminology from other languages. The notation of crescendos and decrescendos was particularly refined. Sometimes a crescendo was indicated by a graduated series of dynamic levels (e.g. p … f … ff), sometimes by phrases such as ‘lowder by degrees’ (Locke, The Tempest, 1675) or ‘un peu plus forte et toujours en augmentant jusqu'à la fin’ (Marais, Sonate à la Marésienne, 1723).

In the 18th century composers resorted to filled-in forks, equilateral triangles and needles to express constant changes of volume, for instance in violin sonatas by G.A. Piani (1712), Geminiani (1739) and Veracini (1744), and in Rameau's Hippolyte et Aricie (1733).

Rameau and Geminiani indicated crescendos and decrescendos with wedges; Geminiani also used the direction rinforzando in the sense of a crescendo. The first edition of Leopold Mozart's Versuch einer gründlichen Violinschule (1756, pp.50–51) contains the following paraphrases: ‘Piano … means quiet; forte … loud or strong. Mezzo … means half and is used to moderate the forte and piano. Piu … means more. Crescendo … growing. Decrescendo … on the other hand, indicates that the strength of the note is to diminish more and more’.

Haydn used the markings pp, piu p, p, mezzo forte, poco f, f, f assai, ff, mezza voce, sotto voce, cresc., decresc., dim. and mancando. W.A. Mozart added mfp, sfz, sfp and calando to this stock of terms. Beethoven also used ppp (op.18), meno p, sempre p e dolce, piu f, sempre piu f, fff, fp, morendo, smorzando and perdendosi; in his scores such expressions as dolce, espressivo, cantabile and con espressione almost always indicate that the part thus described is the main one or its counterpart, not just a subordinate part.

In the course of the 19th century composers felt obliged to provide more and more performance indications; this led to a proliferation of extreme values purportedly reflecting the composers' intentions. Berlioz was probably the first to resort to ffff, surpassing the fff found now and then in Weber and Beethoven. Carl Gollmick urged composers to treat pp and ff as superlatives, and to use ppp and fff only with reluctance (Critische Terminologie für Musiker und Musikfreunde, 1833). However, later composers ignored his plea for moderation. Verdi's Messa da Requiem contains the first ppppp, and Tchaikovsky's ‘Pathétique’ Symphony the first pppppp. The last movement of Skryabin's First Piano Sonata contains the marking Quasi niente, pppp, and his later directions range from velouté (‘velvety’) to éclatant (‘piercing’).

Schoenberg, in the fifth of the Kleine Klavierstücke op.19, added the phrase zart, aber voll (‘tender but full’) to a p. Mahler, Schreker, Berg, Draeseke, Puccini, Distler, Richard Strauss and Koechlin also used dynamic markings of above-average precision.

From the second half of the 19th century dynamic markings in scores by progressive composers are vertically differentiated. For instance, in Liszt's Tasso: lamento e trionfo, the adagio mesto section has four simultaneous markings: pp for the horn, ff for the harp, f espressivo for three solo cellos and bass clarinet, and p for the rest of the cellos and double basses. Debussy's performance indications such as en dehors, très en dehors and soutenu provide clarity over and beyond the hierarchy of the parts. In the later 19th century directions such as hervortretend and marki(e)rt were used by Draeseke, Wagner, Bruckner and others; the composers of the Second Viennese School began marking the main part (Hauptstimme) with ‘H’ and subordinate parts (Nebenstimmen) with ‘N’. Schoenberg, whose op.19 prescribes vertically differentiated dynamics in several passages, required composers ‘to show, in one's markings, whether the total loudness is meant or the instrument's own degree of loudness’; the dynamic marking is therefore either related to the total sound of the work as composed, or subjectively absolute, not fitting into that sound ‘from the point of view of the instrument’ (Style and Idea, 2/1984, p.341).

Dynamic signs and terms can be taken as identical only within the works of individual composers, or at the most for historically limited periods. Even within a composer's personal style one must take account of diachronic developments; for instance, fortissimo denoting breadth of aspiration and conflict does not occur until Beethoven's middle period.

‘Fortissimo does not always mean “as strong as possible” but can mean very strong, stronger than forte; like every term denoting strength, it comprises many degrees within itself’ (A.B. Marx, Anleitung zum Vortrag Beethovenscher Klavierwerke, 1863, p.98). Marx took the ff in Beethoven's early works to be milder than the same marking in later works such as opp.57 and 106. The same observation applies to Schubert; the comparatively small expansion of a short piece such as one of his Ländler is hardly ever appropriate to the kind of large-scale fortissimo that has its place in sonata movements of larger dimensions.

In Le marteau sans maître, Boulez's instructions ‘sans équilibre’, as against ‘sonorités très équilibrées entre elles’ and ‘Les nuances seront exécutées “ponctuellement”’ (see §1 above), can be realized by a corresponding distribution of intensity. Notwithstanding the efforts of Schoenberg, Berg, Debussy, Stravinsky, Penderecki, Ligeti and Feldman, however, dynamics and the mingling of tonal colours, at least in the traditional instrumental make-up of an ensemble, are still not regarded as satisfactorily capable of notation. Moreover, the differences between, for instance, a piano played by only a few instruments and one played by a larger ensemble may be perceived, but no terms to describe them have been coined.

