A Study of SVM Classification Models
in Issuers’ Credit Ratings
Jen-Ying Shih
Department of Business Administration, Chang Gung University
Wun-Hwa Chen
Graduate Institute of Business Administration, National Taiwan University
Soushan Wu
College of Management, Chang Gung University
Abstract
Credit rating systems have existed for a long time in most financial markets and have played a major role in corporate capital raising, in providing investment information to both individual and institutional investors, and in credit granting by banks. The purpose of credit ratings is to measure the creditworthiness of the issuers of credit securities, so as to provide investors with valuable information for making financial decisions. Because the subordination of a bond has a great impact on its rating (and hence renders the rating problem much easier to solve), most early research focused on industrial bond ratings rather than issuers' credit ratings. In terms of classification approaches, early research relied on conventional statistical methods, while recent studies tend to apply artificial-intelligence-based techniques, such as artificial neural networks and case-based reasoning. The main objective of this research is to propose a classification model for issuers' credit ratings based on support vector machines, a novel classification algorithm known for handling high-dimensional classification.
To verify the capability of the proposed model, a set of Standard and Poor's issuers' credit rating data was used as the test bed. To construct the classification models, the ten key financial variables used by Standard and Poor's (S&P) and a country-risk variable were chosen as the input variables. An artificial neural network classification model was selected as the benchmark. Our empirical results show the superiority of the support vector machine model over the artificial neural network model.
1. Introduction
An issuer credit rating, also called a counterparty credit rating, is a rating agency's opinion of an obligor's overall capacity to meet its financial commitments; it chiefly assesses the obligor's ability and willingness to pay its obligations as they come due, and does not reflect the seniority of, or preference among, any particular obligations.1 The higher an issuer's credit risk, the lower the rating it obtains, and the ratings of the bonds and other instruments it issues are in turn affected by its issuer credit rating. Credit ratings thus transmit credit-risk information in financial markets: borrowers with suitable credit grades can raise funds in the capital markets at lower cost and with lower refinancing risk, while investors in a transparent information environment can select debt instruments that match their own risk preferences. This information is especially meaningful for heavily regulated investors; for example, some regulated financial institutions or funds are restricted to investing only in instruments rated BBB (or Baa) and above,2 which makes rating information all the more important. In short, the value of a credit rating lies in condensing a firm's credit risk into a simple symbol, from which investors can make appropriate investment decisions according to their own risk preferences (Molinero et al. 1996).
Basically, a rating agency assigns an issuer's overall credit grade by collecting and interrogating various kinds of information about the issuer, including country risk, business risk, and financial risk, which analysts then evaluate with expert judgment. Although agencies present the assignment of a credit rating as a systematic decision process, the analysts' expert judgment has never been systematized explicitly; that is, the relationship between the rating factors and the resulting grades cannot be quantified. Beyond this, many firms have never been rated, yet investors still have a large demand for information about their credit risk. Researchers have therefore tried a variety of statistical methods (Belkaoi 1980; Ederington 1985; Pinches & Mingo 1975) and artificial-intelligence methods (Dutta & Shekhar 1988; Surkan & Singleton 1990; Shin & Han 1999) on this classification problem, obtaining useful decision information and classification results. Earlier research, however, focused mostly on classifying bond ratings, where issue-specific features (such as the subordination of a bond) can be exploited to build models with relatively high accuracy; few studies have built classification models for the more fundamental problem of issuer credit ratings. In practice, market participants badly need exactly this information: a prospective issuer can estimate its own credit grade in advance when planning how to raise funds, and investors can form a preliminary understanding of the default risk of issuers that have not yet been rated. This study therefore develops a classification model for issuer credit ratings. Moreover, on the bond-rating problem, past research has reported the superiority of AI methods (Dutta & Shekhar 1988; Maher & Sen 1997), applying artificial neural networks (ANN), genetic algorithms, and case-based reasoning (CBR) to solve it; the suitability of support vector machines (SVM), a classifier developed in recent years, has received far less study.
1 The description of issuer credit ratings above follows the S&P corporate web site, http://www2.standardandpoors.com.
2 S&P assigns long-term credit instruments to ten grades, of which the investment grades are AAA, AA, A, and BBB, and the speculative grades are BB, B, CCC, CC, C, and D. Moody's likewise uses ten grades for long-term instruments, of which the investment grades are Aaa, Aa, A, and Baa, and the speculative grades are Ba, B, Caa, Ca, C, and D.
SVM has mostly been applied to complex classification problems, where it attains high classification accuracy; application domains include drug design (Burbidge et al. 2001), protein structural classes (Cai & Lin 2002), and the identification of organisms (Morris & Autret 2001). In finance, most work has applied support vector regression (SVR) to forecasting the prices and returns of various financial instruments (Tay & Cao 2001); studies of support vector classification (SVC) are comparatively rare. This paper therefore investigates the suitability of SVM for a financial classification problem, namely credit rating, and compares it with the artificial neural networks (ANN) commonly used in the literature.
Section 2 reviews and organizes past credit-rating research. Section 3 introduces the SVM and ANN methods; because SVM is the newer method, we describe it in more detail. Section 4 explains the research design, Section 5 presents and discusses the results of classifying issuer credit ratings, and Section 6 concludes and suggests directions for future research.
2. Related research on credit ratings
2.1 The research problem
Early credit-rating research concentrated on classifying the bond ratings of long-term corporate issues, while more recent studies have also examined commercial-paper ratings. In selecting input variables, the early studies commonly included issue-specific features of corporate bonds such as subordination, issue size, or repayment provisions (Horrigan 1966; West 1970; Pinches & Mingo 1973, 1975), so classification accuracy was often higher than without these features. According to Pinches & Mingo (1975), Moody's bonds rated A and above are almost all nonsubordinated bonds, while those rated Ba and below are almost all subordinated, so this factor is an important predictor of bond ratings. In practice, however, rating agencies first produce the issuer's rating and only then rate the credit instruments the issuer sells; we therefore aim at the more fundamental, and more widely applicable, classification model for issuer credit ratings.
2.2 Rating methods
Among research methods, conventional multivariate statistics have been applied most often, including multiple linear discriminant analysis (Belkaoi 1980; Ederington 1985; Pinches & Mingo 1975), multiple quadratic discriminant analysis (Pinches & Mingo 1977), linear regression (Horrigan 1966; West 1970; Ederington 1985), probit regression (Ederington 1985), logit regression (Ederington 1985), and multidimensional scaling (Molinero et al. 1996). These statistical methods are valid only when specific statistical assumptions are satisfied (for example, that the data follow a particular distribution). AI methods impose no such assumptions on the data, and advances in information technology can now satisfy their heavy computational demands, so academics and practitioners have increasingly tried them. Researchers have applied backpropagation neural networks (Dutta & Shekhar 1988; Surkan & Singleton 1990), a hybrid of genetic algorithms and CBR (Shin & Han 1999), a hybrid of neural networks and CBR (Kim & Han 2001), and CBR alone (Shin & Han 2001) to bond-rating problems, obtaining quite respectable classification accuracy.
In recent years, many studies have proposed SVM as an excellent classification method: it can avoid the local-minimum problem that ANN may encounter and seek the global optimum, and it has been used to solve many classification problems. This study therefore also applies the SVM method to the classification of issuer credit ratings.
2.3 Selection of input variables
In selecting input variables, some studies have used multivariate statistical procedures such as principal component analysis, factor analysis, and stepwise selection to pick out important inputs (Pinches & Mingo 1975, 1977; Shin & Han 1999; Kim & Han 2001; Shin & Han 2001), while others have chosen suitable inputs subjectively on grounds of economic rationality (Horrigan 1966; Belkaoi 1980). Owing to data availability, these variables have mostly been drawn from corporate financial information (especially financial-ratio information) and from issue features of the credit instruments (such as the subordination of a bond), with little use of operating or market information. Pinches & Mingo (1975) considered this, along with the value added by analysts' subjective judgment inside the rating agencies, to be the reason accuracy could not be raised much further.
This study takes the ratings assigned by Standard and Poor's (S&P) as the classification target. Given data availability, we try using only the financial variables that S&P itself emphasizes as input variables. S&P states that the main financial factors it weighs fall into four areas: leverage, coverage, profitability, and cash flow (S&P 1996); we examine how well a model built on these factors classifies.
2.4 Data
Rating agencies do not base a rating on a single year of data. Earlier studies, however, apart from a few variables capturing long-run profitability or growth, involved multiple annual periods only partially (West 1970; Pinches & Mingo 1973, 1975): for example, an earnings-variability variable measured over the past nine years with classification otherwise based on a single year of data, or five-year averages of the inputs (Pinches & Mingo 1973, 1975). This differs considerably from how ratings are analyzed in practice, so this study uses three full years of historical data to classify credit ratings.
2.5 Design of the classification output
In the past, studies that built bond-rating models with neural networks (Dutta & Shekhar 1988; Surkan & Singleton 1990) did not perform the multi-grade classification (that is, more than two grades) usual in the conventional statistical literature, but only two-class classification, deciding whether a bond belongs to one particular rating grade. Under this two-class setting the neural networks classified very well, with accuracy reaching about 88%; regrettably, multi-grade classification was not attempted. Given the different bases of comparison, one can only claim that the neural network method beats conventional statistical methods at deciding whether an instrument belongs to a particular rating grade; its performance in the multi-grade setting awaits further study.
Along the five dimensions above, Table 1 summarizes the past research.
Table 1: Comparison of credit-rating studies

Horrigan 1966. Problem: bond ratings issued by Moody's and S&P. Method: multiple regression. Inputs: subordination, total assets, and ratios of working capital to sales, net worth to total debt, sales to net worth, and profit to sales. Data: one year. Output: six bond-rating grades for each of Moody's and S&P. Best accuracy: 58% (Moody's), 52% (S&P).

West 1970. Problem: bond ratings issued by Moody's. Method: multiple regression. Inputs: following Fisher (1959), nine-year earnings variability, period of solvency, the equity/debt ratio, and bonds outstanding. Data: one year. Output: six rating grades (Aaa, Aa, A, Baa, Ba, B). Best accuracy: 62%.

Pinches & Mingo 1973. Problem: bond ratings issued by Moody's. Method: linear multiple discriminant analysis (MDA). Inputs: six variables selected from 35 by factor analysis, including years of consecutive dividends, issue size, (net income + interest expense)/interest expense, long-term debt to total assets, net income to total assets, and subordination. Data: one year, with five-year averages of the input variables added. Output: five rating grades (Aa, A, Baa, Ba, B). Best accuracy: 71.5%.

Pinches & Mingo 1975. Problem: bond ratings issued by Moody's. Method: two quadratic MDA models built separately by subordination status. Inputs: the six variables selected by factor analysis, as above. Data: one year, with five-year averages of the input variables added. Output: five rating grades (Aa, A, Baa, Ba, B). Best accuracy: 75.4%.

Belkaoi 1980. Problem: bond ratings issued by S&P. Method: stepwise multiple discriminant analysis. Inputs: chosen on grounds of economic rationality, including total assets, total debt, long-term debt to total invested capital (total invested capital comprising total debt, preferred stock, and equity), debt to total invested capital, the current ratio, the interest and preferred-dividend coverage ratio, (net income + after-tax interest expense)/(after-tax interest expense + preferred dividends), market price per share to equity per share, and a 0-1 subordination variable. Data: one year. Output: six rating grades (AAA, AA, A, BBB, BB, B). Best accuracy: 62.8%.

Ederington 1985. Problem: bond ratings issued by Moody's. Method: linear regression (LR), ordered probit (OP), unordered logit (UL), linear MDA (LM), and quadratic MDA (QM). Inputs: forecast values of financial coverage, time-series forecasts of profitability, the estimated standard error of the profitability forecast, time-series forecasts of the (cash flow/long-term debt) ratio, and the standard error of that ratio. Data: one year. Output: six grades (Aaa, Aa, A, Baa, Ba, B). Best accuracy: LR 65%, OP 78%, UL 73%, LM 69%, QM 72%.

Dutta & Shekhar 1988. Problem: bond ratings issued by S&P. Method: backpropagation neural network. Inputs: 10 variables designed for their influence on bond ratings and their availability, including debt/(cash + fixed assets), the debt proportion, sales/assets, operating profit/sales, financial strength, net profit/fixed assets, sales growth over the past five years, forecast sales growth over the next five years, working capital/sales, and the subjective prospect of the company. Data: one year, except the five-year sales-growth rates. Output: two classes, i.e., whether a bond belongs to a given rating grade (e.g., AA). Best accuracy: 83.3%.

Surkan & Singleton 1990. Problem: Moody's bond ratings of the Bell operating telephone companies divested from AT&T. Method: backpropagation neural network. Inputs: 7 variables based on the study by Peavy and Scott, including debt/total capital, pretax interest expense/net income, return on equity (ROE), the variability of ROE over the past five years, log(total assets), fixed-plant construction cost/total cash inflow, and the proportion of long-distance telephone revenue. Data: five years for the ROE variability, one year otherwise. Output: two classes, Aaa versus (A1, A2, A3). Best accuracy: 88%.

Molinero et al. 1996. Problem: S&P ratings of Spanish banks. Method: multidimensional scaling (MDS). Inputs: 24 related financial ratios measuring profitability, capital structure, financial cost, and risk structure. Data: one year. Output: a map used to interpret the distribution of the banks' ratings. Best accuracy: not explicitly stated.

Maher & Sen 1997. Problem: Moody's bond ratings. Method: comparison of a backpropagation neural network and logistic regression. Inputs: 7 variables, including total assets, long-term debt/total assets, net income/total assets, the repayment priority of the issued debt, the firm's beta, unfunded pension liabilities, and income from operating divisions and non-recurring items. Data: total assets, long-term debt/total assets, and net income/total assets computed as five-year averages; one year of data otherwise. Output: six grades (Aaa, Aa, A, Baa, Ba, B). Best accuracy: 70%.

Shin & Han 1999. Problem: commercial-paper ratings in Korea. Method: a genetic algorithm (GA) that finds the weight vector used as the attribute weights in CBR. Inputs: from 168 financial ratios, factor analysis and ANOVA tests screened out 27 ratios, and stepwise selection then screened out 12. Data: one year. Output: 5 rating classes. Best accuracy: class-weighted average of 75.5%.

Kim & Han 2001. Problem: bond ratings in Korea. Method: CBR integrated with the SOM and LVQ neural networks. Inputs: from 129 variables (4 classification variables and 125 financial ratios), factor analysis screened out 26 financial ratios, and stepwise selection then screened out 13. Data: one year. Output: 5 rating classes. Best accuracy: class-weighted average of 69.1%.

Shin & Han 2001. Problem: commercial-paper ratings in Korea. Method: CBR with inductive indexing. Inputs: factor analysis, ANOVA tests, and Kruskal-Wallis tests screened out 27 variables (23 quantitative and 4 qualitative), and stepwise selection then screened out 12 financial variables. Data: one year. Output: 5 rating classes. Best accuracy: class-weighted average of 70.0%.
3. Classification models
This section introduces the two methods used in this study, the SVM method and the ANN method. Because the application of SVM in the management field is still relatively new, we introduce the theoretical basis of SVM in some detail below; readers already familiar with the method may skip ahead without loss of continuity.
3.1 Support vector machines
(1) The SVM concept
SVM is a learning machine grounded in statistical learning theory. Its basic operation is to map the input vectors by a linear or nonlinear kernel function into a high-dimensional feature space and, in that feature space, to find the optimal hyperplane that separates the classes. In this way, a problem that cannot be solved linearly in the original low-dimensional space is classified in the high-dimensional space instead. The feature space may even be of unbounded dimension, because computationally the individual weights never need to be evaluated explicitly. By choosing an appropriate kernel function, the nonlinear mapping lets the decision function be found in the new feature space. This property allowed Vapnik to apply structural risk minimization (SRM) to nonlinear problems while retaining the same optimization machinery. The decision function determined by SVM is composed of a special group of vectors selected from the training data, called support vectors, which is also why the whole algorithm is called support vector machines (Vapnik 1995).
Take Figure 1 as an example, where the data fall into two groups, "o" and "+". Geometrically, SVM looks for an optimal separating hyperplane (a decision function) that divides the two groups of data, shown as the solid line in the figure; this decision function enjoys many desirable statistical properties. After the SVM computation, the circled points in the figure are the support vectors. Four of them lie on the dashed margin lines and chiefly determine the decision function; the one point not on the dashed lines violates the separation constraint and cannot be classified correctly, yet it too is included among the support vectors. It is a representative example of the soft-margin classifier, which weighs the cost of such violations.
Figure 1: An example of SVM classification with a linear decision function
Because SVM does not encounter the local-optimum problem during its optimization, a growing number of empirical studies have reported that SVM performs better than ANN (Morris & Autret 2001; Tay & Cao 2001; Cai & Lin 2002).
(2) Structural risk minimization
ANN suffers from the problem of being hard to generalize: during training it often produces overfitting, largely because ANN follows the empirical risk minimization (ERM) principle to produce its classification model, whereas SVM applies the structural risk minimization (SRM) principle to produce its model, which Gunn (1998) demonstrated to be superior to the ERM principle adopted by ANN. Briefly, the SRM principle minimizes an upper bound on the expected risk, rather than merely minimizing the error on the training sample as the ERM principle does. This difference gives SVM a stronger generalization ability, which is precisely the goal of statistical learning theory (Vapnik 1995).
(3) Support vector classification
The following describes how SVC uses the hyperplane generated from the training examples as an optimal separating hyperplane for classification. The basic idea is to find a hyperplane that maximizes the margin, i.e., the distance between the hyperplane and the nearest data point of each class. Three cases are described below.

A. The linearly separable case

Let the training examples be

(x_1, y_1), ..., (x_l, y_l), x ∈ R^n, y ∈ {+1, −1},

where x is an input vector. The data can be separated by a hyperplane into two classes, one labeled +1 and the other −1. If the data can be separated correctly and without error, and the distance between each class's nearest vector and the hyperplane is maximal, we say the data are optimally separated by the hyperplane. The hyperplane satisfies

y_i[⟨w, x_i⟩ + b] ≥ 1, i = 1, ..., l. (1)

The margin, the combined distance from the hyperplane to the nearest vector of each class, is

ρ(w, b) = min_{x_i: y_i = −1} d(w, b; x_i) + min_{x_i: y_i = +1} d(w, b; x_i) = 2/‖w‖, (2)

where d(w, b; x) = |⟨w, x⟩ + b|/‖w‖ is the distance from x to the hyperplane. Maximizing (2) can be achieved by minimizing Φ(w) = ½‖w‖². Applying Lagrangian relaxation, the problem is expressed as

min_{w,b} Φ(w, b, α) = ½‖w‖² − Σ_{i=1}^{l} α_i ( y_i[⟨w, x_i⟩ + b] − 1 ). (3)

To solve it, the primal problem is transformed into its dual problem:

max_α W(α) = max_α ( min_{w,b} Φ(w, b, α) ). (4)

Differentiating (3) with respect to b and w, (4) can be expressed as

max_α W(α) = max_α ( −½ Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j ⟨x_i, x_j⟩ + Σ_{k=1}^{l} α_k ). (5)

The solution is obtained from

α* = argmin_α ½ Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j ⟨x_i, x_j⟩ − Σ_{k=1}^{l} α_k,
s.t. α_i ≥ 0, i = 1, ..., l; Σ_{j=1}^{l} α_j y_j = 0. (6)

The optimal separating hyperplane is then

w* = Σ_{i=1}^{l} α_i* y_i x_i, b* = −½ ⟨w*, x_r + x_s⟩, (7)

where x_r and x_s are any support vectors from each class satisfying α_r, α_s > 0, y_r = −1, y_s = 1. We obtain the hard classifier

f(x) = sgn(⟨w*, x⟩ + b). (8)

If we consider the case where the data cannot be separated completely, a soft classifier may be used instead:

f(x) = h(⟨w*, x⟩ + b), where h(z) = −1 for z < −1, h(z) = z for −1 ≤ z ≤ 1, h(z) = +1 for z > 1. (9)

By the Kuhn-Tucker condition,

α_i ( y_i[⟨w, x_i⟩ + b] − 1 ) = 0, i = 1, ..., l, (10)

only the points x_i that satisfy y_i[⟨w, x_i⟩ + b] = 1 can have nonzero Lagrange multipliers. We call these points support vectors (SV). If the data can be separated linearly, all the SVs lie on the margin, so the number of SVs obtained is usually small. As a result, the hyperplane is determined by a small subset of the training data; the other points can be removed from the training set, and recomputing the hyperplane still yields the same answer. SVM can therefore be used to summarize the information contained in the data set generated by the SVs.
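As a concrete check on equations (7) and (8), the sketch below rebuilds w* and b* from a hypothetical pair of support vectors of a tiny separable toy set (the points and multipliers are made up, chosen so the Kuhn-Tucker conditions hold exactly) and then classifies two new points with the hard classifier:

```python
# Minimal sketch of equations (7)-(8): building the hard classifier from
# support vectors.  The toy points and multipliers are hypothetical.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# (x_i, y_i, alpha_i) for the two support vectors of a toy separable set.
svs = [((0.0, 0.0), -1, 0.5),
       ((2.0, 0.0), +1, 0.5)]

# Equation (7): w* = sum_i alpha_i y_i x_i
w = [sum(a * y * x[k] for x, y, a in svs) for k in range(2)]

# b* = -1/2 <w*, x_r + x_s>, with x_r (y = -1) and x_s (y = +1) support vectors.
x_r = next(x for x, y, a in svs if y == -1)
x_s = next(x for x, y, a in svs if y == +1)
b = -0.5 * dot(w, [x_r[k] + x_s[k] for k in range(2)])

def f(x):                       # equation (8): hard classifier
    return 1 if dot(w, x) + b > 0 else -1

# Both support vectors sit exactly on the margin: y_i(<w, x_i> + b) = 1.
for x, y, a in svs:
    assert abs(y * (dot(w, x) + b) - 1.0) < 1e-12

print(w, b, f((3.0, 1.0)), f((-1.0, 2.0)))   # → [1.0, 0.0] -1.0 1 -1
```

Note that the constraint Σ α_i y_i = 0 of (6) holds for the chosen multipliers, which is what makes the reconstruction consistent.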
B. The linearly non-separable case: the soft-margin technique

In addition to the method above for finding a hyperplane that separates the data groups correctly, the real world sometimes offers data that cannot be separated without error, so Vapnik (1995) further introduced a cost function over the slack terms associated with misclassification to obtain the classification boundary, thereby achieving better generalization. The optimal-separating-hyperplane problem above can then be expressed as

min Φ(w, ξ) = ½‖w‖² + C Σ_{i=1}^{l} ξ_i (11)
s.t. y_i[⟨w, x_i⟩ + b] ≥ 1 − ξ_i, ξ_i ≥ 0, i = 1, ..., l,

where ξ_i measures the error of a misclassification and C is a fixed penalty value. Following Minoux (1986), Lagrangian relaxation gives

min Φ(w, b, ξ, α, β) = ½‖w‖² + C Σ_{i=1}^{l} ξ_i − Σ_{i=1}^{l} α_i ( y_i[⟨w, x_i⟩ + b] − 1 + ξ_i ) − Σ_{i=1}^{l} β_i ξ_i, (12)

where α and β are the Lagrange multipliers. As before, the dual problem can be found:

max_α W(α) = max_α ( −½ Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j ⟨x_i, x_j⟩ + Σ_{k=1}^{l} α_k ), (13)

solved by

α* = argmin_α ½ Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j ⟨x_i, x_j⟩ − Σ_{k=1}^{l} α_k,
s.t. 0 ≤ α_i ≤ C, i = 1, ..., l; Σ_{i=1}^{l} α_i y_i = 0. (14)

Here the penalty C must be decided in advance; it serves as an additional capacity control of the classifier.

C. Generalization in a high-dimensional feature space: substituting a kernel function

When a linear classification boundary is inappropriate, SVM can map the input vector x into a high-dimensional feature space z. By choosing a nonlinear mapping, SVM can construct an optimal separating hyperplane in that feature space. Commonly used mappings include the polynomial kernel K(x, x') = (⟨x, x'⟩ + 1)^d and the Gaussian radial basis function (RBF) K(x, x') = exp(−γ‖x − x'‖²). The dual problem then becomes

α* = argmin_α ½ Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j K(x_i, x_j) − Σ_{k=1}^{l} α_k, (15)

where K(x_i, x_j) is the kernel function, which carries out the task of the nonlinear mapping into the feature space.
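The kernel substitution in (15) amounts to replacing the inner products ⟨x_i, x_j⟩ by a Gram matrix of kernel values. A minimal sketch with the RBF kernel (the three points are made up; γ = 0.5 mirrors the value this paper selects later, but any positive value works):

```python
import math

def rbf(x, xp, gamma):
    """Gaussian RBF kernel K(x, x') = exp(-gamma * ||x - x'||^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, xp))
    return math.exp(-gamma * sq)

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
gamma = 0.5

# Gram matrix K[i][j] = K(x_i, x_j); symmetric with a unit diagonal, and it is
# what the dual problem (15) consumes in place of the raw inner products.
K = [[rbf(a, b, gamma) for b in pts] for a in pts]

for row in K:
    print([round(v, 4) for v in row])
```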
3.2 Artificial neural networks
A neural network mimics the distributed computation of a biological nervous system: it is a network composed of many simple processing units interconnected in highly complex ways. The signal-transmission paths between processing units are called connections, and different topologies and algorithms can form various network models (Lippmann 1987), such as multilayer backpropagation networks, Hopfield networks, and Self-Organizing Map (SOM) networks. Neural networks have been applied successfully in many fields, for example marketing, retailing, banking, finance, insurance, and telecommunications (Smith & Gupta 2000).

The backpropagation algorithm is the neural-network algorithm best suited to solving prediction and classification problems. Its main architecture comprises an input layer, hidden layers, and an output layer; each layer is an array of processing units, and the input data of each layer are the output data of the previous layer. The connections between layers carry weight values, which govern how strongly the previous layer's input influences the next. The basic principle is "gradient descent": the network minimizes an error function expressing the difference between the actual output and the target output (i.e., ERM), and training is achieved by continually adjusting the connection weights. When a training example is presented, the output layer yields a predicted output value, and comparing the target output with the prediction yields an error. Differentiating the error function gives the adjustments to the weights between layers, which are updated repeatedly until learning converges. Because the output error is propagated backward to the hidden and input layers to adjust the weights, the algorithm is called backpropagation. Training a multilayer network architecture is an important step in building an intelligent application, and choosing an appropriate neural network is an important factor in obtaining good predictive power; the detailed algorithms can be found in Lippmann (1987).
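The gradient-descent weight update described above can be sketched for a toy 1-2-1 network with a tanh hidden layer (the weights and the training pair are made up for illustration; a real run would loop over many examples and epochs):

```python
import math

# Toy 1-2-1 network: one input, two tanh hidden units, one linear output.
w1 = [0.5, -0.3]          # input -> hidden weights
w2 = [0.8, 0.4]           # hidden -> output weights
lr = 0.1                  # learning rate
x, target = 1.0, 0.5      # one (made-up) training pair

# Forward pass.
h = [math.tanh(w * x) for w in w1]
y = sum(wo * hi for wo, hi in zip(w2, h))

# Squared-error loss E = 1/2 (y - target)^2 and its gradients (chain rule).
err = y - target
grad_w2 = [err * hi for hi in h]                          # dE/dw2
grad_w1 = [err * wo * (1 - hi * hi) * x                   # dE/dw1
           for wo, hi in zip(w2, h)]

# One gradient-descent step on every weight.
w2 = [w - lr * g for w, g in zip(w2, grad_w2)]
w1 = [w - lr * g for w, g in zip(w1, grad_w1)]

# The same input now produces an output closer to the target.
h2 = [math.tanh(w * x) for w in w1]
y2 = sum(wo * hi for wo, hi in zip(w2, h2))
print(abs(y - target), abs(y2 - target))
```

The comparison printed at the end shows the error shrinking after a single update, which is the elementary step the training loop repeats.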
4. Research design
4.1 The research sample
This study draws its credit ratings and financial-variable data from S&P's publication Global Sector Review and its CARD CD-ROM, selecting as the research sample firms in the United States and Europe that received an S&P rating in 1996. The sampled industries include consumer products, high technology, retailing, chemicals, building materials, automobiles, machinery and equipment, and steel, for a total of 429 firms. In splitting the sample into training and test sets (see Table 3), we followed the distribution of the full data across rating classes: the 429 rated firms were stratified by rating class and drawn in roughly a 75%/25% ratio, yielding 325 training and 104 test observations. To gauge the reliability and validity of the models, this study additionally drew 100 sets of training and test samples by each of two schemes, sampling without replacement and 0.75 bootstrapping (Witten & Frank 2000), for the comparison of the SVM and ANN methods.

Table 3: Sample distribution

Rating class | Sample | Percent | Training set | Test set
AA and above | 41 | 9.6% | 32 | 9
A | 100 | 23.3% | 75 | 25
BBB | 93 | 21.7% | 71 | 22
BB | 111 | 25.9% | 84 | 27
B and below | 84 | 19.6% | 63 | 21
Total | 429 | 100.0% | 325 | 104

For the independent variables, this study adopts the main factors S&P weighs when assessing financial risk (Table 4): ten input variables spanning the four areas of profitability, interest coverage, capital structure, and cash-flow adequacy (the debt-service capacity implied by cash flow), plus one input variable for country risk (with 0 and 1 distinguishing AAA-rated from AA-rated countries). Business risk is not considered for now, because the data are hard to collect and the judgments are too subjective, although the omission may adversely affect the model's performance. The input data cover the three fiscal years preceding the rating year (1993, 1994, and 1995), for 31 independent variables in total (10 financial variables x 3 years + 1 country-risk variable). Because these variables are not identically distributed, feeding them in without transformation or normalization would let the variables with large values dominate the network's learning and drown out the significance of the variables with small values, degrading the learning outcome. The input values must therefore be transformed or normalized; this study rescales each variable linearly by its maximum and minimum onto values between -1 and 1.

For the design of the output variable, the AAA class had few observations and was merged with the AA class, and the classes below CCC likewise had few observations and were merged with the B class. The model's output is therefore five rating classes: AA and above, A, BBB, BB, and B and below.
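The min-max rescaling onto [-1, 1] described above is a one-line transformation per variable; a sketch (the sample ratio values are made up):

```python
def scale_minmax(values, lo=-1.0, hi=1.0):
    """Linearly rescale a list of numbers onto [lo, hi] using its min and max."""
    vmin, vmax = min(values), max(values)
    span = vmax - vmin
    return [lo + (hi - lo) * (v - vmin) / span for v in values]

ratios = [3.2, 7.8, 5.5, 4.1]          # e.g. one financial ratio across firms
print(scale_minmax(ratios))
```

The minimum always maps to -1 and the maximum to 1, so every input variable spans the same range regardless of its original units.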
4.2 SVM
This study uses the LIBSVM3 software, developed in C++, with the RBF kernel function. The kernel parameter γ and the penalty C of the cost function must be decided in advance. Following the "simple grid search" proposed by Hsu et al. (2003), we used ten-fold cross validation to search for suitable values of γ and C (see Figure 2); the two values found were γ = 0.5 and C = 8, which were used to fit the SVM classification model.
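The grid search over (γ, C) pairs can be sketched as below. The fold-partition helper shows the shape of the ten-fold split, while `fake_cv_accuracy` is a made-up stand-in for the cross-validated accuracy LIBSVM would report for each candidate pair; it peaks at γ = 0.5, C = 8 purely for illustration:

```python
import itertools

def kfold_indices(n, k=10):
    """Partition indices 0..n-1 into k folds (sizes differ by at most one)."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def grid_search(evaluate, gammas, cs):
    """Return the (gamma, C) pair with the highest evaluation score."""
    return max(itertools.product(gammas, cs),
               key=lambda gc: evaluate(*gc))

# Hypothetical scorer standing in for ten-fold cross-validated accuracy.
def fake_cv_accuracy(gamma, c):
    return -abs(gamma - 0.5) - abs(c - 8) / 100.0

grid = grid_search(fake_cv_accuracy,
                   gammas=[2.0 ** e for e in range(-5, 3)],   # powers of two
                   cs=[2.0 ** e for e in range(-2, 6)])
print(grid)            # → (0.5, 8.0)
```

A real run would replace `fake_cv_accuracy` by training on nine folds and scoring the held-out fold for each candidate pair, averaged over the ten folds.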
Table 4: Definitions of the input variables

Variable | Definition
X1 | Country-risk variable: the sovereign ratings of the sampled countries are either AAA (e.g., the United States) or AA, so a dummy variable distinguishes them: firms in AAA-rated countries take the value 1, and firms in AA-rated countries take the value 0.
X2i, i = 1, 2, 3 (year) | Pretax interest coverage = (pretax income from continuing operations + interest expense) / gross interest
X3i, i = 1, 2, 3 (year) | EBITDA interest coverage = (pretax income from continuing operations + interest expense + depreciation and amortization) / gross interest
X4i, i = 1, 2, 3 (year) | Pretax return on permanent capital = pretax income from continuing operations / average of beginning- and end-of-year (long-term debt + non-current deferred taxes + minority interest + shareholders' equity + average short-term borrowings) x 100%
X5i, i = 1, 2, 3 (year) | Operating income as a % of sales = operating income / sales x 100%, where operating income = sales - cost of goods sold (before depreciation and amortization) - selling, general and administrative expenses - research and development costs
X6i, i = 1, 2, 3 (year) | Funds from operations as a % of total debt = funds from operations / total debt x 100%, where funds from operations = net income from continuing operations + depreciation and amortization + deferred income taxes + other non-cash items
X7i, i = 1, 2, 3 (year) | Free operating cash flow as a % of total debt = (funds from operations - capital expenditures - (+) increase (decrease) in working capital, excluding changes in cash and short-term debt) / total debt
X8i, i = 1, 2, 3 (year) | Total debt as a % of capitalization = total debt / (total debt + shareholders' equity)
X9i, i = 1, 2, 3 (year) | Sales
X10i, i = 1, 2, 3 (year) | Shareholders' equity
X11i, i = 1, 2, 3 (year) | Total assets
Figure 2: Contour plot over the γ and C values
(Note: lg denotes the base-2 logarithm, and the contour values in the plot are the classification accuracy on the validation set.)
4.3 ANN
The ANN models were built with Matlab 6.1 as backpropagation neural networks trained with the Levenberg-Marquardt algorithm. The input layer consists of 31 input units. For the hidden-layer design, the number of hidden units in classification problems is usually no larger than the number of input units (Diamantaras & Kung 1996), but there is no generally accepted rule fixing the best hidden-layer size, so this study tested 1 to 31 hidden units, using the hyperbolic tangent sigmoid transfer function (the tansig function in Matlab). The output layer contains 5 units, corresponding to the five credit-rating classes. For the learning rate and the number of training epochs, this study tested learning rates in {0.001, 0.005, 0.01, 0.05, 0.1} and epochs in {10, 100, 1000, 1500, 2000}; the training-set accuracies of the various combinations are given in Table 5. To avoid overfitting, a validation set was split off from the training set and the early-stopping strategy was applied. After trying the 5 x 5 x 31 different ANN configurations, the setting with the highest training-set accuracy had a learning rate of 0.005, 10 training epochs, and 22 hidden units, so the 31-22-5 ANN architecture was selected.
5. Results and discussion
5.1 SVM classification results
The confusion matrices of the SVM model's classifications on the training and test sets are shown in Tables 6 and 7; the overall accuracies are 70.77% and 60.58% respectively. Although these rates are still some distance from 100%, they far exceed the roughly 20% accuracy expected from randomly assigning the five rating classes, so the model still carries informational value as a classification aid.

Looking at the accuracy of the individual rating classes (Table 6 for the training set, Table 7 for the test set): on the training set, class A is classified most accurately (80%), followed by BB (75%), then B and below (68.25%), then BBB (63.38%), with AA and above lowest (59.38%); on the test set, BBB is most accurate (72.73%), followed by A (68%), then BB (62.96%), then B and below (57.14%), with AA and above lowest (11.11%). The results show the following:

1. Class A, although most accurate on the training set, is not most accurate on the test set, yet it remains among the more accurately classified rating classes. The test-set confusion matrix shows that the cause of the misclassifications is that the model placed all 8 misclassified test observations one grade too low (misjudged as BBB). Even so, this class is, relatively speaking, one the SVM model of this study classifies well.

2. Class BBB, most accurate on the test set, was not especially accurate on the training set. A likely reason is that the model had already learned the general characteristics of this class during training, so its accuracy did not fall on the test set. Its misclassifications, too, mostly placed firms one grade too low (misjudged as BB).

3. Class "AA and above" has the lowest accuracy on both the training and the test set, and its misclassified observations were mostly placed one grade too low (that is, as A). For BB and "B and below", the errors mostly fall in adjacent classes (one grade too low or too high). The two extreme classes, "AA and above" and "B and below", are classified relatively poorly on the test set, presumably because they contain fewer observations, so the classification model has not yet fully learned their characteristics.

Table 6: Confusion matrix of SVM classification, training sample

Target \ Predicted | AA+ | A | BBB | BB | B- | Accuracy | Accuracy allowing one grade low | Accuracy allowing one grade either way
AA and above | 19 | 11 | 2 | 0 | 0 | 59.38% | 93.75% | 93.75%
A | 0 | 60 | 13 | 2 | 0 | 80.00% | 97.33% | 97.33%
BBB | 1 | 11 | 45 | 13 | 1 | 63.38% | 81.69% | 97.18%
BB | 1 | 5 | 5 | 63 | 10 | 75.00% | 86.90% | 92.86%
B and below | 0 | 2 | 1 | 17 | 43 | 68.25% | 68.25% | 95.24%
Overall | | | | | | 70.77% | 85.23% | 95.38%

Table 7: Confusion matrix of SVM classification, test sample

Target \ Predicted | AA+ | A | BBB | BB | B- | Accuracy | Accuracy allowing one grade low | Accuracy allowing one grade either way
AA and above | 1 | 8 | 0 | 0 | 0 | 11.11% | 100.00% | 100.00%
A | 0 | 17 | 8 | 0 | 0 | 68.00% | 100.00% | 100.00%
BBB | 0 | 1 | 16 | 5 | 0 | 72.73% | 95.45% | 100.00%
BB | 0 | 0 | 5 | 17 | 5 | 62.96% | 81.48% | 100.00%
B and below | 0 | 0 | 1 | 8 | 12 | 57.14% | 57.14% | 95.24%
Overall | | | | | | 60.58% | 85.58% | 99.04%

From the model results, the following decision implications for issuer credit-rating classification can be drawn:

In the SVM classification model of this study, the misclassifications of the classes BBB and above mostly place the firm one grade too low. A probable reason is that, under the capital-market regulations of many countries, some financial institutions may invest only in instruments of investment grade (that is, BBB and above); issuers of such instruments basically carry relatively low credit risk, and when rating agencies rate these firms they consider managerial and operating factors alongside the financial ones in settling the grade. This study could not collect operating and management data, which may be the cause of the one-grade-low errors.

Although the classification accuracy of this study is only 60.58%, from the standpoint of assisting or automating credit-rating decisions it still carries decision value, exceeding the random-decision baseline (20%) by 40.58 percentage points. Analyzing the cost a misclassification can cause, the SVM model's misjudgments mostly fall in adjacent grades, and the one-grade-low case is by far the most common. If we measure accuracy tolerating one-grade-low errors, the model's accuracy is 85.58%, and tolerating misjudgments one grade either way, it is 99.04%. In practice, a one-grade misjudgment does affect a firm's bond issuance, but it is not a severe error; this study can therefore still give the capital market preliminary credit-rating information, from which firms not yet rated can gain an early understanding of roughly where their own credit-risk grade falls, and investors can better understand the credit risk of instruments that lack a credit rating.
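The three accuracy columns of Table 7 can be recomputed directly from the confusion matrix; the sketch below reproduces the overall figures (rows are the target class, columns the predicted class, ordered from AA and above down to B and below):

```python
# Test-sample confusion matrix from Table 7 (target rows x predicted columns).
cm = [[1, 8, 0, 0, 0],
      [0, 17, 8, 0, 0],
      [0, 1, 16, 5, 0],
      [0, 0, 5, 17, 5],
      [0, 0, 1, 8, 12]]

total = sum(sum(row) for row in cm)

def accuracy(tolerate):
    """Share of cases whose predicted class index j is acceptable for target i."""
    hits = sum(cm[i][j]
               for i in range(5) for j in range(5)
               if tolerate(i, j))
    return hits / total

exact = accuracy(lambda i, j: j == i)               # correct class only
one_low = accuracy(lambda i, j: j in (i, i + 1))    # allow one grade too low
one_off = accuracy(lambda i, j: abs(i - j) <= 1)    # allow one grade either way

print(round(exact * 100, 2), round(one_low * 100, 2), round(one_off * 100, 2))
# → 60.58 85.58 99.04
```

Because class indices increase toward worse grades, "one grade too low" corresponds to a predicted index of i + 1.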
5.2 Comparison with ANN
To gauge SVM's classification performance, this study uses an ANN model as the benchmark for comparison. Because an ANN model may fail to reach the global optimum and get caught in a local optimum, and to account for the variation introduced by sampling, we drew 100 sets of samples by each of the two schemes, sampling without replacement and 0.75 bootstrapping with replacement, and used them to compare the performance of the SVM and ANN models. The test-set accuracy of each run is listed in Table 8; the SVM model's accuracy is markedly higher than the ANN model's. We applied a paired t-test (Witten & Frank 2000) to assess the difference between the two models' accuracies:

H0: SVM test-set accuracy - ANN test-set accuracy ≤ 0
H1: SVM test-set accuracy - ANN test-set accuracy > 0

The 100 runs of sampling without replacement give a test statistic t = 10.863 > t(0.01; 99) = 2.3646, and the 100 runs of 0.75 bootstrapping give t = 13.5318 > t(0.01; 99); both reject H0, so the test result is that the SVM model's test-set accuracy is significantly higher than the ANN model's.

Table 8: Test-set accuracies of the two methods over 100 runs
(Each row of the table lists three runs side by side; for each run the columns are the run number, the SVM and ANN test-set accuracies under sampling without replacement, and the SVM and ANN test-set accuracies under 0.75 bootstrapping.)
1 60.58% 55.34% 52.88% 33.33% 34 48.08% 41.35% 50% 44.34% 67 56.73% 6.73% 44.23% 41.90% 2 52.88% 44.23% 55.77% 37.38% 35 50% 23.08% 56.73% 49.57% 68 44.23% 44.23% 45.19% 48.08% 3 55.77% 50.00% 58.65% 29.73% 36 56.73% 45.19% 60.58% 45.19% 69 45.19% 39.42% 49.04% 44.66% 4 58.65% 23.08% 46.15% 32.63% 37 60.58% 38.46% 49.04% 50.88% 70 49.04% 25.00% 55.77% 46.59% 5 46.15% 29.81% 50.96% 30.10% 38 49.04% 46.15% 63.46% 48.54% 71 55.77% 30.77% 48.08% 44.64% 6 50.96% 24.04% 54.81% 39.60% 39 63.46% 47.12% 56.73% 46.08% 72 48.08% 48.08% 57.69% 33.33% 7 54.81% 46.15% 56.73% 13.68% 40 56.73% 41.35% 51.92% 43.12% 73 57.69% 53.85% 52.88% 45.76% 8 56.73% 48.08% 51.92% 47.00% 41 51.92% 38.46% 53.85% 46.73% 74 52.88% 39.42% 54.81% 55.56% 9 51.92% 46.15% 55.77% 36.61% 42 53.85% 35.58% 55.77% 40.00% 75 54.81% 35.58% 58.65% 47.01% 10 55.77% 46.15% 44.23% 40.74% 43 55.77% 44.23% 50.96% 52.48% 76 58.65% 39.42% 57.69% 37.37% 11 44.23% 54.81% 52.88% 36.52% 44 50.96% 46.15% 52.88% 47.12% 77 57.69% 49.04% 58.65% 52.78% 12 52.88% 30.77% 57.69% 43.75% 45 52.88% 36.54% 53.85% 43.93% 78 58.65% 16.35% 58.65% 47.96% 13 57.69% 40.38% 49.04% 19.61% 46 53.85% 36.54% 42.31% 36.36% 79 58.65% 15.38% 46.15% 46.43% 14 49.04% 25.00% 49.04% 8.25% 47 42.31% 46.15% 58.65% 49.55% 80 46.15% 25.96% 45.19% 45.87% 15 49.04% 46.15% 56.73% 53.54% 48 58.65% 21.15% 52.88% 39.42% 81 45.19% 23.08% 53.85% 37.86% 16 56.73% 60.58% 52.88% 41.35% 49 52.88% 48.08% 59.62% 33.64% 82 53.85% 43.27% 50% 40.21% 17 52.88% 37.50% 56.73% 45.71% 50 59.62% 55.77% 60.58% 42.86% 83 50% 43.27% 51.92% 40.78% 18 56.73% 50.96% 51.92% 32.35% 51 60.58% 38.46% 54.81% 31.13% 84 51.92% 47.12% 58.65% 26.36% 19 51.92% 45.19% 50.96% 49.57% 52 54.81% 51.92% 51.92% 42.42% 85 58.65% 48.08% 52.88% 31.13% 20 50.96% 44.23% 55.77% 46.61% 53 51.92% 39.42% 58.65% 51.69% 86 52.88% 35.58% 59.62% 53.77% 21 55.77% 42.31% 59.62% 21.93% 54 58.65% 49.04% 52.88% 41.90% 87 59.62% 5.77% 55.77% 50.00% 22 59.62% 37.50% 50% 38.89% 55 52.88% 36.54% 61.54% 35.14% 88 55.77% 
36.54% 58.65% 48.15% 23 50% 50.00% 47.12% 43.36% 56 61.54% 15.38% 45.19% 39.36% 89 58.65% 45.19% 50% 42.59% 24 47.12% 45.19% 51.92% 21.43% 57 45.19% 46.15% 49.04% 44.23% 90 50% 48.08% 49.04% 43.48% 25 51.92% 48.08% 47.12% 24.56% 58 49.04% 35.58% 55.77% 40.00% 91 49.04% 49.04% 52.88% 40.20% 26 47.12% 33.65% 51.92% 29.46% 59 55.77% 47.12% 57.69% 42.06% 92 52.88% 26.92% 53.85% 31.96% 27 51.92% 42.31% 52.88% 36.04% 60 57.69% 46.15% 50.96% 47.92% 93 53.85% 50.96% 60.58% 47.06% 28 52.88% 48.08% 44.23% 24.30% 61 50.96% 20.19% 54.81% 32.11% 94 60.58% 44.23% 47.12% 51.35% 29 44.23% 22.12% 53.85% 48.39% 62 54.81% 44.23% 54.81% 32.76% 95 47.12% 38.46% 50.96% 18.18% 30 53.85% 42.31% 50.96% 40.18% 63 54.81% 53.85% 52.88% 28.18% 96 50.96% 45.19% 51.92% 17.86% 31 50.96% 49.04% 49.04% 53.21% 64 52.88% 42.31% 59.62% 44.04% 97 51.92% 49.04% 49.04% 35.04% 32 49.04% 45.19% 56.73% 41.35% 65 59.62% 51.92% 51.92% 29.70% 98 49.04% 41.35% 52.88% 23.53% 33 56.73% 45.19% 48.08% 43.12% 66 51.92% 16.35% 56.73% 41.12% 99 52.88% 50.00% 47.12% 28.33% 100 47.12% 50.96% 51.92% 44.17%

For an equal-footing classification comparison with the SVM model, using the same drawn sample sets, we also analyze the 31-22-5 ANN architecture; its confusion matrix on the training set is given in Table 9 and on the test set in Table 10. Comparing it with SVM on the test set leads to the following conclusions:

1. Judging by the accuracy of the individual rating classes, the SVM model's accuracy exceeds the ANN model's for the three classes A, BBB, and BB, but falls below it for "AA and above" and "B and below", showing that the SVM model classifies the extreme classes comparatively poorly.

2. Judging by overall accuracy, the SVM model is higher than the ANN model.

3. Judging by accuracy that tolerates misjudgments one grade either way, the SVM model almost reaches 100% both overall and for the individual rating classes (except "B and below" at 95.24%).
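The paired t statistic used in the comparison above is t = d̄ / (s_d / √n), where d is the vector of per-run accuracy differences. A sketch on made-up numbers (five hypothetical runs instead of 100):

```python
import math

def paired_t(a, b):
    """Paired t statistic for H1: mean(a - b) > 0."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical (SVM, ANN) accuracy pairs, for illustration only.
svm = [0.61, 0.53, 0.56, 0.59, 0.46]
ann = [0.55, 0.44, 0.50, 0.23, 0.30]

print(round(paired_t(svm, ann), 3))
```

With the actual 100 pairs from Table 8, the same computation yields the t values reported in the text, to be compared against the one-sided critical value with n - 1 degrees of freedom.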
Table 9: Confusion matrix of ANN classification, training sample

Target \ Predicted | AA+ | A | BBB | BB | B- | Accuracy | Accuracy allowing one grade low | Accuracy allowing one grade either way
AA and above | 21 | 9 | 0 | 2 | 0 | 65.63% | 93.75% | 93.75%
A | 7 | 52 | 12 | 3 | 2 | 68.42% | 84.21% | 93.42%
BBB | 1 | 7 | 57 | 4 | 1 | 81.43% | 87.14% | 97.14%
BB | 1 | 3 | 1 | 72 | 7 | 85.71% | 94.05% | 95.24%
B and below | 0 | 1 | 1 | 3 | 59 | 92.19% | 92.19% | 96.88%
Overall | | | | | | 80.06% | 89.88% | 86.05%

Table 10: Confusion matrix of ANN classification, test sample

Target \ Predicted | AA+ | A | BBB | BB | B- | Accuracy | Accuracy allowing one grade low | Accuracy allowing one grade either way
AA and above | 4 | 3 | 2 | 0 | 0 | 44.44% | 77.78% | 77.78%
A | 0 | 13 | 9 | 2 | 0 | 54.17% | 91.67% | 91.67%
BBB | 1 | 2 | 14 | 4 | 2 | 60.87% | 78.26% | 86.96%
BB | 0 | 1 | 8 | 12 | 6 | 44.44% | 66.67% | 96.30%
B and below | 0 | 0 | 1 | 5 | 14 | 70.00% | 70.00% | 95.00%
Overall | | | | | | 55.34% | 76.70% | 86.05%
6. Conclusion
The main contributions of this study lie in examining the issuer credit-rating problem, which past research has rarely addressed, and in applying SVM, a relatively new AI classification tool, to build a classification model for issuer credit ratings. Because issuer ratings cannot, like bond ratings, draw on issue-specific features such as subordination to ease classification, they constitute a more complex classification decision problem. The issuer rating is the basis for the credit ratings of the various instruments an issuer sells, and most of the credit-rating information currently available in Taiwan likewise concerns issuer ratings; the SVM issuer-rating decision model of this study can therefore give the capital market preliminary issuer credit-rating information.

SVM is a decision tool that has achieved good classification results in recent years and has already been applied to problems in medicine and engineering. Management science likewise faces many classification problems; this study takes issuer credit rating as an example to examine SVM's applicability in the management field, using an ANN model, another AI method, as the benchmark for comparison, and finds that SVM's classification accuracy exceeds ANN's in this study.

The SVM model's test-set accuracy reaches only 60.58%, with most of the errors stemming from placing a firm one credit-rating grade too low. A suggested direction for future research is to collect operating- and management-side data, so that the model can learn more complete credit-rating attributes, eliminate the one-grade-low errors, and raise accuracy further. Having obtained better classification accuracy on the issuer credit-rating problem than the ANN benchmark designed in this study, the SVM method could also be applied to other management problems in future work.

Acknowledgments
The authors thank two anonymous reviewers for their valuable comments. This research was supported by the National Science Council under grant NSC 95-2416-H-182-008, which is gratefully acknowledged.
References
1. Belkaoi, A. “Industrial Bond Ratings: A New Look,” Financial Management (Autumn) 1980, pp: 44-51
2. Burbidge, R., Trotter, M., Buxton B. and Holden, S. “Drug Design by Machine Learning: Support Vector Machines for Pharmaceutical Data Analysis,” Computers
and Chemistry (26) 2001, pp: 5-14
3. Cai, Y.-D. and Lin, X.-J. “Prediction of Protein Structural Classes by Support Vector Machines,” Computers and Chemistry (26) 2002, pp: 293-296
4. Diamantaras, K.I. and Kung, S.Y. Principal Component Neural Networks: Theory and
Applications, John Wiley, New York, 1996
5. Dutta, S. and Shekhar, S. “Bond Rating: A Non-Conservative Application of Neural Networks,” Proceedings of the IEEE International Conference on Neural Networks (II) 1988, pp: 443-450
6. Ederington, L.H., “Classification Models and Bond Ratings,” The Financial Review (20:4) 1985, pp: 237-262
7. Fisher, L. “Determinants of Risk Premiums on Corporate Bonds,” Journal of Political
Economy (June) 1959, pp: 217-237
8. Gunn, S.R. “Support Vector Machines for Classification and Regression,” unpublished manuscript, Faculty of Engineering and Applied Science Department of Electronics and Computer Science, University of Southampton, 1998, pp: 1-54
9. Horrigan, J.O. “The Determination of Long Term Credit Standing with Financial Ratios,” Journal of Accounting Research (Supplement) 1966, pp: 44-62
10. Hsu, C.-W., Chang, C.-C. and Lin, C.-J. “A Practical Guide to Support Vector Classification,” Department of Computer Science and Information Engineering, National Taiwan University, 2003
11. Kim, K.-S. and Han, I. “The Cluster-indexing Method for Case-based Reasoning Using Self-organizing Maps and Learning Vector Quantization for Bond Rating Cases,” Expert Systems with Applications (21) 2001, pp: 147-156
12. Lippmann, R.P. “An Introduction to Computing with Neural Nets,” IEEE ASSP Magazine (4) 1987, pp: 4-22
13. Maher, J.J. and Sen, T.K. “Predicting Bond Ratings Using Neural Networks: A Comparison with Logistic Regression,” Intelligent systems in accounting, finance and
management (6) 1997, pp: 59-72
14. Minoux, M. Mathematical Programming: Theory and Algorithms, John Wiley and Sons, 1986
15. Molinero, C.M., Gomez, C.A. and Cinca, C.S. “A Multivariate Study of Spanish Bond Ratings,” Omega (24:4) 1996, pp: 451-462
16. Morris, C.W. and Autret, A. “Support Vector Machines for Identifying Organisms - A Comparison with Strongly Partitioned Radial Basis Function Networks,” Ecological
modeling (146) 2001, pp: 57-67
17. Pinches, E. and Mingo, K.A. “A Multivariate Analysis of Industrial Bond Ratings,”
Journal of Finance (March) 1973, pp: 1-18
18. Pinches, E. and Mingo, K.A. “The Role of Subordination and Industrial Bond Ratings,” Journal of Finance (March) 1975, pp: 201-206
19. Shin , K.-S. and Han, I. “Case-based Reasoning Supported by Genetic Algorithms for Corporate Bond Rating,” Expert Systems with Application (16) 1999, pp: 85-95
20. Shin, K.-S. and Han, I. “A Case-based Approach Using Inductive Indexing for Corporate Bond Rating,” Decision Support Systems (32) 2001, pp: 41-52
21. Smith, K.A. and Gupta, J.N.D. “Neural Networks in Business: Techniques and Applications for the Operations Research,” Computers and Operations Research (27) 2000, pp: 1023-1044
22. Standard & Poor's Corporation Standard & Poor’s Corporate Ratings Criteria, McGraw Hill Book Company, 1996
23. Surkan, A.J. and Singleton, J.C. “Neural Networks for Bond Rating Improved by Multiple Hidden Layers,” Proceedings of the IEEE International Conference on
Neural Networks (2) 1990, pp: 163-168
24. Tay, F.E.H. and Cao, L. “Application of Support Vector Machines in Financial Time Series Forecasting,” Omega (29) 2001, pp: 309-317
25. Vapnik, V. N. The Nature of Statistical Learning Theory, New York, Springer-Verlag, 1995
26. West, R.R. “An Alternative Approach to Predicting Corporate Bond Ratings,” Journal
of Accounting Research (Spring) 1970, pp: 118-127
27. Witten, I.H., Frank E. Data Mining: Practical Machine Learning Tools and
Techniques with Java Implementations, San Francisco, Morgan Kaufmann Publishers, 2000