

A Study of SVM Classification Models

in Issuers’ Credit Ratings

Jen-Ying Shih

Department of Business Administration, Chang Gung University

Wun-Hwa Chen

Graduate Institute of Business Administration, National Taiwan University

Soushan Wu

College of Management, Chang Gung University

Abstract

Credit rating systems have existed for a long time in most financial markets and play a major role in corporate capital raising, in providing investment information to both individual and institutional investors, and in credit granting by banks. The purpose of credit ratings is to measure the creditworthiness of credit securities' issuers so as to provide investors with valuable information for making financial decisions. Because the subordination of a bond has a great impact on its rating (rendering the rating problem much easier to solve), most early research focused on industrial bond ratings rather than issuers' credit ratings. In terms of classification approaches, early studies relied on conventional statistical methods, while recent studies tend to apply artificial-intelligence-based techniques, such as artificial neural networks and case-based reasoning. The main objective of this research is to propose a classification model for issuers' credit ratings based on support vector machines (SVM), a novel classification algorithm known for handling high-dimensional classification problems.

To verify the capability of the proposed model, a set of Standard and Poor's (S&P) issuers' credit rating data was used as the test bed. To construct our classification models, the ten key financial variables used by S&P and a country-risk variable were chosen as the inputs. An artificial neural network (ANN) classification model was selected as the benchmark. Our empirical results show the superiority of the support vector machine model over the artificial neural network model.

1. Introduction

An issuer credit rating is the counterparty credit rating assigned by a rating agency: the agency's opinion of the obligor's overall capacity to meet its financial obligations, focusing on its ability and willingness to meet its financial commitments on time, without reference to the seniority or preference of any particular obligation.¹ The higher an issuer's credit risk, the lower the credit rating it obtains; conversely, the ratings that the bonds it issues can obtain when raising funds are in turn affected by the issuer's rating. Credit ratings thus serve in financial markets to transmit credit-risk information. Those who need capital can raise funds in the capital market at their corresponding rating, effectively lowering financing costs and the risk of a funding shortfall; those who supply capital can, in a transparent information environment, select investment instruments that suit their own risk preferences. This information is especially meaningful for heavily regulated institutional investors: for example, some regulated investment institutions and funds are restricted to instruments rated BBB (or Baa) and above,² which makes the provision of credit rating information all the more important. In sum, the value of a credit rating lies in conveying an issuer's credit risk to investors through a simple symbol, so that investors can make suitable investment decisions according to their own risk preferences (Molinero et al. 1996).

Basically, a rating agency collects, partly through interviews, various kinds of information about the issuer, including its country risk, business risk, and financial risk; the rating analysts' professional assessment then yields the issuer's overall credit-risk grade. Although assigning a credit rating is a systematic classification decision process, the analysts' assessment skill has not yet been systematically articulated: the relation between the rating factors and the rating grades cannot be quantified. Moreover, many firms have not yet been rated by an agency, while investors still have a strong demand for credit-risk information about them. Many researchers have therefore applied various statistical methods (Belkaoi 1980; Ederington 1985; Pinches & Mingo 1975) and artificial-intelligence methods (Dutta & Shekhar 1988; Surkan & Singleton 1990; Shin & Han 1999) to this classification problem and obtained useful decision information and classification performance. Prior research, however, has mostly focused on classifying bond ratings, where models of relatively high accuracy can exploit issue-specific conditions (e.g., a bond's subordination); few studies develop classification models for the more fundamental problem of issuer credit ratings. In practice, market participants' demand for issuer-rating information is often keener: a prospective bond issuer can pre-assess its own credit grade before deciding how to raise funds, while investors can gain a preliminary understanding of the risk of issuers that have not yet obtained a rating. This study therefore investigates classification models for issuer credit ratings. Furthermore, past bond-rating research has demonstrated the advantages of AI methods (Dutta & Shekhar 1988; Maher & Sen 1997), applying artificial neural networks (ANN), genetic algorithms, and case-based reasoning (CBR) to bond rating, but has seldom examined the applicability of support vector machines (SVM), a classifier developed in recent years.

¹ The above description of issuer credit ratings follows information on the S&P website, http://www2.standardandpoors.com.
² S&P's long-term credit ratings comprise ten grades: the investment-grade ratings are AAA, AA, A, and BBB, and the speculative-grade ratings include BB, B, CCC, CC, C, and D. Moody's long-term credit ratings likewise comprise ten grades: the investment-grade ratings are Aaa, Aa, A, and Baa, and the speculative-grade ratings include Ba, B, Caa, Ca, C, and D.

SVM has been applied to many types of classification problems and has achieved high classification accuracy, in application domains including drug design (Burbidge et al. 2001), protein structural class prediction (Cai & Lin 2002), and organism identification (Morris & Autret 2001). In the financial domain, most applications use support vector regression (SVR) to forecast the prices and returns of various financial instruments (Tay & Cao 2001); studies of support vector classification (SVC) are comparatively rare. This paper therefore investigates the applicability of SVM to a financial classification problem, credit rating, and compares it with the ANN approach commonly used by many researchers.

The rest of the paper first reviews and organizes prior credit rating research; next, the SVM and ANN methods are introduced, with a more detailed account of SVM because it is the newer method; the third part describes the research design; the fourth part presents and discusses the issuer-rating classification results; finally, we summarize the conclusions and suggest directions for future research.

2. Related Research on Credit Ratings

2.1 Types of problems studied

Early credit rating research mostly concentrated on classifying the ratings of long-term corporate bonds; more recently there have also been studies of short-term obligations. In choosing input variables, these studies usually include the bond's subordination, issue size, or maturity (Horrigan 1966; West 1970; Pinches & Mingo 1973, 1975), so their classification accuracy tends to be higher than when such issue features are excluded. According to Pinches & Mingo (1975), Moody's corporate bonds rated A and above are almost all nonsubordinated, while those rated Ba and below are almost all subordinated, making this factor an important predictor of bond ratings. In practice, however, rating agencies first produce an issuer rating and only then rate the credit instruments the issuer sells. We therefore aim to develop issuer-rating classification models, which are more fundamental and more widely applicable.

2.2 Rating methods

Traditional multivariate statistics has been the most commonly applied methodology, including multivariate linear discriminant analysis (Belkaoi 1980; Ederington 1985; Pinches & Mingo 1975), multivariate nonlinear discriminant analysis (Pinches & Mingo 1977), linear regression (Horrigan 1966; West 1970; Ederington 1985), probit regression (Ederington 1985), logit regression (Ederington 1985), and multidimensional scaling (Molinero et al. 1996). These statistical methods often apply only when specific statistical assumptions (e.g., normally distributed data) are satisfied. AI methods, which require no statistical assumptions about the data, together with advances in information technology that can satisfy their heavy computational demands, have therefore attracted more and more experts and scholars in both academia and practice. Researchers have applied backpropagation neural networks (Dutta & Shekhar 1988; Surkan & Singleton 1990), a hybrid of genetic algorithms and CBR (Shin & Han 1999), a hybrid of neural networks and CBR (Kim & Han 2001), and CBR (Shin & Han 2001) to the bond rating problem, obtaining quite respectable classification accuracy. In recent years, many studies have proposed the SVM method as a good classifier that avoids the local-minimum problem ANN may encounter and seeks the globally best solution, and it has been used to solve many classification problems. This study therefore also applies the SVM method to the issuer-rating classification problem.

2.3 Input variable selection

For input variable selection, some studies use multivariate statistical procedures such as principal component analysis, factor analysis, and stepwise selection to choose the important inputs (Pinches & Mingo 1975, 1977; Shin & Han 1999; Kim & Han 2001; Shin & Han 2001), while other researchers choose appropriate inputs subjectively on grounds of economic rationale (Horrigan 1966; Belkaoi 1980). Because of data availability, these variables mostly emphasize corporate financial information (especially financial ratios) and the issue conditions of the rated instrument (a bond's subordination), and rarely use operating or market information. Pinches & Mingo (1975) regarded this as the reason classification accuracy cannot be raised much further, and as where the remaining value of rating analysts' subjective judgment lies.

Since the object is to classify the ratings assigned by Standard and Poor's (S&P), and given data availability, this study tries using only the financial variables emphasized in S&P's rating methodology as input variables. Broadly, S&P states that the financial factors it mainly considers cover four aspects: leverage, coverage, profitability, and cash flow (S&P 1996). We examine how well classification models perform on this basis.

2.4 Data period

Rating agencies do not decide a rating from a single year of data. Yet prior studies, apart from a small number of profitability-growth and stability factors spanning several years (West 1970; Pinches & Mingo 1973, 1975), such as the variability of earnings over the past nine years, mostly collect only the past year of data to classify bond ratings, or use averages over the past five years as inputs (Pinches & Mingo 1973, 1975). This differs considerably from rating practice, so this study uses three complete fiscal years of historical data to classify credit ratings.

2.5 Output design

In past studies applying neural networks to corporate bond rating (Dutta & Shekhar 1988; Surkan & Singleton 1990), the output was not, as in studies using traditional statistics, a multi-grade classification (i.e., more than two classes), but a binary classification: deciding whether a bond belongs to a particular rating grade. Under this binary design the networks classified very well, with accuracy as high as about 88%; regrettably, no multi-grade classification was attempted. On such an unequal basis of comparison, one can only claim that the neural network method beats traditional statistics in accuracy at deciding whether a bond belongs to a given rating grade; the multi-grade case awaits further investigation.

Along these five dimensions, Table 1 summarizes the prior studies.

Table 1: Comparison of prior credit rating studies (dimension order: ratings studied; method; input variables; data period; output classes; best accuracy)

- Horrigan 1966. Bond ratings issued by Moody's and S&P; multiple regression; inputs: subordination, total assets, working capital/sales, net worth/total debt, sales/net worth, net income/sales; one year of data; six rating grades; best accuracy 58% (Moody's), 52% (S&P).
- West 1970. Moody's bond ratings; multiple regression; inputs following Fisher (1959): nine-year earnings variability, period of solvency, debt/equity ratio, and bonds outstanding; one year; six grades (Aaa, Aa, A, Baa, Ba, B); 62%.
- Pinches & Mingo 1973. Moody's bond ratings; linear multiple discriminant analysis (MDA); six of 35 variables selected by factor analysis, including years of consecutive dividends, issue size, (net income + interest expense)/interest expense, long-term debt/total assets, net income/total assets, and subordination; one year plus five-year averages of the inputs; five grades (Aa, A, Baa, Ba, B); 71.5%.
- Pinches & Mingo 1975. Moody's bond ratings; quadratic MDA with two models built by subordination status; same factor-analysis inputs as above; same data period; five grades; 75.4%.
- Belkaoi 1980. S&P bond ratings; stepwise MDA; inputs chosen on economic grounds, including total assets, total debt, long-term debt/total invested capital, short-term debt/total invested capital, current ratio, interest and preferred-dividend coverage, (net income + after-tax interest)/(after-tax interest + preferred dividends), price per share/common book value per share, and a 0-1 indicator of seniority; one year; six grades (AAA, AA, A, BBB, BB, B); 62.8%.
- Ederington 1985. Moody's bond ratings; linear regression (LR), ordered probit (OP), unordered logit (UL), linear MDA (LM), and quadratic MDA (QM); inputs: predicted interest coverage, time-series forecasts of profitability and their estimated standard errors, and forecasts of (cash flow/long-term debt) with their standard errors; one year; six grades (Aaa, Aa, A, Baa, Ba, B); LR 65%, OP 78%, UL 73%, LM 69%, QM 72%.
- Dutta & Shekhar 1988. S&P bond ratings; backpropagation neural network; ten inputs designed for rating relevance and data availability, including debt/(cash + fixed assets), debt ratio, sales/net worth, operating income/sales, financial strength, past and projected five-year sales growth, working capital/sales, and a subjective prospect of the company; one year (five-year growth rates excepted); two classes (whether a bond belongs to a given grade, e.g., AA); 83.3%.
- Surkan & Singleton 1990. Moody's ratings of the Bell telephone operating companies divested from AT&T; backpropagation neural network; seven variables following Peavy and Scott, including debt/total capital, pretax interest expense/net income, return on equity (ROE), five-year ROE variation, log(total assets), plant construction cost/total cash inflow, and toll revenue share; one year (ROE variation excepted); two classes, Aaa vs. (A1, A2, A3); 88%.
- Molinero et al. 1996. S&P short-term ratings of Spanish banks; multidimensional scaling; 24 financial ratios measuring profitability, capital structure, cost, and risk structure; one year; produces a cluster map for understanding the banks' distribution rather than grade predictions; accuracy not stated.
- Maher & Sen 1997. Moody's bond ratings; backpropagation neural network compared with logistic regression; seven variables, including total assets, long-term debt/total assets, net income/total assets, subordination, common-stock beta, net pension liability, and income from operating divisions and extraordinary items; one year (three ratios as five-year averages); six grades (Aaa, Aa, A, Baa, Ba, B); 70%.
- Shin & Han 1999. Korean commercial paper ratings; genetic algorithm (GA) finding a weight vector for the attributes of a CBR method; 12 financial ratios screened from 168 by factor analysis, ANOVA tests, and stepwise selection; one year; five grades; weighted-average accuracy 75.5%.
- Kim & Han 2001. Korean short-term bond ratings; CBR integrating the SOM and LVQ neural networks; 13 financial ratios screened from 129 variables (including 4 categorical variables and 125 ratios) by factor analysis and stepwise selection; one year; five grades; weighted-average accuracy 69.1%.
- Shin & Han 2001. Korean commercial paper ratings; CBR with inductive indexing; 12 financial variables screened from 27 (23 quantitative and 4 qualitative) by factor analysis, ANOVA and Kruskal-Wallis tests, and stepwise selection; one year; five grades; weighted-average accuracy 70.0%.

3. Classification Models

This section introduces the two methods adopted in this study: the SVM method and the ANN method. Because SVM applications in the management domain are still relatively new, we first introduce the theoretical foundations of SVM in some detail; readers already familiar with the method may skip this section without affecting their reading of the full text.

3.1 Support vector machines

(1) The SVM concept

SVM is a learning machine based on statistical learning theory. Its basic operating concept is to map the input vectors, via a linear or nonlinear kernel function, into a high-dimensional feature space and to find in that feature space the optimal hyperplane separating the classes. In this way, a problem that cannot be solved linearly in the original low-dimensional space can be classified in the high-dimensional space; the feature space can even be infinite-dimensional, because computationally the weights need not be computed explicitly. By choosing an appropriate kernel function, the nonlinear mapping lets the decision function solve the problem in this new feature space. This property allowed Vapnik to apply structural risk minimization (SRM) to nonlinear problems while still using the same optimization technique. The decision function determined by SVM is composed of a special set of vectors selected from the training data, called support vectors; for this reason the whole algorithm is called support vector machines (Vapnik 1995).

Take Figure 1 as an example, where the data fall into two groups, "o" and "+". Geometrically, SVM finds an optimal separating hyperplane (a decision function), the solid line in the figure, that separates the two groups of data, and this decision function has many good statistical properties. After the SVM computation, the circled points in the figure are the support vectors: the four points lying on the dashed lines are mainly used to determine the decision function, while the one circled point not on a dashed line violates the separation constraint and cannot be classified correctly, so it also enters the support-vector set; it is a representative example for the soft-margin classifier, which takes a cost function into account.

Figure 1: An example of developing an SVM classifier with a linear decision function. (The figure shows the optimal separating hyperplane as a solid line, the two dashed lines on either side of it, and the distance between them as the margin.)

Because SVM does not run into the local-optimum problem during optimization, quite a few empirical studies have noted that SVM performs better than ANN (Morris & Autret 2001; Tay & Cao 2001; Cai & Lin 2002).

(2) Structural risk minimization

ANN suffers from a generalization problem: during training it often overfits. The main reason is that ANN produces its best classification model by the empirical risk minimization (ERM) principle, whereas SVM develops its best model by the structural risk minimization (SRM) principle, which Gunn (1998) has shown empirically to be superior to the ERM principle adopted by ANN. Briefly, the SRM principle minimizes an upper bound on the expected risk, rather than, as ERM does, minimizing the error on the training data sample. This difference gives SVM better generalization ability, which is precisely the goal of statistical learning theory (Vapnik 1995).

(3) Support vector classification

The following describes how SVC uses the available training examples to produce the hyperplane that serves as the optimal separating hyperplane classifier. Its basic spirit is to find the hyperplane that maximizes the margin, i.e., the distance between the hyperplane and the nearest data point of each class. Three cases are described in turn.

(a) Linearly separable case

Suppose the training examples are

  (x_1, y_1), ..., (x_l, y_l),  x ∈ R^n, y ∈ {+1, −1},

where x is the input vector. The data can be separated by a hyperplane into two classes, one labeled +1 and the other −1. If the data can be separated correctly without error, and the distance between the nearest vector of each class and the hyperplane is maximal, we say the data are optimally separated by the hyperplane. The hyperplane can be written in the canonical form

  y_i [⟨w, x_i⟩ + b] ≥ 1,  i = 1, ..., l.  (1)

The margin, the distance between the hyperplane and the two vectors nearest it on either side, is

  ρ(w, b) = min_{x_i: y_i = −1} d(w, b; x_i) + min_{x_i: y_i = +1} d(w, b; x_i)
          = min_{x_i: y_i = −1} |⟨w, x_i⟩ + b| / ‖w‖ + min_{x_i: y_i = +1} |⟨w, x_i⟩ + b| / ‖w‖
          = 2 / ‖w‖.  (2)

Maximizing (2) is equivalent to minimizing Φ(w) = ‖w‖² / 2. Applying Lagrangian relaxation, the problem is expressed as

  min_{w,b} Φ(w, b, α) = ‖w‖² / 2 − Σ_{i=1}^{l} α_i ( y_i [⟨w, x_i⟩ + b] − 1 ).  (3)

To ease solution, the primal problem can be transformed into the dual problem:

  max_α W(α) = max_α ( min_{w,b} Φ(w, b, α) ).  (4)

Taking the partial derivatives of (3) with respect to b and w, (4) can be expressed as

  max_α W(α) = max_α ( −(1/2) Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j ⟨x_i, x_j⟩ + Σ_{k=1}^{l} α_k ),  (5)

which is solved as

  α* = argmin_α (1/2) Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j ⟨x_i, x_j⟩ − Σ_{k=1}^{l} α_k  (6)
  s.t.  α_i ≥ 0, i = 1, ..., l;  Σ_{j=1}^{l} α_j y_j = 0.

The optimal separating hyperplane is then obtained as

  w* = Σ_{i=1}^{l} α_i* y_i x_i,  b* = −(1/2) ⟨w*, x_r + x_s⟩,  (7)

where x_r and x_s are any support vectors from each class satisfying α_r, α_s > 0, y_r = −1, y_s = +1. We obtain the hard classifier

  f(x) = sgn(⟨w*, x⟩ + b).  (8)

When the data cannot be completely separated, a soft classifier may be used instead:

  f(x) = h(⟨w*, x⟩ + b),  where h(z) = −1 for z < −1, h(z) = z for −1 ≤ z ≤ 1, and h(z) = +1 for z > 1.  (9)

According to the Kuhn-Tucker conditions,

  α_i ( y_i [⟨w, x_i⟩ + b] − 1 ) = 0,  i = 1, ..., l,  (10)

only the points x_i that satisfy y_i [⟨w, x_i⟩ + b] = 1 have nonzero Lagrange multipliers; we call these points support vectors (SVs). If the data are linearly separable, all SVs lie on the margin, and therefore usually only a small number of SVs is obtained. As a result, the hyperplane is determined by a small set of training data points; the other points can be deleted from the training set, and recomputing the hyperplane still yields the same answer. SVM can thus be used to summarize the information contained in the data set through the SVs it produces.
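The role of the support vectors in (7) and (8) can be sketched numerically. The following minimal Python example evaluates the hard classifier f(x) = sgn(Σ_i α_i y_i ⟨x_i, x⟩ + b); all data and parameter values are illustrative toys, not from this study, chosen so that the margin constraints y_i(⟨w, x_i⟩ + b) = 1 hold exactly at the support vectors:

```python
# Sketch of the hard classifier in equation (8):
# f(x) = sgn(sum_i alpha_i * y_i * <x_i, x> + b),
# where only the support vectors (alpha_i > 0) contribute.

def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def svm_decision(x, support_vectors, alphas, labels, b):
    """Evaluate sgn(<w*, x> + b) with w* expanded over the support vectors."""
    s = sum(a * y * dot(sv, x)
            for a, y, sv in zip(alphas, labels, support_vectors)) + b
    return 1 if s >= 0 else -1

# Toy 2-D problem: one support vector per class.
support_vectors = [(1.0, 1.0), (0.5, 0.5)]
labels = [1, -1]
# alpha = 4 for both points satisfies sum(alpha_i * y_i) = 0 and gives
# w* = (2, 2); with b = -3 the margin constraints hold exactly:
# y=+1 at (1,1): 2+2-3 = 1;  y=-1 at (0.5,0.5): 1+1-3 = -1.
alphas = [4.0, 4.0]
b = -3.0

print(svm_decision((2.0, 2.0), support_vectors, alphas, labels, b))  # -> 1
print(svm_decision((0.0, 0.0), support_vectors, alphas, labels, b))  # -> -1
```

Points on the +1 side of the implied hyperplane x1 + x2 = 1.5 are labeled +1, the rest −1; removing any non-support training point would leave this decision function unchanged.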

(b) Linearly non-separable case: the soft-margin technique

Besides searching for the boundary that separates the data groups correctly, real-world data sometimes cannot be separated completely correctly. Vapnik (1995) therefore additionally introduced a cost function with an extra cost term related to misclassification, in order to find such a boundary and thereby achieve generalization. The optimal-separating-hyperplane problem above can then be expressed as

  min Φ(w, ξ) = ‖w‖² / 2 + C Σ_{i=1}^{l} ξ_i  (11)
  s.t.  y_i [⟨w, x_i⟩ + b] ≥ 1 − ξ_i,  ξ_i ≥ 0,  i = 1, ..., l,

where ξ_i measures the misclassification error and C is a given parameter value. Following Minoux (1986), the Lagrangian is

  Φ(w, ξ, α, β) = ‖w‖² / 2 + C Σ_{i=1}^{l} ξ_i − Σ_{i=1}^{l} α_i ( y_i [⟨w, x_i⟩ + b] − 1 + ξ_i ) − Σ_{i=1}^{l} β_i ξ_i,  (12)

where α and β are Lagrange multipliers. As before, the dual problem can be found:

  max_α W(α) = max_α ( −(1/2) Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j ⟨x_i, x_j⟩ + Σ_{k=1}^{l} α_k ),  (13)

which is solved as

  α* = argmin_α (1/2) Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j ⟨x_i, x_j⟩ − Σ_{k=1}^{l} α_k  (14)
  s.t.  0 ≤ α_i ≤ C, i = 1, ..., l;  Σ_{i=1}^{l} α_i y_i = 0,

where the constant C must be decided; this parameter is an additional capacity control for the classifier.

(c) Generalizing to a high-dimensional feature space: kernel functions

When a linear boundary is inappropriate, SVM can map the input vector x into a high-dimensional feature space z. By choosing a nonlinear mapping, SVM can construct an optimal separating hyperplane in the feature space. Commonly used mappings include the polynomial kernel K(x, x′) = (⟨x, x′⟩ + 1)^d and the Gaussian radial basis function (RBF) K(x, x′) = exp(−γ ‖x − x′‖²).

With a kernel, the dual problem becomes

  α* = argmin_α (1/2) Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j K(x_i, x_j) − Σ_{k=1}^{l} α_k,  (15)

where K(x_i, x_j) is the kernel function that carries out the task of nonlinearly mapping into the feature space.
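For concreteness, the RBF kernel just defined can be written directly from its formula. The short Python sketch below (the γ value here is illustrative) shows the two properties that matter for (15): K(x, x) = 1, and K decays toward 0 as the points move apart:

```python
import math

def rbf_kernel(x, x2, gamma):
    """Gaussian RBF kernel K(x, x') = exp(-gamma * ||x - x'||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, x2))
    return math.exp(-gamma * sq_dist)

gamma = 0.5  # illustrative value here
print(rbf_kernel((1.0, 2.0), (1.0, 2.0), gamma))  # identical points: K = 1.0
print(rbf_kernel((0.0, 0.0), (3.0, 4.0), gamma))  # distant points: K near 0
```

The kernel is symmetric in its two arguments, so the matrix K(x_i, x_j) entering (15) is symmetric, as the dual formulation requires.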

3.2 Artificial neural networks

An artificial neural network imitates the distributed computation of biological neural networks. It is a network composed of many simple processing units interconnected in highly complex ways; the signal paths between processing units are called connections, and different topologies and algorithms can form various network models (Lippmann 1987), for example multilayer backpropagation networks, Hopfield networks, and Self-Organizing Map (SOM) networks. Neural networks have been applied successfully in many domains, for example marketing, retailing, banking, finance, insurance, and telecommunications (Smith & Gupta 2000).

The backpropagation algorithm is the neural-network algorithm best suited to solving prediction and classification problems. Its main network architecture comprises an input layer, hidden layers (one or more), and an output layer, each consisting of several processing units arranged in a row; each layer's input data are the previous layer's output data, and the connections between layers carry weights whose magnitudes control the influence of the previous layer's inputs. Its basic principle is gradient descent: the error function expressing the difference between the network's actual output and the target output is minimized (i.e., ERM), and the connection weights are adjusted continually to accomplish the network's training. Each time a training example is fed in, the output layer produces a predicted output value; comparing the target output with the predicted output yields an error function. The error function is then differentiated to seek its minimum, and the network uses the result of the differentiation to adjust the weights between layers, changing them continually to achieve the learning effect. This process, in which the output error is propagated backward to the hidden and input layers to adjust the weights, is why the algorithm is called backpropagation. Training the multilayer network architecture is an important step in building an intelligent application, and choosing appropriate network parameters is an important factor in obtaining good predictive ability; for the detailed algorithm see Lippmann (1987).
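The layered feed-forward computation described above can be sketched in a few lines. The network below is a toy 3-2-2 example with made-up weights (not the trained network used later in this study); it only illustrates how each layer transforms the previous layer's outputs through a hyperbolic-tangent transfer function:

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer with tanh activation.
    weights[j] holds the incoming weights of unit j."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass: input -> hidden layer -> output layer."""
    h = layer(x, w_hidden, b_hidden)
    return layer(h, w_out, b_out)

# Toy 3-2-2 network (3 inputs, 2 hidden units, 2 outputs); weights made up.
w_hidden = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
b_hidden = [0.0, 0.1]
w_out = [[1.0, -1.0], [0.5, 0.5]]
b_out = [0.0, 0.0]

out = mlp_forward([1.0, 0.5, -1.0], w_hidden, b_hidden, w_out, b_out)
# Every output lies in (-1, 1) because of the tanh transfer function.
```

Training would then compare `out` against a target vector and propagate the error gradient back through `w_out` and `w_hidden`, which is the step the backpropagation algorithm automates.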

4. Research Design

4.1 Sample

This study draws on the credit ratings and financial-variable data recorded in S&P's publication Global Sector Review and on its CARD compact disc, selecting firms in the United States and the Australia-New Zealand region that received S&P ratings in 1996. The industries sampled include consumer products, high technology, retailing, chemicals, building materials, automobiles, machinery and equipment, and steel, for a total of 429 sample firms. The training and test samples were allocated grade by grade in proportion to the distribution of all the data across rating grades (see Table 3): the 429 rated firms were split by rating grade at a ratio of roughly 75% to 25% into 325 training samples and 104 test samples. To gauge the validity and robustness of the models, this study additionally drew 100 sets of training and test samples by each of two schemes, sampling without replacement and 0.75 bootstrapping (Witten & Frank 2000), for the comparison of the SVM and ANN methods.

Table 3: Sample distribution
  Rating grade | Count | Percent | Training | Test
  AA and above | 41 | 9.6% | 32 | 9
  A | 100 | 23.3% | 75 | 25
  BBB | 93 | 21.7% | 71 | 22
  BB | 111 | 25.9% | 84 | 27
  B and below | 84 | 19.6% | 63 | 21
  Total | 429 | 100.0% | 325 | 104

For the independent variables, this study adopts the main factors S&P considers when assessing financial risk (Table 4): ten input variables covering the four aspects of profitability, interest coverage, capital structure, and cash-flow adequacy (short-term debt-service capacity from cash flows), plus one input variable for country risk (0 or 1, distinguishing AAA-rated from AA-rated countries). Business risk is omitted for now because its data are hard to collect and its subjective component is too large, though this omission may adversely affect model performance. For the data-collection period of the input variables, this study takes the financial data of the three fiscal years before the rating year (1993, 1994, and 1995), for a total of 31 independent variables (10 financial variables × 3 years + 1 country-risk variable). Because the value ranges of these variables are far from uniform, without transformation or normalization the importance of variables with small value ranges could not show up, while variables with large value ranges would dominate the whole network's learning and distort the learning outcome. The input variable values must therefore be transformed or normalized; this study uses a min-max mapping to convert the data values into values between −1 and 1.

In the design of the output variable, because AAA samples are few, they are merged with the AA grade into one group, and because samples below CCC are also few, they are merged with the B grade into one class. The model's output is thus divided into five grades: AA and above, A, BBB, BB, and B and below.
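The [−1, 1] transformation of the inputs can be sketched as a column-wise min-max mapping fitted on the training data (the sample column below is illustrative, not from the study's data set):

```python
def fit_minmax(column):
    """Return the (min, max) of one training-data column."""
    return min(column), max(column)

def scale_minmax(value, lo, hi):
    """Linearly map the interval [lo, hi] onto [-1, 1]."""
    if hi == lo:          # constant column: map everything to 0
        return 0.0
    return 2.0 * (value - lo) / (hi - lo) - 1.0

# Illustrative column, e.g. one financial ratio across training firms.
train_col = [2.0, 5.0, 8.0, 11.0]
lo, hi = fit_minmax(train_col)
scaled = [scale_minmax(v, lo, hi) for v in train_col]
# -> [-1.0, -1/3, 1/3, 1.0]
```

Test-set values are scaled with the same `lo` and `hi` fitted on the training set, so the transformation leaks no information from the test data.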

4.2 SVM

This study uses the LIBSVM software, developed in C++, with the RBF kernel. The kernel parameter γ and the cost parameter C must be determined. Following the "simple grid search" proposed by Hsu et al. (2003), with ten-fold cross validation to search for suitable γ and C values (see Figure 2), the two selected parameter values are γ = 0.5 and C = 8, which are used to obtain the SVM classification model.
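The selection logic of the simple grid search can be sketched as a loop over exponentially spaced (γ, C) pairs that keeps the pair with the best cross-validation accuracy. Note that `cv_accuracy` below is a stub returning hypothetical accuracies so that the loop is runnable; in the actual procedure it would train an RBF-kernel SVM with ten-fold cross validation (e.g., through LIBSVM) for each pair:

```python
import math

def grid_search(gamma_exps, c_exps, cv_accuracy):
    """Exhaustive search over gamma = 2**ge, C = 2**ce; return the best triple."""
    best = (None, None, -1.0)
    for ge in gamma_exps:
        for ce in c_exps:
            gamma, c = 2.0 ** ge, 2.0 ** ce
            acc = cv_accuracy(gamma, c)
            if acc > best[2]:
                best = (gamma, c, acc)
    return best

# Hypothetical cross-validation accuracies peaking at gamma = 2**-1, C = 2**3,
# mimicking the values the paper's search selected (gamma = 0.5, C = 8).
fake_scores = {(-1, 3): 0.62, (-1, 1): 0.58, (1, 3): 0.55, (1, 1): 0.50}

def cv_accuracy(gamma, c):
    key = (round(math.log2(gamma)), round(math.log2(c)))
    return fake_scores.get(key, 0.40)

best_gamma, best_c, best_acc = grid_search([-3, -1, 1, 3], [-1, 1, 3, 5],
                                           cv_accuracy)
# -> best_gamma = 0.5, best_c = 8.0 under these hypothetical scores
```

In practice the exponent grids are first scanned coarsely and then refined around the best region, which is what the contour plot in Figure 2 visualizes.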

Table 4: Input variable definitions

- X1, country risk: because the sovereign ratings of the United States and Australia average AAA while New Zealand and similar countries are rated AA, a dummy variable distinguishes them: the value is 1 for AAA-rated countries and 0 for AA-rated countries.
- X2_i, i = 1, 2, 3 (years), EBIT interest coverage: (pretax income from continuing operations + interest expense + capitalized interest) / interest expense.
- X3_i, EBITDA interest coverage: (pretax income from continuing operations before depreciation and amortization + interest expense) / (interest expense + capitalized interest for the period).
- X4_i, pretax return on permanent capital: (pretax income from continuing operations + interest expense) / (average long-term debt due within one year + long-term debt + noncurrent deferred taxes + shareholders' equity + average short-term borrowings) × 100%.
- X5_i, operating income as a percentage of sales × 100%, where operating income = net sales − cost of goods sold (before depreciation and amortization) − selling and administrative expenses − research and development costs.
- X6_i, funds from operations as a percentage of total debt × 100%, where funds from operations = after-tax net income from continuing operations + depreciation and amortization + deferred income taxes + other noncash items.
- X7_i, free operating cash flow as a percentage of total debt: (funds from operations − capital expenditures − the increase (decrease) in working capital, excluding changes in cash and short-term investments) / total debt.
- X8_i, total debt as a percentage of capital: total debt / (total debt + shareholders' equity).
- X9_i, net sales.
- X10_i, shareholders' equity.
- X11_i, total assets.

Figure 2: Contour plot of the grid search over the γ and C parameter values. (Note: lg denotes the logarithm base 2; the contour values in the upper left are the classification accuracies on the validation set.)

4.3 ANN

This study designs the ANN models in Matlab 6.1, adopting a backpropagation network trained with the Levenberg-Marquardt algorithm; the input layer consists of 31 input units. For the hidden-layer design, when handling classification problems the number of hidden processing units is best kept no larger than the number of input units (Diamantaras & Kung 1996), but there is still no general rule for the optimal number of hidden units, so this study tested 1 to 31 hidden units. The hyperbolic tangent sigmoid transfer function (tansig in Matlab) was adopted, and the output layer contains 5 processing units, corresponding to the five credit-rating classes. For the learning rate and number of training epochs, this study tested learning rates in {0.001, 0.005, 0.01, 0.05, 0.1} and training epochs in {10, 100, 1000, 1500, 2000}; the training-set accuracies of the various combinations are given in Table 5. To avoid overfitting, we split a validation set out of the training set and adopted an early-stopping strategy to overcome the problem.

After trying 5 × 5 × 31 different ANN model designs, the configuration achieving the highest training-set accuracy has a learning rate of 0.005, 10 training epochs, and 22 hidden units, so the 31-22-5 ANN architecture was selected.


5. Results and Discussion

5.1 SVM classification results

The confusion matrices of the SVM model's classification results on the training and test sets are shown in Tables 6 and 7; the overall classification accuracies are 70.77% and 60.58%, respectively. Although these are still some distance from 100%, compared with the accuracy expected from randomly assigning the five credit-rating grades they are more than 20 percentage points above the 20% baseline. For the purpose of assisting classification decisions, the model therefore still has information value.

Viewing the results by the accuracy of individual grades (training set in Table 6, test set in Table 7): on the training set, accuracy is highest for the A grade (80.00%), followed by BB (75.00%), third B and below (68.25%), fourth BBB (63.38%), and lowest AA and above (59.38%); on the test set, the BBB grade is most accurate (72.73%), followed by A (68.00%), third BB (62.96%), fourth B and below (57.14%), and lowest AA and above (11.11%). The data show the following:
1. Although the A grade has the highest classification accuracy on the training set, it is not the highest on the test set, yet it remains the second most accurate grade. According to the test-set confusion matrix, the 8 misclassified A-grade test samples were all underestimated by one grade (mistaken for BBB). Even so, the A grade is still, relatively speaking, one of the grades the SVM model classifies more accurately.
2. The BBB grade, most accurate on the test set, is not particularly accurate on the training set. A likely reason is that the model had already learned the general characteristics of the grade from the training set, so that accuracy on the test set did not fall but rose. Its misclassifications likewise mostly underestimate by one grade (mistaken for BB).
3. The accuracy of the "AA and above" grade is the lowest on both the training and test sets, and its misclassified samples are mostly underestimated by one grade (i.e., as A). For BB and for B and below, misclassifications mostly fall in adjacent grades (one grade above or below). The classification accuracies of the two extreme grades, "AA and above" and "B and below", are lower on the test set, presumably because these two extreme grades have fewer samples and the classification model has not yet been able to learn their characteristics fully.

Table 6: SVM confusion matrix (training samples). Rows are target grades, columns are predicted grades (AA and above, A, BBB, BB, B and below); the last three columns give exact accuracy, accuracy allowing one-grade underestimation, and accuracy within one grade.
  AA and above: 19, 11, 2, 0, 0 | 59.38% | 93.75% | 93.75%
  A: 0, 60, 13, 2, 0 | 80.00% | 97.33% | 97.33%
  BBB: 1, 11, 45, 13, 1 | 63.38% | 81.69% | 97.18%
  BB: 1, 5, 5, 63, 10 | 75.00% | 86.90% | 92.86%
  B and below: 0, 2, 1, 17, 43 | 68.25% | 68.25% | 95.24%
  Overall model: 70.77% | 85.23% | 95.38%

Table 7: SVM confusion matrix (test samples). Rows are target grades, columns are predicted grades (AA and above, A, BBB, BB, B and below); the last three columns give exact accuracy, accuracy allowing one-grade underestimation, and accuracy within one grade.
  AA and above: 1, 8, 0, 0, 0 | 11.11% | 100.00% | 100.00%
  A: 0, 17, 8, 0, 0 | 68.00% | 100.00% | 100.00%
  BBB: 0, 1, 16, 5, 0 | 72.73% | 95.45% | 100.00%
  BB: 0, 0, 5, 17, 5 | 62.96% | 81.48% | 100.00%
  B and below: 0, 0, 1, 8, 12 | 57.14% | 57.14% | 95.24%
  Overall model: 60.58% | 85.58% | 99.04%

From the model results we can draw the following decision implications for developing credit-rating classification models.

In this study's SVM classification model, misclassifications of the grades BBB and above mostly underestimate by one grade. A likely reason: under the capital-market regulations of many countries, certain institutional investors may invest only in investment-grade instruments (i.e., BBB and above). Basically, the issuers of such BBB-and-above instruments carry relatively low credit risk, and when rating agencies rate these firms, operations and management factors beyond the financial ones may be the decisive considerations for the grade; this study's inability to collect operations and management data may be the cause of the one-grade underestimation.

Although the classification accuracy of this study is only 60.58%, from the viewpoint of supporting automated credit-rating decisions it still has decision-information value, 40.58 percentage points above the random-decision baseline (20%). Analyzing the potential loss cost of misclassification, the SVM model's errors mostly occur in adjacent grades, and underestimation by one grade is the most prevalent; if we evaluate with tolerance for one-grade underestimation, the model's accuracy is 85.58%, and with tolerance for errors within one grade either way, 99.04%. In practice, a one-grade error affects the securities a firm issues but is not a severe mistake. This study can therefore still provide the capital market with preliminary credit-rating information: firms that have not yet received a rating can first understand where their own credit-risk grade falls, and investors can further understand the credit risk corresponding to instruments that lack a rating.
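The three accuracy columns in Tables 6 and 7 follow mechanically from the confusion matrix. The short Python sketch below recomputes the overall test-set figures from the Table 7 counts (rows are target grades ordered from "AA and above" to "B and below", columns are predicted grades):

```python
def overall_accuracy(cm, tolerance):
    """Share of samples whose predicted grade index lies within `tolerance`
    of the target grade index (tolerance=0 gives the exact accuracy)."""
    total = sum(sum(row) for row in cm)
    hits = sum(cm[i][j]
               for i in range(len(cm))
               for j in range(len(cm[i]))
               if abs(i - j) <= tolerance)
    return hits / total

# Table 7: SVM test-set confusion matrix (targets AA+, A, BBB, BB, B-).
cm = [
    [1, 8, 0, 0, 0],
    [0, 17, 8, 0, 0],
    [0, 1, 16, 5, 0],
    [0, 0, 5, 17, 5],
    [0, 0, 1, 8, 12],
]

print(round(overall_accuracy(cm, 0) * 100, 2))  # 60.58 (exact)
print(round(overall_accuracy(cm, 1) * 100, 2))  # 99.04 (within one grade)
```

The one-sided "allowing one-grade underestimation" column of the tables is computed the same way, counting only the diagonal plus the cell one column to the right of it.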

5.2 Comparison with ANN

To understand SVM's classification performance, this study also uses an ANN model as the benchmark for comparison. Because an ANN model may fail to obtain the globally best solution and run into local optima, and to take the variability of the drawn samples into account, we drew 100 sample sets by each of the two schemes, sampling without replacement and 0.75 bootstrapping, to compare the performance of the SVM and ANN models; the accuracy of each run is shown in Table 8. To test whether the SVM model's accuracy is significantly higher than the ANN model's, a paired t-test (Witten & Frank 2000) is applied to the two models' accuracies:

  H0: SVM test-set accuracy − ANN test-set accuracy ≤ 0
  H1: SVM test-set accuracy − ANN test-set accuracy > 0

The experiment with 100 runs of sampling without replacement gives the test statistic t = 10.863 > t(0.01; 99) = 2.3646, and the experiment with 100 runs of 0.75 bootstrapping gives t = 13.5318 > t(0.01; 99); both reject H0, so the test concludes that the SVM model's test-set accuracy is significantly higher than the ANN model's.
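The paired t-test compares the two models run by run: with differences d_k = SVM_k − ANN_k over n runs, the statistic is t = mean(d) / (s_d / √n) with n − 1 degrees of freedom. A minimal implementation follows; the four accuracy pairs are the first four without-replacement runs from Table 8, used here for illustration only (the study's test uses all 100 runs):

```python
import math

def paired_t(xs, ys):
    """Paired t statistic for H1: mean(xs - ys) > 0."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# First four runs of Table 8 (sampling without replacement).
svm_acc = [0.6058, 0.5288, 0.5577, 0.5865]
ann_acc = [0.5534, 0.4423, 0.5000, 0.2308]

t = paired_t(svm_acc, ann_acc)
# H0 is rejected at the 1% level when t exceeds the critical value t(0.01; n-1).
```

Pairing by run matters: each SVM/ANN pair is trained on the same drawn sample set, so the test isolates the method difference from the sampling variation.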

Table 8: Test-set accuracies of the two methods over 100 runs.
(The table lists, for each of the 100 runs, the SVM and ANN test-set accuracies under sampling without replacement and under 0.75 bootstrapping. For example, run 1 gives SVM 60.58% vs. ANN 55.34% without replacement and SVM 52.88% vs. ANN 33.33% under bootstrapping. The SVM accuracies stay within roughly 42%-64% across the runs, while the ANN accuracies vary far more widely, from below 6% to about 61%.)

For direct comparison with the SVM model, using the same drawn sample set, we also analyze the 31-22-5 ANN architecture in detail (training set in Table 9, test set in Table 10). Comparing with SVM on the test set, we conclude:
1. By the accuracy of individual rating grades, the SVM model is more accurate than the ANN model for the A, BBB, and BB grades, but less accurate for "AA and above" and "B and below", indicating that SVM classifies the extreme grades relatively poorly.
2. By overall accuracy, the SVM model is higher than the ANN model.
3. Measured by accuracy with tolerance for errors within one grade, the SVM model almost reaches 100%, both overall and for every individual grade (except "B and below" at 95.24%).

Table 9: ANN confusion matrix (training samples). Rows are target grades, columns are predicted grades (AA and above, A, BBB, BB, B and below); the last three columns give exact accuracy, accuracy allowing one-grade underestimation, and accuracy within one grade.
  AA and above: 21, 9, 0, 2, 0 | 65.63% | 93.75% | 93.75%
  A: 7, 52, 12, 3, 2 | 68.42% | 84.21% | 93.42%
  BBB: 1, 7, 57, 4, 1 | 81.43% | 87.14% | 97.14%
  BB: 1, 3, 1, 72, 7 | 85.71% | 94.05% | 95.24%
  B and below: 0, 1, 1, 3, 59 | 92.19% | 92.19% | 96.88%
  Overall model: 80.06% | 89.88% | 86.05%

Table 10: ANN confusion matrix (test samples). Same layout as Table 9.
  AA and above: 4, 3, 2, 0, 0 | 44.44% | 77.78% | 77.78%
  A: 0, 13, 9, 2, 0 | 54.17% | 91.67% | 91.67%
  BBB: 1, 2, 14, 4, 2 | 60.87% | 78.26% | 86.96%
  BB: 0, 1, 8, 12, 6 | 44.44% | 66.67% | 96.30%
  B and below: 0, 0, 1, 5, 14 | 70.00% | 70.00% | 95.00%
  Overall model: 55.34% | 76.70% | 86.05%

6. Conclusion

The main contributions of this study are to address the issuer-credit-rating topic, seldom touched by prior research, and to apply SVM, a recent artificial-intelligence classification tool, to build issuer-rating classification models. Because an issuer rating, unlike an issue rating, cannot rely on issue conditions such as subordination that make rating grades easier to separate, it is a more complex classification decision problem. The issuer credit rating is the basis for the credit ratings of the issuer's various instruments, and most of the credit rating information currently available in Taiwan also concerns issuer ratings; the rating decision model developed here with the SVM method can provide the capital market with preliminary issuer-rating information.

In recent years the SVM method has been a decision-analysis tool achieving good results on classification problems, already applied in areas such as medicine and engineering; management science likewise faces many classification problems. Taking issuer credit rating as an example, this study explored SVM's applicability in the management domain and benchmarked the SVM model against another AI method, the ANN model, finding that SVM's classification accuracy in this study is higher than ANN's.

The test-set accuracy of this study's SVM model reaches only 60.58%, with errors mostly underestimating the credit-rating grade by one. We suggest that future research move toward collecting operations and management data, so that the model itself can learn a more complete set of rating attributes, eliminate the one-grade underestimation errors, and raise accuracy further. Since the SVM method, compared with the ANN benchmark designed in this study, achieves the better classification accuracy on the issuer-rating classification problem, we also suggest applying the method to other management problems in the future.

Acknowledgments

The authors thank the two anonymous reviewers for their valuable comments. This research was supported by the National Science Council research grant NSC 95-2416-H-182-008, which is gratefully acknowledged.

References

1. Belkaoi, A. “Industrial Bond Ratings: A New Look,” Financial Management (Autumn) 1980, pp: 44-51

2. Burbidge, R., Trotter, M., Buxton B. and Holden, S. “Drug Design by Machine Learning: Support Vector Machines for Pharmaceutical Data Analysis,” Computers

and Chemistry (26) 2001, pp: 5-14

3. Cai, Y.-D. and Lin, X.-J. “Prediction of Protein Structural Classes by Support Vector Machines,” Computers and Chemistry (26) 2002, pp: 293-296

4. Diamantaras, K.I. and Kung, S.Y. Principal Component Neural Networks: Theory and

Applications, John Wiley, New York, 1996

5. Dutta, S. and Shekhar, S. “Bond Rating: A Non-Conservative Application of Neural Networks,” Proceedings of the IEEE International Conference on Neural Networks (II) 1988, pp: 443-450

6. Ederington, L.H., “Classification Models and Bond Ratings,” The Financial Review (20:4) 1985, pp: 237-262

7. Fisher, L. “Determinants of Risk Premiums on Corporate Bonds,” Journal of Political

Economy (June) 1959, pp: 217-237

8. Gunn, S.R. “Support Vector Machines for Classification and Regression,” unpublished manuscript, Faculty of Engineering and Applied Science Department of Electronics and Computer Science, University of Southampton, 1998, pp: 1-54

9. Horrigan, J.O. “The Determination of Long Term Credit Standing with Financial Ratios,” Journal of Accounting Research (Supplement) 1966, pp: 44-62

10. Hsu, C.-W., Chang, C.-C. and Lin, C.-J. “A Practical Guide to Support Vector Classification,” Department of Computer Science and Information Engineering, National Taiwan University, 2003

11. Kim, K.-S. and Han, I. “The Cluster-indexing Method for Case-based Reasoning Using Self-organizing Maps and Learning Vector Quantization for Bond Rating Cases,” Expert Systems with Applications (21) 2001, pp: 147-156

12. Lippmann, R.P. "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine (4:2) 1987, pp: 4-22

13. Maher, J.J. and Sen, T.K. “Predicting Bond Ratings Using Neural Networks: A Comparison with Logistic Regression,” Intelligent systems in accounting, finance and

management (6) 1997, pp: 59-72

14. Minoux, M. Mathematical Programming: Theory and Algorithms, John Wiley and Sons, 1986

15. Molinero, C.M., Gomez, C.A. and Cinca, C.S. “A Multivariate Study of Spanish Bond Ratings,” Omega (24:4) 1996, pp: 451-462

16. Morris, C.W. and Autret, A. “Support Vector Machines for Identifying Organisms - A Comparison with Strongly Partitioned Radial Basis Function Networks,” Ecological

modeling (146) 2001, pp: 57-67

17. Pinches, E. and Mingo, K.A. “A Multivariate Analysis of Industrial Bond Ratings,”

Journal of Finance (March) 1973, pp: 1-18

18. Pinches, E. and Mingo, K.A. “The Role of Subordination and Industrial Bond Ratings,” Journal of Finance (March) 1975, pp: 201-206

19. Shin , K.-S. and Han, I. “Case-based Reasoning Supported by Genetic Algorithms for Corporate Bond Rating,” Expert Systems with Application (16) 1999, pp: 85-95

20. Shin, K.-S. and Han, I. “A Case-based Approach Using Inductive Indexing for Corporate Bond Rating,” Decision Support Systems (32) 2001, pp: 41-52

21. Smith, K.A. and Gupta, J.N.D. “Neural Networks in Business: Techniques and Applications for the Operations Research,” Computers and Operations Research (27) 2000, pp: 1023-1044

22. Standard & Poor's Corporation Standard & Poor’s Corporate Ratings Criteria, McGraw Hill Book Company, 1996

23. Surkan, A.J. and Singleton, J.C. “Neural Networks for Bond Rating Improved by Multiple Hidden Layers,” Proceedings of the IEEE International Conference on

Neural Networks (2) 1990, pp: 163-168

24. Tay, F.E.H. and Cao, L. “Application of Support Vector Machines in Financial Time Series Forecasting,” Omega (29) 2001, pp: 309-317

25. Vapnik, V. N. The Nature of Statistical Learning Theory, New York, Springer-Verlag, 1995

26. West, R.R. “An Alternative Approach to Predicting Corporate Bond Ratings,” Journal

of Accounting Research (Spring) 1970, pp: 118-127

27. Witten, I.H. and Frank, E. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations, San Francisco, Morgan Kaufmann Publishers, 2000
