
Chapter 5 Conclusions and Future Work

5.2 Future Directions

Several directions deserve further study in the future:

1) Further comparisons with other emotion models will be carried out. It would be interesting to investigate different models of robotic emotion generation and to evaluate their emotional intelligence through practical experiments.

2) For human emotion recognition, future work should focus on developing robust algorithms that can handle more natural visual and audio signals. Methods to extract more reliable features from both the visual and audio modalities will also be investigated to improve performance. Direct fusion of visual and audio features is planned as a way to overcome the problem of incomplete modality information; a minimal feature-level fusion sketch is given after this list.

3) Because the voice signal must be acquired through the embedded system, it is difficult to establish a benchmark for evaluating the developed recognition algorithm. In the future, a method to extract key phrases from an utterance will be investigated to increase the recognition rate, since the emotional state can be estimated more directly from such speech segments than from statistical features computed over the whole voice frame; the second sketch after this list illustrates this contrast.

4) In this study, all participants were aware that they were being tested, which makes the evaluation an intrusive test. Other, less intrusive types of testing can be studied in the future.
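
As a rough illustration of the direct (feature-level) fusion mentioned in item 2, the following minimal Python sketch concatenates hypothetical facial and prosodic feature vectors into a single vector before training one SVM classifier. The array shapes, the four emotion classes, and the use of NumPy and scikit-learn are assumptions made purely for illustration; this is not the fusion implementation developed in this thesis.

```python
# A minimal sketch of direct (feature-level) audio-visual fusion.
# Assumption: facial and prosodic feature vectors have already been
# extracted; the data below are random placeholders, not real features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_face = rng.normal(size=(200, 18))    # hypothetical facial features
X_voice = rng.normal(size=(200, 12))   # hypothetical prosodic features
y = rng.integers(0, 4, size=200)       # 4 illustrative emotion classes

# Direct fusion: concatenate both modalities into one feature vector
# and train a single classifier, instead of fusing classifier outputs.
X_fused = np.hstack([X_face, X_voice])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_fused, y)
print(clf.predict(X_fused[:5]))
```

One possible appeal of this arrangement is that a temporarily missing modality could, for example, be imputed or zeroed within its block of the fused vector, which is one simple way the incomplete-data problem mentioned above might be handled.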
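
The sketch below illustrates, under similarly simplified assumptions, the contrast described in item 3: the same statistical features (here, statistics of short-time energy and zero-crossing rate) computed over the whole utterance versus over a hypothetical key-phrase segment. The frame length, sampling rate, segment boundaries, and choice of features are illustrative only.

```python
# Illustrative statistical features over a whole utterance vs. a
# hypothetical key-phrase segment (all signal data are placeholders).
import numpy as np

def frame_signal(x, frame_len=400, hop=200):
    """Split a 1-D signal into overlapping frames (no padding)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])

def statistical_features(x):
    """Mean/std/max of short-time energy and zero-crossing rate."""
    frames = frame_signal(x)
    energy = np.mean(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.array([energy.mean(), energy.std(), energy.max(),
                     zcr.mean(), zcr.std(), zcr.max()])

fs = 16000                                                # assumed sampling rate
utterance = np.random.default_rng(1).normal(size=3 * fs)  # placeholder audio
key_phrase = utterance[fs:2 * fs]                         # assumed key-phrase span

print(statistical_features(utterance))    # features over the whole voice frame
print(statistical_features(key_phrase))   # features over the key phrase only
```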

Appendix A

Evaluation Questionnaire of Emotional Interaction


Vita

Name: Meng-Ju Han (韓孟儒)    Gender: Male

Date of birth: July 21, 1976    Place of origin: Taipei City

Thesis title (Chinese): 機器人情感模型及情感辨識設計
Thesis title (English): Design of Robotic Emotion Model and Human Emotion Recognition

Education and experience:

1. June 1998: B.S., Department of Electrical Engineering, National Taipei University of Technology
2. June 2003: M.S., Institute of Electrical Engineering, National Chung Hsing University
3. September 2003: Ph.D. program, Institute of Electrical and Control Engineering, National Chiao Tung University
4. November 2010: Associate Researcher, Mechanical and Systems Research Laboratories, Industrial Technology Research Institute

Publication List

Journal Papers

[1] Meng-Ju Han, Chia-How Lin and Kai-Tai Song, “Robotic Emotional Expression Generation Based on Mood Transition and Personality Model,” IEEE Transactions on Systems, Man and Cybernetics, Part B, to appear, November 5, 2012 (Accepted).

[2] Kai-Tai Song, Meng-Ju Han and Shih-Chieh Wang, “Speech-Signal-Based Emotion Recognition and Its Application to Entertainment Robots,” Journal of the Chinese Institute of Engineers, to appear, September 20, 2012 (Accepted).

[3] Kai-Tai Song, Meng-Ju Han and Jung-Wei Hong, “Online Learning Design of an Image-Based Facial Expression Recognition System,” Intelligent Service Robotics, Vol. 3, No. 3, pp. 151-162, 2010.

[4] Meng-Ju Han, Jing-Huai Hsu, Kai-Tai Song and Fuh-Yu Chang, “A New Information Fusion Method for Bimodal Robotic Emotion Recognition,” Journal of Computers, Vol. 3, No. 7, pp. 39-47, 2008.

Patents

[1] 宋開泰、韓孟儒、王仕傑、林家合、林季誼, “Facial Expression Detection Device and Detection Method Thereof” (表情檢測裝置及其表情檢測方法), People's Republic of China invention patent no. ZL 2009 1 0141299.1.

[2] 宋開泰、韓孟儒、許晉懷、洪濬尉、張復瑜, “Method of Emotion Recognition and of Learning New Recognition Information” (情緒辨識與對新辨識資訊之學習方法), Taiwan invention patent no. I365416.

[3] Kai-Tai Song, Meng-Ju Han, Jing-Huai Hsu, Jung-Wei Hong and Fuh-Yu Chang, “Method of Emotion Recognition,” U.S. patent publication no. 20080201144.

[4] 陳豪宇、韓孟儒、吳至仁、林泓宏、康哲儒、楊谷洋、宋開泰、蔡文祥、莊仁輝, “Mobile Image-Capturing System and Control Method Thereof” (移動式取像系統及其控制方法), Taiwan patent publication no. 200905617.

[5] 宋開泰、韓孟儒、王仕傑、林家合、林季誼, “Facial Expression Detection Device and Detection Method Thereof” (表情偵測裝置及其表情偵測方法), Taiwan patent publication no. 201039251; U.S. patent publication no. 20100278385.

[6] 宋開泰、韓孟儒、王仕傑、江銘峰、林家合, “Face Detection Device and Face Detection Method Thereof” (人臉偵測裝置及其人臉偵測方法), Taiwan patent publication no. 201040846; China patent application no. 200910141418.3; U.S. patent publication no. 20100284619.

[7] 宋開泰、韓孟儒、林嘉豪, “Device for Autonomous Robotic Emotional Expression and Method of Expressing a Robot's Autonomous Emotions” (機器人自主情感表現裝置以及表現機器人自主情感之方法), Taiwan patent publication no. 201123036; U.S. patent publication no. 20110144804.

[8] 宋開泰、韓孟儒、王仕傑, “Face Recognition Method and System Using the Same” (人臉辨識方法及應用此方法之系統), Taiwan patent publication no. 201123030; U.S. patent publication no. 20110150301.

Conference Papers

[1] Meng-Ju Han, Chia-How Lin and Kai-Tai Song, “A Design for Smooth Transition of Robotic Emotional States,” in Proc. of IEEE International Conference on Advanced Robotics and Its Social Impacts, Seoul, Korea, 2010, pp. 13-18.

[2] Yi-Wen Chen, Meng-Ju Han, Kai-Tai Song and Yu-Lun Ho, “Image-Based Age-Group Classification Design Using Facial Features,” in Proc. of IEEE International Conference on System Science and Engineering, Taipei, Taiwan, 2010, pp. 548-552.

[3] Kai-Tai Song, Shih-Chieh Wang, Meng-Ju Han and Ching-Yi Kuo, “Pose-Variant Face Recognition Based on an Improved Lucas-Kanade Algorithm,” in Proc. of IEEE International Conference on Advanced Robotics and Its Social Impacts, Tokyo, Japan, 2009, pp. 87-92.

[4] Meng-Ju Han, Chia-How Lin and Kai-Tai Song, “Autonomous Emotional Expression Generation of a Robotic Face,” in Proc. of IEEE International Conference on Systems, Man and Cybernetics, San Antonio, Texas, USA, 2009, pp. 2501-2506.

[5] Kai-Tai Song, Meng-Ju Han, Fu-Hua Jen and Jen-Chao Tai, “Facial Expression Recognition and Its Application to Emotional Interaction of a Robotic Head,” in Proc. of the 10th International Conference on Automation Technology, Tainan, Taiwan, 2009, pp. 493-498.

[6] Kai-Tai Song, Meng-Ju Han and Shuo-Hung Chang, “Pose-Variant Facial Expression Recognition Using an Embedded Image System,” in Proc. of International Symposium on Precision Mechanical Measurements, Hefei, China, 2008.

[7] Shih-Chieh Wang, Meng-Ju Han and Kai-Tai Song, “Human Emotion Recognition of a Pet Robot Using Natural Speech Information,” in Proc. of National Symposium on System
