
Chapter 4 Experimental Results

4.4 Experiments on Image-based Emotional State Recognition

In this design, the user’s emotional state (UEkn) is used as input to the system. In order to obtain UEkn, an image-based facial expression recognition module has been designed and implemented. The facial expression recognition module consists of a face detection stage, a feature extraction stage and an emotional intensity analyzer. The method of facial feature extraction is described in Section 3.1.1. After the facial feature points are obtained, twelve significant feature values are computed, each being the distance between a pair of selected feature points. In order to reduce the influence of the distance between the user and the camera, these feature values are normalized for emotion recognition. Thus, every facial expression is represented as a feature set.
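The exact normalization is given in Section 3.1.1 and is not restated here. The minimal sketch below only illustrates the idea under an assumption: the twelve distances are divided by a reference distance (for example, the inter-ocular distance) so that they become independent of how far the user stands from the camera. The reference pair and the helper name are hypothetical, not the documented method.

```python
import numpy as np

def extract_feature_set(points, pairs, ref_pair=(0, 1)):
    """Build a normalized feature set from extracted facial feature points.

    points   : (N, 2) array of facial feature point coordinates.
    pairs    : twelve (i, j) index pairs whose point-to-point distances
               form the feature values.
    ref_pair : indices of an assumed reference pair (e.g., the two eye
               centres) used to cancel the user-to-camera distance.
    """
    pts = np.asarray(points, dtype=float)
    # Reference distance; dividing by it makes the features scale-invariant.
    ref_dist = np.linalg.norm(pts[ref_pair[0]] - pts[ref_pair[1]])
    feats = np.array([np.linalg.norm(pts[i] - pts[j]) for i, j in pairs])
    return feats / ref_dist   # twelve scale-invariant feature values
```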

To recognize the user’s emotional states, we further developed an image-based method to extract facial expression intensity. Four feature vectors, namely FvNeu, FvHa, FvAng and FvSad, are defined to represent the standard neutral, happy, angry and sad expressions. Dissimilarities between the current feature set of the user (FvUser,k) and the standard facial expressions are calculated as:

$d_{N,k} = \lVert Fv_{Neu} - Fv_{User,k} \rVert$,  (4.1)

$d_{H,k} = \lVert Fv_{Ha} - Fv_{User,k} \rVert$,  (4.2)

$d_{A,k} = \lVert Fv_{Ang} - Fv_{User,k} \rVert$,  (4.3)

$d_{S,k} = \lVert Fv_{Sad} - Fv_{User,k} \rVert$,  (4.4)

where dN,k, dH,k, dA,k and dS,k represent, respectively, the dissimilarities between the user’s feature set and the defined standard neutral, happy, angry and sad expressions at sampling instant k.

‖·‖ represents the Euclidean distance. In our design, the smaller the dissimilarity between the current feature set and a standard facial expression, the higher the intensity of the corresponding emotion. Therefore, the user’s emotional intensities are obtained from these dissimilarities using (4.5)-(4.8) at sampling instant k for the neutral, happy, angry and sad expressions. By using this procedure, the user’s emotional state is represented as a set of four emotional intensities.
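Equations (4.1)-(4.4) give the Euclidean distances between the user’s feature set and the four standard expression vectors; the intensity formulas (4.5)-(4.8) are not reproduced in this excerpt. The sketch below therefore only illustrates the pipeline: the inverse-distance normalization that turns dissimilarities into four intensities summing to one is an assumption for illustration, not the dissertation’s actual formula, and the placeholder standard vectors would in practice come from training data.

```python
import numpy as np

# Standard expression feature vectors FvNeu, FvHa, FvAng, FvSad; in practice
# these come from training data, the zero vectors below are placeholders.
standards = {
    "neutral": np.zeros(12),
    "happy":   np.zeros(12),
    "angry":   np.zeros(12),
    "sad":     np.zeros(12),
}

def emotional_intensities(fv_user, standards, eps=1e-6):
    """Dissimilarities as in (4.1)-(4.4), followed by an ASSUMED
    inverse-distance mapping to four intensities that sum to one."""
    d = {e: np.linalg.norm(fv - fv_user) for e, fv in standards.items()}
    inv = {e: 1.0 / (dist + eps) for e, dist in d.items()}
    total = sum(inv.values())
    return {e: v / total for e, v in inv.items()}
```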

In this section, the Cohn-Kanade AU-Coded Facial Expression Database [94] is used to verify the proposed method of emotional state recognition. Twenty-four sets of facial images of different basic facial expressions were selected as training data; each set contains seven facial images of a particular emotion with various facial expressions. Sixty face images of different basic facial expressions were selected as test data. To compare the system output with the ground truth, the strongest emotion is chosen as the recognition result. The results of this experiment are shown in Table 4-15. The average recognition rate is 90%.

Table 4-15: Test result of emotion state recognition.

Input \ Output    Neutral    Anger    Happiness    Sadness    Recognition Rate
Neutral              13         1          0           1            87%
Anger                 0        15          0           0           100%
Happiness             2         0         13           0            87%
Sadness               1         1          0          13            87%
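The per-class rates in Table 4-15 are the diagonal entries divided by the row totals (15 test images per expression), and the reported 90% average corresponds to 54 correct classifications out of the 60 test images. The short check below simply reproduces those figures from the table; it is a verification sketch, not part of the original implementation.

```python
import numpy as np

# Confusion matrix of Table 4-15 (rows: input / ground truth, columns: output),
# class order: Neutral, Anger, Happiness, Sadness.
confusion = np.array([
    [13,  1,  0,  1],   # Neutral
    [ 0, 15,  0,  0],   # Anger
    [ 2,  0, 13,  0],   # Happiness
    [ 1,  1,  0, 13],   # Sadness
])

per_class = confusion.diagonal() / confusion.sum(axis=1)  # ~[0.87, 1.00, 0.87, 0.87]
overall = confusion.diagonal().sum() / confusion.sum()    # 54 / 60 = 0.90
print(per_class, overall)
```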

Figure 4-17 shows an example of emotional state recognition. In this example, neutral, happy, angry and sad facial expressions are used as testing samples. In Fig. 4-17(a), fourteen dot marks represent the extracted feature points for facial expression recognition. The emotional intensities are obtained using (4.5)-(4.8). As shown in Fig. 4-17(a), the ratio of the neutral component amounts to 54%, which dominates the facial expression, although the other emotion components also contribute to the facial expression. Similar results are obtained as shown in Figs. 4-17(b)-(d).

Fig. 4-17 Examples of user emotional state recognition.
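The 54% neutral component reported for Fig. 4-17(a) is the share of the neutral intensity among the four intensities, and the recognition result used in the experiment above is the strongest component. The following is a hypothetical illustration with made-up intensity values chosen only to match the 54% example; it is not data from the figure.

```python
def dominant_emotion(intensities):
    """Convert four emotional intensities into percentage shares and
    return the dominant (strongest) emotion label."""
    total = sum(intensities.values())
    shares = {e: 100.0 * v / total for e, v in intensities.items()}
    return shares, max(shares, key=shares.get)

# Hypothetical intensities for a case like Fig. 4-17(a): neutral dominates at 54%.
shares, label = dominant_emotion(
    {"neutral": 0.54, "happy": 0.20, "angry": 0.14, "sad": 0.12})
print(shares, label)   # neutral share = 54.0, label = "neutral"
```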

4.5 Summary

Regarding robotic emotion generation, the experimental results reveal that the simulated artificial face interacts with people with mood transitions and a robotic personality. The questionnaire investigation confirms positive evaluations of the responsive robotic facial expressions generated by the proposed design. Regarding human emotion recognition, the experimental results of the proposed bimodal emotion recognition system show that an average recognition rate of 86.9% is achieved, a 5% improvement over using image information alone. In addition, the experimental results of speech-signal-based emotion recognition for the entertainment robot show that the robot interacts with a person in a responsive manner; the average recognition rate for five emotional states is 73.8% using the database constructed in the authors’ lab.

Chapter 5

Conclusions and Future Work

5.1 Dissertation Summary

In this work, a robotic mood transition model for autonomous emotional interaction has been developed. An emotional model is proposed for mood state transition that exploits a robotic personality approach. By adopting the Big Five factors to represent the robot personality in the 2-D emotional model, facial expressions can be generated in a more natural manner.

The behavior fusion architecture with a designed rule table provides a robot with the capability to generate emotional interactions. Experimental results on the artificial face show that the robot interacts with people with suitable mood transitions and a distinct robotic personality. The questionnaire investigation confirms positive evaluations of the responsive robotic facial expressions generated by the proposed design.

For the bimodal information fusion algorithm, the proposed fusion scheme with statistically determined fusion weights computed from the individual modalities effectively increases the recognition accuracy. Practical experiments have been carried out using a stand-alone robotic vision system. With a self-built database of fourteen persons, the proposed system achieves a recognition rate of 86.9%. The proposed speech-signal-based emotion recognition system classifies five emotional categories in real time. Experimental results using an entertainment robot show that the robot can interact with a user in a responsive manner using the developed speech-signal recognition system.

Using a database built in the lab, the proposed system achieves an average recognition rate of 73.8% for five emotional states.

5.2 Future Directions

Some directions deserve further study in the future:

1) Further comparisons with other emotional models deserve study. It will be interesting to investigate different models for robotic emotion generation and to evaluate their emotional intelligence in practical experiments.

2) For human emotion recognition, future work should focus on developing robust algorithms that handle more natural visual and audio signals. Methods to extract more reliable features from both the visual and the audio modalities will also be investigated to improve performance. The direct fusion of visual and audio features is considered as future work to cope with the problem of incomplete modality information.

3) Because the voice signal must be acquired using the embedded system, it is difficult to establish a benchmark to evaluate the developed recognition algorithm. In the future, a method to extract key phrases in an utterance will be investigated to increase the recognition rate. The emotional state can then be estimated more directly from the speech signal than from statistical features extracted over the whole voice frame.

4) In this study, all participants in the experiments were aware of the test, which constitutes intrusive testing. Other types of testing can be studied in the future.

Appendix A

Evaluation Questionnaire of Emotional Interaction

Bibliography

[1] M. Fujita, “On Activating Human Communications with Pet-type Robot AIBO,” Proceedings of IEEE, Vol. 92, No. 11, pp. 1804-1813, 2004.

[2] H. H. Lund, “Modern Artificial Intelligence for Human-robot Interaction,” Proceedings of IEEE, Vol. 92, No. 11, pp. 1821-1838, 2004.

[3] S. G. Roh, K. W. Yang, J. H. Park, H. Moon, H. S. Kim, H. Lee, H. R. Choi, “A Modularized Personal Robot DRPI: Design and Implementation,” IEEE Trans. on Robotics, Vol. 25, No. 2, pp. 414-425, 2009.

[4] NEC’s KOTOHANA Emotion communicator, http://thefutureofthings.com/pod/1042/necs-kotohana-emotion-communicator.html

[5] C. Breazeal, “Emotion and Sociable Humanoid Robots,” International Journal of Human-Computer Studies, Vol. 59, pp. 119-155, 2003.

[6] C. Breazeal, D. Buchsbaum, J. Gray, D. Gatenby and B. Blumberg, “Learning From and About Others: Towards Using Imitation to Bootstrap the Social Understanding of Others by Robots,” Journal of Artificial Life, Vol. 11, pp.1-32, 2005.

[7] MIT Media Lab, personal robot group, http://robotic.media.mit.edu/projects/robots/mds/headface/headface.html

[8] T. Wu, N. J. Butko, P. Ruvulo, M. S. Bartlett and J. R. Movellan, “Learning to Make Facial Expressions,” in Proc. of IEEE 8th International Conference on Development and Learning, Shanghai, China, 2009, pp. 1-6.

[9] N. Mavridis and D. Hanson, “The IbnSina Center: An Augmented Reality Theater with Intelligent Robotic and Virtual Characters,” in Proc. of IEEE 18th International Symposium on Robot and Human Interactive Communication, Toyama, Japan, 2009, pp. 681-686.

[10] N. Mavridis, A. AlDhaheri, L. AlDhaheri, M. Khanji and N. AlDarmaki, “Transforming IbnSina into an Advanced Multilingual Interactive Android Robot,” in Proc. of IEEE GCC Conference and Exhibition, Dubai, United Arab Emirates, 2011, pp. 120-123.

[11] T. Hashimoto, S. Hiramatsu, T. Tsuji and H. Kobayashi, “Realization and Evaluation of Realistic Nod with Receptionist Robot SAYA,” in Proc. of the 16th IEEE International Symposium on Robot and Human interactive Communication (RO-MAN 2007), Jeju Island, Korea, 2007, pp. 326-331.

[12] T. Hashimoto, S. Hiramatsu, T. Tsuji and H. Kobayashi, “Development of the Face Robot SAYA for Rich Facial Expressions,” in Proc. of International Joint Conference on SICE-ICASE, Pusan, Korea, 2006, pp. 5423-5428.

[13] D. W. Lee, T. G. Lee, B. So, M. Choi, E. C. Shin, K. W. Yang, M. H. Back, H. S. Kim and H. G. Lee, “Development of an Android for Emotional Expression and Human Interaction,” in Proc. of International Federation of Automatic Control, Seoul, Korea, 2008, pp. 4336-4337.

[14] M. S. Siegel, “Persuasive Robotics: How Robots Change Our Minds,” Massachusetts Institute of Technology, PhD Thesis, 2009.

[15] N. Mavridis, M. Petychakis, A. Tsamakos, P. Toulis, S. Emami, W. Kazmi, C. Datta, C. BenAbdelkader and A. Tanoto, “FaceBots: Steps Towards Enhanced Long-Term Human-Robot Interaction by Utilizing and Publishing Online Social Information,” Springer Paladyn Journal of Behavioral Robotics, Vol. 1, No. 3, pp. 169-178, 2011.

[16] H. Miwa, T. Okuchi, K. Itoh, H. Takanobu and A. Takanishi, “A New Mental Model for Humanoid Robots for Human Friendly Communication,” in Proc. of IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 2003, pp. 3588-3593.

[17] H. Miwa, K. Itoh, M. Matsumoto, M. Zecca, H. Takanobu, S. Rocella, M. C. Carrozza, P. Dario and A. Takanishi, “Effective Emotional Expressions with Emotion Expression Humanoid Robot WE-4RII,” in Proc. of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, 2004, pp. 2203-2208.

[18] D. Duhaut, “A Generic Architecture for Emotion and Personality,” in Proc. of IEEE International Conference on Advanced Intelligent Mechatronics, Xi’an, China, 2008, pp. 188-193.

[19] L. Moshkina, S. Park, R. C. Arkin, J. K. Lee and H. Jung, “TAME: Time-Varying Affective Response for Humanoid Robots,” International Journal of Social Robotics, Vol. 3, pp. 207-221, 2011.

[20] C. Itoh, S. Kato and H. Itoh, “Mood-transition-based Emotion Generation Model for the Robot’s Personality,” in Proc. of IEEE International Conference on Systems, Man and Cybernetics, San Antonio, TX, USA, 2009, pp. 2957-2962.

[21] S. C. Banik, K. Watanabe, M. K. Habib and K. Izumi, “An Emotion-Based Task Sharing Approach for a Cooperative Multiagent Robotic System,” in Proc. of IEEE International Conference on Mechatronics and Automation, Kagawa, Japan, 2008, pp. 77-82.

[22] J. C. Park, H. R. Kim, Y. M. Kim and D. S. Kwon, “Robot’s Individual Emotion Generation Model and Action Coloring According to the Robot’s Personality,” in Proc. of IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, 2009, pp. 257-262.

[23] H. R. Kim and D. S. Kwon, “Computational Model of Emotion Generation for Human-Robot Interaction Based on the Cognitive Appraisal Theory,” International Journal of Intelligent and Robotic Systems, Vol. 60, pp. 263-283, 2010.

[24] D. Lee, H. S. Ahn and J. Y. Choi, “A General Behavior Generation Module for Emotional Robots Using Unit Behavior Combination Method,” in Proc. of IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, 2009, pp. 375-380.

[25] M. J. Han, C. H. Lin and K. T. Song, “Autonomous Emotional Expression Generation of a Robotic Face,” in Proc. of IEEE International Conference on Systems, Man and Cybernetics, San Antonio, TX, USA, 2009, pp. 2501-2506.

[26] Y. Tian, T. Kanade and J. F. Cohn, “Recognizing Action Units for Facial Expression Analysis,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 23, No. 2, pp. 97-115, 2001.

[27] M. Pantic and L.J.M. Rothkrantz, “Automatic Analysis of Facial Expressions: The State of the Art,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 22, No. 12, pp. 1424-1445, 2000.

[28] D. Ververidis, C. Kotropoulos and I. Pitas, “Automatic Emotional Speech Classification,” in Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, Quebec, Canada, 2004, pp. 593-596.

[29] B. Schuller, G. Rigoll and M. Lang, “Speech Emotion Recognition Combining Acoustic Features and Linguistic Information in a Hybrid Support Vector Machine - Belief Network Architecture,” in Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, Quebec, Canada, 2004, Vol. 1, pp. 577-580.

[30] P. S. Aleksic and A. K. Katsaggelos, “Audio-Visual Biometrics,” Proceedings of IEEE, Vol. 94, No. 11, pp. 2025-2044, 2006.

[31] L. C. De Silva, T. Miyasato and R. Nakatsu, “Facial Emotion Recognition Using Multi-modal Information,” in Proc. of IEEE International Conference on Information, Communications and Signal Processing, Singapore, 1997, pp. 397-401.

[32] L. C. De Silva, “Audiovisual Emotion Recognition,” in Proc. of IEEE International Conference on Systems, Man and Cybernetics, The Hague, The Netherlands, 2004, pp. 649-654.

[33] H. J. Go, K. C. Kwak, D. J. Lee and M. G. Chun, “Emotion Recognition from the Facial Image and Speech Signal,” in Proc. of SICE Annual Conference, Fukui, Japan, 2003, pp. 2890-2895.

[34] Y. Wang and L. Guan, “Recognizing Human Emotion from Audiovisual Information,” in Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing, Philadelphia, PA, USA, 2005, pp. 1125-1128.

[35] J. C. Platt, Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods, MIT Press, Cambridge, MA, 2000.

[36] O. W. Kwon, K. Chan, J. Hao and T. W. Lee, “Emotion Recognition by Speech Signals,” in Proc. of 8th European Conference on Speech Communication and Technology, Geneva, Switzerland, 2003, pp. 125-128.

[37] T. L. Nwe, S. W. Foo, and L. C. De Silva, “Speech Emotion Recognition Using Hidden Markov Models,” Speech Communication, Vol. 41, No. 4, pp. 603-623, 2003.

[38] K. H. Hyun, E. H. Kim, and Y. K. Kwak, “Improvement of Emotion Recognition by Bayesian Classifier Using Non-zero-pitch Concept,” in Proc. of IEEE International Workshop on Robot and Human Interactive Communication, Nashville, USA, 2005, pp. 312-316.

[39] T. L. Pao and Y. T. Chen, “Mandarin Emotion Recognition in Speech,” in Proc. of IEEE Workshop on Automatic Speech Recognition and Understanding, St. Thomas, Virgin Islands, 2003, pp. 227-230.

[40] D. Neiberg, K. Elenius and K. Laskowski, “Emotion Recognition in Spontaneous Speech Using GMMs,” in Proc. of International Conference on Spoken Language Processing, Pittsburgh, Pennsylvania, USA, 2006, pp. 809-812.

[41] M. You, C. Chen, J. Bu, J. Liu and J. Tao, “Emotional Speech Analysis on Nonlinear Manifold,” in Proc. of IEEE International Conference on Pattern Recognition, Hong Kong, China, 2006, pp. 91-94.

[42] M. You, C. Chen, J. Bu, J. Liu and J. Tao, “A Hierarchical Framework for Speech Emotion Recognition,” in Proc. of IEEE International Symposium on Industrial Electronics, Montreal, Quebec, Canada, 2006, pp. 515-519.

[43] Z. J. Chuang and C. H. Wu, “Emotion Recognition Using Acoustic Features and Textual Content,” in Proc. of IEEE International Conference on Multimedia and Expo, Taipei, Taiwan, 2004, pp. 53-56.

[44] C. Busso, S. Lee and S. Narayanan, “Analysis of Emotionally Salient Aspects of Fundamental Frequency for Emotion Detection,” IEEE Trans. on Audio, Speech, and Language Processing, Vol. 17, No. 4, pp. 582-596, 2009.

[45] B. Yang and M. Lugger, “Emotion Recognition from Speech Signals Using New Harmony Features,” Signal Processing, Vol. 90, No. 5, pp. 1415-1423, 2010.

[46] C. Li, Q. Zhou, J. Cheng, X. Wu and Y. Xu, “Emotion Recognition in a Chatting Robot,” in Proc. of 2008 IEEE International Conference on Automation and Logistics, Qingdao, China, 2008, pp. 1452-1457.

[47] E. H. Kim, K. H. Hyun, S. H. Kim and Y. K. Kwak, “Improved Emotion Recognition with a Novel Speaker-independent Feature,” IEEE Trans. on Mechatronics, Vol. 14, No. 3, pp. 317-325, 2009.

[48] J. S. Park, J. H. Kim and Y. H. Oh, “Feature Vector Classification Based Speech Emotion Recognition for Service Robots,” IEEE Trans. on Consumer Electronics, Vol. 55, No. 3, pp. 1590-1596, 2009.

[49] K. T. Song, M. J. Han and J. W. Hong, “Online Learning Design of an Image-Based Facial Expression Recognition System,” Intelligent Service Robotics, Vol. 3, No. 3, pp. 151-162, 2010.

[50] M. A. Amin and H. Yan, “Expression Intensity Measurement from Facial Images by Self Organizing Maps,” in Proc. of IEEE International Conference on Machine Learning and Cybernetics, Kunming, China, 2008, pp. 3490-3496.

[51] M. Beszedes and P. Culverhouse, “Comparison of Human and Automatic Facial Emotions and Emotion Intensity Levels Recognition,” in Proc. of IEEE International Symposium on Image and Signal Processing and Analysis, Istanbul, Turkey, 2007, pp. 429-434.

[52] M. Oda and K. Isono, “Effects of Time Function and Expression Speed on the Intensity and Realism of Facial Expressions,” in Proc. of IEEE International Conference on Systems, Man and Cybernetics, Singapore, 2008, pp. 1103-1109.

[53] K. K. Lee and Y. Xu, “Real-time Estimation of Facial Expression Intensity,” in Proc. of IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 2003, pp. 2567-2572.

[54] K. T. Song and J. Y. Lin, “Behavior Fusion of Robot Navigation Using a Fuzzy Neural Network,” in Proc. of IEEE International Conference on Systems, Man and Cybernetics, Taipei, Taiwan, 2006, pp. 4910-4915.

[55] Grimace project, available online at: http://grimace-project.net/

[56] P. Ekman and W. V. Friesen, The Facial Action Coding System: A Technique for The Measurement of Facial Movement, Consulting Psychologists Press, San Francisco, 1978.

[57] D. G. Myers, Theories of Emotion, NY: Worth Publishers, New York, 2004.

[58] S. F. Locke, MIT Meter Measures the Mood of Passers-By, available online at: http://www.popsci.com/technology/article/2011-11/mit-meter-measures-mood-passers

[59] R. R. McCrae and P. T. Costa, “Validation of the Five Factor Model of Personality across Instruments and Observers,” Journal of Personality and Social Psychology, Vol. 51, pp. 81-90, 1987.

[60] P. T. Costa and R. R. McCrae, “Normal Personality Assessment in Clinical Practice: The NEO Personality Inventory,” Journal of Psychological Assessment, Vol. 4, 5-13, 1992.

[61] A. Mehrabian, “Analysis of the Big-five Personality Factors in Terms of the PAD Temperament Model,” Australian Journal of Psychology, Vol. 48, No. 2, pp. 86-92, 1996.

[62] L. R. Goldberg, “The Development of Markers for the Big-Five Factor Structure,” Psychological Assessment, Vol. 4, pp. 26-42, 1992.

[63] J. A. Russell and G. Pratt, “A Description of the Affective Quality Attributed to Environments,” Journal of Personality and Social Psychology, Vol. 38, No. 2, 311-322, 1980.

[64] J. A. Russell and M. Bullock, “Multidimensional Scaling of Emotional Facial Expressions: Similarity from Preschoolers to Adults,” Journal of Personality and Social Psychology, Vol. 48, 1290–1298, 1985.

[65] J. A. Russell, “A Circumplex Model of Affect,” Journal of Personality and Social Psychology, Vol. 39, No. 6, 1161-1178, 1980.

[66] F. Jelinek, Statistical Methods for Speech Recognition, MIT Press, Cambridge, MA, 1999.

[67] N. Christianini and J.S. Taylor, An Introduction to Support Vector Machines, MIT Press, Cambridge, MA, 2000.

[68] P. Viola and M. Jones, “Rapid Object Detection Using a Boosted Cascade of Simple Features,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, Kauai Marriott, Hawaii, 2001, pp. 511-518.

[69] J. H. Lai, P. C. Yuen, W. S. Chen, S. Lao and M. Kawade, “Robust Facial Feature Point Detection Under Nonlinear Illuminations,” in Proc. of IEEE ICCV Workshop on Recognition, Analysis and Tracking of Faces and Gestures in Real-time Systems, Vancouver, Canada, 2001, pp. 168-174.

[70] M. J. Han, J. H. Hsu, K. T. Song, and F. Y. Chang, “A New Information Fusion Method for SVM-based Robotic Audio-visual Emotion Recognition,” in Proc. of IEEE International Conference on Systems, Man and Cybernetics, Montreal, Canada, 2007, pp. 2656-2661.

[71] H. C. Kim, D. J. Kim and S. Y. Bang, “Face Recognition Using LDA Mixture Model,” in Proc. of IEEE International Conference on Pattern Recognition, Quebec, Canada, 2002, pp. 925-928.

[72] K. M. Yan, “Development of A Home Robot Speech Recognition System,” National Chiao Tung University, Master Thesis, 2002.

[73] B. Gold and N. Morgan, Speech and Audio Signal Processing: Processing and Perception of Speech and Music, John Wiley & Sons, New York, USA, 2000.

[74] S. Mika, G. Ratsch, J. Weston, B. Scholkopf and K. R. Muller, “Fisher Discriminant Analysis with Kernels,” in Proc. of IEEE International Workshop on Neural Networks for Signal Processing, Madison, WI, USA, 1999, pp. 41-48.

[75] V. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, USA, 1995.

[76] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley, New York, USA, 1973.

[77] H. Andrian and K. T. Song, “Embedded CMOS Imaging System for Real-Time Robotic Vision,” in Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, Alberta, Canada, 2005, pp. 3694-3699.

[78] TMS320C6416 DSK technical reference, available online at: http://c6000.spectrumdigital.com/dsk6416/V1/docs/dsk6416_TechRef.pdf

[79] TLV320AIC23B data manual, available online at: http://www.ti.com/lit/ds/symlink/tlv320aic23b.pdf

[80] Pololu serial 8-servo controller, available online at: http://www.pololu.com/products/pololu/0727/

[81] J. M. Valin, S. Yamamoto, J. Rouat, F. Michaud, K. Nakadai and H. G. Okuno, “Robust Recognition of Simultaneous Speech by A Mobile Robot,” IEEE Trans. on Robotics, Vol. 23, No. 4, pp. 742-752, 2007.

[82] H. Nakajima, K. Nakadai, Y. Hasegawa and H. Tsujino, “Blind Source Separation with Parameter-free Adaptive Step-size Method for Robot Audition,” IEEE Trans. on Audio, Speech, and Language Processing, Vol. 18, No. 6, pp. 1476-1485, 2010.

[83] Qwerk Platform, Charmed Labs, available online at: http://www.charmedlabs.com/index.php?option=com_content&task=view&id=29

[84] http://isci.cn.nctu.edu.tw/video/AnthropomorphicRobot/

[85] http://isci.cn.nctu.edu.tw/video/RoboticMoodTransition/

[86] http://isci.cn.nctu.edu.tw/video/RoboticMoodTransitionAnalysis/

[87] M. E. Hoque, R. E. Kaliouby and R. W. Picard, “When Human Coders (and Machines) Disagree on the Meaning of Facial Affect in Spontaneous Videos,” in Proc. of 9th International Conference on Intelligent Virtual Agents, Amsterdam, Netherlands, 2009, pp. 337-343.

[88] M. E. Hoque, L-P. Morency and R. W. Picard, “Are You Friendly or Just Polite? – Analysis of Smiles in Spontaneous Face-to-face Interactions,” in Proc. of 4th International Conference on Affective Computing and Intelligent Interaction, Memphis, TN, USA, 2011, pp. 135-144.

[89] M. E. Hoque and R. W. Picard, “Acted vs. Natural Frustration and Delight: Many People Smile in Natural Frustration,” in Proc. of IEEE 9th International Conference on Automatic Face and Gesture Recognition, Santa Barbara, CA, USA, 2011, pp. 354-359.

[90] M. Pantic, M. Valstar, R. Rademaker and L. Maat, “Web-based Database for Facial