

Chapter 4. Experimental Results and Discussion

4.4 Skeleton Extraction using OpenNI

Here we discuss the skeleton generated by the OpenNI library [17]. As shown in Fig. 4-4, the OpenNI skeleton consists of 15 joints: head, neck, torso center, right shoulder, left shoulder, right elbow, left elbow, right hand, left hand, right hip, left hip, right knee, left knee, right foot, and left foot.

Fig. 4-4. Skeleton of OpenNI.
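For illustration, the 15 joints listed above can be stored as a parent-link table, from which the 14 skeleton bones follow directly. This is a sketch of our own, not an official OpenNI data structure; the parent assignments are our reading of the skeleton topology in Fig. 4-4:

```python
# Illustrative only: the 15 OpenNI joints, with assumed parent links.
# The tree structure below is our interpretation of Fig. 4-4, not an
# official OpenNI API object.
OPENNI_JOINTS = {
    "head": "neck",
    "neck": "torso",
    "torso": None,  # root of the joint tree
    "right_shoulder": "neck", "left_shoulder": "neck",
    "right_elbow": "right_shoulder", "left_elbow": "left_shoulder",
    "right_hand": "right_elbow", "left_hand": "left_elbow",
    "right_hip": "torso", "left_hip": "torso",
    "right_knee": "right_hip", "left_knee": "left_hip",
    "right_foot": "right_knee", "left_foot": "left_knee",
}

# Each non-root joint contributes one bone (limb segment).
BONES = [(child, parent) for child, parent in OPENNI_JOINTS.items()
         if parent is not None]
```

A table like this is convenient for drawing the skeleton: iterating over `BONES` and connecting each child joint to its parent reproduces the stick figure of Fig. 4-4.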

The OpenNI skeleton works well when the user’s body and limbs are clearly separated. However, it is not applicable to our Yoga system, because some parts of the body may be occluded by the practitioner him/herself in most Yoga asanas, as shown in Fig. 4-5. For example, the posture of the practitioner in Fig. 4-5(g), who is performing Warrior II, is described well by the OpenNI skeleton in the side view. In contrast, for the practitioner in Fig. 4-5(c), who is performing Downward-Facing Dog, the OpenNI skeleton fails to describe the posture properly in both the front and side views. Therefore, instead of using the OpenNI skeleton directly, we compute star/topological skeletons and design several posture descriptors in our YogaST; these skeletons and descriptors are better suited to such asanas.
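The star skeleton mentioned above can be computed from the body contour alone, which is what makes it robust to self-occlusion of interior body parts. The following is a minimal sketch of our own, following the general idea behind the star skeleton of [31] (it is not the thesis’ exact implementation): the contour centroid is taken as the star center, the centroid-to-boundary distance is smoothed as a circular signal, and its local maxima are taken as extremities (head, hands, feet).

```python
import numpy as np

def star_skeleton(contour, smooth=5):
    """Extremal points of a closed body contour (illustrative sketch).

    contour: (N, 2) array of boundary points in traversal order.
    Returns (centroid, extremities).
    """
    centroid = contour.mean(axis=0)
    # Distance from centroid to each boundary point, as a circular signal.
    dist = np.linalg.norm(contour - centroid, axis=1)
    # Circular moving-average smoothing to suppress contour noise.
    padded = np.concatenate([dist[-smooth:], dist, dist[:smooth]])
    kernel = np.ones(smooth) / smooth
    sm = np.convolve(padded, kernel, mode="same")[smooth:-smooth]
    # Local maxima of the smoothed distance signal are the extremities.
    n = len(sm)
    peaks = [i for i in range(n)
             if sm[i] > sm[i - 1] and sm[i] > sm[(i + 1) % n]]
    return centroid, contour[peaks]
```

On a synthetic five-pointed contour this recovers the five tips; on a real body map the extremities typically correspond to the head, hands, and feet when the limbs are extended, which is why the star skeleton degrades gracefully under the self-occlusions that break the OpenNI skeleton.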


Fig. 4-5. OpenNI skeletons for 12 asanas.


Chapter 5. Conclusion and Future Work

Computer-assisted self-training in sports exercise is an ever-growing trend. In this thesis, we develop a preliminary system, entitled YogaST, that assists the Yoga practitioner in self-training, aiming to instruct him/her to perform asanas correctly and to prevent injury caused by improper postures. First, two Kinects with perpendicular viewing directions are used to obtain the practitioner’s body map from both the front and side views. Visual features, including the contour, skeleton, and descriptors of the human body, are extracted as the posture representation. Drawing on professional Yoga training knowledge, YogaST analyzes the practitioner’s posture and presents visualized instructions for posture rectification, so that the practitioner can easily understand how to adjust his/her posture.

Currently, we are enhancing YogaST by adding modules for more asanas. We also plan to add voice feedback and to use the depth information to build a 3D model of the practitioner. In the future, the proposed scheme will be adapted to more sports exercises; we expect that the effectiveness of sports learning will thus be significantly improved.

highlight ranking in broadcast racket sports video,” IEEE Trans. on Multimedia, vol. 9, no. 6, pp. 1167-1182, 2007.

[3] C. C. Cheng and C. T. Hsu, “Fusion of audio and motion information on HMM-based highlight extraction for baseball games,” IEEE Trans. on Multimedia, vol. 8, no. 3, pp. 585-599, 2006.

[4] Y. Gong, M. Han, W. Hua, and W. Xu, “Maximum entropy model-based baseball highlight detection and classification,” Computer Vision and Image Understanding, vol. 96, no. 2, pp. 181-199, 2004.

tracking framework with enrichment for broadcast baseball videos,” Journal of Information and Science Engineering, vol. 24, no. 1, pp. 143-157, 2008.

[8] H. T. Chen, M. C. Tien, Y. W. Chen, W. J. Tsai, and S. Y. Lee, “Physics-based ball tracking and 3D trajectory reconstruction with applications to shooting location estimation in basketball video,” Journal of Visual Communication and Image Representation, vol. 20, no. 3, pp. 204-216, 2009.

[9] H. T. Chen, W. J. Tsai, S. Y. Lee, and J. Y. Yu, “Ball tracking and 3D trajectory approximation with applications to tactics analysis from single-camera volleyball sequences,” Multimedia Tools and Applications, vol. 6, no. 3, pp. 641-667, 2012.

[10] G. Zhu, C. Xu, Q. Huang, Y. Rui, S. Jiang, W. Gao, and H. Yao, “Event tactic analysis based on broadcast sports video,” IEEE Trans. on Multimedia, vol. 11, no. 1, pp. 49-67, 2009.

[11] M. C. Hu, M. H. Chang, J. L. Wu, and L. Chi, “Robust camera calibration and player tracking in broadcast basketball video,” IEEE Trans. on Multimedia, vol. 13, no. 2, pp. 266-279, 2011.

[12] H. T. Chen, C. L. Chou, T. S. Fu, S. Y. Lee, and B. S. P. Lin, “Recognizing tactic patterns in broadcast basketball video using player trajectory,” Journal of Visual Communication and Image Representation, vol. 23, no. 6, pp. 932-947, 2012.

[13] Kinect. Available: http://www.xbox.com/zh-TW/Kinect

[14] T. Ingham, “Kinect cruises past 10m sales barrier,” March 9, 2011. Available: http://www.computerandvideogames.com/292825/kinect-cruises-past-10m-sales-barrier/

[15] OpenNI, 2011. Available: http://openni.org/

[16] Z. Ren, J. Meng and J. Yuan, “Depth Camera Based Hand Gesture Recognition and its Applications in Human-Computer-Interaction,” in Proc. IEEE International Conference on Information, Communications and Signal Processing (ICICS), pp. 1-5, 2011.

[17] J. L. Raheja, A. Chaudhary and K. Singal, “Tracking of Fingertips and Centers of Palm using Kinect,” in Proc. IEEE International Conference on Computational Intelligence, Modelling and Simulation (CIMSiM), pp. 248-252, 2011.

[18] V. Frati and D. Prattichizzo, “Using Kinect for hand tracking and rendering in wearable haptics,” in Proc. IEEE World Haptics Conference (WHC), pp. 317-321, 2011.

[19] L. Gallo, A. P. Placitelli and M. Ciampi, "Controller-free exploration of medical image data: Experiencing the kinect," in Proc. IEEE International Symposium on Computer-Based Medical Systems (CBMS), pp. 1 -6, 2011.

[20] M. Van den Bergh, D. Carton, R. D. Bijs, N. Mitsou, C. Landsiesdel, K. Kuehnlenz, D. Wollherr, L. V. Gool, and M. Buss, “Real-time 3D hand gesture interaction with robot for understanding directions from humans,” in Proc. IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man), pp. 357-362, 2011.

[21] K. F. Li, “A web-based sign language translator using 3D video processing,” in Proc. IEEE International Conference on Network-Based Information Systems (NBiS), pp. 356-361, 2011.

[22] X. Yu, L. Wu, Q. Liu, and H. Zhou, “Children tantrum behavior analysis based on Kinect sensor,” in Proc. IEEE Chinese Conference on Intelligent Visual Surveillance (IVS), pp. 49-52, 2012.

[23] G. Mastorakis and D. Makris, “Fall detection system using Kinect’s infrared sensor,” Journal of Real-Time Image Processing, pp. 1-12, 2012.

[24] S. Ganesan and L. Anthony, “Using the Kinect to encourage older adults to exercise: a prototype,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 2297-2302, 2012.

[25] T. Kajinami, T. Narumi, T. Tanikawa, and M. Hirose, “Digital display case using non-contact head tracking,” in Proc. ACM International Conference on Virtual and Mixed Reality: New Trends, pp. 250-259, 2011.

[26] J. Stowers, M. Hayes, and A. Bainbridge-Smith, “Altitude control of a quadrotor helicopter using depth map from Microsoft Kinect sensor,” in Proc. IEEE International Conference on Mechatronics (ICM), pp. 358-362, 2011.

[27] J. Cunha, E. Pedrosa, C. Cruz, A. J. R. Neves, and N. Lau, “Using a depth camera for indoor robot localization and navigation,” in Proc. Robotics Science and Systems RGB-D Workshop, 2011.

[28] S. Patil, A. Pawar, A. Peshave, A. N. Ansari, and A. Navada, “Yoga tutor visualization and analysis using SURF algorithm,” in Proc. IEEE Control and System Graduate Research Colloquium (ICSGRC), pp. 43-46, 2011.

[29] Z. Luo, W. Yang, Z. Q. Ding, L. Liu, I. M. Chen, S. H. Yeo, K. V. Ling, and H. B. L. Duh, “Left arm up! Interactive yoga training in virtual environment,” in Proc. IEEE Virtual Reality Conference (VR), pp. 261-262, 2011.

[30] W. Wu, W. Yin, and F. Guo, “Learning and self-instruction expert system for Yoga,” in Proc. 2nd International Workshop on Intelligent Systems and Applications (ISA), pp. 1-4, 2010.

[31] H. S. Chen, H. T. Chen, Y. W. Chen, and S. Y. Lee, “Human action recognition using star skeleton,” in Proc. ACM International Workshop on Video Surveillance and Sensor Networks (VSSN), pp. 171-178, 2006.

[32] G. R. Bradski, “Open source computer vision library reference manual,” Intel Corporation, 123456-001, 2001.

[33] C. Harris and M. Stephens, “A combined corner and edge detector,” in Proc. Alvey Vision Conference, vol. 15, pp. 50, 1988.

[34] Yoga Journal. Available: http://www.yogajournal.com/

[35] Wikipedia. Available: http://en.wikipedia.org/wiki/Body_proportions
