
We have proposed a scheme that detects and classifies screens in basketball games, screens being the fundamental element of basketball tactics. By combining the screens and trajectories in each possession, our system recognizes which tactics are executed. The collected tactics are then indexed so that users can query the tactics they are interested in. Our system can also work with other research to enable further applications. For instance, combined with shooting-location estimation [7], we can infer the movement of the ball and obtain further information about basketball tactics. It is difficult to tell who the ball handler is at the beginning of a possession, which also prevents us from tracking the ball directly.

Nevertheless, once we know the shooting location, we can trace back how the ball was passed to the shooter. As another example, with wide-open detection [1, 50], we can verify whether the execution of a tactic succeeded in creating an open shot, since tactics are designed to create open shots, from which players score easily.
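The indexing-and-query step described above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the class name `TacticIndex`, the possession identifiers, and the screen labels are all hypothetical, and we assume each possession has already been reduced to a sequence of detected screen types.

```python
# Hypothetical sketch of the tactic index: each possession yields a sequence
# of detected screen events, which serves as the key under which matching
# possessions are stored and later retrieved.
from collections import defaultdict

class TacticIndex:
    def __init__(self):
        # map from a screen-pattern key to the possessions that contain it
        self._index = defaultdict(list)

    def add_possession(self, possession_id, screen_sequence):
        """screen_sequence: tuple of screen labels, e.g. ('ball_screen', 'down_screen')."""
        self._index[tuple(screen_sequence)].append(possession_id)

    def query(self, screen_sequence):
        """Return all possessions whose detected screens match the queried tactic."""
        return self._index.get(tuple(screen_sequence), [])

index = TacticIndex()
index.add_possession("game1_poss17", ("ball_screen", "down_screen"))
index.add_possession("game2_poss03", ("ball_screen", "down_screen"))
print(index.query(("ball_screen", "down_screen")))
# → ['game1_poss17', 'game2_poss03']
```

A richer system would also match on trajectory shape, but an exact-match index over screen sequences is enough to show how users can look up the tactic they are interested in.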


Figure 5.1: Real game example. (a) Coach setting tactic. (b) Tactic execution.

Our system currently identifies tactics from the patterns of screens and trajectories in video clips. That is, we do not know whether the identified tactics are equivalent to those set by the coach. In fact, several factors affect the execution of a tactic, such as interference from the defensive players. Although the offensive team sets tactics to block the defenders, the opposing team has ways to counter them. Imagine that the ball handler tries to pass the ball to a teammate as the tactic dictates, but that teammate is being double-teamed and is not free to catch the pass. The ball handler then has no choice but to pass to another teammate. As a result, the behavior of the offensive players we see on screen may not follow the coach's instructions, and the tactics identified by our system may differ from those the coach set. Figure 5.1 shows a real example in which the executed tactic differs from the coach's instruction.

Hence, we plan to incorporate the coach's instructions into our system in the future. With the real instruction, we can not only verify the identified tactics but also figure out why the offensive players failed to execute them. First, we query the database with the real tactic and obtain its pattern of screens and trajectories. After analyzing the players' behavior in the video clip, we compare the result with the pattern of the real tactic set by the coach. By locating the differences between the players' performance and the coach's instruction, we can determine what prevented the offensive team from executing the tactic successfully. Furthermore, we can infer the strategy used by the defensive team, which is also an important issue for both professional coaches and players. We hope to have the opportunity to cooperate with basketball teams so that we can improve the proposed system with the real information they provide.
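The comparison step described above can be sketched as a sequence alignment between the instructed pattern and the observed one, so that the first divergence shows where the execution broke down. Aligning with `difflib.SequenceMatcher` is an illustrative choice on our part, not the thesis's stated method, and the event labels are hypothetical.

```python
# Hedged sketch: align the screen/pass sequence observed in the video with
# the sequence of the coach's instructed tactic, and report every place
# where the two differ.
import difflib

def compare_execution(instructed, observed):
    """Return a list of (op, instructed_slice, observed_slice) differences."""
    matcher = difflib.SequenceMatcher(a=instructed, b=observed)
    diffs = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            diffs.append((op, instructed[i1:i2], observed[j1:j2]))
    return diffs

instructed = ["down_screen", "ball_screen", "pass_to_wing"]
observed   = ["down_screen", "ball_screen", "pass_to_corner"]
print(compare_execution(instructed, observed))
# → [('replace', ['pass_to_wing'], ['pass_to_corner'])]
```

Here the alignment shows the tactic was followed until the final pass, which went to the corner instead of the wing, exactly the kind of deviation (e.g. a double-teamed teammate) we would want to explain with the coach's real instruction in hand.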


Bibliography

[1] M.-H. Chang, M.-C. Tien, J.-L. Wu, “WOW: Wild-Open Warning for Broadcast Basketball Video Based on Player Trajectory,” in Proceedings of the ACM International Conference on Multimedia, pp. 821-824, 2009.

[2] D. Farin, S. Krabbe, P. H. N. de With, W. Effelsberg, “Robust Camera Calibration for Sport Videos Using Court Models,” in Proceedings of Storage and Retrieval Methods and Applications for Multimedia, pp. 80-91, 2004.

[3] G.-G. Lee, H.-K. Kim, W.-Y. Kim, “Highlight Generation for Basketball Video Using Probabilistic Excitement,” in Proceedings of the IEEE International Conference on Multimedia and Expo, pp. 318-321, 2009.

[4] L. Li, Y. Chen, W. Hu, W. Li, X. Zhang, “Recognition of Semantic Basketball Events Based on Optical Flow Patterns,” in Proceedings of the International Symposium on Visual Computing, pp. 480-488, 2009.

[5] Y. Zhang, C. Xu, Y. Rui, J. Wang, H. Lu, “Semantic Event Extraction from Basketball Games Using Multi-Modal Analysis,” in Proceedings of the IEEE International Conference on Multimedia and Expo, pp. 2190-2193, 2007.

[6] W. Kim, H. Kong, J. Choi, K. Kim, P. Kim, “Event Detection from Basketball Video Using Audio Information,” in Proceedings of the International Conference on Artificial Intelligence, pp. 716-721, 2002.

[7] H.-T. Chen, M.-C. Tien, Y.-W. Chen, W.-J. Tsai, S.-Y. Lee, “Physics-Based Ball Tracking and 3D Trajectory Reconstruction with Applications to Shooting Location Estimation in Basketball Video,” Journal of Visual Communication and Image Representation, vol. 20, no. 3, pp. 204-216, 2009.

[8] M.-C. Tien, H.-T. Chen, Y.-W. Chen, M.-H. Hsiao, S.-Y. Lee, “Shot Classification of Basketball Videos and Its Application in Shooting Position Extraction,” in Proceedings of the International Conference on Acoustics, Speech and Signal Processing, pp. 1085-1088, 2007.

[9] G. Miao, G. Zhu, S. Jiang, Q. Huang, C. Xu, W. Gao, “A Real-Time Score Detection and Recognition Approach for Broadcast Basketball Video,” in Proceedings of the IEEE International Conference on Multimedia and Expo, pp. 1691-1694, 2007.

[10] B. Jähne, “Digital Image Processing,” Springer-Verlag, 2002.

[11] Y. Liu, S. Jiang, Q. Ye, W. Gao, Q. Huang, “Playfield Detection Using Adaptive GMM and Its Application,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, pp. 421-424, 2005.


[12] G. Welch, G. Bishop, “An Introduction to the Kalman Filter,” Technical Report TR95-041, University of North Carolina at Chapel Hill, 1995.

[13] A. Yilmaz, O. Javed, M. Shah, “Object Tracking: A Survey,” ACM Computing Surveys, vol. 38, no. 4, 2006.

[14] H. Moravec, “Visual Mapping by a Robot Rover,” in Proceedings of the International Joint Conference on Artificial Intelligence, pp. 598-600, 1979.

[15] C. Harris, M. Stephens, “A Combined Corner and Edge Detector,” in 4th Alvey Vision Conference, pp. 147-151, 1988.

[16] D. G. Lowe, “Distinctive Image Features from Scale-invariant Keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.

[17] K. Mikolajczyk, C. Schmid, “A Performance Evaluation of Local Descriptors,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 257-263, 2003.

[18] D. Comaniciu, P. Meer, “Mean Shift Analysis and Applications,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 1197-1203, 1999.

[19] J. Shi, J. Malik, “Normalized Cuts and Image Segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, 2000.

[20] V. Caselles, R. Kimmel, G. Sapiro, “Geodesic Active Contours,” in Proceedings of IEEE International Conference on Computer Vision, pp. 694-699, 1995.

[21] C. Stauffer, W. E. L. Grimson, “Learning Patterns of Activity Using Real-Time Tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 747-757, 2000.

[22] N. Oliver, B. Rosario, A. Pentland, “A Bayesian Computer Vision System for Modeling Human Interactions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 831-843, 2000.

[23] K. Toyama, J. Krumm, B. Brumitt, B. Meyers, “Wallflower: Principles and Practice of Background Maintenance,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 255-261, 1999.

[24] A. Monnet, A. Mittal, N. Paragios, V. Ramesh, “Background Modeling and Subtraction of Dynamic Scenes,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 1305-1312, 2003.

[25] C. Papageorgiou, M. Oren, T. Poggio, “A General Framework for Object Detection,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 555-562, 1998.

[26] H. A. Rowley, S. Baluja, T. Kanade, “Neural Network-Based Face Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23-38, 1998.

[27] P. A. Viola, M. J. Jones, D. Snow, “Detecting Pedestrians Using Patterns of Motion and Appearance,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 734-741, 2003.

[28] R. Jain, H. Nagel, “On the Analysis of Accumulative Difference Pictures from Image Sequences of Real World Scenes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 1, no. 2, pp. 206-214, 1979.

[29] C. R. Wren, A. Azarbayejani, T. Darrell, A. Pentland, “Pfinder: Real-Time Tracking of the Human Body,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780-785, 1997.

[30] D. Comaniciu, P. Meer, “Mean Shift: A Robust Approach Toward Feature Space Analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, 2002.

[31] D. Comaniciu, V. Ramesh, P. Meer, “Kernel-Based Object Tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 5, pp. 564-575, 2003.

[32] V. Salari, I. K. Sethi, “Feature Point Correspondence in the Presence of Occlusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 87-91, 1990.

[33] C. J. Veenman, M. J. T. Reinders, E. Backer, “Resolving Motion Correspondence for Densely Moving Points,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 1, pp. 54-72, 2001.

[34] T. Broida, R. Chellappa, “Estimation of Object Motion Parameters from Noisy Images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 1, pp. 90-99, 1986.

[35] Y. Bar-Shalom, T. E. Fortmann, “Tracking and Data Association,” Academic Press Inc., 1988.

[36] R. L. Streit, T. E. Luginbuhl, “Maximum Likelihood Method for Probabilistic Multi-hypothesis Tracking,” in Proceedings of the International Society for Optical Engineering, vol. 2235, pp. 394-405, 1994.

[37] J. Shi, C. Tomasi, “Good Features to Track,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 593-600, 1994.

[38] H. Tao, H. S. Sawhney, R. Kumar, “Object Tracking with Bayesian Estimation of Dynamic Layer Representations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 75-89, 2002.

[39] M. J. Black, A. D. Jepson, “EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation,” International Journal of Computer Vision, vol. 26, no. 1, pp. 63-84, 1998.

[40] S. Avidan, “Support Vector Tracking,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 184-191, 2001.


[43] R. Ronfard, “Region-based Strategies for Active Contour Models,” International Journal of Computer Vision, vol. 13, no. 2, pp. 229-251, 1994.

[44] D. Huttenlocher, J. Noh, W. Rucklidge, “Tracking Nonrigid Objects in Complex Scenes,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 93-101, 1993.

[45] K. Sato, J. K. Aggarwal, “Temporal Spatio-velocity Transform and Its Application to Tracking and Interaction,” Computer Vision and Image Understanding, vol. 96, no. 2, pp. 100-128, 2004.

[46] J. Kang, I. Cohen, G. G. Medioni, “Object Reacquisition Using Invariant Appearance Model,” in Proceedings of the International Conference on Pattern Recognition, pp. 759-762, 2004.

[47] G. Kitagawa, “Non-Gaussian State-Space Modeling of Nonstationary Time Series,” Journal of the American Statistical Association, vol. 82, no. 400, pp. 1032-1041, 1987.

[48] K. Levenberg, “A Method for the Solution of Certain Non-Linear Problems in Least Squares,” The Quarterly of Applied Mathematics, vol. 2, pp. 164-168, 1944.

[49] D. Marquardt, “An Algorithm for Least-Squares Estimation of Nonlinear Parameters,” SIAM Journal on Applied Mathematics, vol. 11, pp. 431-441, 1963.

[50] M.-C. Hu, M.-H. Chang, J.-L. Wu, L. Chi, “Robust Camera Calibration and Player Tracking in Broadcast Basketball Video,” IEEE Transactions on Multimedia, 2010.
