

(a) Fewer ball candidates are produced when the camera motion is small.

(b) More ball candidates are produced when the camera motion is large.

Fig. 2-12. Left: detected ball candidates, marked as yellow circles. Right: motion history image representing the camera motion.

2.5.2 Ball Tracking

Many non-ball objects may look like a ball in video frames, which makes it difficult to recognize the true one. Therefore, we integrate the physical characteristics of ball motion into a dynamic-programming-based route detection mechanism that tracks the ball candidates, generates potential trajectories, and identifies the true ball trajectory.

For ball tracking, we first need to derive the ball velocity constraint. Since the displacement of the ball in a long shot is larger than that in a short shot, we consider a long shot, as diagrammed in Fig. 2-13. The time t1 for the ball to travel from the shooter's hand to the peak of the trajectory and the time t2 for the ball to fall from the peak to the basket are given by Eq. (2-10) and Eq. (2-11), respectively:

H + h = g t1²/2 ,  t1 = [2(H + h)/g]^(1/2)   (2-10)
H = g t2²/2 ,  t2 = (2H/g)^(1/2)   (2-11)

where g is the gravitational acceleration (9.8 m/s²), and H and h are the vertical distances from the basket to the trajectory peak and from the basket to the position where the ball leaves the hand, respectively. The largest vertical velocity Vv of the ball on the trajectory is therefore Vv = g t1, and the horizontal velocity Vh can be calculated as Vh = Dis / (t1 + t2), where Dis is the horizontal distance from the shooter to the basket center. From the vertical and horizontal velocities, the ball velocity Vb is derived as Eq. (2-12):

Vb = (Vh² + Vv²)^(1/2)   (2-12)

The value of Vb increases as Dis increases. Since our goal is to compute the upper limit of the ball velocity, we consider the distance from the 3-point line to the basket (6.25 m), which is nearly the longest horizontal distance from the shooter to the basket, and set Dis = 7 m to cover all cases. For a player l meters tall, the ball leaves the hand at a height above (l + 0.2) m, so h should be less than (3.05 − 0.2 − l) m. To cover most players we set l = 1.65, that is, h ≤ 1.2 m. Besides, few shooting trajectories have a vertical distance H greater than 4 m. Given different h values (0, 0.3, 0.6, 0.9 and 1.2 m), the values of Vb computed from Eqs. (2-10)-(2-12) for H varying between 1 and 4 m are plotted in Fig. 2-14, showing the reasonable range of Vb. The maximum occurs at H = 4 m and h = 1.2 m, where Vb ≈ 10.8 m/s. We therefore set the velocity constraint (upper limit) to Vb ≈ 10.8 m/s ≈ 36 cm/frame. Finally, similar to Eq. (2-9), the in-frame velocity constraint Vc can be estimated proportionally by the pinhole camera imaging principle, as in Eq. (2-13):

(Vc / Vb) = (Lfrm / Lreal) , Vc = Vb (Lfrm / Lreal) (2-13)

Fig. 2-13. Diagram of a long shot.
Fig. 2-14. Relation between Vb and H.

The purpose of the ball velocity constraint is to determine the search range for ball tracking. To avoid missing the ball during tracking, what we derive is the upper limit of the in-frame ball velocity. Hence, although the in-frame ball velocity may deviate because of the relationship between the camera viewing angle and the player's shooting direction, the derived upper limit still significantly improves the computational efficiency and accuracy of ball tracking by setting an appropriate search range.
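As a rough illustration of Eqs. (2-10)-(2-13), the following Python sketch reproduces the worst-case computation described above; the function name, the frame rate, and the pixel-per-meter ratio are assumptions for illustration rather than values specified in this work.

```python
import numpy as np

G = 9.8  # gravitational acceleration (m/s^2)

def ball_speed(H, h, dis=7.0):
    """Upper-bound ball speed from Eqs. (2-10)-(2-12).

    H   : vertical distance from the basket to the trajectory peak (m)
    h   : vertical distance from the basket to the release point (m)
    dis : horizontal distance from the shooter to the basket (m)
    """
    t1 = np.sqrt(2.0 * (H + h) / G)   # hand -> peak, Eq. (2-10)
    t2 = np.sqrt(2.0 * H / G)         # peak -> basket, Eq. (2-11)
    v_v = G * t1                      # largest vertical velocity on the trajectory
    v_h = dis / (t1 + t2)             # horizontal velocity
    return np.hypot(v_h, v_v)         # Eq. (2-12)

# Worst case considered in the text: H = 4 m, h = 1.2 m, Dis = 7 m  ->  about 10.8 m/s.
vb_max = ball_speed(4.0, 1.2)

# Eq. (2-13): convert to an in-frame limit (pixels per frame) using the same
# image-to-real length ratio as Eq. (2-9); both values below are hypothetical.
fps = 30.0                 # assumed video frame rate (10.8 m/s / 30 fps = 36 cm/frame)
pixels_per_meter = 25.0    # hypothetical Lfrm / Lreal ratio
vc_max = vb_max * pixels_per_meter / fps
print(round(vb_max, 2), round(vc_max, 2))
```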

Fig. 2-15 illustrates the ball tracking process. The X and Y axes represent the in-frame coordinates of the ball candidates, and the horizontal axis indicates the frame number. The nodes C1, C2, C3 and C4 represent the ball candidates. Initially, for the first frame of a court shot, each ball candidate is considered the root of a trajectory. For the subsequent frames, we check whether any ball candidate can be added to one of the existing trajectories based on the velocity property. The in-frame ball velocity is computed by Eq. (2-14):

Vi→j = [(xj − xi)² + (yj − yi)²]^(1/2) / ti→j   (2-14)

where (xi, yi) and (xj, yj) are the in-frame coordinates of the ball candidates in frame i and frame j, respectively, and ti→j is the time duration between the two frames. Trajectories grow by adding the ball candidates in the subsequent frames that satisfy the velocity constraint. Although it is possible that no ball candidate is detected in some frames, the trajectory growing process does not terminate until no ball candidate has been added to the trajectory for Tf consecutive frames (we use Tf = 5). The missed ball position(s) can then be estimated by interpolation from the ball positions in the previous and subsequent frames.

Fig. 2-15. Illustration of ball tracking process. (X and Y represent ball coordinates)
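A minimal sketch of this trajectory-growing step is given below. It uses a greedy nearest-candidate choice as a simplification of the dynamic-programming route detection, and the container types, the velocity-limit value, and the function names are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from math import hypot

V_C = 12.0   # in-frame velocity limit from Eq. (2-13), in pixels/frame (hypothetical value)
T_F = 5      # stop growing a trajectory after 5 consecutive frames without a new candidate

@dataclass
class Trajectory:
    points: list = field(default_factory=list)  # (frame index, x, y) of accepted candidates
    misses: int = 0                              # consecutive frames without growth
    alive: bool = True

def track(candidates_per_frame):
    """candidates_per_frame: list over frames; each entry is a list of (x, y) positions."""
    # Each candidate in the first frame of the court shot is the root of a trajectory.
    trajectories = [Trajectory(points=[(0, x, y)]) for (x, y) in candidates_per_frame[0]]
    for f, candidates in enumerate(candidates_per_frame[1:], start=1):
        for traj in trajectories:
            if not traj.alive:
                continue
            last_f, last_x, last_y = traj.points[-1]
            dt = f - last_f
            # Nearest candidate satisfying the velocity constraint of Eq. (2-14).
            best = None
            for x, y in candidates:
                v = hypot(x - last_x, y - last_y) / dt
                if v <= V_C and (best is None or v < best[0]):
                    best = (v, x, y)
            if best is not None:
                traj.points.append((f, best[1], best[2]))
                traj.misses = 0
            else:
                traj.misses += 1
                traj.alive = traj.misses < T_F
    # Missed positions inside each surviving trajectory can then be filled by interpolation.
    return trajectories
```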

To extract the shooting trajectory, we exploit the characteristic that ball trajectories are nearly parabolic (or ballistic) due to gravity, even though they are not exactly parabolic because of air friction, ball spin, etc. As illustrated in Fig. 2-16, we compute the best-fitting quadratic function f(x) for each route using the least-squares fitting technique of regression analysis and define the distortion as the average distance from the ball candidate positions to the parabolic curve. A shooting trajectory is then verified according to its length and its distortion. Although passing trajectories are usually more linear, some passes also follow parabolic (or ballistic) curves and would be verified as shooting trajectories. We therefore further identify a shooting trajectory by checking whether it approaches the backboard, so that passing trajectories can be discarded even when they are parabolic (or ballistic).

Fig. 2-16. Illustration of the best-fitting function.
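The parabola fitting and distortion check can be sketched as follows; the vertical residual is used here to approximate the point-to-curve distance described above, and the length/distortion thresholds as well as the near_backboard flag are hypothetical.

```python
import numpy as np

def fit_parabola(points):
    """Least-squares quadratic fit f(x) = a*x^2 + b*x + c to the candidate positions.

    points: array of shape (N, 2) holding the in-frame (x, y) ball positions.
    Returns the coefficients and the distortion, taken here as the mean vertical
    distance from the points to the fitted curve.
    """
    pts = np.asarray(points, dtype=float)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=2)            # [a, b, c]
    residuals = np.abs(np.polyval(coeffs, pts[:, 0]) - pts[:, 1])
    return coeffs, residuals.mean()

def is_shooting_trajectory(points, near_backboard, min_len=10, max_distortion=3.0):
    """Verify a route by its length, its distortion, and whether it approaches the backboard."""
    _, distortion = fit_parabola(points)
    return len(points) >= min_len and distortion <= max_distortion and near_backboard
```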

2.6 3D Trajectory Reconstruction and Shooting Location Estimation

With the 2D trajectory extracted and the camera parameters calibrated, we can now employ the physical characteristics of ball motion in the real world for 3D trajectory reconstruction. The relationship between each pair of corresponding points in the 2D space (u′, v′) and the 3D space (Xc, Yc, Zc) is given in Eq. (2-3). Furthermore, since the ball motion must follow these physical properties, we can model the 3D trajectory as:

Xc = x0 + Vx t
Yc = y0 + Vy t                  (2-15)
Zc = z0 + Vz t + g t²/2

where (Xc, Yc, Zc) is the 3D real-world coordinate, (x0, y0, z0) is the initial 3D coordinate of the ball on the trajectory, (Vx, Vy, Vz) is the 3D ball velocity, g is the gravitational acceleration and t is the elapsed time. Substituting Xc, Yc and Zc in Eq. (2-3) with Eq. (2-15), we obtain:

[u, v, w]^T = C [x0 + Vx t,  y0 + Vy t,  z0 + Vz t + g t²/2,  1]^T   (2-16)

where C is the 3×4 camera calibration matrix of Eq. (2-3) with entries cij and c34 = 1.

Multiplying out the equation with u =u′w and v = v′w, we get two equations for each ball candidate:

c11 x0 + c11 Vx t + c12 y0 + c12 Vy t + c13 z0 + c13 Vz t + c13 g t²/2 + c14
  = u′ (c31 x0 + c31 Vx t + c32 y0 + c32 Vy t + c33 z0 + c33 Vz t + c33 g t²/2 + 1)   (2-17)

c21 x0 + c21 Vx t + c22 y0 + c22 Vy t + c23 z0 + c23 Vz t + c23 g t²/2 + c24
  = v′ (c31 x0 + c31 Vx t + c32 y0 + c32 Vy t + c33 z0 + c33 Vz t + c33 g t²/2 + 1)   (2-18)

Since the eleven camera calibration parameters cij and the time of each ball candidate on the trajectory are known, we can set up a linear system D2N×6 E6×1 = F2N×1 from Eq. (2-17) and Eq. (2-18) to compute the six unknowns (x0, Vx, y0, Vy, z0, Vz) of the parabolic (or ballistic) trajectory:

D E = F ,  E = [x0, Vx, y0, Vy, z0, Vz]^T   (2-19)

where N is the number of ball candidates on the trajectory and (ui′, vi′) are the 2D coordinates of the candidates; each candidate contributes two rows to D and F, obtained by rearranging Eq. (2-17) and Eq. (2-18) so that all terms containing the six unknowns are moved to the left-hand side. Similar to Eq. (2-8), D is over-determined with three or more ball candidates on the 2D trajectory, and a least-squares solution for E is found by the pseudo-inverse.

Finally, the 3D trajectory can be reconstructed from the six physical parameters (x0, Vx, y0, Vy, z0, Vz).
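A minimal NumPy sketch of this reconstruction is shown below, assuming C is the 3×4 calibration matrix of Eq. (2-3) with c34 = 1. The rows of D and F follow from rearranging Eqs. (2-17)-(2-18) as described above, but the function name and array layout are illustrative rather than the original implementation.

```python
import numpy as np

G = 9.8  # gravitational acceleration (m/s^2)

def reconstruct_trajectory(C, uv, t):
    """Solve Eq. (2-19) for E = [x0, Vx, y0, Vy, z0, Vz] by least squares.

    C  : 3x4 camera calibration matrix from Eq. (2-3), with C[2, 3] == 1
    uv : (N, 2) array of 2D ball positions (u', v') on the trajectory
    t  : (N,) array of the corresponding times
    Requires N >= 3 so that the 2N x 6 system is over-determined.
    """
    C = np.asarray(C, dtype=float)
    uv = np.asarray(uv, dtype=float)
    t = np.asarray(t, dtype=float)
    N = len(t)
    D = np.zeros((2 * N, 6))
    F = np.zeros(2 * N)
    for i in range(N):
        u, v = uv[i]
        ti = t[i]
        half_gt2 = 0.5 * G * ti ** 2
        for r, (row, p) in enumerate(((C[0], u), (C[1], v))):
            # Rearranged Eq. (2-17)/(2-18): unknowns on the left, knowns on the right.
            a = row[:3] - p * C[2, :3]          # coefficients of x0, y0, z0
            D[2 * i + r, 0::2] = a              # columns for x0, y0, z0
            D[2 * i + r, 1::2] = a * ti         # columns for Vx, Vy, Vz
            F[2 * i + r] = p * (1 + C[2, 2] * half_gt2) - row[2] * half_gt2 - row[3]
    E, *_ = np.linalg.lstsq(D, F, rcond=None)   # pseudo-inverse (least-squares) solution
    x0, Vx, y0, Vy, z0, Vz = E
    # Shooting location on the court model: project the trajectory start onto the court plane.
    shooting_location = (x0, y0, 0.0)
    return E, shooting_location
```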

Strictly speaking, the shooting location should be defined as the location of the player who shoots the ball. However, the starting position of the trajectory is almost exactly the position where the ball leaves the player's hand. Thus, we can estimate the shooting location on the court model as (x0, y0, 0) by projecting the starting position of the trajectory onto the court plane. Moreover, the time at which a shooting action occurs can also be recorded for event indexing and retrieval.
