(Scientific Note)

Correcting Distortion for Digital Cameras

YUNG-CHIEH LIN AND CHIOU-SHANN FUH

Department of Computer Science and Information Engineering
National Taiwan University
Taipei, Taiwan, R.O.C.

(Received March 3, 1999; Accepted September 14, 1999)

ABSTRACT

This paper describes a method for correcting lens distortion for digital cameras. A simple but effective distortion model using local spatial transformation is used. In addition, the effect of the effective focal length on the distortion model is considered, and we discuss how to introduce this factor to enhance the model. Finally, we present the results of theoretical analysis and experiments demonstrating that the correction improves the accuracy of image stitching.

Key Words: camera calibration, lens distortion, image reconstruction, fast Fourier transform

To whom all correspondence should be addressed.
I. Introduction

A photograph is one of the elementary contents of a digital library, and some photographs are taken using high-resolution digital cameras. In the digital library project at National Taiwan University, the Danshin files contain many documents. We used a Kodak DCS 460 digital camera to take photographs of them, 3060×2036 pixels in size. To achieve higher resolution (DPI: dots per inch) and better quality, we could not put a large object in a single frame.

Therefore, we had to take multiple pictures of a large object and stitch them together. However, geometric distortion is the major problem with optical components. Particularly along the edges of a photograph, pixels are severely distorted, and these are the most important data when stitching is conducted.

Many previous works (Stein, 1997; Sawhney and Kumar, 1997; Chiang and Boult, 1995) have studied camera calibration, but they did not show how to correct distortion if the effective focal length, the distance from the image to the lens center, is changed. Furthermore, they assume that the distortion is radial and use a global transformation to correct the distortion. We think this method is not suitable because a camera usually does not have only a single lens element. The model will not be radial when the lens elements are not mounted exactly along the same axis.

In this study, we assumed that our platform was well calibrated, so that the image plane was parallel to the table (Jain et al., 1995; Lin and Fuh, 1998). In addition, there was a rectilinear white grid on the table for calibration, as shown in Fig. 1. Therefore, the calibration lines could be identified in the background of the pictures, as shown in Fig. 2. These lines were used to determine the image resolution.

II. Distortion Model

After setting a fixed focal length, we took a photograph of the table for the purpose of calibration. Our distortion model simply considers each block in the grid to be transformed using Eq. (1), as stated by Gonzalez and Woods (1993):

$$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} c_1 & c_2 & c_3 & c_4 \\ c_5 & c_6 & c_7 & c_8 \end{bmatrix} \begin{bmatrix} x \\ y \\ xy \\ 1 \end{bmatrix}, \qquad (1)$$

Fig. 1. The grid used for calibration.

(2)

where (u,v) is the coordinate in the distorted picture, c1, ..., c8 are the coefficients of the transformation matrix, and (x,y) is the coordinate in the corrected photograph. We can obtain the shapes of the distorted grid from the photograph and evenly distribute the blocks in the corrected picture. Then, the transformation of each block can be computed using its four corner points. For the incomplete blocks at the edges of the photograph, we use the same transformation parameters as in the nearest block.

The next problem is locating the tie-points of the calibration grid. In our implementation, we use a pattern-recognition technique to find crosses in the photograph. After locating the tie-points, we connect neighboring points and reconstruct the shape of each block.
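Evaluated at the four corners of a block, Eq. (1) gives four linear equations in c1, ..., c4 and four in c5, ..., c8, so the coefficients can be solved directly. The following Python sketch illustrates this; it is our illustration, not the paper's C++ implementation, and the function names are hypothetical:

    import numpy as np

    def solve_block_coefficients(corrected_pts, distorted_pts):
        # corrected_pts: four (x, y) corners of one block in the corrected picture.
        # distorted_pts: the matching four (u, v) tie-points in the photograph.
        # Each corner contributes one row [x, y, x*y, 1] of a 4x4 linear system.
        A = np.array([[x, y, x * y, 1.0] for x, y in corrected_pts])
        u = np.array([p[0] for p in distorted_pts])
        v = np.array([p[1] for p in distorted_pts])
        c_u = np.linalg.solve(A, u)          # c1, c2, c3, c4
        c_v = np.linalg.solve(A, v)          # c5, c6, c7, c8
        return np.vstack([c_u, c_v])         # the 2x4 matrix of Eq. (1)

    def apply_block(coeffs, x, y):
        # Eq. (1): map a corrected coordinate (x, y) to the distorted (u, v).
        return coeffs @ np.array([x, y, x * y, 1.0])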

III. Distortion Center

The commonly used lenses are spherical, so the transmitted light is symmetric with respect to the lens center. When we compute the distortion center, we assume that the distortion is radial, so that the model is symmetric about the distortion center. Notice that this radial assumption is not necessary for our distortion model. The distortion center is used only when we enhance the model by considering the effective focal length. In Eqs. (2) and (3), ∆x and ∆y become zero when (xc,yc) approaches the distortion center, because any two displacements cancel during integration when they are symmetric about the distortion center:

$$\Delta x = \int_{y_c - h/2}^{y_c + h/2} \int_{x_c - w/2}^{x_c + w/2} \left( (u(x,y) - u(x_c,y_c)) - (x - x_c) \right) dx\, dy, \qquad (2)$$

$$\Delta y = \int_{y_c - h/2}^{y_c + h/2} \int_{x_c - w/2}^{x_c + w/2} \left( (v(x,y) - v(x_c,y_c)) - (y - y_c) \right) dx\, dy, \qquad (3)$$

where (u,v) is the coordinate in the distorted picture, (x,y) is the coordinate in the corrected photograph, (xc,yc) is the candidate for the distortion center, w and h are the width and height of the region used for the calculation, and (∆x,∆y) represent the integrals of the distortion displacements. First, we choose the image center as the candidate, and then we use a gradient descent strategy to determine the distortion center. The following pseudo code describes the algorithm:

(xc,yc) ← image center,
w ← (image width)/2, h ← (image height)/2,
compute ∆x, ∆y,
repeat until ∆x and ∆y are small enough:
    find a proper (δx,δy) such that (xc+δx,yc+δy) reduces ∆x and ∆y,
    xc ← xc+δx, yc ← yc+δy,
    w ← MIN(w, image width−w), h ← MIN(h, image height−h),
    compute ∆x, ∆y,
end repeat.

Since local minima can occur at points other than the distortion center, the algorithm succeeds only when the distortion center is not too far from the image center.
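A minimal Python sketch of this search follows, under two assumptions of ours: that the block transformations of Eq. (1) have been rasterized into arrays u and v holding the distorted coordinates of every corrected pixel, and that stepping against the accumulated displacement reduces it (the sign convention and the discretization are not from the paper):

    import numpy as np

    def displacement_sums(u, v, xc, yc, w, h):
        # Discrete form of Eqs. (2) and (3): sum the displacements over a
        # w-by-h window centered at the candidate (xc, yc).
        x0, x1 = int(xc - w / 2), int(xc + w / 2)
        y0, y1 = int(yc - h / 2), int(yc + h / 2)
        X, Y = np.meshgrid(np.arange(x0, x1), np.arange(y0, y1))
        uc, vc = u[int(yc), int(xc)], v[int(yc), int(xc)]
        dx = np.sum((u[y0:y1, x0:x1] - uc) - (X - xc))
        dy = np.sum((v[y0:y1, x0:x1] - vc) - (Y - yc))
        return dx, dy

    def find_distortion_center(u, v, tol=1e-2, max_iter=500):
        # Start from the image center and nudge the candidate one pixel at
        # a time; shrink the window so it stays centered inside the image.
        height, width = u.shape
        xc, yc = width / 2.0, height / 2.0
        w, h = width / 2.0, height / 2.0
        for _ in range(max_iter):
            dx, dy = displacement_sums(u, v, xc, yc, w, h)
            if abs(dx) < tol and abs(dy) < tol:
                break
            xc -= np.sign(dx)                # assumed sign convention
            yc -= np.sign(dy)
            w = min(w, 2 * min(xc, width - xc))
            h = min(h, 2 * min(yc, height - yc))
        return xc, yc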

IV. Resolution

We determine the resolution simply by using the background information. According to the assumption described above, some calibration lines are always identifiable in the background, and we can find them by choosing rows of darker pixels. After applying the fast Fourier transform (FFT) to each such row, the coefficients show their first jump at the spatial frequency of the grid. As shown in Fig. 3, the first jump is at 28, meaning that there are 28 grid squares horizontally (with extension). Therefore, we can estimate the resolution.

The width of the picture is about 3×28/2.54 ≈ 33 inches because the side of each square is 3 cm. According to the camera specifications, there are 3060 pixels in each row, so the resolution should be 3060/33 ≈ 92 DPI.
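A Python sketch of this estimate is given below; picking the dominant FFT peak is a simplification of the "first jump" criterion, and the function name is ours:

    import numpy as np

    def estimate_dpi(row, square_cm=3.0):
        # row: 1-D array of pixel intensities from a background row that
        # contains the calibration lines.
        spectrum = np.abs(np.fft.rfft(row - np.mean(row)))
        grid_count = int(np.argmax(spectrum[1:]) + 1)   # squares across the row
        width_inch = grid_count * square_cm / 2.54
        return len(row) / width_inch

    # Paper's example: 28 squares of 3 cm across a 3060-pixel row gives a
    # width of 3 * 28 / 2.54 ~= 33 inches, and thus 3060 / 33 ~= 92 DPI.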

V. Distortion Correction

Equation (4) expresses the relation between the effective focal length and the distance from the lens center to the object:

$$d^{-1} + e^{-1} = f^{-1}, \qquad (4)$$


Fig. 2. Right half of Danshin Document #13214034.


where d is the distance from the object to the lens center, e is the effective focal length, i.e., the distance from the image to the lens center, and f is the focal length. Equation (5) expresses the relation between two projections (Fig. 4):

$$e_1^{-1} + d_1^{-1} = e_2^{-1} + d_2^{-1},$$
$$h_1/d_1 = x_1/e_1,$$
$$h_2/d_2 = x_1/e_2,$$
$$x_2/x_1 = e_1/e_2, \qquad (5)$$

where ei is the effective focal length, di is the object distance, hi is the actual object size, x1 is the size of the camera sensor, and x1/x2 is the ratio between the distortion models. Using Eq. (5), we can derive the ratio function as Eq. (6):

$$\frac{x_1}{x_2} = \frac{1 + x_1/h_2}{1 + x_1/h_1}. \qquad (6)$$

We can assume that the distortion is due to refraction through the lens center. If we put the calibration pattern at d1, we can get the distortion model at e1. When we take a photograph of a distant object at d2, the effective focal length must be e2 to get the correct focus. However, the size of the camera sensor is still x1, and the view angle becomes wider. Considering the whole scene in the calibration view, its resolution becomes smaller in the new view, and the ratio is x1/x2. Since the distortion depends on the angle of refraction, we can construct the distortion model by subsampling the distortion model of the calibration view. For those pixels outside this view, we use the parameters of their nearest neighbors.

In Eq. (6), the object size hi can be determined from the estimated resolution, and x1 can be determined from the manufacturer's specifications. By scaling the distortion model with the distortion center as the symmetric point, we can obtain a suitable distortion model for other photographs. Then, the distortion can be corrected by applying Eq. (1) to the pictures. To be specific, if we have α = x1/x2 and want to estimate the color of (x',y'), whose corresponding point on the distorted image is (u',v'), then we first convert the e2 coordinate (x',y') to the e1 coordinate (x,y) using Eq. (7):

$$(x, y) = \alpha^{-1} \left( (x', y') - (x_c, y_c) \right) + (x_c, y_c). \qquad (7)$$

Then we can use Eq. (1) to solve for the coordinate (u,v) on the distorted image, and convert it to the e2 coordinate (u',v') using Eq. (8):

$$(u', v') = \alpha \left( (u, v) - (u_c, v_c) \right) + (u_c, v_c), \qquad (8)$$

where (uc,vc) can be solved by replacing (x,y) with (xc,yc) in Eq. (1).
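Putting Eqs. (7), (1), and (8) together, correcting one pixel looks roughly as follows in Python; block_lookup, which returns the Eq. (1) matrix of the block containing a point, is a hypothetical stand-in for the calibration data:

    import numpy as np

    def correct_pixel(xp, yp, alpha, center, block_lookup):
        xc, yc = center
        # Eq. (7): convert the e2 coordinate (x', y') to the e1 view.
        x = (xp - xc) / alpha + xc
        y = (yp - yc) / alpha + yc
        # Eq. (1): apply the local transformation of the enclosing block.
        u, v = block_lookup(x, y) @ np.array([x, y, x * y, 1.0])
        uc, vc = block_lookup(xc, yc) @ np.array([xc, yc, xc * yc, 1.0])
        # Eq. (8): scale back to the e2 view around the distortion center.
        return alpha * (u - uc) + uc, alpha * (v - vc) + vc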

For example, we set the focal length to 40 mm when taking the photographs shown in Figs. 1 and 2. The actual width of Fig. 1 is about 0.51 m (3 cm × 17), the actual width of Fig. 2 is about 0.84 m (3 cm × 28), and x1 is 0.0276 m for the Kodak DCS 460 digital camera. Therefore, we can estimate the distortion ratio using Eq. (6):

$$\frac{x_1}{x_2} = \frac{1 + 0.0276/0.84}{1 + 0.0276/0.51} = 97.98\%,$$

and Fig. 5 shows the results of correction.
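In code, the ratio of Eq. (6) is a one-liner; the following Python sketch (function name ours) reproduces the figure above:

    def distortion_ratio(x1, h1, h2):
        # Eq. (6): x1/x2 for a new view of object size h2, given the sensor
        # size x1 and the calibration object size h1 (all in meters).
        return (1 + x1 / h2) / (1 + x1 / h1)

    # distortion_ratio(0.0276, 0.51, 0.84) ~= 0.9798, i.e., 97.98%.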

Fig. 3. Results of FFT on the 200th row of Fig. 2 (R, G, B for each curve). The first jump around the spatial frequency value is at 28, meaning that there are 28 grids horizontally with extension.

Fig. 4. Camera geometry and parameters, where ei is the effective focal length; di is the object distance; hi is the actual object size; x1 is the size of the camera sensor; and x2 is the projection size of object 2 if it is focused on the same plane as object 1.


VI. Experiments on Image Stitching

First we calibrated the camera by means of the following steps:

(1) set up the table and the camera to be horizontal;

(2) take a photograph of the table;

(3) find the tie-points in the photograph;

(4) solve for the coefficients of the transformation matrix for each block;

(5) estimate the actual size of the photograph and the position of the distortion center.

For each document of the Danshin files, we take a photograph by means of the following steps:

(1) take a photograph of the document;

(2) estimate the actual size of the photograph;

(3) compute the distortion ratio;

(4) scale the distortion model;

(5) use the transformation matrix to correct the distortion.

We have conducted some experiments on image stitching by translating and rotating the images manually. Figure 6 shows the stitched image of Danshin Document #13214034 without correction. We rotated the left image about 1° counterclockwise to align the words so that both images had the same orientation in the overlap region. However, they still did not match well because the words in the right image were shorter. When we corrected them using our distortion model, there was only about 0.5° of rotation between the orientations of the corrected images. Figure 7 shows that the correction improved the accuracy of image stitching.

VII. Discussion

Our distortion model fails unless the image plane is parallel to the table. For the general case, there is a good method to calibrate the camera and table geometry, as described by Nomura et al. (1992). However, we cannot ignore the effective focal length if we take photographs at different distances. Here, we have provided a method that considers this factor and shown that we do not have to calibrate for each effective focal length. A better way to determine the effective focal length would be to read the camera zoom setting, which should be accurate. Since the maximum error of h1 is the block width of the calibration grid, the error of the distortion ratio can be estimated using Eq. (9):

$$E = \left| \frac{1 + x_1/h_2}{1 + x_1/h_1} - \frac{1 + x_1/h_2}{1 + x_1/(h_1 + w)} \right|, \qquad (9)$$

where E is the error and w is the width of a block. In our experiments, hi was about 1 m; w was 3 cm; x1 was 27.6 mm; and E was about 0.08%. The error could be reduced by using a finer grid.
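Substituting these experimental values into Eq. (9) confirms the reported figure:

$$E = \left| \frac{1 + 0.0276/1}{1 + 0.0276/1} - \frac{1 + 0.0276/1}{1 + 0.0276/1.03} \right| \approx \left| 1 - 1.0008 \right| \approx 0.08\%.$$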

Fig. 5. Right half of Danshin Document #13214034 after correction.

Fig. 6. The stitched image of Danshin Document #13214034 without distortion correction. The bright part in the middle is the overlap of the images. The inset on the left side is a magnification of the bottom part of the overlap to show the poor result without distortion correction.

Fig. 7. The stitched image of Danshin Document #13214034 with distortion correction. The inset on the left side is a magnification of the bottom part of the overlap to show the good result achieved with distortion correction.

We have implemented our method as a portable C++ program and tested it on several platforms. Table 1 shows the performance of the program on different machines. The program uses multithreading to perform the correction, so the correction step is very fast on machines with multiple processors.

Acknowledgment

This research was supported by the National Science Council, R.O.C., under Grants NSC 85-2212-E-002-077 and NSC 86-2212-E-002-025, by Mechanical Industry Research Laboratories, Industrial Technology Research Institute, R.O.C., under Grant MIRL 873K67BN3, and by the EeRise Corporation, ACME Systems, Tekom Technologies, and Foxconn Inc.

References

Chiang, M. C. and T. E. Boult (1995) A Public Domain System for Camera Calibration and Distortion Correction. Technical Report CUCS-038-95, Dept. of Computer Science, Columbia Univ., New York, NY, U.S.A.

Gonzalez, R. C. and R. E. Woods (1993) Digital Image Processing. Addison-Wesley, Reading, MA, U.S.A.

Jain, R., R. Kasturi, and B. G. Schunck (1995) Machine Vision. McGraw-Hill, New York, NY, U.S.A.

Lin, Y. C. and C. S. Fuh (1998) Distortion correction for digital cameras. Proceedings of International Symposium on Computer Graphics, Image Processing, and Vision, pp. 396-401, Rio de Janeiro, Brazil.

Nomura, Y., M. Sagara, H. Naruse, and A. Ide (1992) Simple calibration algorithm for high-distortion-lens camera. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14, 1095-1099.

Sawhney, H. S. and R. Kumar (1997) True multi-image alignment and its application to mosaicking and lens distortion correction. Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 450-460, Puerto Rico, U.S.A.

Stein, G. P. (1997) Lens distortion calibration using point correspondences. Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 602-608, Puerto Rico, U.S.A.

Table 1. The Execution Time for One Image (3060×2036)

OS               CPU                     Memory   Compiler   Time(a)
SunOS 5.5.1      UltraSPARC 250 MHz ×4   256 MB   g++        4:3:3
Linux 2.0.33     Pentium II 233 MHz ×2   128 MB   g++        3:7:5
Windows NT 4.0   Pentium II 233 MHz ×1   64 MB    VC++       5:13:6

(a) Time is expressed in seconds, in the order of loading, correction, and writing.
