(1)

DC & CV Lab. CSIE NTU

Chapter 4

Model fitting and optimization

Advanced Computer Vision

Computer Vision:

Algorithms and Applications

Presented by: 傅楸善 & 蕭延儒 E-mail: luis862013@gmail.com

Mobile: 0978193029 Instructor: Dr. 傅楸善

(2)

Outline

4.1 Scattered data interpolation

4.2 Variational methods and regularization

4.3 Markov random fields

(3)

4.1 Scattered data interpolation

4.1.1 Radial basis functions

4.1.2 Overfitting and underfitting

4.1.3 Robust data fitting

(4)

4.1.0 Scattered data interpolation

Example of scattered data interpolation

Given a set of data points $\{(\mathbf{x}_k, d_k)\}$, produce a function $f(\mathbf{x})$ such that $f(\mathbf{x}_k) = d_k$ for all $k$.

(5)

4.1.0 Scattered data interpolation

Example of scattered data interpolation

1. It requires the function to pass through each data point.

2. The data points are irregularly placed throughout the domain.

(6)

4.1.0 Scattered data interpolation

Common methods:

1. Triangulation & interpolation

2. Pull-push algorithms

(7)

4.1.0 Scattered data interpolation

Triangulation & interpolation

[Figure: scattered data → triangulation → interpolated result]

(8)

4.1.0 Scattered data interpolation

Triangulation & interpolation

Triangulation: Division into triangles.

A good triangulation should avoid producing long, thin (skewed) triangles.

(9)

4.1.0 Scattered data interpolation

Triangulation & interpolation

Interpolation

Barycentric coordinates are usually used to interpolate within each triangle.

If a smoother surface is desired, we can use higher-order splines, e.g., cubic spline interpolation.
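As an illustration, a minimal Python sketch of barycentric interpolation inside a single triangle (the triangle and vertex values are made-up example data):

```python
import numpy as np

def barycentric_interpolate(p, verts, values):
    """Interpolate a value at point p inside a triangle.

    verts:  (3, 2) array of triangle vertex positions.
    values: (3,) array of data values at the vertices.
    """
    a, b, c = verts
    # Solve for barycentric coordinates (u, v, w) with u + v + w = 1.
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    v, w = np.linalg.solve(T, np.asarray(p, dtype=float) - a)
    u = 1.0 - v - w
    # The interpolated value is the barycentric-weighted average.
    return u * values[0] + v * values[1] + w * values[2]

# Hypothetical triangle with data values at its corners.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([1.0, 2.0, 3.0])
print(barycentric_interpolate([0.25, 0.25], verts, values))  # 1.75
```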

(10)

4.1.0 Scattered data interpolation

Pull-push algorithms

[Figure: original image → scattered data → pull-push interpolation result]

The pull-push algorithm repeatedly downsamples (pulls) the known samples to coarser resolutions, where the holes fill in, and then upsamples (pushes) the coarse estimates back to fill the missing values at the finer levels.

The algorithm is very fast but less accurate.
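A much-simplified sketch of the pull-push idea (my own illustration with a minimal weighting scheme, not the exact published algorithm):

```python
import numpy as np

def pull_push(vals, mask):
    """Fill holes in vals (mask == 1 where data is known, 0 in holes)."""
    if min(vals.shape) <= 2 or mask.all():
        # Coarsest level: fill any remaining holes with the mean of known data.
        fill = vals[mask > 0].mean() if mask.any() else 0.0
        return np.where(mask > 0, vals, fill)
    # Pull: 2x2 weighted average down to half resolution.
    h, w = (vals.shape[0] // 2) * 2, (vals.shape[1] // 2) * 2
    s = (vals * mask)[:h, :w]
    m = mask[:h, :w].astype(float)
    s2 = s[::2, ::2] + s[1::2, ::2] + s[::2, 1::2] + s[1::2, 1::2]
    m2 = m[::2, ::2] + m[1::2, ::2] + m[::2, 1::2] + m[1::2, 1::2]
    coarse = pull_push(np.where(m2 > 0, s2 / np.maximum(m2, 1e-8), 0.0),
                       (m2 > 0).astype(float))
    # Push: upsample the coarse fill; keep the original data where it exists.
    up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    up = np.pad(up, ((0, vals.shape[0] - h), (0, vals.shape[1] - w)), mode='edge')
    return np.where(mask > 0, vals, up)

# Hypothetical sparse samples on a 16x16 grid.
rng = np.random.default_rng(0)
truth = np.fromfunction(lambda i, j: np.sin(i / 3) + np.cos(j / 4), (16, 16))
mask = (rng.random((16, 16)) < 0.2).astype(float)
print(pull_push(truth * mask, mask))
```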

(11)

4.1 Scattered data interpolation

4.1.1 Radial basis functions

4.1.2 Overfitting and underfitting

4.1.3 Robust data fitting

(12)

4.1.1 Radial basis functions

Mesh-based approaches are limited to low-dimensional domains.

Here we introduce radial basis functions, a mesh-free approach that can easily be extended to higher dimensions:

$f(\mathbf{x}) = \sum_k w_k\,\phi(\lVert \mathbf{x} - \mathbf{x}_k \rVert)$

Interpolated function using radial basis functions

(13)

4.1.1 Radial basis functions

Some commonly used basis functions include the Gaussian $\phi(r) = \exp(-r^2 / 2\sigma^2)$ and the thin-plate spline $\phi(r) = r^2 \log r$.

The width parameter $\sigma$ controls the size (radial falloff) of the basis function, and hence its smoothness.

(14)

4.1.1 Radial basis functions

Let $\{(\mathbf{x}_k, d_k)\}$ be the scattered data; the equation needs to satisfy

$f(\mathbf{x}_i) = \sum_k w_k\,\phi(\lVert \mathbf{x}_i - \mathbf{x}_k \rVert) = d_i.$

The $\mathbf{x}_k$ are the locations of the scattered data points, the $\phi$s are the radial basis functions, and the $w_k$ are the local weights.

We need to obtain the desired set of weights $\{w_k\}$.

(15)

4.1.1 Radial basis functions

Solutions:

1. Minimize the data constraint energy together with a weight penalty.

2. Kernel regression.

(16)

4.1.1 Radial basis functions

Minimizing the data constraint energy together with a weight penalty:

$E(\{w_k\}) = \sum_i \big(f(\mathbf{x}_i) - d_i\big)^2 + \lambda \sum_k w_k^2$

(17)

4.1.1 Radial basis functions

Minimizing the data constraint energy together with a weight penalty:

$E(\{w_k\}) = \sum_i \big(f(\mathbf{x}_i) - d_i\big)^2 + \lambda \sum_k w_k^2$

When $\lambda = 0$, it becomes a pure least squares problem.
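A minimal sketch of this weight fit (assuming a Gaussian basis and made-up 1-D data; an illustration, not code from the book):

```python
import numpy as np

def fit_rbf_weights(x, d, sigma=0.5, lam=1e-3):
    """Solve for RBF weights w by ridge-regularized least squares.

    Minimizes sum_i (f(x_i) - d_i)^2 + lam * sum_k w_k^2,
    where f(x) = sum_k w_k * phi(|x - x_k|), phi Gaussian.
    """
    r = np.abs(x[:, None] - x[None, :])           # pairwise distances
    Phi = np.exp(-r**2 / (2 * sigma**2))          # basis matrix Phi[i, k]
    # Normal equations of the regularized problem:
    # (Phi^T Phi + lam I) w = Phi^T d  (lam = 0 gives pure least squares).
    A = Phi.T @ Phi + lam * np.eye(len(x))
    return np.linalg.solve(A, Phi.T @ d)

def eval_rbf(xq, x, w, sigma=0.5):
    Phi = np.exp(-np.abs(xq[:, None] - x[None, :])**2 / (2 * sigma**2))
    return Phi @ w

# Hypothetical scattered 1-D data.
x = np.array([0.0, 0.3, 0.7, 1.0])
d = np.array([1.0, 2.0, 0.5, 1.5])
w = fit_rbf_weights(x, d)
print(eval_rbf(x, x, w))   # close to d for small lam
```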

(18)

4.1.1 Radial basis functions

Kernel regression

We simply set $w_k$ to $d_k$:

$f(\mathbf{x}) = \sum_k d_k\,\phi(\lVert \mathbf{x} - \mathbf{x}_k \rVert)$

However, this fails to interpolate the data.

(19)

4.1.1 Radial basis functions

Kernel regression

So, we divide the data-weighted summed basis functions by the sum of all the basis functions:

$f(\mathbf{x}) = \dfrac{\sum_k d_k\,\phi(\lVert \mathbf{x} - \mathbf{x}_k \rVert)}{\sum_k \phi(\lVert \mathbf{x} - \mathbf{x}_k \rVert)}$

While not that widely used in computer vision, kernel regression techniques have been applied to a number of low-level image processing operations.
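A minimal Nadaraya-Watson-style sketch of this normalized kernel regression (Gaussian kernel and made-up 1-D data; an illustration, not the book's code):

```python
import numpy as np

def kernel_regression(xq, x, d, sigma=0.3):
    """Normalized kernel regression: a weighted average of data values,
    with weights given by a Gaussian radial basis function."""
    phi = np.exp(-(xq[:, None] - x[None, :])**2 / (2 * sigma**2))
    return (phi @ d) / phi.sum(axis=1)   # divide by the sum of basis functions

x = np.array([0.0, 0.3, 0.7, 1.0])
d = np.array([1.0, 2.0, 0.5, 1.5])
xq = np.linspace(0, 1, 5)
print(kernel_regression(xq, x, d))  # smooth estimate; does not interpolate exactly
```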

(20)

4.1 Scattered data interpolation

4.1.1 Radial basis functions

4.1.2 Overfitting and underfitting

4.1.3 Robust data fitting

(21)

4.1.2 Overfitting and underfitting

Some data are noisy, so fitting them exactly makes no sense.

Doing so can produce a lot of spurious wiggles, i.e., overfitting.

(22)

4.1.2 Overfitting and underfitting

[Figure: polynomial fits of degree M = 0 and M = 1 (underfitting), M = 3 (plausible), and M = 9 (overfitting)]

(23)

4.1.2 Overfitting and underfitting

How can we quantify the amount of underfitting and overfitting?

How can we get just the right amount?

Adjust the regularization parameter $\lambda$ to get different results.

(24)

4.1.2 Overfitting and underfitting

[Figure: fits with ln λ = −18 (plausible fit) and ln λ = 0 (underfitting)]

(25)

4.1.2 Overfitting and underfitting

Validation Set:

We save some data in a validation set in order to see if the function we compute is overfitting or underfitting.

[Figure: the data is split into a training set and a validation set]

(26)

4.1.2 Overfitting and underfitting

Now, as we vary $\lambda$, we can typically obtain a curve like the one below:

[Figure: training and validation error as a function of λ]

(27)

4.1.2 Overfitting and underfitting

Cross validation:

1. Split the data into K folds.

2. Train for K runs, each time using a different fold as the validation set.

3. Estimate the best result by averaging over all K training runs' results.

[Figure: the data is split into folds 1, 2, …, K−1, K] (K = 5 is often used.)
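A minimal K-fold cross-validation sketch in Python (the polynomial model, data, and degree range are made-up assumptions used only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_error(x, d, M, K=5):
    """Average validation error of a degree-M polynomial fit over K folds."""
    folds = np.array_split(rng.permutation(len(x)), K)
    errs = []
    for k in range(K):
        val = folds[k]                                  # fold k is the validation set
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        coeffs = np.polyfit(x[train], d[train], M)      # train on the other K-1 folds
        errs.append(np.mean((np.polyval(coeffs, x[val]) - d[val]) ** 2))
    return np.mean(errs)                                # average over all K runs

# Hypothetical noisy data; choose the degree M with the lowest CV error.
x = np.linspace(0, 1, 30)
d = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(30)
best_M = min(range(10), key=lambda M: cv_error(x, d, M))
print(best_M)
```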

(28)

4.1 Scattered data interpolation

4.1.1 Radial basis functions

4.1.2 Overfitting and underfitting

4.1.3 Robust data fitting

(29)

4.1.3 Robust data fitting

Robust loss function

Give lower weights to larger errors, which are more likely to be outliers.

(30)

4.1.3 Robust data fitting

Penalty function

A scale parameter controls the range of residual values that correspond to inliers.

It is often determined based on the expected shape of the outlier distribution.
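As an illustration of robust fitting, a minimal iteratively reweighted least squares sketch with Huber-style weights (the line model, threshold, and data are made-up assumptions):

```python
import numpy as np

def robust_line_fit(x, d, inlier_scale=1.0, iters=10):
    """Fit d ~ a*x + b by iteratively reweighted least squares.

    Residuals above inlier_scale are down-weighted (Huber-style weights),
    so gross outliers pull much less on the fit."""
    A = np.stack([x, np.ones_like(x)], axis=1)
    w = np.ones_like(x)
    for _ in range(iters):
        sw = np.sqrt(w)
        # Weighted least squares: minimize sum_i w_i * r_i^2.
        params = np.linalg.lstsq(A * sw[:, None], d * sw, rcond=None)[0]
        r = d - A @ params
        # Huber weight: 1 for inliers, inlier_scale / |r| for outliers.
        w = np.where(np.abs(r) <= inlier_scale, 1.0,
                     inlier_scale / np.maximum(np.abs(r), 1e-12))
    return params

x = np.linspace(0, 10, 20)
d = 2 * x + 1
d[3] += 30.0                        # inject one gross outlier
print(robust_line_fit(x, d))        # close to (2, 1) despite the outlier
```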

(31)

4.2 Variational methods and regularization

4.2.1 Discrete energy minimization

4.2.2 Total variation

4.2.3 Bilateral solver

4.2.4 Application: Interactive colorization

(32)

4.2.0 Variational methods and regularization

The methods in the previous section provide reasonable solutions, but:

1. Cannot directly quantify and hence optimize the amount of smoothness in the solution.

2. No local control over where the solution should be discontinuous.

(33)

4.2.0 Variational methods and regularization

Variational methods:

In one dimension, smoothness can be penalized with functionals such as the first-order $\mathcal{E}_1 = \int f_x^2(x)\,dx$ or the second-order $\mathcal{E}_2 = \int f_{xx}^2(x)\,dx$.

Such methods are often called variational methods, because they measure the variation (non-smoothness) in a function.

(34)

4.2.0 Variational methods and regularization

Variational methods in two dimensions:

$\mathcal{E}_1 = \iint \big(f_x^2(x,y) + f_y^2(x,y)\big)\,dx\,dy \quad \text{(membrane)}$

$\mathcal{E}_2 = \iint \big(f_{xx}^2(x,y) + 2 f_{xy}^2(x,y) + f_{yy}^2(x,y)\big)\,dx\,dy \quad \text{(thin plate)}$

However, these smoothness functionals cannot model discontinuities.

(35)

4.2.0 Variational methods and regularization

Variational methods in two dimensions (controlled continuity):

$\mathcal{E}_{CC} = \iint \rho(x,y)\,\big\{[1 - \tau(x,y)]\,[f_x^2(x,y) + f_y^2(x,y)] + \tau(x,y)\,[f_{xx}^2(x,y) + 2 f_{xy}^2(x,y) + f_{yy}^2(x,y)]\big\}\,dx\,dy$

$\rho(x,y)$ controls the continuity of the surface.

$\tau(x,y)$ controls how flat the surface wants to be.

(36)

4.2.0 Variational methods and regularization

In addition to the smoothness term, variational problems also require a data penalty $\mathcal{E}_d$.

$\mathcal{E}_d = \sum_i \big[f(x_i, y_i) - d_i\big]^2$

For scattered data interpolation, the data penalty measures the distance between the function $f(x,y)$ and a set of data points $d_i$.

(37)

4.2.0 Variational methods and regularization

In addition to the smoothness term, variational problems also require a data penalty $\mathcal{E}_d$.

$\mathcal{E}_d = \iint \big[f(x, y) - d(x, y)\big]^2\,dx\,dy$

For a problem like noise removal, a continuous version of this measure can be used.

(38)

4.2.0 Variational methods and regularization

Finally, to obtain a global energy that can be minimized, the two energy penalties are usually added together:

$\mathcal{E} = \mathcal{E}_d + \lambda\,\mathcal{E}_s$

$\mathcal{E}_s$ is the smoothness penalty ($\mathcal{E}_1$, $\mathcal{E}_2$, or some weighted blend such as $\mathcal{E}_{CC}$).

$\lambda$ is the regularization parameter, which controls the smoothness of the solution.

(We can use the methods introduced in 4.1.2, such as cross-validation, to estimate good values for $\lambda$.)

(39)

4.2 Variational methods and regularization

4.2.1 Discrete energy minimization

4.2.2 Total variation

4.2.3 Bilateral solver

4.2.4 Application: Interactive colorization

(40)

4.2.1 Discrete energy minimization

$s_x(i,j)$ and $s_y(i,j)$ are optional smoothness weights.

They control the locations of horizontal and vertical weaknesses (tears) in the surface.

The exact elements they control depend on the problem itself.

(41)

4.2.1 Discrete energy minimization

$g_x(i,j)$ and $g_y(i,j)$ are gradient data constraints used by algorithms such as photometric stereo and Poisson blending.

They are set to zero when just discretizing the conventional first-order smoothness functional.

(42)

4.2.1 Discrete energy minimization

$h$ is the size of the finite element grid. It is only important if the energy is being discretized at a variety of resolutions.

(43)

4.2.1 Discrete energy minimization

The second-order smoothness term also uses crease variables.

They control the locations of creases in the surface.

(44)

4.2.1 Discrete energy minimization

The local weight $w(i,j)$ controls how strongly the data constraint is enforced.

The 2-dimensional discrete data energy is written as:

$\mathcal{E}_d = \sum_{i,j} w(i,j)\,\big[f(i,j) - d(i,j)\big]^2$

(45)

4.2.1 Discrete energy minimization

The total energy of the discretized problem can now be written as a quadratic form:

$\mathcal{E} = \mathcal{E}_d + \lambda\,\mathcal{E}_s = \mathbf{x}^T A\,\mathbf{x} - 2\,\mathbf{x}^T \mathbf{b} + c$

$\mathbf{x}$ is called the state vector.

$A$ is the Hessian; it encodes the second derivative of the energy function.

$\mathbf{b}$ is the weighted data vector.

(46)

4.2.1 Discrete energy minimization

Minimizing the quadratic form

$\mathcal{E} = \mathcal{E}_d + \lambda\,\mathcal{E}_s = \mathbf{x}^T A\,\mathbf{x} - 2\,\mathbf{x}^T \mathbf{b} + c$

is equivalent to solving the following linear system:

$A\,\mathbf{x} = \mathbf{b}$
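A minimal 1-D sketch in Python (made-up data, first-order smoothness only) that assembles this quadratic form and solves $A\mathbf{x} = \mathbf{b}$:

```python
import numpy as np

def regularized_1d(d, w, lam):
    """Minimize sum_i w_i (x_i - d_i)^2 + lam * sum_i (x_{i+1} - x_i)^2.

    d: data values, w: per-sample data weights (0 where no data),
    lam: regularization strength. Solves the linear system A x = b.
    """
    n = len(d)
    A = np.diag(w.astype(float))        # data term contributes w_i on the diagonal
    b = w * d                           # weighted data vector
    for i in range(n - 1):              # first-order smoothness (x_{i+1} - x_i)^2
        A[i, i] += lam
        A[i + 1, i + 1] += lam
        A[i, i + 1] -= lam
        A[i + 1, i] -= lam
    return np.linalg.solve(A, b)

# Hypothetical scattered samples on a 10-sample grid; holes get w = 0.
d = np.zeros(10); w = np.zeros(10)
d[[0, 4, 9]] = [1.0, 3.0, 0.5]; w[[0, 4, 9]] = 1.0
print(regularized_1d(d, w, lam=1.0))    # smoothly interpolated signal
```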

(47)

4.2 Variational methods and regularization

4.2.1 Discrete energy minimization

4.2.2 Total variation

4.2.3 Bilateral solver

4.2.4 Application: Interactive colorization

(48)

4.2.2 Total variation

Today, many regularized problems are formulated using the $L_1$ norm, which is often called total variation:

$\rho(x) = |x|$

(49)

4.2.2 Total variation

The $L_1$ penalty $\rho(x) = |x|$ tends to better preserve discontinuities, but still results in a convex problem that has a globally unique solution.

(50)

4.2.2 Total variation

Hyper-Laplacian norms $\rho(x) = |x|^p$ with $p < 1$ have gained popularity.

They have an even stronger tendency to prefer large discontinuities over small ones.
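A tiny illustration (made-up residual values) of how these penalties grow:

```python
import numpy as np

def penalty(x, p):
    """rho(x) = |x|**p: p = 2 quadratic, p = 1 total variation, p < 1 hyper-Laplacian."""
    return np.abs(x) ** p

r = np.array([0.1, 1.0, 10.0])          # small, medium, large residuals
for p in (2.0, 1.0, 0.5):
    # Smaller p penalizes large jumps relatively less, so discontinuities survive.
    print(p, penalty(r, p))
```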

(51)

4.2 Variational methods and regularization

4.2.1 Discrete energy minimization

4.2.2 Total variation

4.2.3 Bilateral solver

4.2.4 Application: Interactive colorization

(52)

4.2.3 Bilateral solver

As discussed in 3.3.2, we can often get better results by looking at a larger spatial neighborhood. We can extend this idea to energy minimization. Recall the bilateral weight function:

$w(i, j, k, l) = \exp\!\left(-\,\frac{(i-k)^2 + (j-l)^2}{2\sigma_s^2} \;-\; \frac{\lVert f(i,j) - f(k,l)\rVert^2}{2\sigma_r^2}\right)$
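A direct translation of this weight into code (the σ values and test image are made-up examples):

```python
import numpy as np

def bilateral_weight(i, j, k, l, img, sigma_s=3.0, sigma_r=0.1):
    """Bilateral weight between pixels (i, j) and (k, l):
    high when the pixels are close in space AND similar in value."""
    spatial = ((i - k) ** 2 + (j - l) ** 2) / (2 * sigma_s ** 2)
    range_ = np.sum((img[i, j] - img[k, l]) ** 2) / (2 * sigma_r ** 2)
    return np.exp(-(spatial + range_))

# Hypothetical grayscale image with a vertical edge.
img = np.zeros((8, 8)); img[:, 4:] = 1.0
print(bilateral_weight(2, 2, 2, 3, img))  # same side of the edge: large weight
print(bilateral_weight(2, 3, 2, 4, img))  # across the edge: tiny weight
```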

(53)

4.2.3 Bilateral solver

A wider-neighborhood, bilaterally weighted version of nearest-neighbor smoothness penalty:

$\hat{w}(i, j, k, l) = \dfrac{w(i, j, k, l)}{\sum_{k, l} w(i, j, k, l)}$

(54)

4.2.3 Bilateral solver

The bilateral solver has been used in a number of demanding video processing and 3D reconstruction applications, including the stitching of binocular omnidirectional panoramic videos and smartphone AR (augmented reality) systems.

(55)

4.2 Variational methods and regularization

4.2.1 Discrete energy minimization

4.2.2 Total variation

4.2.3 Bilateral solver

4.2.4 Application: Interactive colorization

(56)

4.2.4 Application: Interactive colorization

A good use of edge-aware interpolation techniques is in colorization, i.e., manually adding color to a grayscale image.

(57)

4.2.4 Application: Interactive colorization

The user draws some scribbles and the system interpolates the specified chrominance values across the image.

(58)

4.2.4 Application: Interactive colorization

The interpolated chrominance values are then re-combined with the luminance channel to produce the final colorized image.

(59)

4.2.4 Application: Interactive colorization

The interpolation is performed using the locally weighted regularization introduced in 4.2.1. This approach has inspired many later algorithms.

(60)

4.3 Markov random fields

4.3.1 Conditional random fields

4.3.2 Application: Interactive segmentation

(61)

4.3.0 Markov random fields

An alternative technique is to use a probabilistic model.

By Bayes' rule, $p(\mathbf{x} \mid \mathbf{y}) = \dfrac{p(\mathbf{y} \mid \mathbf{x})\,p(\mathbf{x})}{p(\mathbf{y})}$

Taking the negative logarithm, $-\log p(\mathbf{x} \mid \mathbf{y}) = -\log p(\mathbf{y} \mid \mathbf{x}) - \log p(\mathbf{x}) + C$

so maximizing the posterior is equivalent to minimizing an energy $E(\mathbf{x}, \mathbf{y}) = E_d(\mathbf{x}, \mathbf{y}) + E_s(\mathbf{x})$

(62)

4.3.0 Markov random fields

An alternative technique is to use a probabilistic model.

$E(\mathbf{x}, \mathbf{y}) = E_d(\mathbf{x}, \mathbf{y}) + E_s(\mathbf{x})$

Input pixels: $\mathbf{y} = [\,y(0,0) \ldots y(m-1, n-1)\,]$

Output pixels: $\mathbf{x} = [\,x(0,0) \ldots x(m-1, n-1)\,]$

(63)

4.3.0 Markov random fields

Graphical model for a Markov random field:

(64)

4.3.0 Markov random fields

Binary MRFs:

These simply segment images into foreground and background regions.

$E_d(\mathbf{x}, \mathbf{y}) = \sum_{i,j} E_d\big(x(i,j),\, y(i,j)\big)$

$E_s(\mathbf{x}) = \sum_{i,j} E_s\big(x(i,j),\, x(i+1,j)\big) + E_s\big(x(i,j),\, x(i,j+1)\big)$
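For illustration, a minimal sketch that minimizes such a binary MRF energy with iterated conditional modes (ICM); ICM is just one simple greedy optimizer, not necessarily the method the slides have in mind, and the unary/pairwise costs are made-up assumptions:

```python
import numpy as np

def icm_binary_mrf(y, lam=0.5, iters=5):
    """Greedy ICM for E = sum_ij (x_ij - y_ij)^2 + lam * (label disagreements).

    y: noisy observations in [0, 1]; x: binary labels {0, 1}.
    """
    x = (y > 0.5).astype(int)                   # initialize from the data
    h, w = y.shape
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                costs = []
                for label in (0, 1):
                    data = (label - y[i, j]) ** 2
                    # Pairwise (Potts) cost: penalize disagreeing 4-neighbors.
                    smooth = sum(x[k, l] != label
                                 for k, l in [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
                                 if 0 <= k < h and 0 <= l < w)
                    costs.append(data + lam * smooth)
                x[i, j] = int(np.argmin(costs))
    return x

# Hypothetical noisy binary image.
rng = np.random.default_rng(0)
truth = np.zeros((16, 16)); truth[4:12, 4:12] = 1
y = truth + 0.4 * rng.standard_normal(truth.shape)
print(icm_binary_mrf(y))
```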

(65)

4.3.0 Markov random fields

Ordinal-valued MRFs:

The term "ordinal" indicates that the labels have an implied ordering, e.g., that higher values correspond to lighter pixels (as in a grayscale image).

(66)

4.3.0 Markov random fields

Unordered labels:

Another case with multi-valued labels where MRFs are often applied is unordered labels, such as in segmentation.

(67)

4.3 Markov random fields

4.3.1 Conditional random fields

4.3.2 Application: Interactive segmentation

(68)

4.3.1 Conditional random fields

(69)

4.3.1 Conditional random fields

In addition to minimizing a data term, the MRF needs to be modified so that the smoothness terms depend on the magnitude of the gradient between adjacent pixels.
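A hedged sketch of one common choice (the exponential form and β value are assumptions; the book's exact weighting may differ):

```python
import numpy as np

def contrast_weight(img, i, j, k, l, beta=10.0):
    """Smoothness weight between neighboring pixels (i, j) and (k, l):
    small across strong image gradients, so label discontinuities are
    cheaper exactly where the input image has edges."""
    grad2 = np.sum((img[i, j] - img[k, l]) ** 2)
    return np.exp(-beta * grad2)

img = np.zeros((8, 8)); img[:, 4:] = 1.0    # hypothetical image with an edge
print(contrast_weight(img, 2, 3, 2, 4))     # across the edge: ~0 (cheap to cut)
print(contrast_weight(img, 2, 1, 2, 2))     # flat region: 1 (expensive to cut)
```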

(70)

4.3.1 Conditional random fields

Graphical model for a conditional random field (CRF):

(71)

4.3.1 Conditional random fields

Penalty function for a conditional random field (CRF):

(72)

4.3.1 Conditional random fields

Dense conditional random fields (CRFs):

Images often contain longer-range interactions.

(73)

4.3.1 Conditional random fields

Dense conditional random fields (CRFs):

In order to model such longer-range interactions, we introduce a fully connected CRF (a.k.a. a dense CRF).

(74)

4.3.1 Conditional random fields

Dense conditional random fields (CRFs):

(75)

4.3 Markov random fields

4.3.1 Conditional random fields

4.3.2 Application: Interactive segmentation

(76)

4.3.2 Application: Interactive segmentation

The algorithm performs a binary segmentation, and the process is repeated with better region statistics.

• Detlef Ruprecht, Heinrich Muller, Image Warping with Scattered Data Interpolation, IEEE Computer Graphics and Applications, March 1995, pp37-43. • Seung-Yong Lee, Kyung-Yong