# Lecture 7 (2)

(1)

```matlab
>> Lecture 7
>>
>> -- Optimization
>>
```

Zheng-Liang Lu 421 / 499

(2)

“In my opinion, no single design is apt to be optimal for everyone.”

–Donald Norman (1935–)

(3)

### Contents

- Introduction
- Optimization Problem in Standard Form
- Linear Programming Problems
- Quadratic Programming Problems
- Unconstrained Nonlinear Programming


(4)

### Introduction

Mathematical optimization is the task of finding the optimal selection among feasible solutions with regard to specific criteria.

In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function.

The generalization of optimization theory and techniques to other formulations comprises a large area of applications.

EE: circuit layout, fabrication parameters of transistors...

CS: model parameters in machine learning...

Fin: optimal portfolio...

Economics: tax, wage rate...

· · ·

(5)

### Optimization Problem in Standard Form (1/3)

An optimization problem can be represented in the following way:

Given a function f : M → R:

(Minimization) Find x0 ∈ M such that f(x0) ≤ f(x) for all x ∈ M.

(Maximization) Find x0 ∈ M such that f(x0) ≥ f(x) for all x ∈ M.

Many real-world and theoretical problems may be modeled in this general framework.


(6)

### Optimization Problem in Standard Form (2/3)

Typically, M is some subset of the Euclidean space Rn, often specified by a set of constraints, equalities or inequalities that the members of M have to satisfy.

The domain M of f is called the search space or the choice set, while the elements of M are called feasible solutions.

The function f is called an objective function.

Aka loss function, cost function, utility function, and fitness function.

A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an optimal solution.

(7)

### Optimization Problem in Standard Form (3/3)

By convention, the standard form of an optimization problem is stated in terms of minimization.

Convex optimization, a subfield of optimization, studies the problem of minimizing convex functions over convex sets.

(You will see later.)

With recent improvements in computing and in optimization theory, convex minimization is nearly as straightforward as linear programming1.

Many optimization problems can be reformulated as convex optimization problems.

For example, the problem of maximizing a concave function f can be re-formulated equivalently as a problem of minimizing the function −f , which is convex.

1Aka 線性規劃, i.e., linear programming as taught in senior high school (grade 11).


(8)

### Classification of Optimization Problems

- Finite- vs. infinite-dimensional problems
- Unconstrained vs. constrained problems
- Convex vs. non-convex problems
- Linear vs. non-linear problems
- Continuous vs. discrete problems
- Deterministic vs. stochastic problems

(9)

### Example

Consider f(x) = x^4 − 10.5x^3 + 39x^2 − 59.5x + 30.

```matlab
>> g=@(x) polyval([1 -10.5 39 -59.5 30], x);
>> x=1:0.05:4;
>> plot(x,g(x)); grid on;
>> [s, fval]=fminunc(g,0) % unconstrained minimization of g

s =

    1.4878

fval =

   -1.8757
```
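As a side check not in the original slides, the critical points of f can be listed directly from the polynomial coefficients; comparing f at those points shows which local minimum fminunc reaches from a given starting point:

```matlab
p = [1 -10.5 39 -59.5 30];   % coefficients of f
cp = roots(polyder(p))       % critical points: where f'(x) = 0
fv = polyval(p, cp)          % f evaluated at the critical points
```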


(10)

Try: [s, fval] = fminunc(g, 5)

(11)

### Example: Utility Maximization

A consumer has a budget w = 10 and faces prices p1 = 1 and p2 = 2 for Products 1 and 2, respectively.

Let x1 and x2 be the quantities of the two products, and let u be the utility function over the two goods, given by u = x1^0.8 + x2^0.8. Then, what is the optimal (x1, x2)?


(12)

We can observe the behavior of u first.

∂u/∂xi = 0.8 xi^(−0.2) > 0 for all xi > 0.

∂²u/∂xi² = −0.16 xi^(−1.2) < 0 for all xi > 0.

So u increases, with a gradual slowdown, as x1 and x2 increase.

Formulate the problem into a standard form of optimization:

max_{x1,x2} u subject to x1 p1 + x2 p2 = 10.

Let Aeq = [p1 p2], ~x = [x1; x2], and beq = 10. Then the budget constraint can be written as Aeq · ~x = beq.

(13)

### mesh (Recap)

```matlab
[X,Y]=meshgrid(0:.5:10);
u=(X.^0.8+Y.^0.8);
LX=0:.5:10;
LY=-0.5*LX+5;           % budget line: x1 + 2*x2 = 10
uu=LX.^0.8+LY.^0.8;
mesh(X,Y,u); grid on; hold on;
plot3(LX,LY,uu);
```


(14)

[Figure: mesh surface z = x^0.8 + y^0.8 over (x, y) ∈ [0, 10]², with the utility values along the budget line overlaid.]

(15)

```matlab
>> f=@(x) -(x(1)^0.8+x(2)^0.8);  % negate u so fmincon maximizes u
>> Aeq=[1 2];
>> beq=10;
>> x0=[8 1]; % initial guess
>> [xx, fval]=fmincon(f,x0,[],[],Aeq,beq)

xx =

    9.4118    0.2941

fval =

   -6.3865
```

The maximization problem is equivalent to minimizing −u.


(16)

### Linear Programming Problems

If the objective function f and the defining functions of M are linear, then the problem is a linear optimization (linear programming) problem.

A general form of a linear programming problem is given by

That is, f(x) = c^T x and

M = {x ∈ R^n | Ax ≤ a, Bx = b, lb ≤ x ≤ ub}.
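The displayed general form did not survive extraction; consistent with the linprog argument conventions below, it can be written as:

```latex
\begin{aligned}
\min_{x \in \mathbb{R}^n}\quad & c^{T} x\\
\text{s.t.}\quad & Ax \le a,\\
& Bx = b,\\
& lb \le x \le ub.
\end{aligned}
```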

(17)

Once you have defined the matrices A, B and the vectors c, a, b, lb and ub, you can call linprog to solve the problem:

[x, fval, exitflag, output, lambda] = linprog(c, A, a, B, b, lb, ub, x0, options),

where

- c: coefficient vector of the objective
- A: matrix of inequality constraints
- a: right-hand side of the inequality constraints
- B or []: matrix of equality constraints, or no equality constraints
- b or []: right-hand side of the equality constraints, or no equality constraints
- lb, ub or []: lower/upper bounds for x, or no lower/upper bounds
- x0: initial vector for the algorithm if known; otherwise []
- options: set using the optimset function, which determines the details of the algorithm


(18)

(Continued)

- x: optimal solution
- fval: optimal value of the objective function
- exitflag: tells whether the algorithm converged or not (exitflag > 0 means convergence)
- output: a struct with the number of iterations, the algorithm used, and PCG iterations (when LargeScale is on)
- lambda: a struct containing Lagrange multipliers corresponding to the constraints

(19)

The input argument options is a structure containing several parameters that you can use with a given MATLAB optimization routine. (Try optimset('linprog')!)

For example,

```matlab
>> options=optimset('ParameterName1',value1,...
                    'ParameterName2',value2,...)
```


(20)

The following parameters and their corresponding values are frequently used with linprog:

- 'LargeScale': 'on', 'off'
- 'Simplex': 'on', 'off'
- 'Display': 'iter', 'final', 'off'
- 'MaxIter': maximum number of iterations
- 'TolFun': termination tolerance for the objective function
- 'TolX': termination tolerance for the iterates
- 'Diagnostics': 'on' or 'off'

(21)

### Example 1

Solve the following linear optimization problem using linprog.
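The problem statement itself was an image that did not survive extraction; reconstructed from the data defined in the code below, it reads:

```latex
\begin{aligned}
\min_{x_1,x_2}\quad & -2x_1 - 3x_2\\
\text{s.t.}\quad & x_1 + 2x_2 \le 8,\\
& 2x_1 + x_2 \le 10,\\
& x_2 \le 3.
\end{aligned}
```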


(22)

```matlab
c=[-2,-3]';
A=[1,2;2,1;0,1];
a=[8,10,3]';
options=optimset('LargeScale','off');
xsol=linprog(c,A,a,[],[],[],[],[],options)
```

```
Optimization terminated.

xsol =

    4.0000
    2.0000
```

(23)

### Example 2

Solve the following LP using linprog:


(24)
(25)

```matlab
clear all;
clc

A=[1,1,1,1,1,1;5,0,-3,0,1,0];
a=[10,15]';
B1=[1,2,3,0,0,0;0,1,2,3,0,0;0,0,1,2,3,0;0,0,0,1,2,3];
b1=[5,7,8,8]';
D=[3,0,0,0,-2,1;0,4,0,-2,0,3];
d=[5,7]';
lb=[-2,0,-1,-1,-5,1]';
ub=[7,2,2,3,4,10]';
c=[1,-2,3,-4,5,-6]';
B=[-B1;D]; b=[-b1;d];

[xsol,fval,exitflag,output]=linprog(c,A,a,B,b,lb,ub)
fprintf('%s %s \n', 'Algorithm Used: ',output.algorithm);
disp('============================');

options=optimset('linprog');
```


(26)

```matlab
options = optimset(options,'LargeScale','off',...
    'Simplex','on','Display','iter');
[xsol,fval,exitflag]=linprog(c,A,a,B,b,lb,ub,[],options)
fprintf('%s %s \n', 'Algorithm Used: ',output.algorithm);
fprintf('%s','Reason for termination:')
if exitflag > 0
    fprintf('%s \n',' Convergence.');
else
    fprintf('%s \n',' No convergence.');
end
```

(27)

### Example 3: Approximation of discrete Data by a Curve

Suppose the measurements of a real process over a 24-hour period are given by the following table with 14 data values:

| ti | 0 | 3 | 7 | 8 | 9 | 10 | 12 | 14 | 16 | 18 | 19 | 20 | 21 | 23 |
|----|---|---|---|---|---|----|----|----|----|----|----|----|----|----|
| ui | 3 | 5 | 5 | 4 | 3 | 6  | 7  | 6  | 6  | 11 | 11 | 10 | 8  | 6  |

The values ti represent time and the ui's are measurements.


(28)

Assuming there is a mathematical connection between the variables t and u, we would like to determine the coefficients a, b, c, d, e ∈ R of the function

u(t) = a t^4 + b t^3 + c t^2 + d t + e,

so that the value u(ti) best approximates the discrete value ui at ti, i = 1, . . . , 14, in the Chebyshev sense2.

(29)

Hence, we need to solve the Chebyshev approximation problem, which is written as

min over a, b, c, d, e of max_{i=1,...,14} |ui − (a ti^4 + b ti^3 + c ti^2 + d ti + e)|.

Reformulate it into a linear programming problem:

Objective function?

Constraints?


(30)

### Solution to Chebyshev Approximation Problem

Let f := max_{i=1,...,14} |ui − (a ti^4 + b ti^3 + c ti^2 + d ti + e)|.

Then the problem can be equivalently written as min f subject to

−(a ti^4 + b ti^3 + c ti^2 + d ti + e) − f ≤ −ui,
(a ti^4 + b ti^3 + c ti^2 + d ti + e) − f ≤ ui,

where i ∈ {1, . . . , 14}.

More specifically, [A]_{28×6} [x]_{6×1} ≤ [ũ]_{28×1} with ũ = [−u; u]. Note that [x]_{6×1} = [a, b, c, d, e, f]'.

(31)

```matlab
clear all;
clc

t=[0,3,7,8,9,10,12,14,16,18,19,20,21,23]';
u=[3,5,5,4,3,6,7,6,6,11,11,10,8,6]';
A1=[-t.^4,-t.^3,-t.^2,-t,-ones(14,1),-ones(14,1)];
A2=[t.^4,t.^3,t.^2,t,ones(14,1),-ones(14,1)];
c=zeros(6,1);
c(6)=1; % objective function coefficient (why?)
A=[A1;A2]; % inequality constraint matrix
a=[-u;u]; % right-hand side vector of inequality constraints
[xsol,fval,exitflag]=linprog(c,A,a);

plot(t,u,'r*'); hold on; grid on;
tt=0:0.5:25;
ut=xsol(1)*(tt.^4)+xsol(2)*(tt.^3)+xsol(3)*(tt.^2)+...
    xsol(4)*tt+xsol(5);
plot(tt,ut,'-k','LineWidth',2)
```


(32)

[Figure: the 14 data points (red stars) and the fitted 4th-order polynomial (black curve).]

(33)

### Exercise

randi([imin imax], m, n) generates an m-by-n matrix of random integers drawn uniformly from [imin, imax].

Use randi to generate a set of input pairs for the program in the Chebyshev approximation problem.

- Let t be a simple sequence like 0 : 1 : m.
- Let u be a sequence generated by randi.
- See the fitting result.


(34)

### Integer Programming

An integer programming problem is a mathematical optimization problem in which some or all of the variables are restricted to be integers.

It is sometimes called integer linear programming (ILP) when the objective function and the constraints (other than the integer constraints) are linear.

Note that integer programming is much harder than linear programming in general. (Why?)

(35)

Quadratic programming is a special type of mathematical optimization which optimizes (minimizes or maximizes) a quadratic function of several variables subject to linear constraints on these variables.

Let Q ∈ R^{n×n}, A ∈ R^{m×n}, B ∈ R^{l×n}, a ∈ R^m, and b ∈ R^l. Then a general form of a quadratic programming problem is given by
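The displayed formula did not survive extraction; consistent with the quadprog argument conventions below, the general form can be written as:

```latex
\begin{aligned}
\min_{x \in \mathbb{R}^n}\quad & \tfrac{1}{2}\, x^{T} Q x + q^{T} x\\
\text{s.t.}\quad & Ax \le a,\\
& Bx = b,\\
& lb \le x \le ub.
\end{aligned}
```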


(36)

The general form for calling quadprog is

[xsol, fval, exitflag, output, lambda] = quadprog(Q, q, A, a, B, b, lb, ub, x0, options),

where

- Q: Hessian of the objective function
- q: coefficient vector of the linear part of the objective function
- A or []: matrix of inequality constraints, or no inequality constraints
- a or []: right-hand side of the inequality constraints, or no inequality constraints
- B or []: matrix of equality constraints
- b or []: right-hand side of the equality constraints
- lb, ub or []: lower/upper bounds for x, or no lower/upper bounds
- x0: initial vector for the algorithm if known; otherwise []

(37)

(Continued)

- x: optimal solution
- fval: optimal value of the objective function
- exitflag: tells whether the algorithm converged or not (exitflag > 0 means convergence)
- output: a struct with the number of iterations, the algorithm used, and PCG iterations (when LargeScale is on)
- lambda: a struct containing Lagrange multipliers corresponding to the constraints


(38)

### Example 4

Try to re-formulate it.
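The example itself was an image that did not survive extraction; reconstructed from the data in the code and the plotted surface z = x^2 + 2y^2 + 2x + 3y below, it reads:

```latex
\begin{aligned}
\min_{x,y}\quad & x^2 + 2y^2 + 2x + 3y\\
\text{s.t.}\quad & x + 2y \le 8,\\
& 2x + y \le 10,\\
& y \le 3,\quad x, y \ge 0.
\end{aligned}
```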

(39)


(40)

```matlab
clear all;
clc

Q=[2,0;0,4];
q=[2,3]';
A=[1,2;2,1;0,1];
a=[8,10,3]';
lb=[0,0]';
ub=[inf;inf];

options=optimset('LargeScale','off');
[xsol,fval,exitflag,output]=...
    quadprog(Q,q,A,a,[],[],lb,ub,[],options);

fprintf('Convergence ');
if exitflag > 0
    fprintf('succeeded.\n');
    xsol
```

(41)

```matlab
else
    fprintf('failed.\n');
end
fprintf('Algorithm used: %s \n',output.algorithm);

x=-3:0.1:3;
y=-4:0.1:4;
[X,Y]=meshgrid(x,y);
Z=X.^2+2*Y.^2+2*X+3*Y;
meshc(X,Y,Z); hold on;
plot(xsol(1),xsol(2),'r*');
```


(42)

[Figure: meshc surface of z = x^2 + 2y^2 + 2x + 3y with the minimizer marked by a red star.]

(43)

### Example 5

Solve the following QP using quadprog:
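This problem statement was also lost in extraction; reconstructed from the data in the code below (with (1/2) x^T Q x + q^T x expanded), it reads:

```latex
\begin{aligned}
\min_{x}\quad & x_1^2 + 2x_2^2 + 2x_3^2 + x_1 x_2 + 2 x_2 x_3 + 4x_1 + 6x_2 + 12x_3\\
\text{s.t.}\quad & x_1 + x_2 + x_3 \ge 6,\\
& x_1 + 2x_2 - 2x_3 \le -2,\\
& x_1, x_2, x_3 \ge 0.
\end{aligned}
```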


(44)
(45)

```matlab
clear all;
clc

% Initialize
Q=[2,1,0;1,4,2;0,2,4];
q=[4,6,12];
A=[-1,-1,-1;1,2,-2];
a=[-6,-2];
lb=[0;0;0];
ub=[inf;inf;inf];

% Solve the quadratic programming problem
options=optimset('LargeScale','off');

[xsol,fval,exitflag,output]=...
    quadprog(Q,q,A,a,[],[],lb,ub,[],options);

fprintf('Convergence ');
if exitflag > 0
```


(46)

```matlab
    fprintf('succeeded.\n');
    xsol
else
    fprintf('failed.\n');
end
fprintf('Algorithm used: %s \n',output.algorithm);
```

```
Optimization terminated.
Convergence succeeded.

xsol =

    3.3333
         0
    2.6667

Algorithm used: medium-scale: active-set
```

(47)

### Curve Fitting

Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints.

Curve fitting requires a parametric model that relates the response data to the predictor data with one or more coefficients.

The result of the fitting process is an estimate of the model coefficients.


(48)

### Common Techniques

Polynomial interpolation:

1. Newton form
2. Lagrange form
3. Polynomial splines

Method of least squares

You can find more details on curve fitting at this link:

http://www.mathcs.emory.edu/~haber/math315/chap4.pdf

(49)

### Method of Least Squares

The first clear and concise exposition of the method of least squares was published by Legendre in 1805.

In 1809, Gauss published his method of calculating the orbits of celestial bodies.

The method of least squares is a standard approach to the approximate solution of overdetermined systems, i.e., sets of equations in which there are more equations than unknowns.

To obtain the coefficient estimates, the least-squares method minimizes the summed square of residuals.


(50)

### More specifically...

Let {y_i}_{i=1}^n be the observed response values and {ŷ_i}_{i=1}^n be the fitted response values.

Define the error or residual e_i = y_i − ŷ_i for i = 1, . . . , n.

Then the sum of squared errors associated with the data is given by

S = Σ_{i=1}^n e_i². (1)

The common types of least-squares fitting include linear least squares and nonlinear least squares.

(51)

### Error Distributions

When fitting data that contains random variations, there are two important assumptions that are usually made about the error:

The error exists only in the response data, not in the predictor data.

The errors are random and follow a normal distribution with zero mean and constant variance σ², given by

e_i ∼ n(0, σ²).

{e_i} is so-called independent and identically distributed, abbreviated as iid.


(52)

### Why using normal distribution?

The normal distribution is one of the probability distributions in which extreme random errors are uncommon3.

Statistical results such as confidence and prediction bounds do require normally distributed errors for their validity.

If the mean of the errors is zero, then the errors are purely random.

If not, then it might be that the model is not the right choice for your data, or that the errors are not purely random and contain systematic errors.

A constant variance in the data implies that the “spread” of errors is constant.

Data that has the same variance is sometimes said to be of equal quality.

(53)

### Linear Least Squares

A linear model is defined as an equation that is linear in the coefficients.

Suppose that you have n data points that can be modeled by a 1st-order polynomial, given by

y = ax + b.

Since e_i = y_i − (a x_i + b), (1) becomes

S = Σ_{i=1}^n (y_i − (a x_i + b))².

The least-squares fitting process minimizes this summed square of the residuals.

The coefficients a and b are determined by differentiating S with respect to each parameter and setting the result equal to zero. (Why? S is a convex quadratic in a and b, so the stationary point is the minimizer.)


(54)

Hence,

∂S/∂a = −2 Σ_{i=1}^n x_i (y_i − (a x_i + b)) = 0,

∂S/∂b = −2 Σ_{i=1}^n (y_i − (a x_i + b)) = 0.

The normal equations are defined as

a Σ_{i=1}^n x_i² + b Σ_{i=1}^n x_i = Σ_{i=1}^n x_i y_i,

a Σ_{i=1}^n x_i + n b = Σ_{i=1}^n y_i.

(55)

Solving for a and b, we have

a = (n Σ_{i=1}^n x_i y_i − Σ_{i=1}^n x_i · Σ_{i=1}^n y_i) / (n Σ_{i=1}^n x_i² − (Σ_{i=1}^n x_i)²),

b = (1/n)(Σ_{i=1}^n y_i − a Σ_{i=1}^n x_i).

In fact,

    [ Σ x_i²   Σ x_i ] [ a ]     [ Σ x_i y_i ]
    [ Σ x_i    n     ] [ b ]  =  [ Σ y_i     ]

with all sums over i = 1, . . . , n.


(56)

### Generalized Linear Least Squares

In matrix form, linear models are given by the formula

y = Xb + ε,

where y is an n-by-1 vector of responses, X is the n-by-m design matrix for the model, b is an m-by-1 vector of coefficients, and ε is an n-by-1 vector of errors.

Then the normal equations are given by (X^T X) b = X^T y, where X^T is the transpose of X. So b = (X^T X)^{-1} X^T y.
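As a quick sanity check (a minimal sketch with made-up data, not from the slides), the normal-equations solution agrees with MATLAB's backslash operator, which the examples below rely on:

```matlab
% Fit y = a*x + b to a few sample points by solving the normal equations.
x = [1; 2; 3; 4; 5];
y = [2.1; 3.9; 6.2; 8.1; 9.8];   % roughly y = 2x
X = [x, ones(size(x))];          % design matrix: columns for a and b
b_normal = (X'*X) \ (X'*y);      % solve (X^T X) b = X^T y
b_backslash = X \ y;             % least-squares solution via backslash
disp(b_normal - b_backslash)     % difference is numerically zero
```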

(57)

### Example: Drag Coefficients

Let v be the velocity of a moving object and k be a positive constant.

The drag force due to air resistance is proportional to the square of the velocity, that is, d = kv².

In a wind tunnel experiment, the velocity v can be varied by setting the speed of the fan and the drag can be measured directly.


(58)

The following sequence of commands replicates the data one might receive from a wind tunnel:

```matlab
clear all;
clc
% main
v=0:1:60;
d=.1234*v.^2;
dn=d+.4*v.*randn(size(v));
figure(1),plot(v,dn,'*',v,d,'r-'); grid on;
legend('Data','Analytic');
```

(59)


(60)

The unknown coefficient k is to be determined by the method of least squares.

The formulation is the overdetermined system

    v_1²  k = dn_1
    v_2²  k = dn_2
    ...
    v_61² k = dn_61

Recall that for any matrix A and vector b with Ax = b, x = A\b returns the least-squares solution.

```matlab
>> k=(v.^2)'\dn'
```

(61)

### Exercise: Chebyshev Approximation Problem (Revisited)

Suppose the measurements of a real process over a 24-hour period are given by the table of 14 data values used earlier.

The values ti represent time and the ui's are measurements.

Consider the polynomial u(t) = a t^4 + b t^3 + c t^2 + d t + e.

Please determine the coefficients a, b, c, d and e so that the value u(ti) best approximates the discrete value ui at ti, i = 1, . . . , 14, in the sense of least-squares error.


(62)

```matlab
clear all;
clc
% main
t=[0,3,7,8,9,10,12,14,16,18,19,20,21,23]';
u=[3,5,5,4,3,6,7,6,6,11,11,10,8,6]';
plot(t,u,'r*'); hold on; grid on;
% least squares
X=[t.^4, t.^3, t.^2, t.^1, ones(length(u),1)]; % basis
b=X\u; % least-squares solution by matlab
tt=[0:0.5:25];
ut1=b(1)*(tt.^4)+b(2)*(tt.^3)+b(3)*(tt.^2)+b(4)*tt+b(5);
plot(tt,ut1,'-g','LineWidth',2)
% chebyshev
A1=[-t.^4,-t.^3,-t.^2,-t,-ones(14,1),-ones(14,1)];
A2=[t.^4,t.^3,t.^2,t,ones(14,1),-ones(14,1)];
c=zeros(6,1);
c(6)=1; % objective function coefficient (why?)
A=[A1;A2]; % inequality constraint matrix
```

(63)

```matlab
a=[-u;u]; % right-hand side vector of inequality constraints
[xsol,fval,exitflag]=linprog(c,A,a);
ut2=xsol(1)*(tt.^4)+xsol(2)*(tt.^3)+xsol(3)*(tt.^2)+...
    xsol(4)*tt+xsol(5);
plot(tt,ut2,'-k','LineWidth',2)
% Sum of squared errors
% (note: xsol has 6 entries including f, which polyval treats as an
%  extra coefficient, hence the large second value below)
sum_square_error=[sum((polyval(b,t')-u').^2)...
    sum((polyval(xsol,t')-u').^2)]
```

```
sum_square_error =

   17.596   1.9151e+005
```


(64)
(65)

### Linear Least Squares in MATLAB

[x, resnorm, residual, exitflag, output] = lsqlin(C, d, A, b, Aeq, beq, lb, ub, x0, options) returns a structure output that contains information about the optimization, where

- C: matrix
- d: vector
- A: matrix for linear inequality constraints
- b: vector for linear inequality constraints
- Aeq: matrix for linear equality constraints
- beq: vector for linear equality constraints
- lb, ub: vectors of lower/upper bounds
- x0: initial point for x
- options: set using the optimset function, which determines the details of the algorithm


(66)

### Example

```matlab
>> C = [
    0.9501    0.7620    0.6153    0.4057
    0.2311    0.4564    0.7919    0.9354
    0.6068    0.0185    0.9218    0.9169
    0.4859    0.8214    0.7382    0.4102
    0.8912    0.4447    0.1762    0.8936];
>> d = [
    0.0578
    0.3528
    0.8131
    0.0098
    0.1388];
```

(67)

```matlab
>> A =[
    0.2027    0.2721    0.7467    0.4659
    0.1987    0.1988    0.4450    0.4186
    0.6037    0.0152    0.9318    0.8462];
>> b =[
    0.5251
    0.2026
    0.6721];
>> lb = -0.1*ones(4,1);
>> ub = 2*ones(4,1);
>> [x,resnorm,residual,exitflag,output] = ...
    lsqlin(C,d,A,b,[],[],lb,ub);
```


(68)

### Summary: Built-in Functions (1/2)

Nonlinear zero finding (equation solving):

- fzero
- fsolve

Linear least squares (of matrix problems):

- lsqlin
- lsqnonneg

(69)

### Summary: Built-in Functions (2/2)

Nonlinear minimization of functions:

- fminbnd, fmincon, fminsearch, fminunc, fseminf

Nonlinear least squares of functions:

- lsqcurvefit, lsqnonlin

Nonlinear minimization of multi-objective functions:

- fgoalattain, fminimax


(70)

```matlab
>> Lecture 8
>>
>> -- Monte Carlo Simulation
>>
```

(71)

### Contents

- Fundamental Concepts in Statistics
- Simple Random Sampling
- Monte Carlo Simulation


(72)

### Random Variables

A random variable is a function from a sample space Ω into the real numbers R.

There are lots of examples where the random variables are used.

In the experiment of tossing two dice once, a random variable X can be the sum of the numbers.

In the experiment of tossing a coin 25 times, a random variable X can be the number of heads in 25 tosses.

(73)

### Distribution Functions

With every random variable X, we associate a function called the cumulative distribution function of X.

Often we call it the cdf of X, denoted by F_X(x) = P_X(X ≤ x) for all x.

The function F_X(x) is a cdf if and only if the following three conditions hold:

1. lim_{x→−∞} F_X(x) = 0 and lim_{x→∞} F_X(x) = 1.
2. F_X(x) is a nondecreasing function of x.
3. F_X(x) is right-continuous; that is, for every number x0, lim_{x→x0⁺} F_X(x) = F_X(x0).


(74)

### Random Variable (Revisited)

A random variable X is continuous if F_X(x) is a continuous function of x.

The probability density function (pdf) of X is defined by p_X(x) = dF_X(x)/dx.

A random variable X is discrete if F_X(x) is a step function of x.

The probability mass function (pmf) of X is a similar idea to a pdf, but in the discrete sense.

Mixtures of both types also exist.

(75)


(76)
(77)

### Pseudo Random Numbers in MATLAB

rand(N) returns an N-by-N matrix containing pseudo-random values drawn from the continuous uniform distribution on the open interval (0, 1).4

randi(imax, N) returns an N-by-N matrix containing pseudo-random integers drawn from the discrete uniform distribution on the closed interval [1, imax].

randn(N) returns an N-by-N matrix containing pseudo-random values drawn from the standard normal distribution with zero mean and unit variance.5

randperm(n, k) returns a row vector containing k unique integers selected randomly from 1 to n.

4Usually denoted by X ∼ u(0, 1).

5Usually denoted by X ∼ n(0, 1).
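A quick sketch of the four generators (the specific values vary from run to run):

```matlab
rand(2)        % 2-by-2 matrix, uniform on (0,1)
randi(10, 2)   % 2-by-2 matrix, integers from 1 to 10
randn(2)       % 2-by-2 matrix, standard normal
randperm(5, 3) % row vector of 3 unique integers from 1 to 5
```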


(78)

### Exercise

1. Generate n values from a continuous uniform distribution on the interval [a, b] by extending rand.
2. Generate n values from a normal distribution with mean mu and standard deviation sig by extending randn.

(79)

```matlab
function rv=rand_general(a,b,n) % uniform on [a,b] with n sampling points
rv=a+(b-a)*rand(n,1);
```

```matlab
function rv=randn_general(mu,sig,n) % mean mu, std sig, n sampling points
rv = mu + sig*randn(n,1);
```

