
CHAPTER 2 LITERATURE REVIEW

2.1.3 Load Demand Modeling

Load demand is modeled with a Gaussian distribution, and the variance is estimated for each day. The Gaussian distribution, often called the normal distribution, is the most important distribution in statistics. Its standard density form is given by:

f(x) = (1 / (σ√(2π))) exp(−(x − µ)² / (2σ²))

where the parameter µ is a location parameter, identical to the mean value, and σ is the standard deviation. When µ = 0 and σ = 1, the result is the standard normal distribution. It is usually adequate to work with this standard form, because µ and σ may be treated as shift and scale parameters, respectively.

The Central Limit Theorem indicates that if a large number of random variables are added together, the distribution of their sum is, under fairly general conditions, approximately normal. The importance of this result comes from the fact that many random quantities in real life can be expressed as the sum of a large number of random variables, so by the Central Limit Theorem we can argue that the distribution of such a sum should be close to normal.
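The effect can be checked numerically. The sketch below (illustrative only; the choice of 48 terms and 2000 samples is arbitrary) sums uniform random variables, whose sum should be approximately normal with mean n/2 and variance n/12:

```python
import random
import statistics

def clt_samples(n_terms=48, n_samples=2000, seed=1):
    """Draw n_samples sums, each of n_terms Uniform(0, 1) variables.

    By the Central Limit Theorem the sums are approximately normal,
    with mean n_terms/2 and variance n_terms/12.
    """
    rng = random.Random(seed)
    return [sum(rng.random() for _ in range(n_terms))
            for _ in range(n_samples)]

sums = clt_samples()
print(statistics.mean(sums))   # close to 48/2 = 24
print(statistics.stdev(sums))  # close to sqrt(48/12) = 2
```

The individual terms are uniform, not normal, yet the empirical mean and standard deviation of the sums match the normal prediction closely.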

To proceed with load variance modeling, the hourly historical data are first fitted by curve fitting; the values obtained from the fit are then used to generate random numbers with the estimated load variance.
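A minimal sketch of that procedure follows. The hourly load values and the 14-day window are hypothetical, and estimating µ and σ with `statistics.mean`/`statistics.stdev` stands in for whatever curve-fitting tool is actually used:

```python
import random
import statistics

# Hypothetical load history (MW) for one hour of the day across 14 days.
history_mw = [52.1, 49.8, 51.3, 50.7, 48.9, 53.2, 50.1,
              51.8, 49.5, 52.6, 50.9, 48.7, 51.1, 50.4]

# Fit the Gaussian model: estimate the location (mean) and the
# standard deviation from the historical data.
mu = statistics.mean(history_mw)
sigma = statistics.stdev(history_mw)

# Generate random load scenarios from the fitted N(mu, sigma^2) model.
rng = random.Random(7)
scenarios = [rng.gauss(mu, sigma) for _ in range(1000)]
print(round(mu, 2), round(sigma, 2))
```

Repeating the fit per hour and per day yields the day-by-day variance estimates described above.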

2.2 Optimization Problem, Classical Numerical Methods, and Deterministic Approaches

Optimization involves searching for the optimal values of one or more decision variables that best satisfy the objectives without violating the constraints. An optimization problem may have more than one solution, and not all of them are global optima: depending on the defined objective function, there may be several local optima. As an example, the function f(x) in Figure 10 has local maxima at x1, x2, and x3. A point x is a local maximum if f(x) ≥ f(x ± ε) as ε → 0, i.e., any small increase or decrease of x does not improve the objective; at such a point the first-order condition f′(x) = 0 holds. Among these points, x2 yields the highest overall value and is therefore the global optimum, while x1 and x3 remain only local optima.

Unlike this simple example, it is frequently difficult to determine whether a solution is the global optimum or only a local optimum, because of the complexity of the solution space. When every objective function contributes to the decision variables, the problem space becomes multidimensional and the objective functions are often discontinuous [17].
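The situation in Figure 10 can be imitated with a toy objective (the function below is an invented stand-in, not the thesis objective) that has three bumps, with the middle one tallest. Dense grid evaluation locates the global maximum:

```python
import math

def f(x):
    # Three bumps: local maxima near x1 = 1, x2 = 3, x3 = 5;
    # the middle bump is the tallest, so x2 is the global maximum.
    return (math.exp(-(x - 1) ** 2)
            + 2 * math.exp(-(x - 3) ** 2)
            + math.exp(-(x - 5) ** 2))

# Dense grid evaluation over the opportunity set [0, 6].
grid = [i / 1000 for i in range(6001)]
best_x = max(grid, key=f)
print(best_x)  # 3.0 -- the global optimum, not x1 or x3
```

Grid evaluation finds the global optimum here only because the problem is one-dimensional; the cost of such exhaustive evaluation grows exponentially with the number of decision variables.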

Figure 10. Global and Local Optima Illustration.

Classical numerical methods are commonly derived from iterative search algorithms that improve an initial guess of the solution according to a deterministic rule. Mathematical Programming methods, for example, apply such deterministic rules to objectives such as financial optimization problems, while also handling constraints that contain both equalities and inequalities.
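The "iterative improvement by a deterministic rule" idea can be sketched with gradient descent on a simple convex function (the step size and iteration count below are arbitrary illustrative choices):

```python
def grad_descent(df, x0, lr=0.1, steps=200):
    """Iteratively improve the initial guess x0 with the
    deterministic update rule x <- x - lr * f'(x)."""
    x = x0
    for _ in range(steps):
        x = x - lr * df(x)
    return x

# Minimize f(x) = (x - 2)^2, whose derivative is f'(x) = 2(x - 2).
x_opt = grad_descent(lambda x: 2 * (x - 2), x0=10.0)
print(x_opt)  # converges to the minimizer x = 2
```

Because the rule is deterministic, the same initial guess always produces the same trajectory and the same answer, a property discussed further below.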

The following paragraphs describe several methods suited to this kind of problem.

Dynamic Programming is more a general concept than a specific algorithm. It can be applied to problems with a temporal structure, for example multi-period financial problems. The idea is to separate the problem into a sequence of subproblems: the last subproblem, stage A, is solved first; its optimal solution is then used to solve stage A − 1, and the procedure works backwards until all subproblems of the original problem have been solved.
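Backward induction over stages can be sketched with a tiny staged shortest-path problem; the edge costs below are invented for illustration:

```python
# Edge costs between consecutive stages: edges[t][i][j] is the cost of
# moving from node i at stage t to node j at stage t + 1 (toy numbers).
edges = [
    [[2, 5], [4, 1]],   # stage 0 -> stage 1
    [[3, 6], [2, 2]],   # stage 1 -> stage 2
]
terminal = [1, 4]       # cost of ending at each node of the last stage

# Backward induction: value[i] = cheapest cost-to-go from node i.
# The last stage is solved first, then the recursion walks backwards.
value = terminal
for t in reversed(range(len(edges))):
    value = [min(edges[t][i][j] + value[j] for j in range(len(value)))
             for i in range(len(edges[t]))]

print(value)  # optimal cost-to-go from each node of the first stage
```

Each stage is solved only once, and every earlier stage reuses the already-computed optimal costs of the later stages.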

Linear Programming is suitable for linear optimization problems in which the objective function is subject to both equality and inequality constraints. It is a basic problem-solving tool used in various fields such as engineering, economics, and management. In standard linear programming the data are usually treated as certain, although they may also be random parameters.

The Simplex Method is the most commonly applied solution technique: slack variables are added to transform the inequalities into equalities, and the search then proceeds from an initial guess until the optimum is obtained. The method works quite efficiently for many optimization problems, although its worst-case computational complexity is exponential. To cope with uncertain random parameters, statistical analysis can be applied; in some situations the uncertain data parameters should be incorporated into the model.
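A full simplex tableau is beyond a short sketch, but the key fact it exploits — that an LP optimum, when it exists, lies at a vertex of the feasible polytope — can be shown by brute-force vertex enumeration on an invented two-variable LP:

```python
from itertools import combinations

# Illustrative LP: maximize 3x + 2y subject to
#   x + y <= 4,  x <= 3,  x >= 0,  y >= 0.
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
cons = [(1, 1, 4), (1, 0, 3), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection of the two boundary lines a*x + b*y = c."""
    (a1, b1, d1), (a2, b2, d2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:          # parallel boundary lines
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)

# An LP optimum is attained at a vertex of the feasible polygon,
# so in 2-D it suffices to enumerate pairwise intersections.
vertices = []
for c1, c2 in combinations(cons, 2):
    p = intersect(c1, c2)
    if p is not None and feasible(p):
        vertices.append(p)

best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best)  # (3.0, 1.0), objective value 11
```

The simplex method visits the same vertices, but moves between adjacent ones along improving edges instead of enumerating them all, which is why it scales to many variables.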

Mixed integer programming problems are those in which several or all of the decision variables are only allowed to take integer values. This is typically required in a range of real-world allocation and planning problems where the discrete variables represent quantities, such as the number of individual shares to be held, the number of pipelines needed, or the number of electricity generators to be installed; all of these require integer values in the solution.
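The generator-installation case can be sketched with an invented sizing problem (unit sizes, costs, and the demand figure are all hypothetical), solved here by brute-force enumeration since the integer grid is tiny:

```python
from itertools import product

# Hypothetical sizing problem: install n1 units of 5 MW (cost 4 each)
# and n2 units of 3 MW (cost 2.5 each) to cover at least 19 MW of
# demand at minimum cost. n1 and n2 must be integers (a small MIP).
best = None
for n1, n2 in product(range(10), repeat=2):
    if 5 * n1 + 3 * n2 >= 19:               # capacity constraint
        cost = 4 * n1 + 2.5 * n2
        if best is None or cost < best[0]:
            best = (cost, n1, n2)
print(best)  # (15.5, 2, 3): two large and three small units
```

Note that the rounded continuous solution is not optimal here; the integrality requirement genuinely changes the answer, which is why MIP needs dedicated techniques such as branch and bound for realistic sizes.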

Many optimization problems in scientific and engineering applications involve both nonlinear system dynamics and discrete decisions that affect the result. Mixed-integer nonlinear programming (MINLP) is one of the most general modeling frameworks in optimization, including both nonlinear programming (NLP) and mixed-integer linear programming (MILP) as sub-problems.

Mixed-integer nonlinear programming problems combine the combinatorial difficulty of optimizing over discrete variable sets with the challenges of handling nonlinear functions. It is well known that MINLP problems are NP-hard, because they generalize MILP problems, which are NP-hard themselves. The most basic form of an MINLP problem, in algebraic form, is as follows:

min z = f(x, y) (11)

subject to:

g(x, y) ≤ 0 (12)

x ∈ X, y ∈ Y integer (13)
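A toy instance of the form (11)–(13) can be solved by decomposition into its two sub-problems: enumerate the integer variable y, and for each fixed y solve the remaining continuous NLP in x (here, for simplicity, by a fine grid search). The objective, constraint, and variable ranges below are invented for illustration:

```python
# Toy MINLP: minimize z = f(x, y) = (x - 1.3)^2 + y
# subject to g(x, y) = x - y <= 0,
# with x in [0, 5] continuous and y in {0, ..., 5} integer.
def f(x, y):
    return (x - 1.3) ** 2 + y

best = None
for y in range(6):                          # integer sub-problem
    xs = [i / 1000 for i in range(5001)]    # continuous sub-problem,
    x, z = min(((x, f(x, y)) for x in xs    # solved by grid search
                if x - y <= 0),
               key=lambda t: t[1])
    if best is None or z < best[0]:
        best = (z, x, y)
print(best)  # z = 1.09 at x = 1.0, y = 1
```

With y = 1, the constraint pins x at 1 rather than the unconstrained minimizer 1.3, showing how the discrete and continuous parts interact; practical MINLP solvers replace the enumeration with branch and bound and the grid with proper NLP solves.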

Quadratic Programming can be applied when both the equality and inequality constraints are linear and the objective function has a quadratic form. The solution method is basically similar to the Simplex Method, but it is capable of handling the quadratic case.
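For the unconstrained quadratic case, the minimum is found by setting the gradient to zero, which yields a linear system. A minimal invented instance:

```python
# Minimal unconstrained QP sketch: minimize
#   f(x, y) = x^2 + x*y + y^2 - x - y.
# Setting the gradient to zero gives the linear system
#   2x + y = 1
#   x + 2y = 1
# solved here directly by Cramer's rule.
a, b, c, d = 2, 1, 1, 2     # coefficient matrix rows
e, g = 1, 1                 # right-hand side
det = a * d - b * c
x = (e * d - b * g) / det
y = (a * g - e * c) / det
print(x, y)  # both equal 1/3
```

Adding linear constraints turns this into a full QP, where active-set methods handle the inequalities in a simplex-like fashion while each working set still reduces to a linear solve like the one above.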

When an optimization problem involves uncertain data in the objective function, stochastic methods can be applied: statistical or numerical approaches in which certain probabilities are estimated and assessed.

Other examples of mathematical programming are integer programming, nonlinear programming, and binary programming, with many kinds of algorithms available for obtaining optimal solutions. To conduct optimization modeling, the structure of the model is chosen according to the specific problem; for example, mechanical optimization usually has strict boundaries and specific limits that the decision variables must not exceed.

Classical optimization methods fall into two categories. The first category is based on full search, i.e., complete enumeration: checking every candidate solution. Methods such as branch and bound cover as much of the feasible region as possible and eliminate candidates that can already be identified as bad. However, even after eliminating some candidates, the number remaining may still exceed computational capacity, although the quantity of solutions to organize is at least finite. The second type of classical optimization method is derived from differential calculus: a first-order method sets the derivative of the objective function with respect to the independent variables to zero and solves for the stationary points.

The underlying presumption is that there is only one best solution, and that it can be reached directly from the initial guess. Optimization progress is generally driven by deterministic numerical rules. This means that, given the same initial guess values, repeated runs will always produce the same output; iterations proceeding from the same deterministically chosen initial guess yield the same result, and the method cannot assess whether the result obtained is the global optimum or only a local optimum. The function in Figure 10 illustrates this deterministic behavior: if the initial guess for x is close to the local optimum x1, or to the local optimum x3, a classical numerical procedure will tend to find the local maximum closest to the initial guess, so the global optimum x2 remains difficult to discover.

In practice, this behavior of deterministic methods, which search directly toward the optimum nearest the starting point, becomes an important issue, in particular when the objective has many local optima that lie far from the global optimum but close to the initial or starting guess. Small changes in the objective function can then produce greatly different values of the decision variables.

Another common option is Monte Carlo (MC) search. An extensively large number of random values of the relevant decision variables are generated and evaluated in the objective function. With a sufficiently large number of independent guesses, such an approach can identify the optimum value, or at least the regions in which it is likely or unlikely to be found. Compared with classical numerical methods, its main limitations are the a priori availability of a suitable random number generator and the time needed to carry out an adequately large number of attempts. It can therefore be applied to reduce the search space, which can afterwards be explored with classical numerical methods. Its main drawback, however, is that it can be quite inefficient and inaccurate: an apparently important part of the opportunity set may be identified far away from the real optimum, and further search in that area is simply time-consuming.
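A Monte Carlo search over the same kind of multimodal objective used earlier (again an invented test function, with its global maximum near x = 3) looks like this:

```python
import math
import random

def f(x):
    # Invented multimodal objective; global maximum near x = 3.
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 3) ** 2)

# Monte Carlo search: evaluate the objective at many independent
# uniform random guesses over [0, 6] and keep the best one found.
rng = random.Random(42)
guesses = [rng.uniform(0, 6) for _ in range(20000)]
best_x = max(guesses, key=f)
print(best_x)  # near 3, accurate only up to the density of guesses
```

Unlike a deterministic descent from a single starting point, the random guesses cover the whole opportunity set, so the search cannot be trapped by the local bump near x = 1; the price is the large number of evaluations and the limited final accuracy.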

Heuristic optimization techniques and heuristic search methods also incorporate stochastic elements. Unlike Monte Carlo search, they have a procedure that guides the search towards promising parts of the feasible region, thereby integrating the advantages of the previously presented approaches. Like the numerical methods, they converge towards the optimum through iterative search; however, they are less likely to become trapped in a local optimum rather than the global one, they are highly flexible, and they are less limited, or may be entirely unrestricted, with respect to certain forms of constraints.

Properly applied to the objective function at hand, these heuristics are designed to solve optimization problems by repeatedly generating and examining new solutions. Such optimization techniques are therefore suited to problems that can be described by a well-defined model and objective function.
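Simulated annealing is a standard example of such a guided stochastic search. The sketch below (the objective, cooling schedule, and step size are all illustrative choices, not the thesis's method) repeatedly generates a perturbed candidate, always accepts improvements, and sometimes accepts worse moves with a probability that shrinks as the "temperature" cools, which allows escapes from local optima:

```python
import math
import random

def f(x):
    # Invented multimodal objective; global maximum near x = 3.
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 3) ** 2)

rng = random.Random(0)
x, temp = rng.uniform(0, 6), 1.0
best_x = x
for _ in range(20000):
    # Generate a new candidate near the current solution, kept in [0, 6].
    cand = min(6.0, max(0.0, x + rng.gauss(0, 0.3)))
    # Accept improvements; accept worse moves with probability
    # exp(delta / temp), which fades as the temperature drops.
    if f(cand) > f(x) or rng.random() < math.exp((f(cand) - f(x)) / temp):
        x = cand
    if f(x) > f(best_x):
        best_x = x
    temp = max(1e-3, temp * 0.999)   # geometric cooling schedule
print(best_x)
```

Early on, the high temperature keeps the search exploratory like Monte Carlo; as it cools, the acceptance rule behaves like a deterministic improvement method, combining the strengths of both families as described above.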
