Algorithms and methods used by Solver

The Microsoft Office Excel Solver tool uses several algorithms to find optimal solutions.

The GRG Nonlinear Solving Method for nonlinear optimization uses the Generalized Reduced Gradient (GRG2) code, which was developed by Leon Lasdon, University of Texas at Austin, and Alan Waren, Cleveland State University, and enhanced by Frontline Systems, Inc.

The Simplex LP Solving Method for linear programming uses the Simplex and dual Simplex methods with bounds on the variables; problems with integer constraints are handled by the branch-and-bound method, as implemented by John Watson and Daniel Fylstra, Frontline Systems, Inc.

The Evolutionary Solving Method for non-smooth optimization uses a variety of genetic algorithm and local search methods, implemented by several individuals at Frontline Systems, Inc.

Nonlinear Optimization

A model in which the objective function and all of the constraints (other than integer constraints) are smooth nonlinear functions of the decision variables is called a nonlinear programming (NLP) or nonlinear optimization problem. Such problems are intrinsically more difficult to solve than linear programming (LP) problems. They may be convex or non-convex, and an NLP Solver must compute or approximate derivatives of the problem functions many times during the course of the optimization. Since a non-convex NLP may have multiple feasible regions and multiple locally optimal points within such regions, there is no simple or fast way to determine with certainty that the problem is infeasible, that the objective function is unbounded, or that an optimal solution is the “global optimum” across all feasible regions.
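The effect of multiple locally optimal points can be seen with a small sketch in Python (assuming SciPy is installed; this illustrates the general behavior of gradient-based solvers, not Solver's own code). The same non-convex objective, minimized from two different starting points, converges to two different local minima:

```python
# Illustration of local optima in a non-convex problem (not Excel Solver itself).
# f has two local minima, approximately at x = -1.30 and x = +1.13;
# a gradient-based method returns whichever one its starting point leads to.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0]**4 - 3 * x[0]**2 + x[0]

res_a = minimize(f, x0=[-2.0])  # starts in the left basin
res_b = minimize(f, x0=[+2.0])  # starts in the right basin

print(res_a.x, res_a.fun)  # left local minimum (here also the global one)
print(res_b.x, res_b.fun)  # right local minimum, with a higher objective value
```

Neither run can tell, on its own, that the point it found is or is not the global optimum; that is exactly the difficulty the paragraph above describes.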

The GRG Nonlinear Solving method uses the Generalized Reduced Gradient method as implemented in Lasdon and Waren’s GRG2 code. The GRG method can be viewed as a nonlinear extension of the Simplex method, which selects a basis, determines a search direction, and performs a line search on each major iteration – solving systems of nonlinear equations at each step to maintain feasibility. Other methods for nonlinear optimization include Sequential Quadratic Programming (SQP) and Interior Point or Barrier methods.
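SciPy does not ship the GRG2 code, but its SLSQP method (Sequential Least Squares Programming, an SQP variant mentioned above) follows the same overall workflow: a smooth objective, smooth constraints, and derivative-driven iterations. A minimal sketch, assuming SciPy is installed; the objective and constraint here are illustrative choices, not from the source:

```python
# Constrained smooth nonlinear problem solved with a gradient-based method.
# minimize (x - 2)^2 + (y - 1)^2  subject to  x^2 + y^2 <= 1
# The optimum is the projection of (2, 1) onto the unit disk:
# (2, 1) / sqrt(5) ≈ (0.894, 0.447).
import numpy as np
from scipy.optimize import minimize

def objective(v):
    x, y = v
    return (x - 2)**2 + (y - 1)**2

# SciPy's "ineq" convention requires g(v) >= 0, so encode x^2 + y^2 <= 1 as:
cons = {"type": "ineq", "fun": lambda v: 1 - v[0]**2 - v[1]**2}

res = minimize(objective, x0=[0.0, 0.0], method="SLSQP", constraints=[cons])
print(res.x)  # close to (0.894, 0.447)
```

Like GRG, the method maintains feasibility with respect to the constraints while using derivative information to choose each search direction.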

Smooth Nonlinear Functions

A nonlinear function is any function of the variables that is not linear, i.e. which cannot be written in the algebraic form:

a1x1 + a2x2 + ... + anxn

Examples are =1/C1, =LOG(C1), and =C1^2, where C1 is a decision variable. All of these are called continuous functions, because their graphs are curved but contain no “breaks.” =IF(C1>10,D1,2*D1) is also a nonlinear function, but it is “worse” (from Solver’s viewpoint) because it is discontinuous: Its graph contains a “break” at C1=10 where the function value jumps from D1 to 2*D1. At this break, the rate of change (i.e. the derivative) of the function is undefined. The GRG Nonlinear Solving method relies on derivatives to seek improved solutions, so it may have trouble with a Solver model containing functions like =IF(C1>10,D1,2*D1).
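The derivative trouble at the break can be made concrete with a short Python sketch (a hand-rolled mimic of the worksheet formula, with D1 fixed at an arbitrary value of 5 for illustration):

```python
# Mimic of the worksheet formula =IF(C1>10, D1, 2*D1), with D1 fixed at 5.
def f(c1, d1=5.0):
    return d1 if c1 > 10 else 2 * d1

# Central finite difference: a reasonable derivative estimate away from the
# break, but it blows up when the two sample points straddle the jump.
def fd(fun, x, h=1e-6):
    return (fun(x + h) - fun(x - h)) / (2 * h)

print(fd(f, 5.0))   # 0.0 -- the function is flat on either side of the break
print(fd(f, 10.0))  # enormous magnitude: the difference spans the jump
```

A derivative-based solver sampling near C1=10 sees exactly this kind of meaningless, step-size-dependent value, which is why such formulas undermine the GRG method.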

If the graph of the function’s derivative also contains no breaks, then the original function is called a smooth function. If it does contain breaks, then the original function is non-smooth. Every discontinuous function is also non-smooth. An example of a continuous function that is non-smooth is =ABS(C1) – its graph is an unbroken “V” shape, but the graph of its derivative contains a break, jumping from –1 to +1 at C1=0. The GRG Nonlinear Solving method uses an approximation of second order derivatives to make faster progress, and to test whether the optimal solution has been found; it may have some trouble with functions such as =ABS(C1).

Convex, Concave and Non-Convex Smooth Functions

A general nonlinear function of even one variable may be convex, concave or non-convex. A function can be convex but non-smooth: =ABS(C1) with its V shape is an example. A function can also be smooth but non-convex: =SIN(C1) is an example. But the “best” nonlinear functions, from Solver’s point of view, are both smooth and convex (concave for the objective if you are maximizing).

If a smooth function’s second derivative is always nonnegative, it is a convex function; if its second derivative is always nonpositive, it is a concave function. This property extends to any number of “dimensions” or variables, where the second derivative becomes the Hessian and “nonnegative” becomes “positive semidefinite.”
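In several variables, the convexity test can be carried out by checking the eigenvalues of the Hessian. A minimal NumPy sketch, using the illustrative function f(x, y) = x² + xy + y², whose Hessian is constant:

```python
# Convexity check for f(x, y) = x^2 + x*y + y^2 via its (constant) Hessian.
# f_xx = 2, f_xy = f_yx = 1, f_yy = 2.
import numpy as np

H = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# A symmetric matrix is positive semidefinite iff all eigenvalues are >= 0.
eigvals = np.linalg.eigvalsh(H)   # returns eigenvalues in ascending order
print(eigvals)                    # [1. 3.]
print(bool(np.all(eigvals >= 0))) # True -> positive semidefinite -> convex
```

Since both eigenvalues are positive, the Hessian is positive (semi)definite everywhere, so this f is convex, the multivariable analogue of a nonnegative second derivative.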