- •The “Solver Parameters” dialog box in Excel 2010
- •About Solver http://www.solver.com/excel2010/solverhelp.htm
- •Solver sample worksheets
- •Define and solve a problem
- •Understanding Solver Results messages
- •If you receive the message “Solver could not find a feasible solution,” read the topic Understanding the Feasibility Report.
- •Understanding the Linearity Report
- •Integer Constraints and the Integer Optimality Tolerance
- •Show Solver trial solutions
- •Pause or stop Solver
- •Problems with Poorly Scaled Models
- •Limitations on Smooth Nonlinear Optimization
- •Multistart Methods for Global Optimization
- •GRG Nonlinear Solving Method Stopping Conditions
- •Evolutionary Solving Method Stopping Conditions
- •Evaluating a Solution Found by the Evolutionary Solving Method
- •Create Solver reports
- •Interpreting the Answer Report
- •Interpreting the Population Report
- •Interpreting the Sensitivity Report
- •Interpreting Dual Values
- •Interpreting Range Information
- •Interpreting the Limits Report
- •Understanding the Linearity Report
- •Understanding the Feasibility Report
- •Algorithms and methods used by Solver
- •Nonlinear Optimization
- •Smooth Nonlinear Functions
- •Linear Programming
- •Integer Programming
- •Non-Smooth Optimization
- •Genetic and Evolutionary Algorithms
Non-Smooth Optimization
The most difficult type of optimization problem to solve is a non-smooth problem (NSP). Such a problem may not only have multiple feasible regions and multiple locally optimal points within each region, but also, because some of its functions are non-smooth or even discontinuous, derivative information generally cannot be used to determine the direction in which the function is increasing (or decreasing). In other words, the situation at one possible solution gives very little information about where to look for a better solution.
In all but the simplest problems, it is impractical to exhaustively enumerate all of the possible solutions and pick the best one, even on a fast computer. Hence, most methods rely on some sort of controlled random search, or sampling of possible solutions – combined with deterministic (non-random) methods for exploring the search space.
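The idea of controlled random sampling can be illustrated with a small sketch. The objective function below is hypothetical (not from the source): its `abs()` and floor-division terms make it non-smooth and discontinuous, so derivative information is useless, yet simply evaluating many random candidate points still locates a good solution.

```python
import random

# Hypothetical non-smooth objective: the abs() and floor-division terms make
# it non-differentiable (at x = 3) and discontinuous (at even integers), so
# gradient-based methods cannot be applied.
def objective(x):
    return abs(x - 3) + (x // 2) * 1.5

def random_search(lo, hi, samples=10_000, seed=42):
    """Controlled random sampling: evaluate many candidate points drawn
    uniformly from [lo, hi] and keep the best one found."""
    rng = random.Random(seed)          # fixed seed -> reproducible runs
    best_x, best_f = lo, objective(lo)
    for _ in range(samples):
        x = rng.uniform(lo, hi)
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

x, f = random_search(0.0, 10.0)
```

On this toy function the best value is approached just below the discontinuity at x = 2, which a sampling method finds easily even though no gradient exists there.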
Genetic and Evolutionary Algorithms
The Evolutionary Solving method uses genetic and evolutionary algorithms to seek “good” solutions for non-smooth optimization problems. An evolutionary algorithm for optimization is different from “classical” optimization methods in several ways. First, it relies in part on random sampling. This makes it a nondeterministic method, which may yield different solutions on different runs. (To obtain the same solution on each run, you can set a Random Seed option for the Evolutionary Solving method.)
Second, where most classical optimization methods maintain a single best solution found so far, an evolutionary algorithm maintains a population of candidate solutions. Only one (or a few, with equivalent objectives) of these is “best,” but the other members of the population are “sample points” in other regions of the search space, where a better solution may later be found. The use of a population of solutions helps the evolutionary algorithm avoid becoming “trapped” at a local optimum, when an even better optimum may be found outside the vicinity of the current solution.
Third – inspired by the role of mutation of an organism’s DNA in natural evolution – an evolutionary algorithm periodically makes random changes or mutations in one or more members of the current population, yielding a new candidate solution (which may be better or worse than existing population members). There are many possible ways to perform a “mutation,” and the Evolutionary Solver actually employs five different mutation strategies. The result of a mutation may be an infeasible solution, and the Evolutionary Solver attempts to “repair” such a solution to make it feasible; this is sometimes, but not always, successful.
Fourth – inspired by the role of sexual reproduction in the evolution of living things – an evolutionary algorithm attempts to combine elements of existing solutions in order to create a new solution, with some of the features of each “parent.” The elements (e.g. decision variable values) of existing solutions are combined in a crossover operation, inspired by the crossover of DNA strands that occurs in reproduction of biological organisms. As with mutation, there are many possible ways to perform a “crossover” operation – some much better than others – and the Evolutionary Solver actually employs multiple variations of four different crossover strategies.
Fifth – inspired by the role of natural selection in evolution – an evolutionary algorithm performs a selection process in which the “most fit” members of the population survive, and the “least fit” members are eliminated. In a constrained optimization problem, the notion of “fitness” depends partly on whether a solution is feasible (i.e. whether it satisfies all of the constraints), and partly on its objective function value. The selection process is the step that guides the evolutionary algorithm towards ever-better solutions.
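The five ideas above (random sampling, a population of candidates, mutation, crossover, and fitness-based selection) can be sketched in a few lines. This is an illustrative toy with a made-up objective and constraint, not the Evolutionary Solver's actual implementation; the infeasibility penalty is a simple stand-in for the "repair" step described above.

```python
import random

def fitness(x):
    """Toy objective to minimize (lower is fitter); the constraint
    x[0] + x[1] <= 10 is handled with a penalty term."""
    obj = abs(x[0] - 3) + abs(x[1] - 4)
    penalty = max(0.0, x[0] + x[1] - 10) * 100  # infeasibility penalty
    return obj + penalty

def evolve(pop_size=30, generations=200, seed=7):
    rng = random.Random(seed)  # fixed seed -> same solution on each run
    # Initial population: random candidate solutions (two decision variables)
    pop = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the "most fit" half survives, the rest is eliminated
        pop.sort(key=fitness)
        pop = pop[: pop_size // 2]
        children = []
        while len(pop) + len(children) < pop_size:
            # Crossover: combine decision-variable values of two parents
            a, b = rng.sample(pop, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]
            # Mutation: occasionally make a random change to one variable
            if rng.random() < 0.3:
                i = rng.randrange(len(child))
                child[i] += rng.gauss(0, 0.5)
            children.append(child)
        pop += children
    return min(pop, key=fitness)

best = evolve()
```

Note how the population, not a single incumbent solution, carries the search: crossover recombines sample points from different regions, while mutation injects the randomness that keeps the algorithm from stalling at a local optimum.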
For more information about these and other solution algorithms, advice on building effective Solver models, and solving larger-scale problems, contact:
Frontline Systems, Inc.
P.O. Box 4288
Incline Village, NV 89450-4288
(775) 831-0300
Web site: http://www.solver.com
E-mail: info@solver.com
Portions of the Microsoft Office Excel Solver program code are copyright 1990, 1991, 1992, 1995, and 2008 by Frontline Systems, Inc. Portions are copyright 1989 by Optimal Methods, Inc.
