Once you have set up Outcome Optimization, first by Defining Outcome Payoffs and then by using the Optimization Specs to specify the parameters to vary between a minimum and a maximum, you run the optimization by clicking O-Run in the Run Toolbar or by selecting Run Optimization from the Model Menu.
Performing an optimization requires that the model be run multiple times; the sequence of these runs depends on the optimization method chosen. Goal-seeking methods such as Powell and Stepper first assess the effect that changing each parameter in isolation has on the payoff, then attempt to change all the parameters together, searching for the biggest payoff in the smallest number of simulations. Because each assessment, and every step, requires a complete simulation, these algorithms reuse the parameter-by-parameter measurements as much as possible and choose step sizes that are big enough that the measurements do not have to be repeated too often (but small enough that overstepping is unlikely). The Grid method breaks the parameter space down into a grid and evaluates the payoff at every point on it, while the Differential Evolution method uses a genetic algorithm to search for a family of parameter values that maximize or minimize a number of different criteria.
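Of these, the Grid method is the simplest to picture: it evaluates the payoff once at every point of a regular grid laid over the parameter ranges and keeps the best point found. The minimal Python sketch below illustrates that idea only; the function simulate_payoff, the parameter names, and the grid values are hypothetical stand-ins for a full simulation and for the ranges in the Optimization Specs.

    import itertools

    def simulate_payoff(days_fishing, boats):
        # Hypothetical stand-in for one complete simulation run.
        return -(days_fishing - 95) ** 2 - (boats - 12) ** 2

    # Hypothetical grid values between each parameter's minimum and maximum.
    days_values = range(80, 121, 5)   # 80, 85, ..., 120
    boat_values = range(8, 17)        # 8, 9, ..., 16

    # Every grid point costs one complete simulation; the best result is kept.
    best = max(itertools.product(days_values, boat_values),
               key=lambda point: simulate_payoff(*point))
    print(best, simulate_payoff(*best))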
Regardless of the optimization method chosen, whether for single-criterion (Powell, Grid, Stepper) or multicriteria (Differential Evolution) optimization, the results are checked every time the model is run. Any run that improves the payoff, or that dominates other runs, is kept in favor of the previous best run or runs. If you stop the optimization at any point by clicking the stop button in the Run toolbar, this information is retained.
The goal-seeking optimization methods (Powell, Stepper) explore the parameter space locally, adjusting the set of parameters with successively smaller changes in order to find the maximum payoff. These methods work well for payoffs that have a single global maximum and a clear direction of decrease away from that maximum. If there are multiple local maxima, or significant regions of the parameter space that are largely flat, these methods may fail to converge, or may converge on parameters that do not give the global maximum.
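The following Python sketch is only a conceptual illustration of this goal-seeking pattern, not the actual Powell or Stepper algorithm. It perturbs each parameter in isolation (one simulated payoff per assessment), takes a combined step in the improving direction, and halves the step size whenever no improvement is found; the payoff function, parameter names, and bounds are all hypothetical.

    def simulate_payoff(params):
        # Hypothetical stand-in for one complete simulation run.
        return -(params["days_fishing"] - 95) ** 2 - 0.5 * (params["boats"] - 12) ** 2

    def goal_seek(start, bounds, step=8.0, min_step=0.25):
        best = dict(start)
        best_payoff = simulate_payoff(best)
        while step >= min_step:
            # Assess each parameter in isolation: one simulation per assessment.
            direction = {}
            for name in best:
                trial = dict(best)
                trial[name] = min(max(best[name] + step, bounds[name][0]), bounds[name][1])
                direction[name] = 1 if simulate_payoff(trial) > best_payoff else -1
            # Take one combined step across all parameters (one more simulation).
            candidate = {name: min(max(best[name] + direction[name] * step,
                                       bounds[name][0]), bounds[name][1])
                         for name in best}
            candidate_payoff = simulate_payoff(candidate)
            if candidate_payoff > best_payoff:
                best, best_payoff = candidate, candidate_payoff   # keep the improved run
            else:
                step /= 2                                         # try again with a smaller step
        return best, best_payoff

    print(goal_seek({"days_fishing": 80.0, "boats": 8.0},
                    {"days_fishing": (60.0, 120.0), "boats": (5.0, 20.0)}))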
One way to reduce the chance of stopping at a local maximum is to use additional starts. Each additional start runs the optimization from a different initial parameter selection: each parameter value is drawn at random from a uniform distribution between its specified minimum and maximum, and the optimization is then run starting from that point.
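A minimal Python sketch of how one such starting point could be drawn, assuming hypothetical parameter names and (minimum, maximum) pairs standing in for the values in the Optimization Specs:

    import random

    # Hypothetical (minimum, maximum) ranges from the Optimization Specs.
    spec_ranges = {"days_fishing": (60.0, 120.0), "boats": (5.0, 20.0)}

    def draw_start(ranges):
        # Each parameter is drawn independently from a uniform distribution
        # between its specified minimum and maximum.
        return {name: random.uniform(low, high) for name, (low, high) in ranges.items()}

    for start_number in range(3):       # for example, three additional starts
        start = draw_start(spec_ranges)
        print(f"start {start_number}: {start}")
        # Each start would then launch a full optimization from this point.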
Using additional starts only makes sense for methods that make use of the initial parameter values (Powell and Stepper). Methods such as Grid and Differential Evolution ignore the original parameter values and would give the same results on each restart, so additional starts are not performed for these methods.
If you want more control over multiple starts, you can set up a sensitivity run with the parameter selection you want (for example, Latin Hypercube) and use that as the basis for the initial search points. To do this, set up the sensitivity analysis and then refer to the optimization setup in the Sensitivity Specs Panel. When you start a sensitivity simulation, an optimization will be performed for each sensitivity run. You will need to determine which of the resulting runs is best by comparing them yourself; this is not done automatically by the software.
If you change the model structure, parameters, or even noise seeds, the results of the optimization are also likely to change. If the purpose of the optimization is to find the parameters that give the best performance, those parameters should not be sensitive to changes in model assumptions. This can be tested by performing a sensitivity analysis on the model after finding the parameters through optimization. By optimizing across sensitivity runs, you can help ensure that such a sensitivity analysis holds no big surprises.
Optimization across sensitivity runs generates a payoff not by running the model once, but instead by performing a sensitivity analysis for the model. The payoff is then computed as either the average across the different runs or the smallest (worst) value across the different runs. The resulting parameters work best for a range of parametric assumptions, not just the base values.
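Conceptually, the payoff the optimizer sees for one candidate parameter set can be sketched as below; the function name and the per-run payoff values are illustrative only.

    def aggregate_payoff(run_payoffs, use_worst=False):
        # run_payoffs holds one payoff value per sensitivity run for a
        # single candidate parameter set; the optimizer only sees the aggregate.
        if use_worst:
            return min(run_payoffs)                   # smallest (worst) value across runs
        return sum(run_payoffs) / len(run_payoffs)    # average across runs

    payoffs = [12.0, 9.5, 14.2, 7.8]                  # hypothetical sensitivity-run payoffs
    print(aggregate_payoff(payoffs))                  # 10.875 (average)
    print(aggregate_payoff(payoffs, use_worst=True))  # 7.8    (worst case)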
Optimizing over sensitivity multiplies the total number of runs by the number required for the sensitivity analysis; for example, an optimization that needs 1,000 payoff evaluations combined with a 50-run sensitivity analysis requires 50,000 simulations in total. For this reason, keeping the number of sensitivity runs small, for example by using Latin Hypercube in the Sensitivity Analysis, will keep the overall time needed to perform the optimization smaller.
There should not be any overlap between the parameters being used for optimization and those being used for sensitivity. If there is, the optimization values will be used, so it will be as if the sensitivity settings do not include that parameter.
The alternative way to combine sensitivity and optimization is to run a separate optimization for each sensitivity run. This is configured from the Sensitivity Specs Panel by specifying which optimization to run for each sensitivity run.
Multicriteria optimization techniques run just like the other techniques, but there is no concept of a single best run. Instead, multiple results are identified. Each result is optimal in the sense that no other parameter selection can improve one of the payoff values without making another worse. Using the fishing example, suppose that not only the quantity but also the quality of the fish were of concern. Suppose that with 100 days of fishing we would catch 100 fish with a quality of 5 (on a scale of 1 to 10), but with 90 days of fishing we would catch 90 fish with a quality of 7. Neither of these is clearly better than the other, but if 110 days of fishing would yield 90 fish with a quality of 3, that choice is clearly worse than either of the others.
Multicriteria optimization eliminates the dominated (suboptimal) results, leaving you with a set of results (two in our example) that lets you examine the tradeoff between the different performance measures. These combinations are sometimes referred to as Pareto efficient, or nondominated. The number of combinations identified will depend on the number of evaluations as well as the nature of the payoffs. If the different payoff values are correlated, there will be fewer combinations; in the extreme case, where the payoffs are perfectly correlated (or simply the same), only a single result will be identified. When the payoffs are weakly or negatively correlated, more results will satisfy the Pareto efficiency criterion.
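The dominance test behind this filtering can be sketched in a few lines of Python, applied here to the fishing example above (fish caught and quality are both to be maximized; the data structures are illustrative only):

    # Each candidate: days of fishing -> (fish caught, quality); both payoffs are maximized.
    candidates = {100: (100, 5), 90: (90, 7), 110: (90, 3)}

    def dominates(a, b):
        # a dominates b if it is at least as good on every payoff and strictly better on one.
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    pareto = {days: payoffs
              for days, payoffs in candidates.items()
              if not any(dominates(other, payoffs)
                         for other_days, other in candidates.items() if other_days != days)}
    print(pareto)   # {100: (100, 5), 90: (90, 7)}; the 110-day option is dominated and dropped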
Because there is not a single best result, the natural way to present multicriteria optimization results is as a sensitivity analysis, so that the results from all of the identified parameter sets can be compared.
The optimization process finishes when the selected method completes, or when the Stop button on the Run toolbar is pressed. What happens next depends on the optimization method and the setting for sensitivity runs.
For optimizations that return a single payoff, a best run is always kept, based on the computed value of that payoff and the parameters associated with it. At the end of the optimization those parameter values are set as interactive changes (as shown in the Parameter Control Panel), the same type of change you would get by spinning a knob in Stella live. This means that if you run the model (using the Run button on the toolbar or the Run menu item on the Model Menu), those parameters will be used. It also means that you can explore changes locally around those values in Stella live. You can restore inputs (or restore all devices) to set these parameters back to the base model values.
If you are not optimizing across sensitivity runs, a run will automatically be performed using the best values (so you do not need to run it yourself). If you are optimizing across sensitivity runs, a sensitivity run will be performed with the model parameters that were found. This is equivalent to selecting S-Run from the Run toolbar or selecting Run Sensitivity from the Run menu.
In both cases, the results you see are those of the model run performed after the optimization. In many cases it will make sense to create a baseline, so that you can see the change produced by the new parameter selections. Creating this baseline is not part of the optimization process and must be done separately.
Note After the optimization completes, the parameters found are set as control values. This means that subsequent simulations will use these values and subsequent optimizations will start from them. If you want to repeat an optimization from scratch, first restore the inputs.
Note The resulting parameters can be written to an import file from the Parameter Control Panel.
For methods that use multiple payoffs there is not a single best run, but instead a family of runs that are worth further investigation and discussion with stakeholders.
If you are not optimizing across sensitivity runs, a sensitivity run that includes each of the identified parameter sets will be performed at the end of the optimization. This is done by setting up each parameter as ad-hoc, with the values that were identified. The sensitivity specs for this are not kept; they are used only internally to generate the sensitivity run. The resulting sensitivity outputs can then be used to discuss the performance of the different parameter sets against the metrics that matter to stakeholders.
If you are optimizing across sensitivity runs, a final sensitivity run will be performed using those settings, but ignoring the optimization results. In this case you will need to explore the family of results by other means, as combining the two sets of sensitivity inputs would just lead to confusing results.
Note When using multicriteria methods with more than one payoff, model parameters will not be controlled.