Outcome optimization allows you to find a set of parameters (constant values) that maximize some measure, or measures, of performance when a model is simulated. For example, suppose that you had a model of a fishery that related the number of days spent fishing each year to the number of fish caught. One question that you could address with such a model is what number of days of fishing will give you the most fish. Spend no days fishing and you won't catch any, but fish too many days and the stock of fish will be depleted. Sensitivity analysis allows you to map out the relationship between days fishing and cumulative catch, and then use that relationship to find the best number of days to spend fishing. Optimization is a way of automating that process. Conceptually, it is similar to performing sensitivity analysis and then selecting the best value, but it does this work behind the scenes, and it can vary multiple parameters together.
To perform an optimization you need to do two things. The first is to define a payoff (see Defining Outcome Payoffs) that numerically indicates how well a particular run achieved the outcomes (goals) you specified. In our fishery example, the payoff would be the cumulative number of fish caught over the course of the run. The second is to select which parameters to vary, and over what range of values. In this case we would use the parameter "days spent fishing" ranging from 0 to 365. See Performing Optimizations for details.
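To make this concrete, here is a minimal sketch in Python of both ingredients for the fishery example, using a hypothetical toy model (cumulative_catch stands in for simulating the model once; it is not Stella's internal calculation). The last line shows the "sweep and pick the best value" approach that optimization automates:

```python
# Toy fishery model (hypothetical): the yearly catch is proportional to
# effort (days fishing) and to the remaining stock, and the stock regrows
# logistically between years.
def cumulative_catch(days_fishing):
    stock, total = 1000.0, 0.0
    for _ in range(20):                       # a 20-year run
        catch = 0.001 * days_fishing * stock  # harvest = effort x stock
        stock += 0.5 * stock * (1.0 - stock / 1000.0) - catch
        total += catch
    return total                              # the payoff to maximize

# The parameter to vary and its range: days spent fishing, 0 to 365.
days_range = range(0, 366)

# Sensitivity-style exhaustive sweep: simulate every candidate value and
# keep the one with the largest payoff.
best_days = max(days_range, key=cumulative_catch)
```

With a single parameter an exhaustive sweep works, but with several parameters varying together the number of combinations explodes, which is where the optimizer earns its keep.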
Once you have set up the payoff and optimization parameters you can click on the O-Sim button in the toolbar or select Run Optimization from the Model menu. The software will run the model a number of times, varying the input parameters in order to find those that give the best possible (maximum) payoff. After the best possible payoff has been found, the model will be simulated with that set of parameters so you can view the results created using these values.
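Under the hood, this is a numerical search. A minimal sketch of the idea, reusing the toy cumulative_catch function above and assuming SciPy is available (Stella's own search algorithm may differ):

```python
from scipy.optimize import minimize_scalar

# Maximizing the payoff is the same as minimizing its negative.
result = minimize_scalar(lambda days: -cumulative_catch(days),
                         bounds=(0.0, 365.0), method="bounded")

best_days = result.x                        # the best parameter value found
final_payoff = cumulative_catch(best_days)  # final run replays those values
```

The last line mirrors the behavior described above: once the search finishes, the model is simulated one more time with the winning parameter set.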
Note Running the model again after an optimization with a single payoff will give the same results as the final optimization run.
Note After optimizing you can write the resulting parameter values to an import file using the Parameter Control Panel.
Stella has several features that extend optimization beyond a simple search over parameters for the best payoff. These features are designed to make the optimization process more robust and to accommodate situations in which a single payoff definition is not sufficient to characterize the goals of the optimization.
The relationship of parameters to performance can be complex. For example, things might get better for a while as you start to increase a parameter, then get worse as you continue increasing it, then get better again. Traditional optimization techniques rely on local exploration of the parameter space. In the better, worse, better example, they might stop as soon as things stop getting better the first time, skipping altogether the later (and globally better) values. Regular optimization will tend to find the local optimum closest to the starting point of the search, but this might not be the global optimum. To get around this, there is an option to repeat the optimization a number of times, starting from different parameter selections. This helps to uncover local optima and provides a better chance of finding the global optimum.
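A minimal sketch of this multi-start idea, using SciPy and a deliberately multimodal toy payoff (Stella's restart option works in the same spirit, though its internals may differ):

```python
import numpy as np
from scipy.optimize import minimize

def payoff(x):
    # Toy payoff that gets better, then worse, then better again.
    return np.sin(3.0 * x[0]) + 0.5 * x[0]

rng = np.random.default_rng(0)
best = None
for _ in range(10):                         # ten independent restarts
    x0 = rng.uniform(0.0, 5.0, size=1)      # random starting point
    res = minimize(lambda x: -payoff(x), x0, bounds=[(0.0, 5.0)])
    if best is None or res.fun < best.fun:  # keep the best local result
        best = res

print("best parameter:", best.x[0], "payoff:", -best.fun)
```

A single local search started near zero would stop at the first peak; the restarts give the search a chance to reach the globally better peak farther along the parameter range.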
When trying to define a set of policies, it is desirable to make those policies robust to the different conditions that might occur over the course of a simulation. Sensitivity analysis lets us explore those different conditions, whether they result from different assumptions about uncertain parameters or simply from different realizations of random numbers. The same approach can be applied as part of optimization. If you select the inclusion of a sensitivity simulation, the payoff will be computed across all runs of the sensitivity simulation rather than a single run. This approach requires a very large number of simulations and can be quite time consuming for large models. It does, however, ensure that the selected parameters perform well across the variety of assumptions being explored during sensitivity.
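A minimal sketch of such an ensemble payoff, extending the earlier toy fishery model with an uncertain catchability parameter (hypothetical names; Stella computes the ensemble payoff internally when a sensitivity simulation is included):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cumulative_catch_q(days_fishing, catchability):
    # Variant of the earlier toy model with uncertain catchability.
    stock, total = 1000.0, 0.0
    for _ in range(20):
        catch = catchability * days_fishing * stock
        stock += 0.5 * stock * (1.0 - stock / 1000.0) - catch
        total += catch
    return total

def ensemble_payoff(days_fishing, n_runs=50):
    rng = np.random.default_rng(42)          # fixed draws keep the payoff smooth
    qs = rng.normal(0.001, 0.0002, n_runs)   # sampled catchability values
    return np.mean([cumulative_catch_q(days_fishing, q) for q in qs])

result = minimize_scalar(lambda d: -ensemble_payoff(d),
                         bounds=(0.0, 365.0), method="bounded")
```

Each payoff evaluation now costs n_runs simulations instead of one, which is exactly where the extra running time comes from.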
Traditional optimization assumes that there is a single payoff function to be maximized. In many situations there are competing interests or concerns with trade-offs between them. While this can, to some extent, be handled by weighting the competing performance measures into a single payoff, an alternative approach is to find parameter sets that do as well as they can along multiple criteria at once. In a nutshell, only parameter combinations that cannot be changed to improve one criterion without making another worse (the non-dominated, or Pareto-optimal, sets) will be retained. The result of an optimization using multiple payoffs is therefore not a single simulation, but a sensitivity simulation over the selected parameter sets. The behavior of those sensitivity runs can then be evaluated to determine the robustness of the selected parameters and, potentially, to let you choose one set as the overall best.
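A minimal sketch of the non-dominated filter this selection implies, with hypothetical candidates scored on two payoffs to maximize (total catch and remaining stock):

```python
def pareto_front(candidates):
    # Keep candidates that no other candidate matches or beats on every
    # payoff while strictly beating them on at least one.
    front = []
    for i, (_, scores_i) in enumerate(candidates):
        dominated = any(
            all(sj >= si for sj, si in zip(scores_j, scores_i))
            and any(sj > si for sj, si in zip(scores_j, scores_i))
            for j, (_, scores_j) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append(candidates[i])
    return front

# (days fishing, (total catch, remaining stock)) -- toy numbers
candidates = [
    ((150,), (2100.0, 700.0)),
    ((200,), (2400.0, 620.0)),
    ((250,), (2500.0, 500.0)),
    ((300,), (2450.0, 400.0)),  # dominated: 250 days does better on both
]
print(pareto_front(candidates))  # keeps the 150-, 200-, and 250-day sets
```

The retained sets are then handed to a sensitivity simulation, so the trade-offs among them can be inspected directly.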