
## Too Many Iterations or Function Evaluations

The solver stopped because it reached a limit on the number of iterations or function evaluations before it minimized the objective to the requested tolerance. To proceed, try one or more of the following.

Set the `Display` option to `'iter'`.
This setting shows the results of the solver iterations.

To enable iterative display:

- Using the Optimization app, choose **Level of display** to be `iterative` or `iterative with detailed message`.
- At the MATLAB® command line, enter

  options = optimoptions('solvername','Display','iter');

- Call the solver using the `options` structure.

For an example of iterative display, see Interpreting the Result.

**What to Look For in Iterative Display.**

- See if the objective function (`Fval` or `f(x)` or `Resnorm`) decreases. Decrease indicates progress.
- Examine the constraint violation (`Max constraint`) to ensure that it decreases toward `0`. Decrease indicates progress.
- See if the first-order optimality decreases toward `0`. Decrease indicates progress.
- See if the `Trust-region radius` decreases to a small value. This decrease indicates that the objective might not be smooth.

If the solver seemed to progress:

- Set `MaxIter` and/or `MaxFunEvals` to values larger than the defaults. You can see the default values in the Optimization app, or in the Options table in the solver's function reference pages.
- Start the solver from its last calculated point.

If the solver is not progressing, try the other listed suggestions.
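As a sketch, the two suggestions above can be combined. Here `fmincon` stands in for your solver, and `fun`, `x0`, and the constraint arrays are placeholders for your existing problem data; the limit values are illustrative, not recommendations.

```matlab
% Raise the iteration and function-evaluation limits, then restart from
% the last calculated point if the solver still stops at a limit.
opts = optimoptions('fmincon','MaxIter',2000,'MaxFunEvals',10000);
[x,fval,exitflag] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,opts);
if exitflag == 0    % exitflag 0 means an iteration or evaluation limit was reached
    % Restart from the last calculated point x
    [x,fval,exitflag] = fmincon(fun,x,A,b,Aeq,beq,lb,ub,nonlcon,opts);
end
```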

If tolerances such as `TolX` or `TolFun` are too small, the solver might not recognize when it has reached a minimum; it can make futile iterations indefinitely.

To change tolerances using the Optimization app, use the **Stopping
criteria** list at the top of the **Options** pane.

To change tolerances at the command line, use `optimoptions` as described in Set and Change Options.
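For example, at the command line (the tolerance values shown are illustrative, not recommendations):

```matlab
% Loosen the stopping tolerances so the solver can declare convergence sooner
opts = optimoptions('fmincon','TolFun',1e-4,'TolX',1e-4);
```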

The `DiffMaxChange` and `DiffMinChange` options
can affect a solver's progress. These options control the step size
in finite differencing for derivative estimation.
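A sketch of adjusting these options; the values shown are assumptions for illustration only:

```matlab
% Allow larger finite-difference steps (useful for a noisy objective),
% and keep the steps from becoming vanishingly small.
opts = optimoptions('fmincon','DiffMinChange',1e-6,'DiffMaxChange',1e-1);
```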

Check your objective and constraint function definitions. For example, check that your objective and nonlinear constraint functions return the correct values at some points; see Check your Objective and Constraint Functions. Also check that an infeasible point does not cause an error in your functions; see Iterations Can Violate Constraints.

Solvers run more reliably when each coordinate has about the same effect on the objective and constraint functions. Multiply your coordinate directions by appropriate scalars to equalize the effect of each coordinate, and add appropriate values to certain coordinates to equalize their magnitudes.

**Example: Centering and Scaling. **Consider minimizing `1e6*x(1)^2 + 1e-6*x(2)^2`:

f = @(x) 10^6*x(1)^2 + 10^-6*x(2)^2;

Minimize `f` using the medium-scale `fminunc` algorithm:

opts = optimoptions('fminunc','Display','none','Algorithm','quasi-newton');
x = fminunc(f,[0.5;0.5],opts)

x =
         0
    0.5000

The result is incorrect; poor scaling interfered with obtaining a good solution.

Scale the problem by setting

D = diag([1e-3,1e3]);
fr = @(y) f(D*y);
y = fminunc(fr,[0.5;0.5],opts)

y =
     0
     0    % the correct answer

Similarly, poor centering can interfere with a solution.

fc = @(z)fr([z(1)-1e6;z(2)+1e6]); % poor centering
z = fminunc(fc,[.5 .5],opts)

z =
  1.0e+005 *
   10.0000  -10.0000    % looks good, but...

z - [1e6 -1e6] % checking how close z is to 1e6

ans =
   -0.0071    0.0078    % reveals a distance

fcc = @(w)fc([w(1)+1e6;w(2)-1e6]); % centered
w = fminunc(fcc,[.5 .5],opts)

w =
     0     0    % the correct answer

If you do not provide gradients or Jacobians, solvers estimate gradients and Jacobians by finite differences. Therefore, providing these derivatives can save computational time, and can lead to increased accuracy.

For constrained problems, providing a gradient has another advantage.
A solver can reach a point `x` such that `x` is
feasible, but finite differences around `x` always
lead to an infeasible point. In this case, a solver can fail or halt
prematurely. Providing a gradient allows a solver to proceed.

Provide gradients or Jacobians in the files for your objective function and nonlinear constraint functions. For details of the syntax, see Writing Scalar Objective Functions, Writing Vector and Matrix Objective Functions, and Nonlinear Constraints.
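For instance, a minimal objective file that returns its gradient as a second output might look like this; the function, its name, and its gradient are illustrative:

```matlab
function [f,g] = myObjective(x)
% Objective value
f = x(1)^2 + 3*x(2)^2;
% Gradient, computed only when the solver requests it
if nargout > 1
    g = [2*x(1); 6*x(2)];
end
```

Enable the supplied gradient with, for example, `options = optimoptions('fminunc','GradObj','on')` and call `fminunc(@myObjective,x0,options)`.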

To check that your gradient or Jacobian function is correct,
use the `DerivativeCheck` option, as described in Checking Validity of Gradients or Jacobians.
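A sketch of enabling the check, assuming an objective (here called `myObjective`, an illustrative name) that returns its gradient as a second output:

```matlab
% Compare the supplied gradient against finite-difference estimates
opts = optimoptions('fminunc','GradObj','on','DerivativeCheck','on');
x = fminunc(@myObjective,[1;1],opts);   % myObjective returns [f,g]
```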

If you have a Symbolic Math Toolbox™ license, you can calculate gradients and Hessians programmatically. For an example, see Symbolic Math Toolbox Calculates Gradients and Hessians.

For examples using gradients and Jacobians, see Minimization with Gradient and Hessian, Nonlinear Constraints with Gradients, Symbolic Math Toolbox Calculates Gradients and Hessians, Nonlinear Equations with Analytic Jacobian, and Nonlinear Equations with Jacobian.

Solvers often run more reliably and with fewer iterations when you supply a Hessian.

The following solvers and algorithms accept Hessians:

- `fmincon` `interior-point`. Write the Hessian as a separate function. For an example, see fmincon Interior-Point Algorithm with Analytic Hessian.
- `fmincon` `trust-region-reflective`. Give the Hessian as the third output of the objective function. For an example, see Minimization with Dense Structured Hessian, Linear Equalities.
- `fminunc` `trust-region`. Give the Hessian as the third output of the objective function. For an example, see Minimization with Gradient and Hessian.
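As a sketch for the `fminunc` `trust-region` case, the Hessian travels as a third output of the objective function; the quadratic used here is illustrative:

```matlab
function [f,g,H] = objWithHessian(x)
f = x(1)^2 + 2*x(2)^2;
if nargout > 1
    g = [2*x(1); 4*x(2)];    % gradient
end
if nargout > 2
    H = [2 0; 0 4];          % Hessian (constant for this quadratic)
end
```

Call it with, for example, `optimoptions('fminunc','Algorithm','trust-region','GradObj','on','Hessian','on')`.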

If you have a Symbolic Math Toolbox license, you can calculate gradients and Hessians programmatically. For an example, see Symbolic Math Toolbox Calculates Gradients and Hessians.

## Converged to an Infeasible Point

The solver was unable to find a point satisfying all constraints
to within the `TolCon` constraint tolerance. To proceed,
try one or more of the following.

1. Check Linear Constraints

2. Check Nonlinear Constraints

Try finding a point that satisfies the bounds and linear constraints by solving a linear programming problem.

Define a linear programming problem with an objective function that is always zero:

f = zeros(size(x0)); % assumes x0 is the initial point

Solve the linear programming problem to see if there is a feasible point:

xnew = linprog(f,A,b,Aeq,beq,lb,ub);

- If there is a feasible point `xnew`, use `xnew` as the initial point and rerun your original problem.
- If there is no feasible point, your problem is not well-formulated. Check the definitions of your bounds and linear constraints.
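Putting the steps together, with the third `linprog` output indicating feasibility (`A`, `b`, `Aeq`, `beq`, `lb`, and `ub` are your existing constraint data):

```matlab
f = zeros(size(x0));    % zero objective: any feasible point is optimal
[xnew,~,exitflag] = linprog(f,A,b,Aeq,beq,lb,ub);
if exitflag == 1
    x0 = xnew;          % feasible point found; rerun the original problem from here
else
    disp('No feasible point for the bounds and linear constraints')
end
```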

After ensuring that your bounds and linear constraints are feasible (contain a point satisfying all constraints), check your nonlinear constraints.

Set your objective function to zero:

@(x)0

- Run your optimization with the zero objective. If you find a feasible point `xnew`, set `x0 = xnew` and rerun your original problem.
- If you do not find a feasible point using a zero objective function, use the zero objective function with several initial points.
  - If you find a feasible point `xnew`, set `x0 = xnew` and rerun your original problem.
  - If you do not find a feasible point, try relaxing the constraints, discussed next.
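A sketch of this search for a `fmincon` problem, assuming a hypothetical nonlinear constraint function `mynonlcon` and finite bounds from which to draw random starting points:

```matlab
% Try the zero objective from several random initial points
for k = 1:10
    x0try = lb + rand(size(lb)).*(ub - lb);   % random point within the bounds
    [xnew,~,exitflag] = fmincon(@(x)0,x0try,A,b,Aeq,beq,lb,ub,@mynonlcon);
    if exitflag > 0       % positive exitflag: feasible point found
        x0 = xnew;
        break
    end
end
```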

Try relaxing your nonlinear inequality constraints, then tightening them.

- Change the nonlinear constraint function `c` to return `c - Δ`, where Δ is a positive number. This change makes your nonlinear constraints easier to satisfy.
- Look for a feasible point for the new constraint function, using either your original objective function or the zero objective function.

If you find a feasible point:

- Reduce Δ.
- Look for a feasible point for the new constraint function, starting at the previously found point.

If you do not find a feasible point, try increasing Δ and looking again.

If you find no feasible point, your problem might be truly infeasible, meaning that no solution exists. Check all your constraint definitions again.
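One way to sketch the relaxation is to wrap your original constraint function (here a hypothetical `myConstraints`) so every inequality is loosened by `Delta`:

```matlab
function [c,ceq] = relaxedConstraints(x,Delta)
[c,ceq] = myConstraints(x);   % hypothetical original constraint function
c = c - Delta;                % each inequality is easier to satisfy by Delta
```

Search for a feasible point with, say, `fmincon(@(x)0,x0,[],[],[],[],lb,ub,@(x)relaxedConstraints(x,1e-2))`, then reduce `Delta` and repeat from the point found.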

## Problem Unbounded

The solver reached a point whose objective function was less than the objective limit tolerance.

- Your problem might be truly unbounded. In other words, there is a sequence of points *x*_i with lim *f*(*x*_i) = –∞, and such that all the *x*_i satisfy the problem constraints.
- Check that your problem is formulated correctly. Solvers try to minimize objective functions; if you want a maximum, change your objective function to its negative. For an example, see Maximizing an Objective.

- Try scaling or centering your problem. See Center and Scale Your Problem.
- Relax the objective limit tolerance by using `optimoptions` to reduce the value of the `ObjectiveLimit` tolerance.

## fsolve Could Not Solve Equation

`fsolve` can fail to solve an equation for
various reasons. Here are some suggestions for how to proceed:

- Try changing the initial point. `fsolve` relies on an initial point. By giving it different initial points, you increase the chances of success.
- Check the definition of the equation to make sure that it is smooth. `fsolve` might fail to converge for equations with discontinuous gradients, such as absolute value. `fsolve` can fail to converge for functions with discontinuities.
- Check that the equation is "square," meaning equal dimensions for input and output (the same number of unknowns as values of the equation).
- Change tolerances, especially `TolFun` and `TolX`. If you attempt to get high accuracy by setting tolerances to very small values, `fsolve` can fail to converge. If you set tolerances that are too high, `fsolve` can fail to solve an equation accurately.
- Check the problem definition. Some problems have no real solution, such as `x^2 + 1 = 0`.
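For example, trying `fsolve` from several initial points; the square 2-by-2 system here is illustrative:

```matlab
fun = @(x) [exp(-x(1)) - x(2); x(1)^2 + x(2)^2 - 1];   % square system: 2 unknowns, 2 equations
opts = optimoptions('fsolve','Display','none');
for k = 1:5
    x0 = 2*rand(2,1) - 1;              % random start in [-1,1]^2
    [x,fval,exitflag] = fsolve(fun,x0,opts);
    if exitflag > 0                    % positive exitflag: fsolve reports convergence
        break
    end
end
```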

## Solver Takes Too Long

Solvers can take excessive time for various reasons. To diagnose the reason, use one or more of the following techniques.

Set the `Display` option to `'iter'`.
This setting shows the results of the solver iterations.

To enable iterative display:

- Using the Optimization app, choose **Level of display** to be `iterative` or `iterative with detailed message`.
- At the MATLAB command line, enter

  options = optimoptions('solvername','Display','iter');

- Call the solver using the `options` structure.

For an example of iterative display, see Interpreting the Result. For more information, see What to Look For in Iterative Display.

Sometimes a solver fails because an objective function or nonlinear
constraint function returns a complex value, infinity, or NaN. To
halt solver iterations in these cases, enable the `FunValCheck` option.

- Using the Optimization app, check the box labeled **Error if user-supplied function returns Inf, NaN, or complex** in the **Function value check** pane.
- At the MATLAB command line, enter

  options = optimoptions('solvername','FunValCheck','on');

- Call the solver using the `options` structure.

Solvers can fail to converge if tolerances are too small, especially `TolFun` and `TolX`.

To change tolerances using the Optimization app, use the **Stopping
criteria** list at the top of the **Options** pane.

To change tolerances at the command line, use `optimoptions` as
described in Set and Change Options.

You can obtain more visual or detailed information about solver
iterations using a plot function. For a list of the predefined plot
functions, see **Options > Plot functions** in
the Optimization app. The Options section of your solver's function
reference pages also lists the plot functions.

To use a plot function:

- Using the Optimization app, check the boxes next to each plot function you wish to use.
- At the MATLAB command line, enter

  options = optimoptions('solvername','PlotFcns',{@plotfcn1,@plotfcn2,...});

- Call the solver using the `options` structure.

For an example of using a plot function, see Using a Plot Function.

If you have supplied derivatives (gradients or Jacobians) to
your solver, the solver can fail to converge if the derivatives are
inaccurate. For more information about using the `DerivativeCheck` option,
see Checking Validity of Gradients or Jacobians.

If you use a large, arbitrary bound (upper or lower), a solver
can take excessive time, or even fail to converge. However, if you
set `Inf` or `-Inf` as the bound,
the solver can take less time, and might converge better.
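For example (the bound values shown are placeholders):

```matlab
% Prefer Inf to a large arbitrary number for components with no real bound
lb = [0; -Inf];      % rather than lb = [0; -1e10]
ub = [Inf; Inf];     % rather than ub = [1e10; 1e10]
```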

Why? An interior-point algorithm can set an initial point to the midpoint of finite bounds, or can try to find a "central path" midway between finite bounds. Therefore, a large, arbitrary bound can place these points inappropriately. In contrast, infinite bounds are ignored for these purposes.

Minor point: Some solvers use memory for each constraint, primarily
via a constraint Hessian. Setting a bound to `Inf` or `-Inf` means
there is no constraint, so there is less memory in use, because a
constraint Hessian has lower dimension.

You can obtain detailed information about solver iterations using an output function. Solvers call output functions at each iteration. You write output functions using the syntax described in Output Function.

For an example of using an output function, see Example: Using Output Functions.
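A minimal output function following that syntax might look like the following sketch; it only logs each iterate and never requests early termination (the name `myOutputFcn` is illustrative):

```matlab
function stop = myOutputFcn(x,optimValues,state)
stop = false;                           % return true to halt the solver early
if strcmp(state,'iter')                 % called once per iteration
    fprintf('iter %3d: f(x) = %g\n',optimValues.iteration,optimValues.fval);
end
```

Attach it with, for example, `optimoptions('fmincon','OutputFcn',@myOutputFcn)`.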

Large problems can cause MATLAB to run out of memory or time. Here are some suggestions for using less memory:

- Use a large-scale algorithm if possible (see Large-Scale vs. Medium-Scale Algorithms). These algorithms include `trust-region-reflective`, `interior-point`, the `fminunc` `trust-region` algorithm, the `fsolve` `trust-region-dogleg` algorithm, and the `Levenberg-Marquardt` algorithm. In contrast, the `active-set`, `quasi-newton`, and `sqp` algorithms are not large-scale.
- Use sparse matrices for your linear constraints.
- Use a Jacobian multiply function or Hessian multiply function. For examples, see Jacobian Multiply Function with Linear Least Squares, Quadratic Minimization with Dense, Structured Hessian, and Minimization with Dense Structured Hessian, Linear Equalities.

If you have a Parallel Computing Toolbox™ license, your solver might run faster using parallel computing. For more information, see Parallel Computing.
