
## Parallel MultiStart

### Steps for Parallel MultiStart

If you have a multicore processor or access to a processor network, you can use Parallel Computing Toolbox™ functions with MultiStart. This example shows how to find multiple local minima of a problem in parallel, using a processor with two cores. The problem is the same as in Multiple Local Minima Via MultiStart.

1. Write a function file to compute the objective:

```matlab
function f = sawtoothxy(x,y)
[t,r] = cart2pol(x,y); % change to polar coordinates
h = cos(2*t - 1/2)/2 + cos(t) + 2;
g = (sin(r) - sin(2*r)/2 + sin(3*r)/3 - sin(4*r)/4 + 4) ...
    .*r.^2./(r+1);
f = g.*h;
end
```
2. Create the problem structure:

```matlab
problem = createOptimProblem('fminunc',...
    'objective',@(x)sawtoothxy(x(1),x(2)),...
    'x0',[100,-50],'options',...
    optimoptions(@fminunc,'Algorithm','quasi-newton'));
```
3. Validate the problem structure by running it:

```matlab
[x,fval] = fminunc(problem)

x =
    8.4420 -110.2602

fval =
  435.2573
```
4. Create a MultiStart object, and set the object to use parallel processing and iterative display:

`ms = MultiStart('UseParallel','always','Display','iter');`
5. Set up parallel processing:

```matlab
parpool

Starting parpool using the 'local' profile ... connected to 4 workers.

ans =

 Pool with properties:

    AttachedFiles: {0x1 cell}
       NumWorkers: 4
      IdleTimeout: 30
          Cluster: [1x1 parallel.cluster.Local]
     RequestQueue: [1x1 parallel.RequestQueue]
      SpmdEnabled: 1
```
6. Run the problem on 50 start points:

```matlab
[x,fval,eflag,output,manymins] = run(ms,problem,50);
Running the local solvers in parallel.

 Run       Local       Local      Local    Local   First-order
Index     exitflag      f(x)     # iter   F-count   optimality
  17         2         3953         4        21        0.1626
  16         0         1331        45       201         65.02
  34         0         7271        54       201         520.9
  33         2         8249         4        18         2.968
... Many iterations omitted ...
  47         2         2740         5        21        0.0422
  35         0         8501        48       201         424.8
  50         0         1225        40       201         21.89

MultiStart completed some of the runs from the start points.

17 out of 50 local solver runs converged with a positive
local solver exit flag.
```

Notice that the run indexes look random: parallel MultiStart runs its start points in an unpredictable order. Notice also that MultiStart confirms parallel processing in the first line of output, which states: "Running the local solvers in parallel."

7. When finished, shut down the parallel environment:

```matlab
delete(gcp)

Sending a stop signal to all the workers ... stopped.
```

For an example of how to obtain better solutions to this problem, see Example: Searching for a Better Solution. You can use parallel processing along with the techniques described in that example.
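For readers working outside MATLAB, the multistart idea in the steps above (run a local solver from many start points and keep the best result) can be sketched in Python with SciPy. This is a rough analogue, not the MultiStart implementation: the objective and start range mirror the example, but SciPy's BFGS quasi-Newton method stands in for `fminunc`, and the start points here are simply drawn uniformly at random.

```python
import numpy as np
from scipy.optimize import minimize

def sawtoothxy(x, y):
    """The example's objective, converted to polar coordinates."""
    t = np.arctan2(y, x)   # angle, as cart2pol returns
    r = np.hypot(x, y)     # radius
    h = np.cos(2*t - 0.5)/2 + np.cos(t) + 2
    g = (np.sin(r) - np.sin(2*r)/2 + np.sin(3*r)/3
         - np.sin(4*r)/4 + 4) * r**2 / (r + 1)
    return g * h

objective = lambda p: sawtoothxy(p[0], p[1])

# Multistart: minimize from 50 random start points, keep the best.
# (MultiStart distributes this loop across workers when UseParallel is set.)
rng = np.random.default_rng(0)
starts = rng.uniform(-100, 100, size=(50, 2))
results = [minimize(objective, s, method='BFGS') for s in starts]
best = min(results, key=lambda res: res.fun)
print(best.x, best.fun)
```

Because the loop iterations are independent, they parallelize naturally; the best local minimum found is typically far better than the single-start `fminunc` result above.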

### Speedup with Parallel Computing

The results of MultiStart runs are stochastic. The timing of runs is stochastic, too. Nevertheless, some clear trends are apparent in the following table. The data for the table came from one run at each number of start points, on a machine with two cores.

| Start Points | Parallel Seconds | Serial Seconds |
| --- | --- | --- |
| 50 | 3.6 | 3.4 |
| 100 | 4.9 | 5.7 |
| 200 | 8.3 | 10 |
| 500 | 16 | 23 |
| 1000 | 31 | 46 |

Parallel computing can be slower than serial when you use only a few start points, because the overhead of distributing the work outweighs the savings. As the number of start points increases, parallel computing becomes increasingly efficient relative to serial.
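The trend is easy to quantify: speedup is serial time divided by parallel time, and efficiency is speedup divided by the number of cores (here two, so the ideal speedup is 2). The short calculation below applies this to the table's own timings:

```python
# Timings from the table: start points -> (parallel seconds, serial seconds)
timings = {50: (3.6, 3.4), 100: (4.9, 5.7), 200: (8.3, 10.0),
           500: (16.0, 23.0), 1000: (31.0, 46.0)}

for n, (par, ser) in timings.items():
    speedup = ser / par        # > 1 means parallel wins
    efficiency = speedup / 2   # fraction of the two-core ideal
    print(f"{n:5d} starts: speedup {speedup:.2f}x, efficiency {efficiency:.0%}")
```

At 50 start points the speedup is below 1 (parallel is slower), while at 1000 start points it approaches 1.5x, about three quarters of the two-core ideal.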

There are many factors that affect speedup (or slowdown) with parallel processing. For more information, see Improving Performance with Parallel Computing in the Optimization Toolbox™ documentation.