How can I use the Genetic Algorithm (GA) to train a Neural Network in Neural Network Toolbox?


Accepted Answer

MathWorks Support Team on 12 Nov 2020
Edited: MathWorks Support Team on 5 Nov 2020
The ability to set the training algorithm to "ga" in the "train" function is not directly available in Neural Network Toolbox (as of R2017a at least).
To work around this issue, use the steps outlined below to optimize a neural network using a genetic algorithm.
The "ga" function requires a function handle as an input argument to which it passes a 1xN vector, where N is the number of variables in the system to be optimized.
For a neural network, the weights and biases are a Mx1 vector. These may be optimized using "ga".
A function can be written to accept the network, weights and biases, inputs and targets. This function may return the mean squared error based on the outputs and the targets as "ga" requires a function handle that only returns a scalar value.
The following code example describes a function that returns the mean squared error for a given input of weights and biases, a network, its inputs and targets.
function mse_calc = mse_test(x, net, inputs, targets)
% 'x' contains the weights and biases vector
% in row vector form as passed to it by the
% genetic algorithm. This must be transposed
% when being set as the weights and biases
% vector for the network.
%
% To set the weights and biases vector to the
% one given as input
net = setwb(net, x');
%
% To evaluate the outputs based on the given
% weights and biases vector
y = net(inputs);
%
% Calculating the mean squared error
mse_calc = sum((y-targets).^2)/length(y);
end
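For a quick sanity check of this function, one could evaluate it at the network's current weights; "getwb" returns the Mx1 weight and bias vector, transposed here to the 1xN row form that "ga" uses (a minimal sketch, assuming "net", "inputs", and "targets" are already defined as in the script below).
% Evaluate the MSE at the network's current (untrained) weights
wb = getwb(net);                             % Mx1 weights and biases vector
err0 = mse_test(wb', net, inputs, targets)   % transpose to the 1xN row form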
The following code example describes a separate script that sets up a basic neural network problem and defines the function handle to be passed to "ga". It uses the above function to calculate the mean squared error.
% INITIALIZE THE NEURAL NETWORK PROBLEM %
%
% inputs for the neural net
inputs = (1:10);
% targets for the neural net
targets = cos(inputs.^2);
%
% number of neurons
n = 2;
%
% create a neural network
net = feedforwardnet(n);
%
% configure the neural network for this dataset
net = configure(net, inputs, targets);
%
% create a handle to the MSE_TEST function,
% which calculates the MSE
h = @(x) mse_test(x, net, inputs, targets);
%
% Set the genetic algorithm's tolerance for the
% minimum change in the fitness function before
% terminating to 1e-8, and display each
% iteration's results.
ga_opts = gaoptimset('TolFun', 1e-8,'display','iter');
%
% PLEASE NOTE: For a feed-forward network
% with n neurons, 3n+1 quantities are required
% in the weights and biases column vector.
%
% a. n for the input weights
% b. n for the input biases
% c. n for the output weights
% d. 1 for the output bias
%
% running the genetic algorithm with desired options
[x_ga_opt, err_ga] = ga(h, 3*n+1, ga_opts);
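To use the result, the optimized row vector "x_ga_opt" can be written back into the network with "setwb" (a minimal follow-up sketch; the variable name "y_opt" is only illustrative):
% Verify the expected vector length: numel(getwb(net)) equals 3*n+1
numel(getwb(net))
% Apply the optimized weights and evaluate the trained network
net = setwb(net, x_ga_opt');   % transpose the 1xN result back to Mx1
y_opt = net(inputs);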
Please note that the above example makes use of "feedforwardnet", which was first introduced in R2010b.
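On newer releases, "optimoptions" is the recommended way to set these options; a minimal equivalent of the "gaoptimset" call above, assuming the current option names "FunctionTolerance" and "Display":
% Equivalent options via optimoptions (newer releases)
ga_opts = optimoptions('ga', 'FunctionTolerance', 1e-8, 'Display', 'iter');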
  1 Comment
Mukul Rao on 19 Jun 2017
Hi Jagriti,
I believe the principle would remain the same: you would have to define an objective function that returns a scalar. Plugging inputs of size 13x300 into the network will return an output of size 3x300. You could then try to minimize the sum of the squares of the errors associated with each row.
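A minimal sketch of such an objective function, assuming hypothetical 13x300 inputs and 3x300 targets; the function name "mse_multi" is illustrative, and dividing by "numel" averages over all elements, which also yields the scalar that "ga" needs:
function err = mse_multi(x, net, inputs, targets)
% 'x' is the 1xN row vector passed in by "ga"
net = setwb(net, x');
y = net(inputs);                              % 3x300 when targets are 3x300
err = sum(sum((y - targets).^2)) / numel(y);  % scalar MSE over all elements
end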


More Answers (2)

Don Mathis on 7 Jun 2017
The code posted by MathWorks Support Team on 18 Oct 2013 throws an error if you put it in a file and run it as-is, because the function definition precedes the script. To fix it, you just need to put the function definition last. Here is a corrected version that seems to work fine. Just paste it into the MATLAB editor and hit the Run button.
% INITIALIZE THE NEURAL NETWORK PROBLEM %
% inputs for the neural net
inputs = (1:10);
% targets for the neural net
targets = cos(inputs.^2);
% number of neurons
n = 2;
% create a neural network
net = feedforwardnet(n);
% configure the neural network for this dataset
net = configure(net, inputs, targets);
% create a handle to the MSE_TEST function,
% which calculates the MSE
h = @(x) mse_test(x, net, inputs, targets);
% Set the genetic algorithm's tolerance for the
% minimum change in the fitness function before
% terminating to 1e-8, and display each
% iteration's results.
ga_opts = gaoptimset('TolFun', 1e-8,'display','iter');
% PLEASE NOTE: For a feed-forward network
% with n neurons, 3n+1 quantities are required
% in the weights and biases column vector.
%
% a. n for the input weights
% b. n for the input biases
% c. n for the output weights
% d. 1 for the output bias
% running the genetic algorithm with desired options
[x_ga_opt, err_ga] = ga(h, 3*n+1, ga_opts);
function mse_calc = mse_test(x, net, inputs, targets)
% 'x' contains the weights and biases vector
% in row vector form as passed to it by the
% genetic algorithm. This must be transposed
% when being set as the weights and biases
% vector for the network.
% To set the weights and biases vector to the
% one given as input
net = setwb(net, x');
% To evaluate the outputs based on the given
% weights and biases vector
y = net(inputs);
% Calculating the mean squared error
mse_calc = sum((y-targets).^2)/length(y);
end
  2 Comments
Cam Salzberger on 22 Jun 2017
Hello Don,
I think the original answer was intended to be in two separate files (which is why there was a break in code there). Putting local functions into script files is only supported in R2016b and later.
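For releases before R2016b, a minimal two-file arrangement would be to save the function as its own file (the file name must match the function name) and keep the script separate:
% mse_test.m -- saved as its own file on the MATLAB path
function mse_calc = mse_test(x, net, inputs, targets)
net = setwb(net, x');
y = net(inputs);
mse_calc = sum((y-targets).^2)/length(y);
end
The script portion from the accepted answer can then run unchanged from a separate script file or from the command line.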



Greg Heath on 21 Jun 2017
>Let us clear some point out of Logic used GA for Updating Weight of NN >Previously we have studied that weight keeps on updating for better >optimized answer.
That is because at each stage, MANY sets of random candidates are created
before one is found that decreases the error. Then that set of weights is accepted
and the multiple random searches are continued.
> but here the weight are fixed as they are just changing their position...
No! Absolutely not!
>So how come one can guarantee this will lead to optimize answer as the weight initially taken is random.
No. You do not understand.
1. ALL of the weights are randomly changed.
2. The resulting error is calculated.
3. If the new error is not lower than the existing error, the new set of weights is discarded and the algorithm goes back to step 1.
4. If the new error is lower than the existing error, the new set of weights is accepted.
a. If the new error is smaller than or equal to the goal, the algorithm is terminated.
b. If the new error is not smaller than or equal to the goal, the algorithm goes back to step 1.
(A minimal sketch of this loop is shown below.)
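A minimal sketch of the loop described in steps 1-4 (an illustration of the accept/reject idea only, not the actual "ga" implementation; the perturbation size 0.1, the goal 1e-3, and the cap on tries are all hypothetical):
% Sketch of the accept/reject loop (illustration only)
wb = getwb(net);                     % current Mx1 weights and biases
bestErr = h(wb');                    % h is the MSE handle from the accepted answer
goal = 1e-3;                         % hypothetical error goal
for k = 1:100000                     % cap on the number of random tries
    if bestErr <= goal               % step 4a: goal reached, terminate
        break
    end
    cand = wb + 0.1*randn(size(wb)); % step 1: randomly change ALL the weights
    candErr = h(cand');              % step 2: calculate the resulting error
    if candErr < bestErr             % step 4: lower error, accept the new set
        wb = cand;
        bestErr = candErr;
    end                              % step 3: otherwise the set is discarded
end
net = setwb(net, wb);                % keep the accepted weights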
>Once random value assigned then we are not changing it, we just change >the position of value for multiplication...
No.
>It means for every alternate time you run you will get different result...
That is the point: Keep generating sets of weights until one lowers the error.
Accept that set. If the final error has reached the goal you are finished.
Otherwise start over with generating more randon sets.
Hope this helps.
Thank you for formally accepting my answer
Greg
