
I am looking at (two-layer) feed-forward Neural Networks in Matlab. I am investigating parameters that can minimise the classification error.

A Google search suggests these are some of them:

  • Number of neurons in the hidden layer
  • Learning Rate
  • Momentum
  • Training type
  • Epoch
  • Minimum Error
  • Any other suggestions?

I've varied the number of hidden neurons in Matlab from 1 to 10. I found that the classification error is close to 0% with 1 hidden neuron and then grows very slightly as the number of neurons increases. My question is: shouldn't a larger number of hidden neurons guarantee an equal or better result, i.e. why might the classification error go up with more hidden neurons?

Also, how might I vary the Learning Rate, Momentum, Training type, Epoch and Minimum Error in Matlab?

Many thanks

  • What type of NN are you looking at - a feed-forward network, or something else? Commented Jan 1, 2017 at 23:21
  • Added: two-layer feed-forward NN Commented Jan 1, 2017 at 23:28

1 Answer


Since you are considering a simple two-layer feed-forward network and have already listed six different things to consider for reducing classification error, I just want to add one more: the amount of training data. If you train a neural network with more data, it will generally work better. Training with a large amount of data is key to getting good results from neural networks, especially deep neural networks.

Why does the classification error go up with more hidden neurons?

The answer is simple: your model has overfitted the training data, which results in poor performance on unseen data. Note that increasing the number of neurons in the hidden layer tends to decrease the training error but increase the test error.

The figure below illustrates what happens as the hidden layer size increases.

[Figure: training error vs. test error as the hidden layer size grows]
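One way to see this effect yourself in Matlab is to sweep the hidden layer size and measure the error on held-out test data. The following is only a sketch: it assumes your inputs X (features-by-samples) and one-hot targets T already exist in the workspace, and it relies on the toolbox's default random data division.

% Sketch: sweep hidden layer sizes and record the held-out test error.
% Assumes X (features-by-samples) and one-hot targets T are defined.
hiddenSizes = 1:10;
testErr = zeros(size(hiddenSizes));
for i = 1:numel(hiddenSizes)
    net = feedforwardnet(hiddenSizes(i));
    net.divideParam.trainRatio = 0.70;  % keep some data aside ...
    net.divideParam.valRatio   = 0.15;
    net.divideParam.testRatio  = 0.15;  % ... so the test error is meaningful
    [net, tr] = train(net, X, T);
    Y = net(X(:, tr.testInd));          % evaluate on the held-out test samples
    testErr(i) = mean(vec2ind(Y) ~= vec2ind(T(:, tr.testInd)));
end
plot(hiddenSizes, testErr);
xlabel('Hidden neurons'); ylabel('Test classification error');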

How may I vary the Learning Rate, Momentum, Training type, Epoch and Minimum Error in Matlab?

I expect you have already seen feedforwardnet in Matlab. You just need to set the second argument of feedforwardnet(hiddenSizes,trainFcn), which is trainFcn, the training function.

For example, if you want to use gradient descent with momentum and adaptive learning rate backpropagation, then use traingdx as the training function. You can also use traingda if you want to use gradient descent with adaptive learning rate backpropagation.
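For instance (a minimal sketch; the hidden layer size of 10 is only a placeholder):

net1 = feedforwardnet(10, 'traingda');  % adaptive learning rate only
net2 = feedforwardnet(10, 'traingdx');  % adaptive learning rate + momentum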

You can then change the training parameters as you wish. For example, if you want to use traingda, you just need to follow these two steps.

  • Set net.trainFcn to 'traingda'. This sets net.trainParam to traingda's default parameters.

  • Set net.trainParam properties to desired values.

Example

net = feedforwardnet(3,'traingda');
net.trainParam.lr = 0.05;      % set the learning rate to 0.05
net.trainParam.epochs = 2000;  % set the maximum number of epochs
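
To cover the remaining items on your list (momentum, number of epochs and a minimum-error stopping criterion), a similar sketch with traingdx might look like the following; X and T are placeholders for your own inputs and targets:

net = feedforwardnet(3, 'traingdx');
net.trainParam.lr = 0.05;       % initial learning rate
net.trainParam.mc = 0.9;        % momentum constant
net.trainParam.epochs = 2000;   % maximum number of epochs
net.trainParam.goal = 1e-4;     % performance goal - training stops once the error reaches it
[net, tr] = train(net, X, T);   % X = inputs, T = targets (placeholders)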

Please see the Matlab documentation for gradient descent with adaptive learning rate backpropagation (traingda) and gradient descent with momentum and adaptive learning rate backpropagation (traingdx).
