
Discussion:

Relating the multi-layer perceptron (MLP) back to the basic techniques, we see that MLPs iterate the process of creating new functional dimensions from the given variables. The hope is to end up with final dimensions that are strongly correlated with the desired classification. Or, in NN words: the focus is on final dimensions that transform the input dimensions sufficiently to approximate the unknown decision surface.
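As a minimal sketch of this view (the architecture, weights, and NumPy implementation below are assumptions for illustration, not taken from the text), the hidden layer of a small MLP computes the new functional dimensions and the output layer combines them into a classification:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W1, b1, W2, b2):
    # Hidden layer: each neuron computes a new functional dimension
    # h_j = sigmoid(w_j . x + b_j) of the given input variables.
    h = sigmoid(W1 @ x + b1)
    # Output layer: combines the new dimensions to approximate the
    # unknown decision surface.
    return sigmoid(W2 @ h + b2)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                         # three input variables
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # four hidden dimensions
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # one classification output
print(mlp_forward(x, W1, b1, W2, b2))

With random weights the output is of course meaningless; training adjusts W1 so that the hidden dimensions become the ones correlated with the desired classification.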

Other NN methods include Radial Basis Functions (RBF) for similarity-based classification, Adaptive Resonance Theory (ART) for clustering, Kohonen networks (self-organizing maps), Hopfield networks, and many more. They are all based on the idea that neurons create new functional dimensions whose outputs are then used by other neurons. See [43,48,49] for more details and references on NNs.
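To make the RBF case concrete (the Gaussian form, the prototype centers, and the width below are assumed for illustration), each RBF unit's activation measures the similarity of the input to a stored prototype, so the hidden layer yields one similarity dimension per prototype:

import numpy as np

def rbf_layer(x, centers, width):
    # Squared distance of the input to each stored prototype.
    d2 = np.sum((centers - x) ** 2, axis=1)
    # Gaussian similarity: close to 1 near a prototype, near 0 far away.
    return np.exp(-d2 / (2.0 * width ** 2))

centers = np.array([[0.0, 0.0], [1.0, 1.0]])   # assumed prototype vectors
x = np.array([0.9, 1.1])
print(rbf_layer(x, centers, width=0.5))        # strongest for the second prototype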

There are also other techniques for improving the use of neural networks, increasing their adaptability while keeping their complexity (number of neurons) low. One of them is called ``Boosting''. This meta-learning technique weights the training data. In the beginning we use equal weights and obtain one hypothesis (NN structure) by training. Then the weights of training samples that are classified wrongly are increased, and those of correctly classified samples are decreased. A second NN is trained, and a linear combination of their results (voting) is used for classification. The weights of the training set are adjusted again, and we continue creating further NN hypotheses; a sketch of this loop follows below.
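The following is a rough sketch of that boosting loop in the AdaBoost style. One assumption for brevity: a simple decision stump stands in for each trained NN hypothesis, so the example stays short and self-contained; the reweighting and voting scheme is the one described above.

import numpy as np

def train_stump(X, y, w):
    # Pick the (feature, threshold, sign) with minimal weighted error.
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] > t, s, -s)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, t, s)
    err, j, t, s = best
    return err, lambda Z: np.where(Z[:, j] > t, s, -s)

def boost(X, y, rounds=5):
    w = np.full(len(y), 1.0 / len(y))      # start with equal weights
    hyps, alphas = [], []
    for _ in range(rounds):
        err, h = train_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        # Increase weights of wrongly classified samples, decrease the rest.
        w *= np.exp(-alpha * y * h(X))
        w /= w.sum()
        hyps.append(h)
        alphas.append(alpha)
    # Final classifier: weighted vote (linear combination) of all hypotheses.
    return lambda Z: np.sign(sum(a * h(Z) for a, h in zip(alphas, hyps)))

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
clf = boost(X, y)
print(clf(X))                              # expected: [-1 -1  1  1]

Any trainable classifier, in particular an NN, could replace train_stump; only the weighted training step changes, while the reweighting loop and the final vote stay the same.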

