Other NN methods include Radial Basis Functions (RBF) for similarity-based classification, Adaptive Resonance Theory (ART) for clustering, Kohonen networks (self-organizing maps), Hopfield networks, and many more. They are all based on the idea that neurons create new functional dimensions whose outputs are then used by other neurons. See [43,48,49] for more details and references on NNs.
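The idea of neurons creating new functional dimensions is perhaps most direct in an RBF network: each hidden unit outputs a similarity (a Gaussian of the distance) to a prototype, and a linear output layer operates on those new dimensions. The following is a minimal sketch, not taken from the cited references; the function names, the fixed prototype centres, and the least-squares output fit are illustrative assumptions.

```python
import numpy as np

def rbf_features(X, centres, width=1.0):
    """Hidden layer: each unit emits exp(-||x - c||^2 / (2*width^2)),
    i.e. a similarity to its prototype centre c -- a new dimension."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

def fit_rbf(X, y, centres, width=1.0):
    """Output layer: least-squares fit of linear weights on the
    hidden-layer activations (centres are assumed fixed here)."""
    H = rbf_features(X, centres, width)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

def predict_rbf(X, centres, w, width=1.0):
    """Linear combination of the similarity features."""
    return rbf_features(X, centres, width) @ w
```

With the four XOR points as their own centres, the network separates a problem that no linear unit on the raw inputs could, which is exactly the benefit of the added dimensions.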
There are also other techniques for improving the use of neural networks and increasing their adaptability while keeping their complexity (number of neurons) low. One of them is called ``Boosting''. This meta-learning technique weights the training data. In the beginning we use equal weights and obtain one hypothesis (NN structure) by training. Then the weights of training samples that were classified wrongly are increased, and those of correctly classified samples are decreased. A second NN is trained on the reweighted data, and a linear combination of their results (voting) is used for classification. The weights of the training set are adjusted again, and we continue creating further NN hypotheses in the same way.
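The reweight-and-vote loop described above can be sketched in AdaBoost style. To keep the example self-contained, decision stumps stand in for the small NN hypotheses the text describes; all function names are illustrative, and the weight-update and voting formulas follow the standard AdaBoost scheme rather than any specific method from the cited references.

```python
import numpy as np

def train_stump(X, y, w):
    """Weak hypothesis (stand-in for one trained NN): pick the
    feature/threshold/polarity with the lowest *weighted* error."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best_err, best = err, (j, thr, pol)
    return best, best_err

def stump_predict(stump, X):
    j, thr, pol = stump
    return np.where(pol * (X[:, j] - thr) >= 0, 1, -1)

def adaboost(X, y, rounds=10):
    """Start with equal weights; after each hypothesis, raise the
    weights of misclassified samples and lower the others."""
    w = np.full(len(y), 1.0 / len(y))
    stumps, alphas = [], []
    for _ in range(rounds):
        stump, err = train_stump(X, y, w)
        err = max(err, 1e-10)                    # avoid log/division by zero
        alpha = 0.5 * np.log((1 - err) / err)    # this hypothesis' voting weight
        pred = stump_predict(stump, X)
        w *= np.exp(-alpha * y * pred)           # up-weight the mistakes
        w /= w.sum()                             # renormalize
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    """Classify by the sign of the weighted vote over all hypotheses."""
    votes = sum(a * stump_predict(s, X) for s, a in zip(stumps, alphas))
    return np.sign(votes)
```

Each round therefore focuses the next hypothesis on the samples the previous ones got wrong, and the final classifier is the weighted vote, exactly as in the textual description.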