Supervised Networks
Supervised neural networks are trained to produce desired outputs in response to sample inputs, making them particularly well suited to modeling and controlling dynamic systems, classifying noisy data, and predicting future events.
* Feedforward networks have one-way connections from input to output layers. They are most commonly used for prediction, pattern recognition, and nonlinear function fitting. Supported feedforward networks include feedforward backpropagation, cascade-forward backpropagation, feedforward input-delay backpropagation, linear, and perceptron networks. (A minimal backpropagation sketch appears after this list.)
* Radial basis networks provide an alternative, fast method for designing nonlinear feedforward networks. Supported variations include generalized regression and probabilistic neural networks. (A radial basis sketch appears after this list.)
* Dynamic networks use memory and recurrent feedback connections to recognize spatial and temporal patterns in data. They are commonly used for time-series prediction, nonlinear dynamic system modeling, and control system applications. Prebuilt dynamic networks in the toolbox include focused and distributed time-delay, nonlinear autoregressive with exogenous inputs (NARX), layer-recurrent, Elman, and Hopfield networks. The toolbox also supports dynamic training of custom networks with arbitrary connections. (A tapped-delay sketch appears after this list.)
* Learning vector quantization (LVQ) is a powerful method for classifying patterns that are not linearly separable. LVQ lets you specify class boundaries and the granularity of classification. (An LVQ sketch appears after this list.)
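The sketches below are plain NumPy illustrations of the ideas in this list, not the toolbox's MATLAB functions; the layer sizes, learning rates, and toy datasets are all assumptions made for the examples. First, a small feedforward network trained with backpropagation on a nonlinear function-fitting task (fitting a sine curve):

```python
# A minimal feedforward-backpropagation sketch in plain NumPy.
# Layer sizes, learning rate, and the sine-fitting task are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Training data: fit y = sin(x) on [-pi, pi] (nonlinear function fitting).
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
Y = np.sin(X)

# One hidden layer with tanh units, linear output layer.
W1 = rng.normal(scale=0.5, size=(1, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.5, size=(10, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(2000):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)          # hidden activations
    Y_hat = H @ W2 + b2               # network output
    err = Y_hat - Y                   # prediction error

    # Backward pass: gradients of the squared-error loss
    # (constant factors folded into the learning rate).
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = err @ W2.T * (1 - H**2)      # backpropagate through tanh
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("MSE at last epoch:", float((err**2).mean()))
```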
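Next, a radial basis network in the same spirit: Gaussian basis functions centered on a subset of the inputs, with the output weights found in a single least-squares solve, which is why radial basis design is fast compared with iterative backpropagation. The center spacing and spread value are assumptions for the example:

```python
# A minimal radial-basis-function sketch: Gaussian basis functions centered on
# a subset of the training inputs, output weights from one least-squares solve.
# The center count, spread, and target function are assumptions, not toolbox defaults.
import numpy as np

X = np.linspace(-3, 3, 100)
Y = np.sinc(X)                       # target nonlinear function (assumed example)

centers = X[::10]                    # pick every 10th input as a basis center
spread = 0.8                         # width of the Gaussian basis functions

# Design matrix of radial basis activations plus a bias column.
Phi = np.exp(-((X[:, None] - centers[None, :]) / spread) ** 2)
Phi = np.hstack([Phi, np.ones((len(X), 1))])

# The only "training" is a linear least-squares fit of the output weights.
w, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
Y_hat = Phi @ w
print("fit RMSE:", float(np.sqrt(((Y_hat - Y) ** 2).mean())))
```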
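For the dynamic networks, the core idea behind focused time-delay and NARX networks is a tapped delay line: past samples of a signal become extra inputs so that a static mapping can predict the next value. The sketch below uses a linear readout after the delay line for brevity (the toolbox networks place a nonlinear network there); the lag count and test signal are assumptions:

```python
# A minimal tapped-delay-line sketch: delayed samples of a signal are used as
# inputs for one-step-ahead prediction. The readout here is linear least squares
# for brevity; lag count and test signal are assumptions.
import numpy as np

t = np.arange(400)
y = np.sin(0.1 * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)

lags = 4
# Rows of delayed values [y(t-1), ..., y(t-lags)] predict y(t).
X = np.column_stack([y[lags - k - 1 : len(y) - k - 1] for k in range(lags)])
target = y[lags:]

design = np.hstack([X, np.ones((len(X), 1))])      # add a bias column
w, *_ = np.linalg.lstsq(design, target, rcond=None)
pred = design @ w
print("one-step prediction RMSE:", float(np.sqrt(((pred - target) ** 2).mean())))
```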
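Finally, an LVQ sketch using the basic LVQ1 update rule on a toy problem that is not linearly separable (one class surrounds the other); the prototype count, learning rate, and dataset are assumptions:

```python
# A minimal LVQ1 sketch on a toy two-class problem that is not linearly
# separable. Prototype count, learning rate, and the dataset are assumptions.
import numpy as np

rng = np.random.default_rng(2)

# Toy data: class 0 is an inner cluster, class 1 is a surrounding ring.
inner = rng.normal(scale=0.5, size=(100, 2))
angle = rng.uniform(0, 2 * np.pi, 100)
ring = np.column_stack([3 * np.cos(angle), 3 * np.sin(angle)]) \
       + rng.normal(scale=0.3, size=(100, 2))
X = np.vstack([inner, ring])
labels = np.array([0] * 100 + [1] * 100)

# Initialize a few prototypes per class from random training points.
protos_per_class = 4
idx = np.hstack([rng.choice(np.where(labels == c)[0], protos_per_class, replace=False)
                 for c in (0, 1)])
protos = X[idx].copy()
proto_labels = labels[idx].copy()

lr = 0.05
for epoch in range(30):
    for i in rng.permutation(len(X)):
        d = np.linalg.norm(protos - X[i], axis=1)
        j = d.argmin()                             # winning prototype
        if proto_labels[j] == labels[i]:
            protos[j] += lr * (X[i] - protos[j])   # pull toward the sample
        else:
            protos[j] -= lr * (X[i] - protos[j])   # push away from it

# Classify by nearest prototype and report training accuracy.
nearest = np.argmin(np.linalg.norm(X[:, None] - protos[None, :], axis=2), axis=1)
pred = proto_labels[nearest]
print("training accuracy:", float((pred == labels).mean()))
```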
Unsupervised Networks
Unsupervised neural networks are trained by letting the network continually adjust itself to new inputs. They find relationships within data and can automatically define classification schemes.
The Neural Network Toolbox supports two types of self-organizing, unsupervised networks: competitive layers and self-organizing maps.
Competitive layers recognize and group similar input vectors. By using these groups, the network automatically sorts the inputs into categories.
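As a rough illustration of a competitive layer (again in plain NumPy, with an assumed cluster count, learning rate, and toy dataset): a handful of weight vectors compete for each input, and only the winner moves toward it, so the weights settle on the natural groups in the data:

```python
# A minimal competitive-layer sketch: winner-take-all learning over toy data.
# Unit count, learning rate, and the dataset are assumptions.
import numpy as np

rng = np.random.default_rng(3)

# Toy inputs drawn from three separated blobs.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(60, 2))
               for c in ([0, 0], [3, 0], [0, 3])])

n_units = 3
W = X[rng.choice(len(X), n_units, replace=False)].copy()   # initial weight vectors
lr = 0.1

for epoch in range(20):
    for x in X[rng.permutation(len(X))]:
        winner = np.argmin(np.linalg.norm(W - x, axis=1))   # competition
        W[winner] += lr * (x - W[winner])                    # only the winner learns

# Each input is sorted into a category by its winning unit.
categories = np.argmin(np.linalg.norm(X[:, None] - W[None, :], axis=2), axis=1)
print("inputs per category:", np.bincount(categories))
```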
Self-organizing maps learn to classify input vectors according to similarity. Unlike competitive layers, they also preserve the topology of the input vectors, assigning nearby inputs to nearby categories.
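A one-dimensional self-organizing map sketch (with an assumed map size, decay schedule, and toy data) shows the difference from a plain competitive layer: the winner's grid neighbors are also nudged toward the input, so neighboring units end up representing neighboring regions of input space:

```python
# A minimal self-organizing-map sketch on a one-dimensional chain of units.
# Map size, decay schedules, and the toy data are assumptions.
import numpy as np

rng = np.random.default_rng(4)

X = rng.uniform(-1, 1, size=(500, 2))        # inputs spread over a square
n_units = 10
grid = np.arange(n_units)                    # positions of the units on the chain
W = rng.uniform(-0.1, 0.1, size=(n_units, 2))

n_steps = 3000
for step, x in enumerate(X[rng.integers(0, len(X), n_steps)]):
    # Learning rate and neighborhood radius both decay over time.
    lr = 0.5 * (1 - step / n_steps) + 0.01
    radius = 3.0 * (1 - step / n_steps) + 0.5

    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    # Gaussian neighborhood on the chain: units close to the winner get larger updates.
    h = np.exp(-((grid - winner) ** 2) / (2 * radius ** 2))
    W += lr * h[:, None] * (x - W)

# Distances between the weights of neighboring units stay small (topology preserved).
print(np.round(np.linalg.norm(np.diff(W, axis=0), axis=1), 2))
```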
(source: http://www.mathworks.com/products/neuralnet/descripton3.html)
Tuesday, July 04, 2006