Introduction to Artificial Neural Network Model

Learn about Artificial Neural Networks (ANNs), including MLP, RBF, and Kohonen networks, their applications, and key differences in this tutorial.

Models of Artificial Neural Networks

There are various Artificial Neural Network models. The main ones include:

1. Multilayer Perceptron (MLP)
It is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. In a feedforward network, information flows in one direction only, from the input layer toward the output layer, with no cycles.

An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Each neuron computes a weighted sum of its inputs, as in linear regression, and then passes the result through a nonlinear activation function.

Multilayer perceptron networks are well suited to discovering complex nonlinear models; they rest on the result that any sufficiently regular function can be approximated by a sum of sigmoids.

MLP utilizes a supervised learning technique called backpropagation for training the network. This requires a known, desired output for each input value to calculate the loss function gradient. MLP is a modification of the standard linear perceptron and can distinguish data that are not linearly separable.
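The description above can be sketched in code. The following is a minimal, illustrative MLP in plain Python: one hidden sigmoid layer trained by backpropagation on XOR, the classic example of data that is not linearly separable. All sizes and hyperparameters (4 hidden units, learning rate 0.5, 5000 epochs) are assumptions chosen for the sketch, not prescribed by the text.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: not linearly separable, so a single linear perceptron cannot learn it.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
Y = [0, 1, 1, 0]

H = 4  # hidden units (hypothetical size for the sketch)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]  # input -> hidden
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]                      # hidden -> output
b2 = 0.0

def forward(x):
    # Each neuron: weighted sum of inputs, then a sigmoid activation.
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    o = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, o

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

lr = 0.5
loss_before = mse()
for _ in range(5000):
    for x, y in zip(X, Y):
        h, o = forward(x)
        # Backpropagation: propagate the error gradient through the sigmoids.
        d_o = (o - y) * o * (1 - o)
        for j in range(H):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])  # uses w2[j] before updating it
            w2[j] -= lr * d_o * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_o
loss_after = mse()
print(loss_before, loss_after)
```

The known desired output for each input (here, the XOR labels) is what makes the loss gradient computable, which is why backpropagation is a supervised technique.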

2. Radial Basis Function Network (RBF)
A Radial Basis Function (RBF) network is a supervised learning network like the MLP, but it works with exactly one hidden layer. For each observation, the value of each hidden unit is computed from the distance between the observation and the center of that unit.

Unlike the weights of a multilayer perceptron, the centers of the hidden layer in an RBF network are not adjusted during each iteration of learning. The hidden neurons are virtually independent of each other, resulting in faster convergence during the learning phase.

In an RBF network, the response surface of each hidden unit is a bell-shaped Gaussian function, peaking when the input coincides with the unit's center.

Learning in an RBF network involves determining the number of units in the hidden layer, along with their centers, radii, and output coefficients.
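A forward pass through such a network can be sketched as follows. The centers, radii (sigmas), and output coefficients below are hypothetical hand-picked values for illustration; in practice, learning would determine them as described above.

```python
import math

def gaussian(dist2, sigma):
    # Bell-shaped response: largest when the input sits at the unit's center.
    return math.exp(-dist2 / (2 * sigma ** 2))

def rbf_forward(x, centers, sigmas, weights, bias):
    # Hidden layer: one Gaussian unit per center, driven by squared
    # Euclidean distance between the observation and the center.
    h = []
    for c, s in zip(centers, sigmas):
        dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        h.append(gaussian(dist2, s))
    # Output: a linear combination of the hidden activations.
    return sum(w * hj for w, hj in zip(weights, h)) + bias

# Hypothetical parameters chosen for the sketch.
centers = [[0.0, 0.0], [1.0, 1.0]]
sigmas = [0.5, 0.5]
weights = [1.0, -1.0]
bias = 0.0

print(rbf_forward([0.0, 0.0], centers, sigmas, weights, bias))
```

Because each hidden unit depends only on its own center and radius, the hidden neurons are largely independent of one another, which is what speeds up the learning phase relative to an MLP.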

3. Kohonen Network
A Kohonen network, or self-organizing map (SOM), is a type of ANN trained using unsupervised learning. It produces a low-dimensional discretized representation of the input space, known as a map.

The Kohonen network is a self-organizing, unsupervised learning network that learns the structure of input data to distinguish clusters within it. It consists of:

Input Layer: Each unit represents an input variable.

Output Layer: Typically arranged in a grid (l × m), with each unit connected to the input layer.

In the Kohonen network, only one output unit (the 'winner') is activated for each input. This unit and its neighbors adjust their weights to reflect the similarity of the input data.
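One training step of this winner-take-all update can be sketched as below. The grid size, learning rate, and neighborhood radius are assumptions for the sketch; the neighborhood here is measured on the output grid, not in input space.

```python
import random

random.seed(1)

GRID = 3  # a 3 x 3 output grid (l = m = 3), hypothetical size for the sketch
DIM = 2   # number of input variables

# One weight vector per output unit, indexed by its grid position.
weights = {(r, c): [random.random() for _ in range(DIM)]
           for r in range(GRID) for c in range(GRID)}

def winner(x):
    # The 'winner' is the output unit whose weight vector is closest
    # (in squared Euclidean distance) to the input.
    return min(weights, key=lambda u: sum((w - xi) ** 2
                                          for w, xi in zip(weights[u], x)))

def train_step(x, lr=0.5, radius=1):
    bmu = winner(x)
    for u, w in weights.items():
        # Grid distance decides which units count as neighbors of the winner.
        grid_dist = abs(u[0] - bmu[0]) + abs(u[1] - bmu[1])
        if grid_dist <= radius:
            # The winner and its neighbors move toward the input.
            for i in range(DIM):
                w[i] += lr * (x[i] - w[i])

x = [0.9, 0.1]
bmu = winner(x)
d_before = sum((w - xi) ** 2 for w, xi in zip(weights[bmu], x))
train_step(x)
d_after = sum((w - xi) ** 2 for w, xi in zip(weights[bmu], x))
print(d_before, d_after)
```

Repeating this step over many inputs, usually while shrinking the learning rate and radius, is what organizes the grid into a low-dimensional map of the input space.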

Comparison of MLP and RBF Networks

Both MLP and RBF networks are feedforward networks, but they differ in the way hidden units combine values. MLPs use inner products, while RBFs use Euclidean distance.

Key Differences:

| Network              | MLP                                 | RBF                                  |
|----------------------|-------------------------------------|--------------------------------------|
| Hidden layers        | ≥ 1                                 | exactly 1                            |
| Combination function | Scalar (inner) product              | Euclidean distance                   |
| Transfer function    | Logistic: s(X) = 1 / (1 + exp(-X))  | Gaussian: Γ(X) = exp(-X² / (2σ²))    |
| Speed                | Faster in model application mode    | Faster in model learning mode        |
| Advantage            | Better generalization               | Less risk of non-optimal convergence |
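The contrast in combination and transfer functions can be shown side by side. This sketch computes one hidden-unit activation each way, using illustrative weights and a center chosen for the example.

```python
import math

def mlp_unit(x, w, b):
    # MLP hidden unit: scalar (inner) product, then logistic transfer.
    net = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-net))

def rbf_unit(x, center, sigma):
    # RBF hidden unit: squared Euclidean distance, then Gaussian transfer.
    dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist2 / (2 * sigma ** 2))

x = [1.0, 2.0]
print(mlp_unit(x, [0.5, -0.25], 0.0))  # depends on the direction of x
print(rbf_unit(x, [1.0, 2.0], 1.0))    # maximal when x equals the center
```

The MLP unit responds to which side of a hyperplane the input falls on, while the RBF unit responds to how close the input is to a point, which is the geometric root of the differences in the table.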

Conclusion

In conclusion, ANN models are expected to continue playing a crucial role in modern computational intelligence. Incorporating ANN-like models into probabilistic modeling can yield techniques that combine explanatory and data-driven approaches, and that retain richer modeling capability by working with full distributions rather than simple point estimates.

