Neural Networks

Wolfgang Härdle and Heiko Lehmann
September 20, 2000

A neural network consists of many simple processing units that are connected by communication channels. Much of the inspiration for the field of neural networks came from the desire to build artificial systems capable of sophisticated, perhaps intelligent, computations similar to those of the human brain.

Neural networks usually learn from examples and exhibit some capability for generalization beyond the data used for training. They are able to approximate highly nonlinear functional relationships in data sets.

Figure 1: A neuron within a neural network.


The smallest unit of a neural network is a single neuron, as shown in Figure 1. It takes a set of individual inputs $x=(x_1,\ldots,x_I)$, each of which is assigned a connection weight; the weights $w=(w_1,\ldots,w_I)$ are determined by the learning algorithm. The neuron then aggregates the weighted inputs into a single value

\begin{displaymath}
u = w_0 + \sum_{i=1}^{I} w_i x_i\,.
\end{displaymath}

An activation function $F(\bullet)$ is then applied to the aggregated weighted value to produce an individual output

\begin{displaymath}
F(u)
\end{displaymath}

for the specific neuron. A typical activation function is the logistic distribution function

\begin{displaymath}F(u) = \frac{1}{1+\exp(-u)}.\end{displaymath}
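
As an illustration, the following minimal sketch (in Python with NumPy; the input and weight values are made up for the example and are not from the original text) computes the aggregated value $u$ of a single neuron and applies the logistic activation:

```python
import numpy as np

def logistic(u):
    """Logistic activation function F(u) = 1 / (1 + exp(-u))."""
    return 1.0 / (1.0 + np.exp(-u))

def neuron_output(x, w, w0):
    """Single neuron: weighted sum of the inputs plus bias w_0, then activation."""
    u = w0 + np.dot(w, x)   # u = w_0 + sum_i w_i * x_i
    return logistic(u)      # F(u)

# Example with I = 3 inputs (illustrative values only)
x = np.array([0.5, -1.2, 2.0])
w = np.array([0.8, 0.1, -0.4])
print(neuron_output(x, w, w0=0.3))
```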

The aim of a neural network is to explain the outputs $y=(y_1,\ldots,y_Q)$ by the input variables $x=(x_1,\ldots,x_I)$. More exactly, we want to find functions $f_k(\bullet)$ such that $f_k(x)$ explains the output variable $y_k$.

A neural network with one hidden layer (single hidden layer network) consists of neurons of three basic types:

- input neurons, which pass the input variables $x_1,\ldots,x_I$ into the network,
- hidden neurons, which aggregate and transform the weighted inputs as described above, and
- output neurons, which combine the hidden-layer values into the network outputs $y_1,\ldots,y_Q$.

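To make this architecture concrete, here is a minimal forward-pass sketch (Python with NumPy; the layer sizes, the random weight values, and the choice of a linear output layer are assumptions for illustration, not part of the original text) that computes the outputs $f_k(x)$ from the inputs:

```python
import numpy as np

def logistic(u):
    """Logistic activation function applied elementwise."""
    return 1.0 / (1.0 + np.exp(-u))

def single_hidden_layer(x, W_hidden, b_hidden, W_out, b_out):
    """Forward pass of a network with one hidden layer.

    x        : input vector of length I
    W_hidden : (H, I) weight matrix of the hidden neurons
    b_hidden : (H,)   bias terms w_0 of the hidden neurons
    W_out    : (Q, H) weight matrix of the output neurons
    b_out    : (Q,)   bias terms of the output neurons
    """
    hidden = logistic(W_hidden @ x + b_hidden)  # hidden neuron outputs F(u)
    return W_out @ hidden + b_out               # outputs y_1, ..., y_Q (linear output layer assumed)

# Illustrative dimensions: I = 3 inputs, H = 4 hidden neurons, Q = 2 outputs
rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 2.0])
y = single_hidden_layer(x,
                        rng.normal(size=(4, 3)), rng.normal(size=4),
                        rng.normal(size=(2, 4)), rng.normal(size=2))
print(y)
```

In practice the weights shown here at random would be fitted by the learning algorithm so that the network outputs approximate the observed $y_k$.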