Recurrent Neural Networks

© Fraunhofer CSP
Schematic representation of a folded recurrent neuron with direct feedback (left) and an unfolded recurrent neuron with direct feedback as a function of time (right). Source: http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/.

At Fraunhofer CSP we develop recurrent neural networks to solve time series problems, such as speech recognition or the aging of products. For example, recurrent networks can be used to predict the yield of solar parks or the fatigue of materials under load. The basis of these recurrent networks is a direct or indirect feedback loop that feeds a neuron's output back as an input at the next time step. This feedback can be trained in a network consisting of several neurons and so-called hidden layers, allowing predictions about the future to be drawn from past observations. Recurrent neural networks are therefore particularly well suited to time series problems.
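
As an illustration, the following minimal sketch trains a small recurrent network on a univariate time series, for example hourly solar-park yield. It is a hypothetical Keras example: the window length of 24 steps, the layer sizes, and the random placeholder data are assumptions for demonstration and do not reflect an actual Fraunhofer CSP model or data set.

    import numpy as np
    import tensorflow as tf

    # Placeholder data: 1000 sliding windows of 24 past values,
    # each paired with the next value to be forecast.
    X = np.random.rand(1000, 24, 1).astype("float32")
    y = np.random.rand(1000, 1).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.layers.SimpleRNN(16, input_shape=(24, 1)),  # recurrent hidden layer with feedback
        tf.keras.layers.Dense(1),                            # one-step-ahead forecast
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, verbose=0)

With real measurements in place of the random arrays, the trained model predicts the next value of the series from the preceding 24 observations.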

The figure above shows the simplest form of a recurrent neural network: a single recurrent neuron with a direct feedback loop. The neuron has an input (x) and an output (o) as well as a feedback connection with weight W. If this neuron is unfolded as a function of time, its state at an arbitrary time t is computed from the current input x_t and the state of the previous time step s_t-1, weighted by W; an arbitrary activation function f applied to this weighted sum then yields the output, o_t = f(x_t + W * s_t-1). Following this approach, so-called autoregressive neural networks can be trained on time series problems with arbitrary numbers of input and output factors.
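
To make the unfolding concrete, the following sketch computes such a single neuron step by step in plain Python/NumPy. The feedback weight W = 0.8 and the tanh activation are illustrative assumptions; as noted above, any activation function could be used in place of tanh.

    import numpy as np

    def unrolled_neuron(x, W=0.8):
        """Forward pass of one recurrent neuron unfolded over len(x) time steps.

        At each step the previous state, weighted by the feedback weight W,
        is added to the current input and passed through an activation
        function (here tanh). W = 0.8 is an illustrative value, not trained.
        """
        s = 0.0                       # initial state before the first input
        outputs = []
        for x_t in x:                 # unfold the feedback loop step by step
            s = np.tanh(x_t + W * s)  # o_t = f(x_t + W * s_t-1)
            outputs.append(s)
        return outputs

    print(unrolled_neuron([0.1, 0.4, 0.9]))

Each output depends not only on the current input but, through the recursively carried state, on the entire history of earlier inputs, which is exactly what makes the feedback loop useful for time series.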