Jordan-Elman Network
The Jordan–Elman network is also referred to as the simple recurrent network (SRN) [17]. It is a single-hidden-layer feed-forward network with feedback connections from the outputs of the hidden-layer neurons to the input of the hidden layer [15]. It was originally developed to learn temporal sequences or time-varying patterns. As shown in Figure 2-15, the network contains context units, located in the upper portion of the figure, which replicate the hidden-layer output signals.
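The forward pass described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a reference implementation: the layer sizes, weight scales, and the linear output layer are assumptions, and the context units are modeled simply as a copy of the previous hidden-layer output fed back through a second weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes (assumed, not taken from the text)
n_in, n_hid, n_out = 3, 5, 2

# Weights: input->hidden, context->hidden (feedback), hidden->output
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))
W_ch = rng.normal(scale=0.1, size=(n_hid, n_hid))
W_hy = rng.normal(scale=0.1, size=(n_out, n_hid))


def srn_forward(xs):
    """Run the SRN over a sequence of input vectors.

    The context vector holds the previous hidden-layer output, so the
    hidden layer sees the current input plus a copy of its own past state.
    """
    context = np.zeros(n_hid)          # context units start empty
    outputs = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_ch @ context)
        y = W_hy @ h                   # linear output layer for simplicity
        context = h.copy()             # context units replicate hidden output
        outputs.append(y)
    return outputs
```

Feeding the same input vector at two different time steps produces different outputs, because the context units carry the history of the sequence; this is exactly the discrimination mechanism the feedback provides.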
The context units are introduced to resolve conflicts that arise when similar input patterns must produce dissimilar outputs. The feedback provides a mechanism for discriminating between identical patterns that occur at different times. The context units act as a low-pass filter that forms a weighted average of the most recent past inputs. They are also called “memory units,” since they tend to retain information from past events. The network is trained by adapting all the weights using standard back-propagation. More details on this topology can be found in [17] and [21].
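The low-pass-filter view can be made concrete with a deliberately simplified model: a single scalar context unit with a linear activation and a fixed self-feedback weight `a` (both assumptions for illustration, not the full trained network). Unrolling the recurrence shows the context holds an exponentially weighted average of past inputs, i.e. a first-order low-pass filter.

```python
# Simplified scalar context unit (assumed linear, fixed self-weight `a`):
#   c[t] = a * c[t-1] + (1 - a) * x[t]
# Unrolled: c[t] = (1 - a) * (x[t] + a*x[t-1] + a^2*x[t-2] + ...),
# an exponentially weighted average of recent inputs.
def context_filter(xs, a=0.8):
    c = 0.0
    trace = []
    for x in xs:
        c = a * c + (1.0 - a) * x
        trace.append(c)
    return trace
```

For a constant input the context converges to that input value; for a noisy input it smooths out the fast fluctuations while tracking the slow trend, which is the "weighted average of recent past inputs" behavior described above.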