Abstract
This paper concerns a class of recurrent neural networks related to Elman networks (simple recurrent networks) and Jordan networks, and a class of feedforward networks architecturally similar to Waibel’s TDNNs (time-delay neural networks). Unlike standard Elman/Jordan networks, the recurrent networks used here may have more than one state vector. It is known that such multi-state Elman networks outperform standard Elman networks of similar weight complexity on certain tasks. The task used here involves learning the graphotactic structure of a sample of about 400 English words. Learning performance was tested under regimes in which the state vectors either are or are not zeroed between words: zeroing yields a larger minimum total error, but avoids the large oscillations in total error observed when the state vectors are not periodically zeroed. Comparisons of learning performance across the three classes of network favour the feedforward networks.