Stability of Learning in Classes of Recurrent and Feedforward Networks

Abstract
This paper concerns a class of recurrent neural networks related to Elman networks (simple recurrent networks) and Jordan networks, and a class of feedforward networks architecturally similar to Waibel’s time-delay neural networks (TDNNs). The recurrent nets used herein, unlike standard Elman/Jordan networks, may have more than one state vector. It is known that such multi-state Elman networks have better learning performance on certain tasks than standard Elman networks of similar weight complexity. The learning task involves learning the graphotactic structure of a sample of about 400 English words. Learning performance was tested under regimes in which the state vectors are, or are not, zeroed between words: the former results in a larger minimum total error, but without the large oscillations in total error observed when the state vectors are not periodically zeroed. Learning performance comparisons of the three classes of network favour the feedforward nets.
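
To make the state-zeroing regime concrete, the following minimal sketch implements a single-state Elman-style simple recurrent network whose context (state) vector can be zeroed at word boundaries. All names, layer sizes, and activation choices are illustrative assumptions, not the paper’s actual architectures; in particular, the multi-state variants studied in the paper are not reproduced here.

    import numpy as np

    class ElmanNet:
        """Single-state Elman (simple recurrent) network; forward pass only.
        An illustrative sketch, not the paper's implementation."""

        def __init__(self, n_in, n_hidden, n_out, seed=0):
            rng = np.random.default_rng(seed)
            self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))       # input -> hidden
            self.W_ctx = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # context -> hidden
            self.W_out = rng.normal(0.0, 0.1, (n_out, n_hidden))     # hidden -> output
            self.state = np.zeros(n_hidden)  # the context (state) vector

        def reset_state(self):
            # Zero the state vector, as done at word boundaries
            # under the "zeroed" training regime.
            self.state = np.zeros_like(self.state)

        def step(self, x):
            # Hidden activation depends on the current input and the
            # previous hidden state copied into the context units.
            h = np.tanh(self.W_in @ x + self.W_ctx @ self.state)
            self.state = h
            return self.W_out @ h

    # Usage with toy one-hot letter codes for two hypothetical "words".
    net = ElmanNet(n_in=26, n_hidden=10, n_out=26)
    words = [np.eye(26)[[2, 0, 19]], np.eye(26)[[3, 14, 6]]]  # "cat", "dog"
    for word in words:
        net.reset_state()  # omit this call for the non-zeroed regime
        for letter in word:
            y = net.step(letter)

Under the zeroed regime no context carries over from one word to the next; omitting the reset lets state accumulate across word boundaries, the condition the abstract associates with large oscillations in total error.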
Author(s)
Wilson, William Hulme
Publication Year
1995
Resource Type
Conference Paper
Files
wilson stability.pdf (98.88 KB, Adobe Portable Document Format)