Meaning spotting and robustness of recurrent networks

Stefan Wermter, Christo Panchev, Garen Arevian

Research output: Chapter in Book/Report/Conference proceeding · Conference proceeding · peer-review

1 Citation (Scopus)


This paper describes and evaluates the behavior of preference-based recurrent networks that process text sequences. First, we train a recurrent plausibility network to learn a semantic classification of the Reuters news title corpus. Then we analyze the robustness and incremental learning behavior of these networks in more detail. We demonstrate that these recurrent networks use their recurrent connections to support incremental processing. In particular, we compare the performance of the real title models with reversed title models and even random title models. We find that the recurrent networks provide good classification results even under these severe conditions. We claim that the networks pursue a meaning spotting strategy, drawing on previous context held in the recurrent connections, and that this underlies their robust processing.
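The setup the abstract describes — incremental, word-by-word classification with recurrent context, probed with reversed and randomly ordered title variants — can be sketched as follows. This is a minimal NumPy illustration of the evaluation protocol, not the paper's plausibility-network implementation; all function names, shapes, and parameters here are assumptions.

```python
import numpy as np

def elman_step(W_xh, W_hh, b_h, x, h):
    # One step of a simple recurrent (Elman-style) layer: the hidden
    # state h carries previous context forward across the title.
    return np.tanh(W_xh @ x + W_hh @ h + b_h)

def classify_sequence(params, seq):
    # Process a title incrementally, one word vector at a time, then
    # read a semantic class off the final hidden state.
    W_xh, W_hh, b_h, W_hy, b_y = params
    h = np.zeros(W_hh.shape[0])
    for x in seq:
        h = elman_step(W_xh, W_hh, b_h, x, h)
    return int(np.argmax(W_hy @ h + b_y))

def make_variants(seq, rng):
    # The robustness probes from the paper: a reversed title and a
    # randomly reordered title built from the same words.
    reversed_seq = seq[::-1]
    random_seq = [seq[i] for i in rng.permutation(len(seq))]
    return reversed_seq, random_seq
```

Comparing `classify_sequence` outputs on the original, reversed, and random variants of each title mirrors the paper's comparison of real, reversed, and random title models.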
Original language: English
Title of host publication: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing
Subtitle of host publication: New Challenges and Perspectives for the New Millennium
Pages: 433-438
Number of pages: 6
ISBN (Print): 0-7695-0619-4
Publication status: Published - 2000
Externally published: Yes
Event: International Joint Conference on Neural Networks (IJCNN) - Como, Italy
Duration: 27 Jul 2000 - 27 Jul 2000


Conference: International Joint Conference on Neural Networks (IJCNN)
Abbreviated title: IJCNN 2000


