Abstract
This paper describes and evaluates the behavior of preference-based recurrent networks that process text sequences. First, we train a recurrent plausibility network to learn a semantic classification of the Reuters news title corpus. Then we analyze the robustness and incremental learning behavior of these networks in more detail. We demonstrate that the networks use their recurrent connections to support incremental processing. In particular, we compare the performance of the real-title models with that of reversed-title and even random-title models, and we find that the recurrent networks provide good classification results even under these severe conditions. We claim that this robust processing is supported by the previous context held in the recurrent connections and by a meaning-spotting strategy pursued by the network.
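To make the idea of incremental, context-driven title classification concrete, the following is a minimal sketch (not the authors' plausibility network): an Elman-style recurrent layer processes a title one word at a time, carrying previous context in its hidden state and emitting a class distribution after every word. The vocabulary, layer sizes, class count, and the random untrained weights are all illustrative assumptions.

```python
# Hypothetical sketch of incremental title classification with a simple
# recurrent (Elman-style) network; weights are random, not trained.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = {"bank": 0, "profits": 1, "rise": 2, "sharply": 3}  # toy vocabulary
N_IN, N_HID, N_OUT = len(VOCAB), 8, 3  # 3 hypothetical semantic classes

W_in = rng.normal(scale=0.5, size=(N_HID, N_IN))    # input -> hidden
W_rec = rng.normal(scale=0.5, size=(N_HID, N_HID))  # hidden -> hidden (context)
W_out = rng.normal(scale=0.5, size=(N_OUT, N_HID))  # hidden -> output

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_incrementally(title):
    """Return a class distribution after each word of the title."""
    h = np.zeros(N_HID)  # recurrent context starts empty
    outputs = []
    for word in title.split():
        x = np.zeros(N_IN)
        x[VOCAB[word]] = 1.0  # one-hot input for the current word
        # The new hidden state mixes the current word with stored context,
        # so each step's classification reflects the title seen so far.
        h = np.tanh(W_in @ x + W_rec @ h)
        outputs.append(softmax(W_out @ h))
    return outputs

title = "bank profits rise sharply"
for dist in classify_incrementally(title):
    print(np.round(dist, 3))
```

Under this setup, the robustness comparisons in the abstract amount to feeding the same pipeline reversed or shuffled word orders, e.g. `classify_incrementally(" ".join(reversed(title.split())))`, and comparing the resulting classifications.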
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing |
| Subtitle of host publication | New Challenges and Perspectives for the New Millennium |
| Publisher | IEEE |
| Pages | 433-438 |
| Number of pages | 6 |
| Volume | 3 |
| ISBN (Print) | 0-7695-0619-4 |
| Publication status | Published - 2000 |
| Externally published | Yes |
| Event | International Joint Conference on Neural Networks (IJCNN), Como, Italy. Duration: 27 Jul 2000 → 27 Jul 2000 |
Conference

| Conference | International Joint Conference on Neural Networks (IJCNN) |
| --- | --- |
| Abbreviated title | IJCNN 2000 |
| Country/Territory | Italy |
| City | Como |
| Period | 27/07/00 → 27/07/00 |