Robust Text Classification Using a Hysteresis-Driven Extended SRN

Garen Arevian, Christo Panchev

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding

1 Citation (Scopus)

Abstract

Recurrent Neural Network (RNN) models have been shown to perform well on artificial grammars for sequential classification tasks involving long-term time dependencies. However, RNNs have rarely been applied to real-world text classification tasks. This paper presents results on the capabilities of extended two-context-layer SRN models (xRNN) applied to the classification of the Reuters-21578 corpus. The results show that high levels of noise introduced into sequences of title words, where noise is defined as the unimportant stopwords found in natural-language text, are handled very robustly by the classifiers, which maintain consistent levels of performance. Comparisons are made with SRN and MLP models, as well as with other existing classifiers for the text classification task.
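The paper itself does not include code, but the architecture described in the abstract can be illustrated with a minimal sketch: an Elman-style SRN extended with two context layers, where each context layer retains a decaying (hysteresis-like) copy of the hidden state. All layer sizes, decay constants, and update rules below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

class TwoContextSRN:
    """Sketch of an extended SRN with two hysteresis-driven context layers.

    Hypothetical architecture loosely following the abstract: a fast and a
    slow context layer blend their previous state with the new hidden state,
    letting the network retain information over longer word sequences.
    """

    def __init__(self, n_in, n_hid, n_out, decay1=0.5, decay2=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hid, n_in))
        self.W_c1 = rng.normal(0, 0.1, (n_hid, n_hid))
        self.W_c2 = rng.normal(0, 0.1, (n_hid, n_hid))
        self.W_out = rng.normal(0, 0.1, (n_out, n_hid))
        self.decay1, self.decay2 = decay1, decay2  # hysteresis time constants
        self.n_hid = n_hid

    def forward(self, seq):
        # seq: list of one-hot input vectors, one per word in the title
        c1 = np.zeros(self.n_hid)  # fast context layer
        c2 = np.zeros(self.n_hid)  # slow context layer
        for x in seq:
            h = np.tanh(self.W_in @ x + self.W_c1 @ c1 + self.W_c2 @ c2)
            # Hysteresis update: blend previous context with new hidden state.
            c1 = self.decay1 * c1 + (1 - self.decay1) * h
            c2 = self.decay2 * c2 + (1 - self.decay2) * h
        z = self.W_out @ h
        return np.exp(z) / np.exp(z).sum()  # softmax over topic classes
```

Training (e.g. by backpropagation through time) is omitted; the sketch only shows the forward pass and the dual-context update that gives the model its robustness to interspersed stopword "noise".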
Original language: English
Title of host publication: Proceedings of the 17th International Conference on Artificial Neural Networks (ICANN)
Place of Publication: Berlin
Publisher: Springer Verlag
Pages: 425–434
Number of pages: 10
ISBN (Print): 978-3-540-74693-5
DOIs
Publication status: Published - 2007
Externally published: Yes
Event: 17th International Conference on Artificial Neural Networks - Porto, Portugal
Duration: 9 Sep 2007 – 13 Sep 2007
Conference number: 17

Conference

Conference: 17th International Conference on Artificial Neural Networks
Abbreviated title: ICANN 2007
Country: Portugal
City: Porto
Period: 9/09/07 – 13/09/07

