Symbolic state transducers and recurrent neural preference machines for text mining

Garen Arevian, Stefan Wermter, Christo Panchev

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)


This paper focuses on symbolic transducers and recurrent neural preference machines to support the task of mining and classifying textual information. These encoding symbolic transducers and learning neural preference machines can be seen as independent agents, each tackling the same task in a different manner. Systems combining such machines can potentially be more robust, as the strengths and weaknesses of the different approaches yield complementary knowledge, with each machine modelling the same information content via a different paradigm. An experimental analysis of the performance of these symbolic transducer and neural preference machines is presented. It is demonstrated that each approach can be successfully used for information mining and news classification on the Reuters news corpus. Symbolic transducer machines allow relevant knowledge to be encoded manually and quickly, with no training required, while neural preference machines can achieve better performance through additional training.
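To illustrate the symbolic side of the contrast drawn above, the following is a minimal sketch of a hand-encoded, training-free transducer-style classifier. The category names and cue words here are hypothetical placeholders, not the actual Reuters categories or rules used in the paper; a real symbolic state transducer would be considerably richer.

```python
# Hypothetical hand-encoded rules: each category is triggered by cue words.
# These labels and cues are illustrative only, not the paper's actual rules.
RULES = {
    "earn": {"profit", "dividend", "earnings"},
    "crude": {"oil", "barrel", "opec"},
    "grain": {"wheat", "corn", "harvest"},
}

def classify(text: str) -> list[str]:
    """Consume tokens left to right, accumulating a preference per category.

    This mimics a machine whose output tape records category preferences
    as each input symbol (word) is read; no training is involved, only
    the manually encoded rules.
    """
    scores = {label: 0 for label in RULES}
    for token in text.lower().split():
        for label, cues in RULES.items():
            if token.strip(".,;") in cues:
                scores[label] += 1
    best = max(scores.values())
    # Return all categories sharing the highest preference (multi-label).
    return [label for label, s in scores.items() if s == best and best > 0]

print(classify("OPEC raised oil output; prices per barrel fell."))  # ['crude']
```

A trained neural preference machine would replace the fixed rule table with learned weights, which is where the additional training data pays off.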
Original language: English
Pages (from-to): 237-258
Number of pages: 22
Journal: International Journal of Approximate Reasoning
Issue number: 2-3
Early online date: 4 Dec 2002
Publication status: Published - Feb 2003
Externally published: Yes


Keywords
  • Finite state automata
  • Symbolic transducers
  • Recurrent neural networks
  • Preference Moore machines
  • Hybrid systems
  • Text classification


