An Oscillatory Model for Multimodal Processing of Short Language Instructions

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review


Language skills are implemented predominantly in one hemisphere (usually the left), with the perisylvian areas playing a critical part (the inferior frontal area of Broca and the superior temporal area of Wernicke), but a network of additional brain regions, including some in the non-dominant hemisphere, is necessary for complete language functionality. This paper presents a neural architecture built on spiking neurons that implements a mechanism for associating representations of concepts across different modalities, as well as for integrating sequential language input into a coherent representation/interpretation of an instruction. It follows the paradigm of temporal binding, namely synchronisation and phase locking of distributed representations in nested gamma-theta oscillations. The functionality of the architecture is demonstrated in a set of experiments in which language instructions are given to a real robot.
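The nested gamma-theta binding scheme mentioned in the abstract can be illustrated with a minimal sketch (this is not the paper's spiking-neuron implementation; the frequencies, signal shapes, and variable names below are illustrative assumptions): a slow theta rhythm gates faster gamma bursts, and each item of a short instruction is assigned to one gamma cycle within a theta cycle, so serial order is encoded by gamma phase.

```python
import numpy as np

# Illustrative frequencies (assumed, not from the paper).
theta_hz = 6.0       # slow carrier rhythm
gamma_hz = 42.0      # fast rhythm: 7 gamma cycles per theta cycle
fs = 1000.0          # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)

theta = np.sin(2 * np.pi * theta_hz * t)
gamma = np.sin(2 * np.pi * gamma_hz * t)

# Phase-amplitude coupling: gamma amplitude is gated by theta phase,
# so gamma bursts ride on the rising half of each theta cycle.
gate = (theta + 1.0) / 2.0
nested = theta + 0.5 * gate * gamma

# Assign each word of a hypothetical instruction to successive gamma
# slots inside one theta cycle; the slot offset encodes serial order.
words = ["put", "the", "red", "ball", "left"]
gamma_period = 1.0 / gamma_hz
slots = {w: i * gamma_period for i, w in enumerate(words)}
```

In a spiking implementation, each `slot` would correspond to a sub-population firing phase-locked to its gamma cycle, so that distributed representations active in the same cycle are bound together while representations in different cycles stay segregated.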
Original language: English
Title of host publication: Artificial Neural Networks – ICANN 2007
Editors: J. P. Marques de Sá, Luís A. Alexandre, Wlodzislaw Duch, Danilo P. Mandic
Place of publication: Berlin
Publisher: Springer Verlag
Number of pages: 10
ISBN (Print): 978-3-540-74695-9
Publication status: Published - 2007
Externally published: Yes
Event: 17th International Conference on Artificial Neural Networks - Porto, Portugal
Duration: 9 Sep 2007 – 13 Sep 2007
Conference number: 17

Publication series
Name: Lecture Notes in Computer Science

Conference: 17th International Conference on Artificial Neural Networks
Abbreviated title: ICANN 2007


Keywords:
  • Central Pattern Generator
  • Language Instruction
  • Oscillatory Model
  • Neural Architecture
  • Theta Oscillation


