A Spiking Neural Network Model of Multi-modal Language Processing of Robot Instructions

Research output: Chapter in Book/Report/Conference proceeding › Chapter

5 Citations (Scopus)

Abstract

Presented is a spiking neural network architecture for human language instruction recognition and robot control. The network is based on a model of a leaky integrate-and-fire (LIAF) spiking neurone with Active Dendrites and Dynamic Synapses (ADDS) [1,2,3]. The architecture contains several main modules associating information across different modalities: an auditory system recognising single spoken words, a visual system recognising objects of different colour and shape, a system for navigation and motor control, and a working memory. The main focus of this chapter is the working memory module, whose function is sequential processing of words from a language instruction, task and goal representation, and cross-modal association of objects and actions. We test the model with a robot whose goal is to recognise and execute language instructions. The work demonstrates the potential of spiking neurons for processing spatio-temporal patterns, and the experiments present spiking neural networks as a paradigm that can be applied to model sequence detectors at the word level for robot instructions.
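The neuron model underlying the architecture is a leaky integrate-and-fire unit: the membrane potential leaks toward rest, integrates input current, and emits a spike on crossing a threshold. A minimal sketch of these basic dynamics (not the chapter's full ADDS model, which adds active dendrites and dynamic synapses; all parameter values here are hypothetical) could look like:

```python
def simulate_lif(input_current, dt=1.0, tau_m=20.0,
                 v_rest=0.0, v_reset=0.0, v_thresh=1.0):
    """Euler-integrate a leaky integrate-and-fire neuron.

    input_current: sequence of injected current samples, one per time step.
    Returns the list of spike times (in units of dt).
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting potential, plus the injected current.
        v += dt * (-(v - v_rest) / tau_m + i_in)
        if v >= v_thresh:              # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = v_reset                # reset the membrane after firing
    return spike_times

# A constant supra-threshold current yields a regular spike train;
# zero input produces no spikes.
spikes = simulate_lif([0.1] * 100)
```

Because each spike depends on the recent input history through the decaying membrane potential, chains of such units can act as detectors for specific spatio-temporal input patterns, which is the property the chapter exploits for word-sequence recognition.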
Original language: English
Title of host publication: Biomimetic Neural Learning for Intelligent Robots
Editors: Stefan Wermter, Gunter Palm, Mark Elshaw
Place of publication: Berlin
Publisher: Springer
Pages: 182-210
Number of pages: 29
ISBN (Electronic): 978-3-540-31896-5
ISBN (Print): 978-3-540-27440-7
DOIs
Publication status: Published - 2005
Externally published: Yes

Publication series

Name: Lecture Notes in Computer Science
Volume: 3575

