Towards multimodal neural robot learning

Stefan Wermter, Cornelius Weber, Mark Elshaw, Christo Panchev, Harry Erwin, Friedemann Pulvermüller

Research output: Contribution to journal › Article

33 Citations (Scopus)

Abstract

Learning by multimodal observation of vision and language offers a potentially powerful paradigm for robot learning. Recent experiments have shown that ‘mirror’ neurons are activated when an action is being performed, perceived, or verbally referred to. Different input modalities are processed by distributed cortical neuron ensembles for leg, arm and head actions. In this overview paper we consider this evidence from mirror neurons by integrating motor, vision and language representations in a learning robot.
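As a loose illustration of the multimodal-binding idea the abstract describes (not the model presented in the paper itself), the sketch below associates motor, vision and language vectors in one shared layer with a simple Hebbian outer-product rule, so that the language input alone can reactivate the full multimodal pattern. All dimensions, projection matrices and the learning rule are invented for this example.

```python
import numpy as np

# Minimal sketch (not the paper's model): motor, vision and language
# features converge onto one shared layer, loosely echoing
# mirror-neuron-style multimodal integration.
rng = np.random.default_rng(0)
d_motor, d_vision, d_lang, d_shared = 8, 16, 12, 20

# One random projection per modality into the shared ensemble
# (illustrative stand-ins for learned sensory/motor mappings).
W_in = {m: rng.normal(scale=0.1, size=(d_shared, d))
        for m, d in [("motor", d_motor), ("vision", d_vision), ("language", d_lang)]}

# Hebbian auto-association within the shared layer.
W_assoc = np.zeros((d_shared, d_shared))

def shared_activity(inputs):
    """Sum the projected modality inputs and squash into (0, 1)."""
    h = sum(W_in[m] @ x for m, x in inputs.items())
    return 1.0 / (1.0 + np.exp(-h))

def hebbian_step(inputs, lr=0.05):
    """Strengthen co-activation between shared units (outer-product rule)."""
    global W_assoc
    h = shared_activity(inputs)
    W_assoc += lr * np.outer(h, h)
    return h

# "Observe" the same action through all three modalities a few times...
action = {"motor": rng.normal(size=d_motor),
          "vision": rng.normal(size=d_vision),
          "language": rng.normal(size=d_lang)}
for _ in range(20):
    hebbian_step(action)

# ...then present the language input alone: the stored associations
# pull the shared layer back toward the full multimodal pattern.
h_full = shared_activity(action)
h_word = shared_activity({"language": action["language"]})
h_recalled = 1.0 / (1.0 + np.exp(-(W_assoc @ h_word)))
print("correlation, full vs. recalled:", np.corrcoef(h_full, h_recalled)[0, 1])
```

Running this prints a correlation close to 1, i.e. the verbal cue alone recalls the joint motor-vision-language representation, which is the behaviour the mirror-neuron evidence motivates.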
Original language: English
Pages (from-to): 171-175
Number of pages: 5
Journal: Robotics and Autonomous Systems
Volume: 47
Issue number: 2-3
Publication status: Published - 30 Jun 2004

