TY - JOUR
T1 - Towards multimodal neural robot learning
AU - Wermter, Stefan
AU - Weber, Cornelius
AU - Elshaw, Mark
AU - Panchev, Christo
AU - Erwin, Harry
AU - Pulvermüller, Friedemann
PY - 2004/6/30
Y1 - 2004/6/30
N2 - Learning by multimodal observation of vision and language offers a potentially powerful paradigm for robot learning. Recent experiments have shown that ‘mirror’ neurons are activated when an action is being performed, perceived, or verbally referred to. Different input modalities are processed by distributed cortical neuron ensembles for leg, arm and head actions. In this overview paper we consider this evidence from mirror neurons by integrating motor, vision and language representations in a learning robot.
AB - Learning by multimodal observation of vision and language offers a potentially powerful paradigm for robot learning. Recent experiments have shown that ‘mirror’ neurons are activated when an action is being performed, perceived, or verbally referred to. Different input modalities are processed by distributed cortical neuron ensembles for leg, arm and head actions. In this overview paper we consider this evidence from mirror neurons by integrating motor, vision and language representations in a learning robot.
U2 - 10.1016/j.robot.2004.03.011
DO - 10.1016/j.robot.2004.03.011
M3 - Article
SN - 0921-8890
VL - 47
SP - 171
EP - 175
JO - Robotics and Autonomous Systems
JF - Robotics and Autonomous Systems
IS - 2-3
ER -