Abstract
This paper explores combining an actor and a critic in a single architecture trained with a mixture of updates. It describes a model for robot navigation whose structure resembles an actor-critic reinforcement learning architecture: the actor is set up as one layer followed by a second layer that deduces the value function, so that a critic-like outcome is combined with the actor in one network. The model can therefore serve as the basis for a truly deep reinforcement learning architecture to be explored in future work. More importantly, this work examines the effect of mixing conjugate gradient updates with gradient updates in this architecture: the reward signal is back-propagated from the critic to the actor through a conjugate gradient eligibility trace for the second layer combined with a gradient eligibility trace for the first layer. We show that this mixture of updates seems to work well for this model. The feature layer is deeply trained by applying a simple PCA to the full set of image histograms acquired during the first running episode, and the model is also able to adapt autonomously to a reduced feature dimension. Initial experimental results on a real robot show that the agent achieves a good success rate in reaching a goal location.
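The abstract mentions training the feature layer by applying a simple PCA to the image histograms gathered during the first episode. The sketch below illustrates one plausible reading of that step; the function names, histogram binning, and reduced dimension `k` are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def image_histograms(images, bins=32):
    """Normalized grayscale intensity histogram per image (one row each)."""
    return np.stack([
        np.histogram(img, bins=bins, range=(0, 256), density=True)[0]
        for img in images
    ])

def pca_features(hists, k=8):
    """Project histograms onto the top-k principal components."""
    mean = hists.mean(axis=0)
    centered = hists - mean
    # SVD of the centered data matrix; rows of vt are principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                      # shape (k, bins)
    return centered @ components.T, components, mean

# Example: 20 synthetic 16x16 "images" reduced to 8-dimensional features.
rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(20, 16, 16))
feats, comps, mean = pca_features(image_histograms(imgs), k=8)
```

A new image would then be mapped to features by histogramming it, subtracting `mean`, and multiplying by `comps.T`, which is consistent with the paper's idea of autonomously adapting to a reduced feature dimension.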
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | Neural Information Processing |
| Editors | Sabri Arik, Tingwen Huang, Weng Kin Lai, Qingshan Liu |
| Place of publication | Switzerland |
| Publisher | Springer Verlag |
| Pages | 1-10 |
| Volume | 9492 |
| ISBN (Print) | 978-3-319-26560-5 |
| ISBN (Electronic) | 978-3-319-26561-2 |
| DOIs | |
| Publication status | Published - 18 Nov 2015 |
Bibliographical note
There is no full text available.
Keywords
- Reinforcement learning
- Deep learning
- Actor-critic
- Neural networks
- Robot navigation