Abstract
Deep Learning (DL) has shown real promise for improving classification performance on emotion recognition problems. In this paper we present experimental results for a deeply trained model for emotion recognition from facial expression images. We explore two Convolutional Neural Network (CNN) architectures that perform automatic feature extraction and representation, followed by fully connected softmax layers that classify images into seven emotions. The first architecture explores the impact of reducing the number of deep learning layers, while the second splits the input images horizontally into two streams based on the eye and mouth positions. The first architecture produces state-of-the-art results with an accuracy of 96.93%, and the second, split-input architecture produces an average accuracy of 86.73%.
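To make the split-input design concrete, the following is a minimal sketch of a two-stream CNN written with Keras. It assumes 48×48 grayscale face images, crops each image horizontally into an upper (eye) half and a lower (mouth) half, processes each half in its own convolutional stream, and merges the streams before fully connected softmax layers over the seven emotions. The layer sizes, input resolution and optimizer here are illustrative assumptions, not the exact configuration reported in the paper.

```python
# Sketch of a split-input, two-stream CNN for seven-class emotion recognition.
# Hyperparameters (input size, filter counts, dense width) are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_EMOTIONS = 7
IMG_H, IMG_W = 48, 48  # assumed grayscale input resolution


def conv_stream(x, name):
    """One convolutional stream: two conv/pool blocks followed by a flatten."""
    x = layers.Conv2D(32, 3, activation="relu", padding="same", name=f"{name}_conv1")(x)
    x = layers.MaxPooling2D(2, name=f"{name}_pool1")(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same", name=f"{name}_conv2")(x)
    x = layers.MaxPooling2D(2, name=f"{name}_pool2")(x)
    return layers.Flatten(name=f"{name}_flatten")(x)


inputs = layers.Input(shape=(IMG_H, IMG_W, 1), name="face")

# Split the face horizontally: the top half roughly covers the eyes,
# the bottom half roughly covers the mouth.
eyes = layers.Cropping2D(((0, IMG_H // 2), (0, 0)), name="eye_region")(inputs)
mouth = layers.Cropping2D(((IMG_H // 2, 0), (0, 0)), name="mouth_region")(inputs)

# Each region gets its own feature-extraction stream; the streams are then
# concatenated and classified by fully connected softmax layers.
merged = layers.Concatenate(name="merge")(
    [conv_stream(eyes, "eyes"), conv_stream(mouth, "mouth")]
)
x = layers.Dense(256, activation="relu", name="fc")(merged)
outputs = layers.Dense(NUM_EMOTIONS, activation="softmax", name="emotion")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```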
Original language | English |
---|---|
Title of host publication | Artificial Neural Networks and Machine Learning – ICANN 2016 |
Editors | Alessandro E.P. Villa, Paolo Masulli, Antonio Javier Pons Rivero |
Place of Publication | Switzerland |
Publisher | Springer Verlag |
Pages | 38-46 |
Volume | 9887 |
ISBN (Print) | 978-3-319-44780-3, 978-3-319-44781-0 |
DOIs | |
Publication status | Published - 13 Aug 2016 |
Event | The 25th International Conference on Artificial Neural Networks - Barcelona, Spain. Duration: 6 Sept 2016 → 9 Sept 2016
Conference
Conference | The 25th International Conference on Artificial Neural Networks |
---|---|
Abbreviated title | ICANN 2016 |
Country/Territory | Spain |
City | Barcelona |
Period | 6/09/16 → 9/09/16 |
Bibliographical note
The full text is not available on the repository.
Keywords
- Deep learning
- Convolutional neural networks
- Emotion recognition
- Empathic robots