Abstract

The ability to perceive and interpret human emotions is an essential aspect of daily life. The recent success of deep learning (DL) has made automated emotion recognition possible by classifying affective modalities into a given emotional state. Accordingly, DL has set several state-of-the-art benchmarks on static affective corpora collected in controlled environments. Yet, one of the main limitations of DL-based intelligent systems is their inability to generalize to data with nonuniform conditions. For instance, when dealing with images in a real-life scenario, where extraneous variables such as natural or artificial lighting are subject to constant change, the resulting shifts in the data distribution commonly lead to poor classification performance. These and other constraints, such as the lack of realistic data, changes in facial pose, and high data complexity and dimensionality, increase the difficulty of designing DL models for emotion recognition in unconstrained environments.
This thesis investigates the development of deep artificial neural network learning algorithms for emotion recognition, with specific attention to illumination and facial pose invariance. Moreover, this research examines the development of illumination- and rotation-invariant face detection architectures based on deep reinforcement learning.
The contributions and novelty of this thesis are presented in the form of several deep learning pose- and illumination-invariant architectures that offer state-of-the-art classification performance on data with nonuniform conditions. Furthermore, a novel deep reinforcement learning architecture for illumination- and rotation-invariant face detection is also presented. The originality of this work derives from a variety of novel deep learning paradigms designed for the training of such architectures.
Date of Award: 2018
Supervisor: Vasile Palade