This paper explores the extraction of features from the speech waveform to perform intelligent emotion recognition. A feature extraction tool (openSMILE) was used to obtain a baseline set of 998 acoustic features from a set of emotional speech recordings captured with a microphone. The initial feature set was then reduced to the most important features so that emotion recognition could be performed using a supervised neural network. Given that the future of virtual education agents lies in making them more interactive, developing agents with the capability to recognise and adapt to the emotional state of humans is an important step.
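The pipeline the abstract describes (a large acoustic feature set, feature reduction, then supervised neural-network classification) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the data here is synthetic, the choice of ANOVA F-test selection, the number of retained features, the network size, and the four emotion classes are all assumptions, since the abstract does not specify them.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the 998 openSMILE acoustic features:
# 200 utterances x 998 features, 4 hypothetical emotion classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 998))
y = rng.integers(0, 4, size=200)

# Reduce the feature set to the most informative features (ANOVA
# F-test is an illustrative choice), then classify with a small
# supervised feed-forward neural network.
model = make_pipeline(
    SelectKBest(f_classif, k=50),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0),
)
model.fit(X, y)
predictions = model.predict(X)
```

In practice the features would come from running openSMILE over the speech recordings, and the reduced feature set would be chosen by whatever selection criterion the study used.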
Number of pages: 7
Journal: Journal of Physics: Conference Series
Publication status: Published - 2013
Event: Sensors & Their Applications XVII 2013 - Dubrovnik, Croatia
Duration: 16 Sep 2013 → 18 Sep 2013
Conference number: 17