Emotional recognition from the speech signal for a virtual education agent

Andrew Jason Tickle, S Raghu, Mark Elshaw

    Research output: Contribution to journal › Conference article (peer-reviewed)


    Abstract

    This paper explores the extraction of features from the speech wave to perform intelligent emotion recognition. A feature extraction tool (openSMILE) was used to obtain a baseline set of 998 acoustic features from a set of emotional speech recordings captured with a microphone. The initial features were reduced to the most important ones so that recognition of emotions could be performed using a supervised neural network. Given that the future use of virtual education agents lies in making the agents more interactive, developing agents with the capability to recognise and adapt to the emotional state of humans is an important step.
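    The pipeline described in the abstract — a large baseline acoustic feature set reduced to the most informative features, then classified by a supervised neural network — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: synthetic data stands in for the openSMILE features and emotional speech recordings, and scikit-learn stands in for whatever tooling the authors used. The feature count (998) comes from the abstract; the feature selector, network size, and number of emotion classes are assumptions.

    ```python
    # Sketch of the described pipeline: select the most informative features
    # from a 998-dimensional acoustic feature vector, then train a supervised
    # neural network (MLP) to recognise emotion classes. Data is synthetic.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n_samples, n_features = 200, 998          # 998 features, as in the abstract
    X = rng.normal(size=(n_samples, n_features))
    y = rng.integers(0, 4, size=n_samples)    # 4 emotion classes (assumed)
    X[:, :10] += y[:, None]                   # make a few features informative

    model = make_pipeline(
        SelectKBest(f_classif, k=50),         # keep the 50 "most important" features
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    )
    model.fit(X, y)
    print(model.score(X, y))                  # training accuracy on the synthetic data
    ```

    On real data the feature reduction step would operate on openSMILE output per recording, and accuracy would be measured on a held-out test set rather than the training data shown here.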
    Original language: English
    Article number: 012053
    Number of pages: 7
    Journal: Journal of Physics: Conference Series
    Volume: 450
    DOIs
    Publication status: Published - 2013
    Event: Sensors & Their Applications XVII 2013 - Dubrovnik, Croatia
    Duration: 16 Sept 2013 - 18 Sept 2013
    Conference number: 17

    Bibliographical note

    Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
