A Discussion of Musical Features for Automatic Music Playlist Generation Using Affective Technologies

Darryl Griffiths, Stuart Cunningham, Jonathan Weinel

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding › peer-review

5 Citations (Scopus)


This paper discusses how human emotion could be quantified using contextual and physiological information gathered from a range of sensors, and how these data could then be used to automatically generate music playlists. The work is in progress; this paper details what has been done so far, together with plans for experiments and feature mapping to validate the concept in real-world scenarios.

We begin by discussing existing affective systems that automatically generate playlists based on human emotion. We then consider current work in audio description analysis. A system is proposed that measures human emotion based on contextual and physiological data gathered from a range of sensors. The sensors discussed for capturing such contextual characteristics range from temperature and light to EDA (electrodermal activity) and ECG (electrocardiogram). The concluding section describes the progress achieved so far, which includes defining datasets using a conceptual design, microprocessor electronics, and data acquisition using MATLAB. Lastly, there is a brief discussion of future plans to develop this research.
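The pipeline the abstract outlines, sensor readings mapped to an affective state, which in turn drives track selection, can be sketched as follows. This is an illustrative sketch only, not the authors' system: the sensor names, weights, thresholds, and track feature values are all hypothetical assumptions.

```python
# Illustrative sketch (not the authors' method): map hypothetical sensor
# readings to a coarse valence/arousal estimate, then rank tracks by how
# closely their audio features match that affective state.

def estimate_affect(eda_microsiemens, heart_rate_bpm, light_lux):
    """Map raw sensor readings to (valence, arousal) in [0, 1].
    Weights and normalisation constants here are illustrative assumptions."""
    # Higher EDA and heart rate are commonly associated with higher arousal.
    arousal = min(1.0, 0.5 * (eda_microsiemens / 10.0)
                       + 0.5 * (heart_rate_bpm / 180.0))
    # Ambient light used as a crude contextual proxy for valence.
    valence = min(1.0, light_lux / 1000.0)
    return valence, arousal

def rank_tracks(tracks, valence, arousal):
    """Rank tracks by Euclidean distance to the target affective state."""
    def distance(track):
        return ((track["valence"] - valence) ** 2
                + (track["energy"] - arousal) ** 2) ** 0.5
    return sorted(tracks, key=distance)

# Hypothetical library with hand-assigned audio-feature values.
library = [
    {"title": "Calm Piece",   "valence": 0.6, "energy": 0.2},
    {"title": "Upbeat Piece", "valence": 0.9, "energy": 0.8},
    {"title": "Dark Piece",   "valence": 0.2, "energy": 0.7},
]

v, a = estimate_affect(eda_microsiemens=8.0, heart_rate_bpm=150.0, light_lux=900.0)
playlist = rank_tracks(library, v, a)
print([t["title"] for t in playlist])  # → ['Upbeat Piece', 'Calm Piece', 'Dark Piece']
```

A real system would replace the hand-tuned mapping with a model calibrated per user, and draw track-level valence/energy features from audio description analysis rather than manual annotation.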
Original language: English
Title of host publication: AM '13 Proceedings of the 8th Audio Mostly Conference
Place of Publication: United States
Publisher: Association for Computing Machinery
ISBN (Print): 978-1-4503-2659-9
Publication status: Published - 18 Sept 2013
Externally published: Yes
Event: Audio Mostly Conference - Piteå, Sweden
Duration: 18 Sept 2013 – 20 Sept 2013
Conference number: 8


Conference: Audio Mostly Conference


