Please use this identifier to cite or link to this item: https://hdl.handle.net/10316/94095
DC Field | Value | Language
dc.contributor.author | Panda, Renato Eduardo Silva | -
dc.contributor.author | Malheiro, Ricardo | -
dc.contributor.author | Rocha, Bruno | -
dc.contributor.author | Oliveira, António Pedro | -
dc.contributor.author | Paiva, Rui Pedro | -
dc.date.accessioned | 2021-04-02T19:16:54Z | -
dc.date.available | 2021-04-02T19:16:54Z | -
dc.date.issued | 2013 | -
dc.identifier.uri | https://hdl.handle.net/10316/94095 | -
dc.description.abstract | We propose a multi-modal approach to the music emotion recognition (MER) problem, combining information from distinct sources, namely audio, MIDI and lyrics. We introduce a methodology for the automatic creation of a multi-modal music emotion dataset drawing on the AllMusic database, based on the emotion tags used in the MIREX Mood Classification Task. Then, MIDI files and lyrics corresponding to a subset of the obtained audio samples were gathered. The dataset was organized into the same 5 emotion clusters defined in MIREX. From the audio data, 177 standard features and 98 melodic features were extracted. As for MIDI, 320 features were collected. Finally, 26 lyrical features were extracted. We experimented with several supervised learning and feature selection strategies to evaluate the proposed multi-modal approach. Employing only standard audio features, the best attained performance was 44.3% (F-measure). With the multi-modal approach, results improved to 61.1%, using only 19 multi-modal features. Melodic audio features were particularly important to this improvement. | pt
dc.language.iso | eng | pt
dc.relation | info:eu-repo/grantAgreement/FCT/5876-PPCDTI/102185/PT/MOODetector - A System for Mood-based Classification and Retrieval of Audio Music | pt
dc.rights | openAccess | pt
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | pt
dc.subject | music emotion recognition | pt
dc.subject | machine learning | pt
dc.subject | multi-modal analysis | pt
dc.title | Multi-Modal Music Emotion Recognition: A New Dataset, Methodology and Comparative Analysis | pt
dc.type | conferenceObject | pt
degois.publication.firstPage | 570 | pt
degois.publication.lastPage | 582 | pt
degois.publication.location | Marseille, France | pt
degois.publication.title | 10th International Symposium on Computer Music Multidisciplinary Research (CMMR 2013) | pt
dc.peerreviewed | yes | pt
dc.date.embargo | 2013-01-01 | *
uc.date.periodoEmbargo | 0 | pt
item.languageiso639-1 | en | -
item.fulltext | With full text | -
item.grantfulltext | open | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.openairetype | conferenceObject | -
item.cerifentitytype | Publications | -
crisitem.project.grantno | info:eu-repo/grantAgreement/FCT/5876-PPCDTI/102185/PT/MOODetector - A System for Mood-based Classification and Retrieval of Audio Music | -
crisitem.author.researchunit | CISUC - Centre for Informatics and Systems of the University of Coimbra | -
crisitem.author.researchunit | CISUC - Centre for Informatics and Systems of the University of Coimbra | -
crisitem.author.researchunit | CISUC - Centre for Informatics and Systems of the University of Coimbra | -
crisitem.author.researchunit | CISUC - Centre for Informatics and Systems of the University of Coimbra | -
crisitem.author.parentresearchunit | Faculty of Sciences and Technology | -
crisitem.author.parentresearchunit | Faculty of Sciences and Technology | -
crisitem.author.parentresearchunit | Faculty of Sciences and Technology | -
crisitem.author.parentresearchunit | Faculty of Sciences and Technology | -
crisitem.author.orcid | 0000-0003-2539-5590 | -
crisitem.author.orcid | 0000-0002-3010-2732 | -
crisitem.author.orcid | 0000-0003-1643-667X | -
crisitem.author.orcid | 0000-0003-3215-3960 | -
Appears in Collections: I&D CISUC - Artigos em Livros de Actas
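For readers who want to experiment with the kind of pipeline the abstract describes, the sketch below illustrates early fusion of per-modality feature matrices, selection of a small feature subset, and a cross-validated classifier scored with the F-measure. It is only a minimal sketch under stated assumptions: it uses scikit-learn with random placeholder matrices standing in for the actual extracted features, and the SelectKBest/SVM choices are illustrative, not necessarily the exact feature selection and learning methods used in the paper.

```python
# Minimal multi-modal MER sketch: concatenate per-modality features,
# select a small subset, classify, and report a macro-averaged F-measure.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_songs = 200

# Placeholder feature matrices (one row per song); the real system extracts
# 177 standard audio, 98 melodic, 320 MIDI and 26 lyric features.
audio_std = rng.normal(size=(n_songs, 177))
audio_mel = rng.normal(size=(n_songs, 98))
midi_feat = rng.normal(size=(n_songs, 320))
lyric_feat = rng.normal(size=(n_songs, 26))
labels = rng.integers(0, 5, size=n_songs)  # 5 MIREX emotion clusters

# Early fusion: one concatenated feature vector per song.
X = np.hstack([audio_std, audio_mel, midi_feat, lyric_feat])

# Reduce to a handful of multi-modal features (the paper reports 19),
# then classify with an SVM. Both choices here are assumptions.
pipeline = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=19),
    SVC(kernel="rbf"),
)

# Cross-validated predictions and macro-averaged F-measure.
predictions = cross_val_predict(pipeline, X, labels, cv=10)
print("F1 (macro):", f1_score(labels, predictions, average="macro"))
```

With real feature matrices in place of the random placeholders, the same structure lets one compare a standard-audio-only baseline against the fused audio/MIDI/lyrics representation, as the abstract does.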


This item is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).