Modeling and Predicting Emotion in Music
If you have a question about this talk, please contact Vaiva Imbrasaite.

With the explosion of vast and easily accessible digital music libraries over the past decade, there has been a rapid expansion of research into automated systems for searching and organizing music and related data. Online retailers now offer vast collections of music, spanning tens of millions of songs, available for immediate download. While these online stores present a drastically different dynamic from the record stores of the past, consumers still arrive with the same request: recommendation of music that suits their tastes. For both recommendation and curation, the vast digital music libraries of today require powerful automated tools.

The medium of music has evolved specifically for the expression of emotions, and it is natural for us to organize music in terms of its emotional associations. But while such organization is a natural process for humans, quantifying it empirically proves to be a very difficult task. Myriad features, such as harmony, timbre, interpretation, and lyrics, affect emotion, and the mood of a piece may also change over its duration. Furthermore, in developing automated systems to organize music in terms of emotional content, we face a problem that often lacks a well-defined answer: there may be considerable disagreement regarding the perception and interpretation of the emotions of a song, or even ambiguity within the piece itself.

Automatic identification of musical mood is a topic still in its early stages, though it has received increasing attention in recent years. Such work offers the potential not just to revolutionize how we buy and listen to our music, but to provide deeper insight into the understanding of human emotions in general.

This work relates core concepts from psychology to those of signal processing in order to understand how to extract information relevant to musical emotion from an acoustic signal. The methods discussed here survey existing features using psychology studies and develop new features using basis functions learned directly from magnitude spectra (see the first sketch below). Furthermore, this work presents a wide breadth of approaches to developing functional mappings between acoustic data and emotion-space parameters (a second sketch follows). Using these models, a framework is constructed for content-based modelling and prediction of musical emotion.
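The abstract does not specify how the spectral basis functions are learned; the sketch below assumes non-negative matrix factorisation (NMF) of a magnitude spectrogram, one common choice for this kind of feature learning. The file name, component count, and STFT parameters are placeholders, not values from the talk.

```python
# A minimal sketch, assuming NMF is used to learn spectral bases.
import numpy as np
import librosa
from sklearn.decomposition import NMF

# Load audio and compute a magnitude spectrogram (non-negative, as NMF requires).
y, sr = librosa.load("song.wav", sr=22050, mono=True)   # placeholder file
S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))  # (freq_bins, frames)

# Factorise S.T ~ activations @ bases: each learned basis is a spectral
# shape, and its activations describe how strongly it appears in each frame.
nmf = NMF(n_components=16, init="nndsvd", max_iter=400, random_state=0)
activations = nmf.fit_transform(S.T)   # (frames, components)
bases = nmf.components_.T              # (freq_bins, components)

# Pool activation statistics over time into a fixed-length clip descriptor
# for downstream emotion modelling.
features = np.concatenate([activations.mean(axis=0), activations.std(axis=0)])
```

Pooling statistics over time yields one vector per clip; since the abstract notes that mood may change over a piece's duration, a time-varying model would instead operate on the frame-level activations directly.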
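Likewise, the talk leaves the exact form of the acoustic-to-emotion mapping open. The sketch below shows one plausible instance, ridge regression onto a two-dimensional valence-arousal space; the feature matrix and emotion annotations are synthetic stand-ins, not data from this work.

```python
# A minimal sketch of a functional mapping from acoustic features to
# valence-arousal coordinates, using synthetic placeholder data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))          # per-clip acoustic feature vectors
Y = rng.uniform(-1, 1, size=(200, 2))   # [valence, arousal] annotations

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

# Ridge handles multi-output targets natively, fitting one linear map
# per emotion dimension.
model = Ridge(alpha=1.0).fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)
print("R^2 per dimension:", r2_score(Y_te, Y_hat, multioutput="raw_values"))
```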
This talk is part of the Rainbow Group Seminars series.