Automatic real-time extraction of musical expression
2002 (English). In: Proceedings of the International Computer Music Conference 2002, pp. 365-367. Conference paper (Refereed).
Previous research has identified a set of acoustical cues that are important in communicating different emotions in music performance. We have applied these findings in the development of a system that automatically predicts the expressive intention of the player. First, low-level cues of music performances are extracted from audio. Important cues include average and variability values of sound level, tempo, articulation, attack velocity, and spectral content. Second, linear regression models obtained from listening experiments are used to predict the intended emotion. Third, the prediction data can be visually displayed using, for example, color mappings in accordance with synesthesia research. Preliminary test results indicate that the system accurately predicts the intended emotion and is robust to minor errors in the cue extraction.
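The three-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the cue values, emotion categories, and regression coefficients below are all invented placeholders, whereas the paper fits its coefficients from listening experiments.

```python
# Hypothetical sketch of the three-step pipeline from the abstract.
# All numeric values here are illustrative, not from the paper.

# Step 1 (assumed output of cue extraction): mean and variability of
# low-level performance cues, normalized here to roughly [0, 1].
cues = {
    "sound_level_mean": 0.8,
    "sound_level_var": 0.3,
    "tempo_mean": 0.9,
    "articulation": 0.7,      # 1.0 = staccato, 0.0 = legato (assumed scale)
    "attack_velocity": 0.8,
    "spectral_content": 0.6,
}

# Step 2: one linear regression model per emotion; each maps cue values
# to a predicted strength of that emotion (coefficients made up).
weights = {
    "happiness": {"sound_level_mean": 0.4, "tempo_mean": 0.5,
                  "articulation": 0.4, "attack_velocity": 0.2},
    "sadness":   {"sound_level_mean": -0.5, "tempo_mean": -0.6,
                  "articulation": -0.4, "spectral_content": -0.2},
    "anger":     {"sound_level_mean": 0.5, "attack_velocity": 0.5,
                  "spectral_content": 0.4, "sound_level_var": 0.3},
}

def predict(cues, weights):
    """Score each emotion as a weighted sum of the extracted cues."""
    return {emotion: sum(w * cues.get(cue, 0.0) for cue, w in coefs.items())
            for emotion, coefs in weights.items()}

scores = predict(cues, weights)
intended = max(scores, key=scores.get)  # highest-scoring emotion wins
```

Step 3, the visual display, would then map `intended` (or the full score vector) to a color, e.g. warm hues for high-arousal emotions, following the synesthesia-inspired mapping the abstract mentions.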
Keywords
music acoustics, musical expression, computer algorithms, analysis
Identifiers
URN: urn:nbn:se:uu:diva-43090
OAI: oai:DiVA.org:uu-43090
DiVA: diva2:70994