Music Information Retrieval

Music is ubiquitous in today's world; almost everyone enjoys listening to it. With the rise of streaming platforms, the amount of music available has grown substantially. While users seemingly benefit from this plethora of available music, it has also made it increasingly hard for them to explore new music and find songs they like. Personalized access to music libraries and music recommender systems aim to help users discover and retrieve music they enjoy.

To this end, the field of Music Information Retrieval (MIR) strives to make music accessible to all by advancing retrieval applications such as music recommender systems, content-based search, the generation of personalized playlists, and user interfaces that allow users to visually explore music collections. This includes gathering machine-readable musical data, extracting meaningful features, developing data representations based on these features, and devising methodologies to process and understand that data. Retrieval approaches leverage these representations to index music and provide search and retrieval services.

In our research, we develop methods for analyzing user music consumption behavior, investigate deep learning-based feature extraction for music content analysis, predict the potential success and popularity of songs, and distill sets of features that capture user music preferences for retrieval tasks.

 

Public Datasets

We are happy to share the following datasets, which we have curated and used in our research and publications:

  • #nowplaying is a diverse and constantly updated dataset describing the music listening behavior of users, created from Twitter. Twitter is frequently used to post which music a user is currently listening to; from such tweets, we extract track and artist information as well as further metadata. You can find the dataset on Zenodo: https://doi.org/10.5281/zenodo.2594482 (CC BY 4.0).
  • The #nowplaying-RS dataset features context and content features of listening events. It contains 11.6 million music listening events of 139K users and 346K tracks collected from Twitter. The dataset comes with a rich set of item content features and user context features, as well as timestamps of the listening events. Moreover, some of the user context features imply the cultural origin of the users, and others, like hashtags, give clues to the emotional state of a user underlying a listening event. You can find the dataset on Zenodo: https://doi.org/10.5281/zenodo.2594537 (CC BY 4.0).
  • The Spotify playlists dataset is based on the subset of users in the #nowplaying dataset who publish their #nowplaying tweets via Spotify. The dataset holds users, their playlists, and the tracks contained in these playlists. You can find the dataset on Zenodo: https://doi.org/10.5281/zenodo.2594556 (CC BY 4.0).
  • The Hit Song Prediction dataset features high- and low-level audio descriptors of the songs contained in the Million Song Dataset (extracted via Essentia) for content-based hit song prediction tasks. You can find the dataset on Zenodo: https://doi.org/10.5281/zenodo.3258042 (CC BY 4.0).
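As a sketch of how such a dataset might be explored once downloaded, the snippet below counts and filters listening events by affective hashtag using pandas. The column names (`user_id`, `track_id`, `hashtag`, `created_at`) and the in-memory toy data are assumptions for illustration only; check them against the actual file headers in the Zenodo release.

```python
import pandas as pd

# Toy stand-in for a handful of #nowplaying-RS listening events.
# In practice, you would load the file downloaded from Zenodo, e.g.:
#   events = pd.read_csv("listening_events.csv")
# Column names here are assumed and may differ in the actual release.
events = pd.DataFrame({
    "user_id":  ["u1", "u1", "u2", "u3"],
    "track_id": ["t1", "t2", "t1", "t3"],
    "hashtag":  ["happy", "sad", "happy", "relaxed"],
    "created_at": pd.to_datetime([
        "2014-01-01 10:00", "2014-01-02 11:30",
        "2014-01-01 12:00", "2014-01-03 09:15",
    ]),
})

# Count listening events per affective hashtag.
counts = (events.groupby("hashtag")["track_id"]
                .count()
                .sort_values(ascending=False))
print(counts)

# Restrict to events tagged "happy" to study one emotional context.
happy = events[events["hashtag"] == "happy"]
print(happy[["user_id", "track_id"]])
```

The same groupby/filter pattern scales to the full 11.6 million events, since pandas operates column-wise on the loaded table.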

 



Publications

2021


Martin Pichl and Eva Zangerle: User models for multi-context-aware music recommendation. In Multimedia Tools and Applications, vol. 80, no. 15, pages 22509-22531. Springer, 2021


Eva Zangerle, Chih-Ming Chen, Ming-Feng Tsai and Yi-Hsuan Yang: Leveraging Affective Hashtags for Ranking Music Recommendations. In IEEE Transactions on Affective Computing, vol. 12, no. 1, pages 78-91. 2021


Dominik Kowald, Peter Muellner, Eva Zangerle, Christine Bauer, Markus Schedl and Elisabeth Lex: Support the underground: characteristics of beyond-mainstream music listeners. In EPJ Data Science, vol. 10, no. 1, pages 1-26. Springer, 2021

2020


Julie Cumming, Jin Ha Lee, Brian McFee, Markus Schedl, Johanna Devaney, Cory McKay, Eva Zangerle and Timothy de Reuse: Proceedings of the 21st International Society for Music Information Retrieval Conference, ISMIR 2020, Montreal, Canada, October 11-16, 2020


Eva Zangerle, Martin Pichl and Markus Schedl: User Models for Culture-Aware Music Recommendation: Fusing Acoustic and Cultural Cues. In Transactions of the International Society for Music Information Retrieval, vol. 3, no. 1. Ubiquity Press, 2020


Meijun Liu, Eva Zangerle, Xiao Hu, Alessandro Melchiorre and Markus Schedl: Pandemics, Music, and Collective Sentiment: Evidence from the Outbreak of COVID-19. In Proceedings of the 21st International Society for Music Information Retrieval Conference 2020 (ISMIR 2020), pages 157-165. 2020


Michael Vötter, Maximilian Mayerl, Günther Specht and Eva Zangerle: Recognizing Song Mood and Theme: Leveraging Ensembles of Tag Groups. In Working Notes Proceedings of the MediaEval 2020 Workshop. ceur-ws.org, 2020


Alessandro B. Melchiorre, Eva Zangerle and Markus Schedl: Personality Bias of Music Recommendation Algorithms. In 14th ACM Conference on Recommender Systems (RecSys 2020), pages 533-538. ACM, 2020


Maximilian Mayerl, Michael Vötter, Manfred Moosleitner and Eva Zangerle: Comparing Lyrics Features for Genre Recognition. In Proceedings of the 1st Workshop on NLP for Music and Audio (NLP4MusA), pages 73-77. 2020

2019


Eva Zangerle, Ramona Huber, Michael Vötter and Yi-Hsuan Yang: Hit Song Prediction: Leveraging Low- and High-Level Audio Features. In Proceedings of the 20th International Society for Music Information Retrieval Conference 2019 (ISMIR 2019), pages 319-326. 2019


Maximilian Mayerl, Michael Vötter, Eva Zangerle and Günther Specht: Language Models for Next-Track Music Recommendation. In Proceedings of the 31st GI-Workshop Grundlagen von Datenbanken, Saarburg, Germany, June 11-14, 2019, pages 15-19. 2019


Michael Vötter, Eva Zangerle, Maximilian Mayerl and Günther Specht: Autoencoders for Next-Track-Recommendation. In Proceedings of the 31st GI-Workshop Grundlagen von Datenbanken, Saarburg, Germany, June 11-14, 2019, pages 20-25. 2019


Maximilian Mayerl, Michael Vötter, Hsiao-Tzu Hung, Boyu Chen, Yi-Hsuan Yang and Eva Zangerle: Recognizing Song Mood and Theme Using Convolutional Recurrent Neural Networks. In Working Notes Proceedings of the MediaEval 2019 Workshop. ceur-ws.org, 2019


Hsiao-Tzu Hung, Yu-Hua Chen, Maximilian Mayerl, Michael Vötter, Eva Zangerle and Yi-Hsuan Yang: MediaEval 2019 Emotion and Theme Recognition task: A VQ-VAE Based Approach. In Working Notes Proceedings of the MediaEval 2019 Workshop. ceur-ws.org, 2019

2018


Martin Pichl and Eva Zangerle: Latent Feature Combination for Multi-Context Music Recommendation. In 2018 International Conference on Content-Based Multimedia Indexing (CBMI), pages 1-6. 2018


Eva Zangerle and Martin Pichl: Content-based User Models: Modeling the Many Faces of Musical Preference. In Proceedings of the 19th International Society for Music Information Retrieval Conference 2018 (ISMIR 2018), pages 709-716. 2018


Asmita Poddar, Eva Zangerle and Yi-Hsuan Yang: #nowplaying-RS: A New Benchmark Dataset for Building Context-Aware Music Recommender Systems. In Proceedings of the 15th Sound & Music Computing Conference. 2018


Eva Zangerle, Martin Pichl and Markus Schedl: Culture-Aware Music Recommendation. In Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization (UMAP 2018), pages 357-358. ACM, 2018


Benjamin Murauer and Günther Specht: Detecting Music Genre Using Extreme Gradient Boosting. In Companion Proceedings of The Web Conference 2018 (WWW 2018), pages 1923-1927. International World Wide Web Conferences Steering Committee, 2018


Eva Zangerle, Michael Tschuggnall, Stefan Wurzinger and Günther Specht: ALF-200k: Towards Extensive Multimodal Analyses of Music Tracks and Playlists. In Advances in Information Retrieval - 39th European Conference on IR Research (ECIR 2018), pages 584-590. Springer, 2018