Deep Embeddings and Section Fusion for Music Segmentation
Music segmentation algorithms identify the structure of a music recording by automatically dividing it into sections and determining which sections repeat and when.

In this talk, I give an overview of this music information retrieval problem and present a novel music segmentation method that leverages deep audio embeddings learned on other tasks.

This approach builds on an existing segmentation algorithm, replacing its manually engineered features with deep embeddings learned on audio classification tasks, where data are abundant. Additionally, I present a novel section fusion algorithm that leverages the multiple levels of a hierarchical segmentation to consolidate short segments at each level in a way that is consistent with the segmentations at the lower levels.
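
To make the first idea concrete, here is a rough sketch (not the talk's actual pipeline): off-the-shelf deep embeddings stand in for hand-crafted features and are fed to an existing segmentation routine. OpenL3 is used purely as an illustrative embedding model, librosa's agglomerative segmenter as the existing algorithm, and the filename, embedding size, and number of sections are all assumptions.

```python
import librosa
import openl3

# Load a recording (filename is a placeholder).
y, sr = librosa.load("song.wav", sr=None, mono=True)

# Deep embeddings from a model trained on another task. OpenL3 is only a
# stand-in here for the classification-trained embeddings in the talk.
emb, ts = openl3.get_audio_embedding(y, sr, content_type="music",
                                     embedding_size=512)

# Hand the embedding sequence (features x frames) to an existing
# segmentation routine in place of manually engineered features.
k = 8  # assumed number of sections for this sketch
bound_frames = librosa.segment.agglomerative(emb.T, k)
print(ts[bound_frames])  # estimated section boundary times, in seconds
```

And a minimal sketch of the section fusion idea, under assumptions of my own (the function name, the 8-second minimum duration, the boundary-matching tolerance, and the merging rule itself are simplifications, not the algorithm presented in the talk): segments shorter than a threshold are absorbed into a neighbor, but only when that does not erase a boundary shared with another level of the hierarchy.

```python
def fuse_short_sections(bounds, labels, ref_bounds, min_dur=8.0, tol=0.25):
    """Merge sections shorter than `min_dur` seconds into a neighbor,
    never erasing a boundary that the reference level shares (within
    `tol` seconds), so the result stays consistent across levels."""
    bounds, labels = list(bounds), list(labels)

    def shared(t):  # True if the reference level also has this boundary
        return any(abs(t - r) <= tol for r in ref_bounds)

    merged = True
    while merged:
        merged = False
        durs = [b - a for a, b in zip(bounds, bounds[1:])]
        for i, dur in enumerate(durs):
            if dur >= min_dur:
                continue
            if i + 1 < len(durs) and not shared(bounds[i + 1]):
                del bounds[i + 1], labels[i]  # absorb into right neighbor
            elif i > 0 and not shared(bounds[i]):
                del bounds[i], labels[i]      # absorb into left neighbor
            else:
                continue  # both boundaries are shared; keep the segment
            merged = True
            break
    return bounds, labels

# Toy run: the 4-second fragment between 30 s and 34 s is absorbed,
# while the boundary at 30 s survives because the reference level has it.
print(fuse_short_sections([0.0, 30.0, 34.0, 60.0], ["A", "B", "A"],
                          [0.0, 30.0, 60.0]))
# -> ([0.0, 30.0, 60.0], ['A', 'A'])
```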

Through a series of experiments and audio examples, I show that this method yields state-of-the-art results on most metrics across the most popular publicly available datasets.

Nov 16, 2021 11:50 AM in Pacific Time (US and Canada)

Speakers

Oriol Nieto
Oriol Nieto (he/him or they/them) is a Senior Audio Research Engineer at Adobe Research in San Francisco. He was previously a Staff Scientist on the Radio and Music Informatics team at Pandora, and holds a PhD from the Music and Audio Research Laboratory at New York University. His research focuses on topics such as music information retrieval, large-scale recommendation systems, music generation, and machine learning on audio, with special emphasis on deep architectures. His PhD thesis explores how to better teach computers to “understand” the structure of music. Oriol develops open-source Python packages, plays guitar, violin, and cajón, and sings (and screams) in their spare time.