Learned complex masks for multi-instrument source separation
Abstract
Music source separation in the time-frequency domain is commonly achieved by applying a soft or binary mask to the magnitude component of (complex) spectrograms. The phase component is usually not estimated, but instead copied from the mixture and applied to the magnitudes of the estimated isolated sources. While this method has several practical advantages, it imposes an upper bound on the performance of the system, where the estimated isolated sources inherently exhibit audible "phase artifacts". In this paper we address these shortcomings by directly estimating masks in the complex domain, extending recent work from the speech enhancement literature. The method is particularly well suited for multi-instrument musical source separation since residual phase artifacts are more pronounced for spectrally overlapping instrument sources, a common scenario in music. We show that complex masks result in better separation than masks that operate solely on the magnitude component.
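The contrast the abstract draws can be made concrete with a small NumPy sketch. This is an illustrative toy example, not the paper's implementation: it shows that a real-valued magnitude mask combined with the copied mixture phase can only rescale each time-frequency bin, whereas a complex-valued mask can additionally rotate its phase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture spectrogram (freq bins x time frames), complex-valued
# as it would come out of an STFT. Values are random placeholders.
X = rng.standard_normal((4, 5)) + 1j * rng.standard_normal((4, 5))

# Magnitude-domain masking: a real mask in [0, 1] scales |X|, and the
# mixture phase is copied onto the estimate (the conventional approach).
mag_mask = rng.uniform(0.0, 1.0, X.shape)
S_mag = mag_mask * np.abs(X) * np.exp(1j * np.angle(X))

# Complex-domain masking: the mask itself is complex, so multiplying it
# with the mixture can change both magnitude and phase per bin.
cplx_mask = rng.uniform(-1.0, 1.0, X.shape) + 1j * rng.uniform(-1.0, 1.0, X.shape)
S_cplx = cplx_mask * X

# The magnitude-masked estimate keeps the mixture phase exactly;
# the complex-masked estimate generally does not.
phase_preserved = np.allclose(np.angle(S_mag), np.angle(X))
phase_rotated = not np.allclose(np.angle(S_cplx), np.angle(X))
```

Because the mixture phase is exactly preserved under magnitude masking, any source whose true phase differs from the mixture's in overlapping bins cannot be recovered perfectly, which is the performance ceiling the paper refers to.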