I'm Jordi Pons, one of the coauthors of the paper.
It's very interesting that you mention that! Actually, we used source separation to remix music to improve the musical experience of cochlear implant users!
Basically, music perception remains generally poor for cochlear implant users, due to the complexity of music signals. To simplify music for them, we attenuate the accompanying instruments and enhance the vocals and beat (drums and bass), which is what they perceive best.
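For anyone curious what that kind of remixing looks like in practice, here is a minimal sketch. It assumes you already have separated stems (e.g. the common vocals/drums/bass/other four-stem split) and just applies per-stem gains before summing, attenuating "other" relative to vocals and beat. The gain values and the `remix_for_ci` function name are illustrative, not the settings from the paper.

```python
import numpy as np

def remix_for_ci(stems, gains=None):
    """Remix separated stems, emphasizing vocals and beat.

    stems: dict mapping stem name -> mono waveform (np.ndarray).
    gains: per-stem linear gains; values here are illustrative only.
    """
    if gains is None:
        gains = {"vocals": 1.0, "drums": 1.0, "bass": 1.0, "other": 0.3}
    # weighted sum of stems; unknown stems pass through at unity gain
    mix = sum(gains.get(name, 1.0) * wav for name, wav in stems.items())
    # peak-normalize to avoid clipping in the remixed signal
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 0 else mix

# toy example with random "stems" standing in for real separator output
rng = np.random.default_rng(0)
stems = {name: rng.standard_normal(1000).astype(np.float32)
         for name in ("vocals", "drums", "bass", "other")}
remix = remix_for_ci(stems)
```

In a real pipeline the stems would come from a separation model rather than random noise, but the remix step itself really is this simple: it is the gain choices, tuned to what implant users perceive best, that matter.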
This was a nice source separation application that helped many people! :)
Very cool! I was introduced to the problem by a former colleague of mine: http://ryanmcorey.com/ . His current research on using multichannel microphone arrays to improve real-time source separation, particularly for human listening applications, might be of interest to your team!
https://asa.scitation.org/doi/full/10.1121/1.4971424