Hacker News | jordipons_mtg's comments

I'm Jordi Pons, one of the coauthors of the paper.

It's very interesting that you mention that! Actually, we used source separation to remix music to improve the musical experience of cochlear implant users!

Basically, music perception remains generally poor for cochlear implant users (due to the complexity of music signals). To simplify music for them, we attenuate the accompanying instruments and enhance the vocals and beat (drums and bass), which is what they perceive best.
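The remixing step can be sketched in a few lines: separate the mix into stems, then recombine them with per-source gains that favor vocals, drums, and bass. This is a minimal illustration, not code from the paper; the stem names and gain values are assumptions for the example.

```python
import numpy as np

def remix_for_cochlear_implant(stems, gains=None):
    """Recombine separated stems with per-source gains.

    stems: dict mapping stem name -> 1-D numpy array (same length).
    gains: dict mapping stem name -> linear gain; stems not listed
           keep unit gain. The defaults below are illustrative only.
    """
    if gains is None:
        # Assumed gains: keep vocals/drums/bass, attenuate the rest.
        gains = {"vocals": 1.0, "drums": 1.0, "bass": 1.0, "other": 0.25}
    n = len(next(iter(stems.values())))
    mix = np.zeros(n)
    for name, audio in stems.items():
        mix += gains.get(name, 1.0) * audio
    return mix

# Toy usage with 1-second random "stems" at 16 kHz standing in for
# the output of a source separation model.
sr = 16000
stems = {name: 0.1 * np.random.randn(sr)
         for name in ("vocals", "drums", "bass", "other")}
mix = remix_for_cochlear_implant(stems)
```

In practice the stems would come from a separation model and the gains would be tuned per listener, but the recombination itself is just this weighted sum.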

This was a nice source separation application that helped many people! :)

https://asa.scitation.org/doi/full/10.1121/1.4971424


Very cool! I was introduced to the problem by a former colleague of mine: http://ryanmcorey.com/ . His current research on using multichannel microphone arrays to improve real-time source separation, particularly for human listening applications, might be of interest to your team!


I'm Jordi Pons, one of the coauthors of the paper.

Note that in our second part of the demo, besides separating vocals, we also separate drums and bass. See: http://jordipons.me/apps/end-to-end-music-source-separation/


I'm Jordi Pons, one of the coauthors of the paper.

You're both right! We mention ICA/sparse coding as prior work on waveform front-ends for source separation.

Our method is supervised, and we did not explore the unsupervised learning approach. However, some people are doing that! Check out S. Venkataramani and P. Smaragdis's work: https://scholar.google.es/citations?user=hCSSNZwAAAAJ&hl=es&...

Although we did our best by comparing against DeepConvSep and Wave-U-Net, I agree that it would be useful to properly benchmark all of that!

