
Source separation is much more general than just music. For instance, a common application is hearing assistance; can a hearing aid separate the voice of the person you want to listen to, from the background noise of many other perfectly valid conversations?

I can't speak to the research presented in this link in context, but the first segment provides a good introduction to the problem domain: http://spandh.dcs.shef.ac.uk/chat2017/presentations/CHAT_201...



I'm Jordi Pons, one of the coauthors of the paper.

It's very interesting that you mention that! Actually, we used source separation to remix music to improve the musical experience of cochlear implant users!

Basically, music perception remains generally poor for cochlear implant users (due to the complexity of music signals). To simplify music for them, we attenuate the accompanying instruments to enhance the vocals and the beat (drums and bass), which is what they perceive best.
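A minimal sketch of that remixing step, assuming the stems (vocals, drums, bass, other) have already been produced by some separation model as NumPy arrays; the gain value is a hypothetical illustration, not the setting used in the paper:

    import numpy as np

    def remix_for_ci(vocals, drums, bass, other, accompaniment_gain=0.25):
        """Rebuild a mix that emphasizes vocals and beat by attenuating the rest.

        All inputs are mono float arrays of equal length and sample rate,
        e.g. stems produced by any music source separation model.
        accompaniment_gain is an illustrative value, not taken from the paper.
        """
        remix = vocals + drums + bass + accompaniment_gain * other
        # Normalize to avoid clipping when the stems are summed back together.
        peak = np.max(np.abs(remix))
        return remix / peak if peak > 1.0 else remix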

This was a nice source separation application that helped many people! :)

https://asa.scitation.org/doi/full/10.1121/1.4971424


Very cool! I was introduced to the problem by a former colleague of mine: http://ryanmcorey.com/ . His current research on using multichannel microphone arrays to improve real-time source separation, particularly for human listening applications, might be of interest to your team!
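For readers unfamiliar with the multichannel side of this: a classic baseline for microphone arrays is delay-and-sum beamforming, which aligns the channels toward a target direction before summing. A minimal frequency-domain sketch (not his specific method), assuming the per-mic steering delays are already known from the array geometry:

    import numpy as np

    def delay_and_sum(signals, delays, fs):
        """Align each microphone signal by its steering delay and average.

        signals: (n_mics, n_samples) array of time-domain signals
        delays:  per-mic propagation delays (seconds) toward the target direction
        fs:      sample rate in Hz
        """
        n_mics, n_samples = signals.shape
        freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
        out = np.zeros(freqs.shape, dtype=complex)
        for m in range(n_mics):
            spectrum = np.fft.rfft(signals[m])
            # Advance each channel by its delay so the target arrivals add in phase
            # while off-axis interference adds incoherently.
            out += spectrum * np.exp(2j * np.pi * freqs * delays[m])
        return np.fft.irfft(out / n_mics, n=n_samples)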


Nice, thanks for the link. I'm only familiar with it in the context of music analysis (ISMIR etc), though it's obvious that many of the core algorithms were developed originally for speech analysis.



