
An example I know of that resembles the bricklayer problem is the synthesis of human singing into MIDI sequence data. There is now a history of products in this space that check all the boxes (one, Vochlea Dubler, debuted just last year), and every time, it demos well but the intended audience ends up rejecting it, because it does not really add what they thought it would to their workflow. Even when the results come out usable (already a wicked problem, since the DSP has to handle a multitude of recording scenarios while achieving low latency), users discover that they need to be talented at "singing like an instrument" if they want to play instruments by singing, which is a technical barrier, not an aid to spontaneous creation. Practically speaking, they're better served creatively by button-input tools that work top-down (e.g. pick a scale, then the keyboard only plays notes within that scale), since those narrow down the medium and therefore perform a creatively assistive function with a legible design paradigm (different scale = different sound).
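To make the scale-constraint idea concrete, here is a minimal sketch in Python (my own names, not any product's actual code) of snapping incoming MIDI notes to a chosen scale so that every key press lands in-key:

  # Hypothetical illustration: quantize incoming MIDI notes to a chosen scale.
  C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the chosen scale

  def snap_to_scale(midi_note, scale=C_MAJOR):
      """Return the nearest MIDI note whose pitch class is in `scale`."""
      octave, pitch_class = divmod(midi_note, 12)
      # Pick the in-scale pitch class closest to the played one,
      # measuring distance with wrap-around at the octave.
      best = min(scale, key=lambda pc: min(abs(pc - pitch_class),
                                           12 - abs(pc - pitch_class)))
      # Place it in whichever octave keeps the result closest to the input note.
      candidates = [(octave + o) * 12 + best for o in (-1, 0, 1)]
      return min(candidates, key=lambda n: abs(n - midi_note))

  # Example: snap_to_scale(61) == 60, i.e. C#4 is snapped to C4 in C major.

Swapping the scale list changes what the instrument can play at all, which is exactly the "different scale = different sound" constraint doing creative work for the user.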

