Hacker News

I don't get it! Can someone explain why/how this is special? It seems like it's only a matrix of about 40 states per blob, and the lower blobs just follow with the same state? Well executed, of course, but..?


The voices are ML-generated, based on hours of recordings of real opera singers. It seems that the real-time synthesis includes details like realistic switching between vowels, realistic trills on long notes, etc. There's a lot going on to make those voices sound natural.


It's cutting-edge AI implemented in a fun, geeky way. It just works effortlessly. What's not to like?!


The lower blobs are creating harmonies based on your blob's state. The fact that it produces results that are both musical and interesting is very impressive to me.


I would like to know what the machine learning part of this is. Not knowing any better, and seeing that you can only drag one character at a time, it seems like it could be done entirely with relatively simple harmony rules.

ML or not, I still think it's super cool, but it would be great to know what the ML part is.
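For what it's worth, the "relatively simple harmony rules" idea is easy to sketch. This is purely hypothetical, not the app's actual implementation: harmonize a melody note with diatonic intervals below it in C major.

```python
# A minimal sketch of rule-based harmonization in C major.
# Names and intervals here are illustrative assumptions, not Blob Opera's code.

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def harmonize(melody_note):
    """Return three harmony notes: a diatonic third, fifth, and octave
    below the melody note, stepping down within the C major scale."""
    degree = C_MAJOR.index(melody_note)
    # Step down the scale by 2, 4, and 7 degrees (3rd, 5th, octave below).
    return [C_MAJOR[(degree - step) % 7] for step in (2, 4, 7)]

print(harmonize("G"))  # ['E', 'C', 'G']
```

Something this simple would already sound "musical" most of the time, which is why it's hard to tell from the outside where the ML ends and the rules begin.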


After experimenting a bit, the harmony appears not to be a simple function of the note you play. Perhaps it's taking previous notes into account, placing the harmony in the context of a 'piece'? (That would be cool.) Or maybe it's more simply random.



