Thanks. I listened to the examples, and the pieces sound pretty convincing (setting aside the synthetic character of the instruments). Is what I hear generated directly from the DNN output, or was there some "manual" selection and tweaking of the music? The approaches based on symbolic musical information that I have seen so far sounded quite unnatural, with passages no human composer would write. But this system generates pieces whose structure, harmony, and progression can at times hardly be distinguished from a human composer's (with some exceptions). What is the deciding ingredient? I will read the paper now as well.
Actually, it would be very interesting to use this approach to generate improvised piano music in the style of Keith Jarrett. I had already considered using an AI to transcribe his various live concerts into MIDI and then using that to train a DNN; a rough sketch of the data-preparation side is below. With the approach shown here, this might actually work.
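
For the "transcribe to MIDI, then train" idea, here is a minimal sketch of how transcribed MIDI could be flattened into a token sequence for a DNN, using the pretty_midi library. The transcription itself would come from a separate audio-to-MIDI tool (e.g. Spotify's basic-pitch); the file name and token scheme here are purely hypothetical illustrations, not anything from the paper.

```python
# Sketch: turn a (transcribed) MIDI file into a simple note-event token
# sequence that a sequence model could be trained on. Assumes pretty_midi
# is installed; the input file and token format are hypothetical.
import pretty_midi

def midi_to_tokens(path: str) -> list[str]:
    """Flatten a MIDI file into a chronological list of note tokens."""
    midi = pretty_midi.PrettyMIDI(path)
    events = []
    for instrument in midi.instruments:
        if instrument.is_drum:
            continue  # skip percussion tracks
        for note in instrument.notes:
            # One token per note: pitch plus a coarsely quantized duration.
            duration = round(note.end - note.start, 2)
            events.append((note.start, f"NOTE_{note.pitch}_DUR_{duration}"))
    events.sort(key=lambda e: e[0])  # chronological order for the model
    return [token for _, token in events]

if __name__ == "__main__":
    tokens = midi_to_tokens("koeln_concert_part1.mid")  # hypothetical file
    print(tokens[:10])
```

A real pipeline would of course need a more careful tokenization (velocities, rests, relative timing), but something this simple would already let one test whether the transcribed concerts carry enough signal to train on.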