Hacker News

So, just out of intellectual curiosity: is there a software program that could accept the original notes (arrangement?) for Rhapsody in Blue and produce a passable rendition that I could then use in my (non-existent) YouTube documentary, without having to pay Gershwin nor an orchestra for the audio recording?

I think I am looking for a whirlwind tour of audio copyright



Yes, a MIDI synthesizer can take the score and convert it to audio. (The discerning ear—or even an untrained one, if it's a poor-quality synthesizer—could tell the difference between that and a live orchestra, of course, but that may not be a concern for you.)

However, unless you transcribed the original sheet music to MIDI yourself, the file may constitute an “original work” for which there is a separate copyright[0], in which case you'd still need an appropriate license for that. You'd also want to make sure that the sound fonts the synthesizer is using are OK as well.

Now that the original work is in the public domain, though, I would expect that sooner or later there will be a CC-licensed version of it on Wikimedia Commons or the YouTube Audio Library, and you may want to just wait for that.

[0]: https://meta.wikimedia.org/wiki/Wikilegal/MIDI_Files
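For the curious, the "take the score and convert it" part is less exotic than it sounds: a MIDI file is just a few chunks of note and timing bytes. A minimal sketch in Python, standard library only (the function names are my own, not from any MIDI library):

```python
import struct

def vlq(n: int) -> bytes:
    """Encode a MIDI variable-length quantity (7 bits per byte)."""
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    return bytes(reversed(out))

def one_note_midi(note: int = 60, velocity: int = 64, ticks: int = 480) -> bytes:
    """Build a minimal format-0 MIDI file that plays a single note."""
    track = (
        vlq(0) + bytes([0x90, note, velocity])   # note on, channel 0
        + vlq(ticks) + bytes([0x80, note, 0])    # note off after `ticks`
        + vlq(0) + bytes([0xFF, 0x2F, 0x00])     # end-of-track meta event
    )
    # MThd: chunk length 6, format 0, one track, 480 ticks per quarter note
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, 480)
    return header + b"MTrk" + struct.pack(">I", len(track)) + track

data = one_note_midi()
```

A real transcription is the same idea repeated thousands of times, which is exactly why the choices discussed below matter.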


Indeed -- transcribing sheet music into MIDI is generally not going to be 100% automatic; whoever does it will have artistic choices to make: what metronome speeds the tempo descriptions convert to, how much longer a fermata will hold, what volume levels the different dynamic markings correspond to.

You could take a "pure" mapping of just the notes and relative durations... but then you'd wind up with what would essentially sound like a weird robotic "player piano" version of the music at strange speeds that would be, well... pretty bad.
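To make the artistic-choice point concrete, here's one hypothetical mapping a transcriber might pick. The numbers are mine and entirely debatable -- another transcriber would legitimately choose different ones, which is exactly the "original work" argument:

```python
# One possible (subjective!) mapping of score markings to MIDI values.
DYNAMICS_TO_VELOCITY = {
    "pp": 33, "p": 49, "mp": 64, "mf": 80, "f": 96, "ff": 112,
}
TEMPO_TO_BPM = {
    "largo": 50, "adagio": 70, "andante": 90,
    "moderato": 110, "allegro": 135, "presto": 175,
}

def interpret(dynamic: str, tempo_word: str) -> tuple:
    """Turn symbolic score markings into the concrete numbers MIDI needs."""
    return DYNAMICS_TO_VELOCITY[dynamic], TEMPO_TO_BPM[tempo_word]
```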


A player piano doesn’t necessarily have to sound bad.

Back in 1993, Yamaha built a device that could read piano rolls personally "recorded" by Gershwin by hand (he would hammer them out for money in his spare time) and play them back on a modern player piano. The resulting album is one of my all-time favorite albums.

Yamaha still sells these “scans” (on 3.5” floppy!) for their player pianos, and they could certainly be reverse engineered, but I imagine they are considered derivative works and covered by their own copyright.

https://apnews.com/fb3f8c3bc3305506f57120df0755f9d8

https://youtu.be/_kIpr6nSvjI


Interesting! Well, if you intentionally perform without dynamics, the player piano is certainly a valid artistic choice in its own right. More like a harpsichord, in a way.

But I'll point out that the YouTube performance you linked to isn't a player piano at all -- it's an actual recording, which Gershwin overdubbed to be two pianos.

While player pianos that could incorporate dynamics existed, I'm not sure they were common, and I'm also not clear on whether any incorporated per-note dynamics at playback time, as opposed to overall dynamics added via a separate mechanism.

But nevertheless yes -- there did exist some player pianos that had the capability of being more expressive than the normal single-volume ones!


>I'm also not clear if there were any that incorporated playtime "per note" dynamics

This[0] is a reproducing-piano recording Ravel made in the '20s, re-recorded from a Duo-Art piano in the '60s. Hearing Ravel play his own stuff a few years ago radically changed my idea of how Ravel's music should sound. (I'm a pianist, and I'd been playing Ravel for a couple of decades before that.) This[2] has Prokofiev playing his own music.

"Many famous composers from around the world played their own works for the reproducing piano: Edvard Grieg, 1906 in Leipzig; Alexander Scriabin, 1910 in Moscow; Gabriel Fauré, c. 1913 in Paris; Nikolai Medtner, c. 1925 in New York."[1]

"This video is a practical demonstration and overview of how the Aeolian Duo-Art pneumatic reproducing player piano works, including how the system plays expressively by controlling the loudness of notes played with perforations on the roll."[3]

[0] https://archive.org/details/RavelPlaysRavel

[1] https://www.allclassical.org/player-piano-rolls-listening-to...

[2] https://www.youtube.com/watch?v=9uMRD2o6dJo

[3] https://www.youtube.com/watch?v=w-XrDw04P2M


I’m pretty sure “overdubbed” in this case meant “playing over the piano roll a second time.” Not an actual recording.

This YouTube rip sounds a little crummy to me, though, I’d suggest finding Gershwin Plays Gershwin on Spotify or your streaming service of choice...


They're drastically different, though.

The YouTube link you referenced is an actual recording -- the notes are all distinctly different dynamics, and there's plenty of background hiss. It's a real recording, full of life, on a real piano, and genuinely overdubbed. It's not "a little crummy" -- to the contrary, it's an invaluable historical record.

On the other hand, the "Gershwin Plays Gershwin" album is a genuine player piano, but the Rhapsody in Blue is totally different -- it has zero dynamics, everything is exactly the same volume. [1] It's interesting to hear, for sure -- but more as a historical curiosity, since no pianist would ever perform everything at the same volume unless limited technologically. Your original link is Gershwin's full interpretation. In contrast, his player piano version is nothing like what he would have performed live, but rather adapted to the technological limitations.

[1] https://open.spotify.com/track/2XSBXz4uDvx1PQPYJWQpcK?si=EO3...


Telarc have a number of albums of music reproduced from piano rolls -- this https://smile.amazon.co.uk/gp/product/B000009RCS/ is one of my favourite collections of classical works.

The sleeve notes tell of how modern scanning allowed a more complete reproduction of the notes played than contemporary players allowed, which by all accounts were already pretty good.


And here's the researcher discussing the project: https://www.youtube.com/watch?v=WqOtLSuPCJY


What if I write a program that does that translation agnostically of the specific work being translated?

And what if my program is parametrized by an ML model of unknown training data?


> What if I write a program that does that translation agnostically of the specific work being translated?

Then that program will likely produce non-copyrighted output for non-copyrighted input. But it's also likely going to sound pretty bad.

> And what if my program is parametrized by an ML model of unknown training data?

Then its output could be considered a derivative work of every work in that training corpus, and anyone whose creative work went into that training data will have a copyright claim on the results.


> Then its output could be considered a derivative work of every work in that training corpus, and anyone whose creative work went into that training data will have a copyright claim on the results.

Depends how tech savvy the judge is. An ML model is not really like a copy or reproduction, it's much more like a person's brain in that it aims to take general patterns from various sources.


You can apparently use real recorded instrument sounds with MIDI and music software; it's used by professional artists too, and some of the "instruments" are many GBs in size -- I don't think a living soul could tell the difference.


It depends on how much effort you put into the midi transcription, but a trained musician could probably still tell the difference, especially during the transitions in between notes, and certain dynamics. Some sample libraries go as far as to record entire phrases played by a musician but then you're limited to what musical phrases those recordings include and how they were played (pitch shifting has gotten very good, but let's say you want a violin melody to sound softer, or plucked instead of bowed).

Instruments where notes/hits are isolated from other notes/hits like a piano or drumkit work wonderfully with sample libraries. But things like a guitar, violin, or trombone are still challenges. Even the orchestral libraries that are hundreds of gigs and thousands of dollars don't get it perfectly on their own. You definitely can make recordings that trick a trained musician, but not without tons of effort customizing your midi transcription to that specific sample library and you may have to compromise and avoid certain techniques that still don't translate to sampling well.

This is just talking about trained musicians though. Taking a midi transcription of something, spending 10 minutes adjusting velocity, and then loading up some EastWest samples would probably fool the average person.

Also, a quick edit: many people would consider something using samples of recorded instruments to still be a MIDI synthesizer, because you're synthesizing a performance from samples, as opposed to recording entire phrases or even longer sections with a real musician and then arranging those recordings. Not everyone would agree, but I'd guess the original commenter also had sample software in mind along with more traditional electronic synthesizers.
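Much of the "tons of effort" above is exactly this kind of tweaking. As a toy sketch (the function, parameter names, and jitter amounts are my own), humanizing a mechanical transcription can be as simple as nudging velocities and onsets:

```python
import random

def humanize(notes, vel_jitter=8, time_jitter=0.01, seed=0):
    """Nudge velocity and onset time of (start_sec, pitch, velocity)
    tuples so a sampled performance sounds less mechanical."""
    rng = random.Random(seed)  # seeded so results are reproducible
    out = []
    for start, pitch, vel in notes:
        vel = max(1, min(127, vel + rng.randint(-vel_jitter, vel_jitter)))
        start = max(0.0, start + rng.uniform(-time_jitter, time_jitter))
        out.append((start, pitch, vel))
    return out
```

Real mockup work goes far beyond this -- per-note dynamics shaped by hand, articulation switching, and so on -- but randomized jitter is the ten-minute version that fools the average listener.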


Even on a piano, two notes that are played independently, then later mixed, sound different than if those same two notes are played together on the piano. Something to do with the sound from one note affecting the string vibrations on the second note, and vice versa.


Very true -- saying they're totally isolated isn't 100% correct. That's sympathetic resonance, and other details like the pedals also slightly affect the resonance; pressing the pedals may even be audible in a quiet/intimate recording. It's very subtle, though, and probably not noticeable by anyone without a lot of experience on a real piano. Many modern piano libraries actually do model/sample those properties (I have no idea how).

While we're on it, drums aren't totally isolated either -- successive hits affect each other. A snare roll sounds slightly different from many snare samples played back rapidly because of the snare wires on the drum, and heavy cymbals take some time to build up momentum, so the first hit and the tenth hit will sound different if the cymbal hasn't settled. These are all very subtle, though, compared to something like a guitar, violin, or trombone, where transitioning from one note sample to another digitally can be very obvious.


On the topic of how those modern libraries manage to reproduce those effects, couldn't they have used either pure brute force (i.e., recording and sampling every combination of notes hit together) or some advanced modeling of the physical properties of pianos to recreate how the sounds coming from piano strings interact simultaneously? Or, even better, some combination of those two approaches (modeling and extensive sampling)?

Disclaimer: I by no means have any expertise on the topic and am just as curious about how it actually works as the parent commenter.


On a fairly serious note, how would I find out more of a "how to" nature -- if, for example, I wanted a project with my kids involving Raspberry Pis and making music?


Depends on what you're trying to do. If all you want is to play back MIDI files adequately, Firefox or VLC can do that just fine (I'm sure there are better dedicated software synths as well). For making music on the computer, I know there's a lot of software with different interfaces like virtual keyboards, trackers, and programs where you can drag-and-drop notes on a staff.

Since you're talking about the Raspberry Pi, maybe you're looking for hardware. In this case, you can get MIDI piano keyboards that plug into the computer over USB (or can be used standalone). These can both send MIDI data to the computer and also receive it and play it with their internal synthesizer. (If you go really expensive, you can get actual grand pianos with sensors and actuators under each key, along with a subscription service that gives you access to recordings by professional piano players, replayable at your whim with exact dynamics so it's like they're in your living room.)

You can also build your own hardware and hook it up with a microcontroller; I've had good experiences with the Teensy, which is similar to an Arduino but can present itself to the computer as a USB MIDI device so you can build whatever creative/crazy instruments you want.

Should you come across a device with an actual MIDI port (a big round DIN socket with 5 pins), it's just a serial connection; you can get adaptors to convert them to USB.
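Under the hood, what travels over that 5-pin DIN line (at 31,250 baud) is mostly 3-byte channel messages. A sketch of decoding one, with my own function name -- real code would also need to handle running status, where the status byte is omitted on repeats:

```python
def parse_midi_message(data: bytes):
    """Decode a 3-byte MIDI channel voice message into a readable tuple."""
    status, d1, d2 = data[0], data[1], data[2]
    kind = status & 0xF0      # high nibble: message type
    channel = status & 0x0F   # low nibble: channel 0-15
    if kind == 0x90 and d2 > 0:
        return ("note_on", channel, d1, d2)   # d1=note, d2=velocity
    if kind == 0x80 or (kind == 0x90 and d2 == 0):
        return ("note_off", channel, d1, d2)  # velocity-0 note-on = off
    return ("other", channel, d1, d2)
```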


To do it in the most legal way possible....

Look up the 1924 public domain arrangement in sheet music form.

Copy those notes into Musescore or an alternative.

Export the notes into MIDI.

Use Pianoteq to play the MIDI file. (It's one of the better sounding virtual instruments out there.) You can export a WAV file from Pianoteq.

Use this WAV file in your documentary on YouTube.

Prepare to get content ID'd by one of the big publishers anyways. You will have to defend the claim with proof that your audio is in the public domain.

I'm not a lawyer, but I happen to know a little about music licensing.
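Pianoteq itself is commercial, but as a toy stand-in for the play-the-MIDI-and-export-WAV step, Python's standard library can already render note data to a valid WAV file. This is illustrative only (plain sine tones, no instrument modeling; function and parameter names are mine):

```python
import io
import math
import struct
import wave

def render_wav(notes, rate=44100):
    """Render (midi_note, duration_sec) pairs as mono 16-bit sine tones."""
    frames = bytearray()
    for midi_note, dur in notes:
        # Equal temperament: A4 (MIDI 69) = 440 Hz
        freq = 440.0 * 2 ** ((midi_note - 69) / 12)
        for i in range(int(rate * dur)):
            sample = int(12000 * math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<h", sample)
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))
    return buf.getvalue()
```

Swap the sine generator for a real virtual instrument and you have the whole pipeline.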


(IANAL) Also, since your "work" will likely carry some copyright -- it is itself a derivative work -- I would go with the CC0 license so that others can use your work without fear of infringing _your_ rights. https://creativecommons.org/share-your-work/public-domain/cc...

"CC0 helps solve this problem by giving creators a way to waive all their copyright and related rights in their works to the fullest extent allowed by law. CC0 is a universal instrument that is not adapted to the laws of any particular legal jurisdiction, similar to many open source software licenses. And while no tool, not even CC0, can guarantee a complete relinquishment of all copyright and database rights in every jurisdiction, we believe it provides the best and most complete alternative for contributing a work to the public domain given the many complex and diverse copyright and database systems around the world."

"Metropolitan Museum of Art: All public domain images in its collection are shared under CC0, which expanded their digital collection by over 375,000 images as well as provided data on over 420,000 museum objects spanning more than 5,000 years. Through the power of the commons, billions of people are now able to enjoy the beauty of the Met’s collections as well as participate in the continued growth of the commons, utilizing the infrastructure that makes greater collaboration possible." https://www.metmuseum.org/about-the-met/policies-and-documen...


His audio, a new recording, would NOT be public domain. But it would be HIS.

There is a vital difference between copyright of the written music and any particular recording of said music.


This isn't true. Replaying public domain music through mechanical means adds no creativity to the work, and thus has no copyrightable elements.


For reference: this was specified in article 14 of the new EU copyright directive. One of the few good changes in it: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CEL...


The type and timbre of the instrument used, the speed and volume of playback, any post-processing decisions, etc. are all creative choices and thus can make a resulting work copyrightable.

Be careful using generic music from the Internet.


Yes you are correct. The audio recording any person generates through the procedure I outlined above is now copyrighted by that person, who may then choose to place it in the public domain.

It's like if I recorded myself playing Für Elise. The recording I generate is owned by me, unless I release it to the public domain.


Yes, but then you as an author can choose to place your own performance in the public domain.


I thought that wasn't clearly established. I believe the assumption has been that you could call something public domain, and obviously people do so, but I don't believe it has been tested in court.

Not that I can see an obvious reason why it would be so tested, and frankly these are fuzzy memories from 15-20 years ago.


Whether a simple statement that something is public domain would work depends on the jurisdiction, which is why things like CC0 [1] exist.

[1] https://creativecommons.org/publicdomain/zero/1.0/legalcode


Concerning the possibility of making a "passable rendition" using virtual instruments, it's certainly possible, and it's generally called an orchestral mockup. Having done a lot of those, I can tell you this particular one would require a massive amount of work, most of which will be spent tweaking the dynamics for each note of each instrument or section; testing and selecting the most appropriate instruments from your (hopefully large and expensive) library; perhaps inputting rubato or tweaking the timing of notes; and massaging plugins from several different vendors into a cohesive whole.

Having done this, you could get a result that the vast majority of people would be unable to identify as synthesized.

In some places it will be necessary to do a bit of creative interpretation in order to accommodate for the limitations of your tools, for example, the clarinet trills at the beginning might need to be slowed down a bit. Nonetheless, a virtual instrument like SWAM Clarinets will get you there. These days, they're quite realistic: https://www.youtube.com/watch?v=Mmsehqcjc9g


Wow, that video is impressive. I played clarinet when I was in an orchestra decades ago and I was always jealous of the 1st Clarinet who got to play the Rhapsody in Blue intro. There are so many fun little things in there. I didn't think it was possible to simulate some of the techniques in MIDI.


The commonly performed orchestral arrangement won't be public domain for another 18 years - it didn't debut until 1942.


I wonder how different it has to be. If you were to switch around two notes in a couple of phrases, overall you'd probably end up with more or less the same song. I don't know of anything that would do it automatically. I'm more curious about the subjectivity of what makes it different or not.


See Vanilla Ice versus Queen. Da dun dun dun da dun dun dun.


Others have answered, but in case it's not clear, for software there are two steps.

If you have the notes in a machine-readable format, a midi program can produce an audio recording of it. If you are willing to pay, you can get very good samples and the quality will be more than passable IMO.

Getting to a machine-readable format from a scan of sheet music is a separate issue. You can either manually enter the notes into a computer (or pay someone on Fiverr), or there may be an ML program that will turn sheet music into something a computer can understand.
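As a sketch of what "machine-readable" means in practice, here's a hypothetical helper (my own, not from any library) that turns note names into the note numbers a MIDI program consumes, using the common convention that middle C (C4) is note 60:

```python
# Semitone offset of each letter name within an octave starting at C
NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def note_to_midi(name: str) -> int:
    """'C4' -> 60, 'F#2' -> 42, 'Bb3' -> 58."""
    letter, rest = name[0], name[1:]
    accidental = 0
    if rest and rest[0] in "#b":
        accidental = 1 if rest[0] == "#" else -1
        rest = rest[1:]
    octave = int(rest)
    return 12 * (octave + 1) + NOTE_OFFSETS[letter] + accidental
```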


You're looking for a MIDI something-or-other, I think. You'll have to take care that the MIDI file you use isn't itself considered a creative work with its own copyright, though (as befits an arrangement).


Any MIDI editor should be able to do it. You'd need to input the notes yourself if you're working from the paper sheet music.



