Aerodynamic by Daft Punk in 100 lines of code with Sonic Pi (aimxhaisse.com)
379 points by edouardb on Feb 5, 2016 | 69 comments




Warning: turn down your volume before clicking on these examples. I'm pretty sure my tinnitus got worse just now :(


Heh, this reminded me of when Z-100 used to say: "if it's too loud... you're too old."


I have that quote on my guitar case. Hypocrite that I am, I always wore ear plugs at shows, and I STILL got tinnitus!


I've been old for as long as I can remember, then. Since age 4 or so.


Genuinely interested - what makes this stuff more 'hacker' than the original post? I don't see it.


It's basically an ad for his services, nothing else!


You're using GitHub for hosting a straight out commercial advert? That seems... not positive. :(


Very cool.

Why do you modify your code as it plays? Could you simplify and abstract your expressions to call them along a predictable flow pattern? This reminds me of Knuth's beautiful representation of recurrence relations: https://en.wikipedia.org/wiki/The_Complexity_of_Songs

(Aside: His later work on Constraint Based Music Composition https://news.ycombinator.com/item?id=9512962 also offers some interesting insight as to what could come from algorithmic musical composition.)


You don't need to modify your code as it plays. However, if you do, you turn what is a more traditional composition-style workflow into a much more exciting, expressive performance workflow.

When I gig with Sonic Pi all I do is modify the code on-the-fly. It allows me to react to the crowd, the environment and my feelings :-)
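
A minimal sketch of what that looks like (the sample names are standard Sonic Pi built-ins; this is just an illustration, not a real set):

    # press Run and the loop keeps playing
    live_loop :beat do
      sample :bd_haus           # built-in kick drum sample
      sleep 0.5                 # wait half a beat
    end

    # while :beat runs, edit its body (swap :bd_haus for
    # :drum_cymbal_closed, change the sleep, add a play :e2)
    # and press Run again: the loop picks up the new code at
    # the start of its next cycle, so the music never stops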


Mad respect, this looks so inventive. But as a coder by day / vinyl and controller DJ by night, shifting code around to gig sounds like a nightmarish personal hell!

Am I missing something about how the tactile control works, or is it really just shifting text around with a keyboard and mouse?


I'm a localization worker by day and a DJ on the weekends... I understand your point, haha. But we can't escape the curse: I do indeed try 'different activities', but at the end of the day I'm just switching laptops. Translate? Computer work. Program? Computer work. DJ? Computer work.


As well as including an aspect of live performance, this is the most natural way to experiment and try ideas in music. Changes are best heard in immediate context.

It's a case where the edit, compile, run cycle doesn't fit the medium well :)


The answer's cultural: it's part of music live-coding. You work with some pre-prepared bits and construct them on the fly. A bit like... how Daft Punk performed live. (Although they used Ableton Live, I believe.)


Great work with the tutorial! Thanks for jamming with Sonic Pi and sharing the live coding love <3 <3


Thanks a lot for Sonic Pi and Overtone!


FYI there's a regular feature on Sonic Pi (by the creator) in the official magazine. It's usually the only article in Ruby, as the rest are normally in Python.

It's a free PDF download but you can buy it to support them: https://www.raspberrypi.org/magpi

See page 48 (page 50 of the PDF): https://www.raspberrypi.org/magpi-issues/MagPi42.pdf


Yeah, I wish the language syntax in Sonic Pi was less Ruby, more Python. But seriously, disliking Ruby is just a personal hangup. ;)


Here is the video from OSCON in Amsterdam with the creator of Sonic Pi - https://www.youtube.com/watch?v=ENfyOndcvP0

He is a very good speaker and got me excited to try some of this with my 9-year-old daughter.


Agreed; for a short presentation, that was one of the best speakers I've seen. He mentions he works at a university, in front of students, so perhaps he is comfortable in front of crowds, and that, combined with his obvious passion for the subject, shines through.

Wish I spoke that well. I'm not bad anymore, but I'm also not that fluid.


I'm interested in taking a closer look at one of Sonic Pi, SuperCollider, WavePot, ChucK, CSound, etc.

Can anyone suggest how to choose which one to invest some time in?


I am a SuperCollider fan, too. For me it's the sweet spot between a modern programming language and a deep, performant synthesis/sequencing platform.

Sonic Pi is perfect if you want to start making music immediately, but consider that underneath it's just another language interface for the SuperCollider synth server, so at some point you will want to go full SC, which is a dialect of Smalltalk with some functional ideas.

CSound is the oldest/most powerful/most frustrating of the bunch, but at its core it is composed of two parts: a description of the graph of sound generators/effects, and a list of notes/events. If you enjoy coding in assembly, CSound will be fun.

ChucK has a very nice sync model (Sonic Pi's is similar), but the language is very imperative, which is not my favourite.


I'm a SuperCollider fanboy. It can be very expressive and terse, as sc140 tweets show. You can use it for livecoding. It also now has a pretty good IDE written in Qt, and you can make your own GUIs in Qt with it too.

I will admit that I haven't tried Sonic Pi yet though.


It very much depends what your end goal is. Are you just looking to experiment and make some sounds? Would you like to embed one of these as a sound engine in another application you are developing? Are there certain devices you are targeting?


I have no fixed goal, so I'm interested in the trade-offs between them, e.g. which ones excel at the different goals you suggest?

Also, general comments about things like quality, extensibility, community, momentum, learning curve, level of fun, reliability, etc. would be really helpful - thanks!


If you're interested in rhythms and are open to using samples, you could take a look at Tidal (http://tidal.lurk.org). Tidal is a DSL embedded in Haskell, and can be used to very quickly create patterns of sound: everything from house music to chaotic breakcore to abstract textures. It's great for rhythm-based performance (and composition). It isn't as good at melody, but it does have some support for MIDI output and melodic expression, and with some effort your library of samples can support melody easily. I found Tidal's learning curve to be very shallow, but I think it depends on how you perceive music and rhythm. The Tidal community is working on a better install experience right now (in the meantime you'll need to compile a few things from source, etc). It primarily supports Emacs and Atom (Emacs appears to be the most stable).

Edit: Tidal is great for both live-coding and static-composition scenarios. In my opinion, it's ideal for live coding performance because minimal code is needed to get sound going quickly.


I'd start with Sonic Pi and see how you go. It's a small but useful subset of the bigger and more complex apps. If you want to take it further you can move up to SuperCollider, because the syntax is similar.

SC is probably the biggest, most open, and most developed sound language. Pd is in the same ballpark, with a different culture.

It's also worth learning web audio, because then you can build toys for the web.

The others are more niche. I'd written off Csound, but I discovered recently that some of the newest Csound music has better than usual production values, which makes me wonder if it's having a revival and breaking out of the academic ghetto these projects tend to get trapped in.


I'll have to look into WavePot, but I'll say that Sonic Pi is built first and foremost for live-coding: making music in realtime while coding. There are some other languages that focus on this - Gibber in the browser, Tidal in Haskell. Those are probably the best languages to start playing with if you want to get something musical happening quickly.

SuperCollider is much more general - you have a server that can build and execute graphs of unit generators, and a language that has a ton of convenience features for interacting with the server, plus abstractions for scheduling events. (Sidenote: I'm starting to build an audio patching environment using SuperCollider. It doesn't do anything yet, but I'm hoping to have something soon: https://github.com/YottaSecond/Triggerfish)

SuperCollider also has a great community - questions on the mailing list are usually answered within a couple of hours, and there's a team of people furiously working on the upcoming 3.7 release.

I love Pure Data to death, it has an amazing community and is actively being developed, but I have some trouble recommending it because of the aging Tcl/Tk interface.

ChucK looks really interesting. In most environments you need to write unit generators in C/C++ to do low-level audio processing. ChucK uses a "strongly-timed" programming model, where you can use the same language to process sound sample-by-sample and schedule things at real musical intervals.

Extempore is also worth looking into if you aren't afraid of lisp.

So yeah, it depends largely on what you want to do. The live-coding languages like SonicPi are probably the best for getting music going quickly, but the others all have unique things to offer.
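
To give a sense of what "getting music going quickly" means in Sonic Pi, here's a rough sketch using built-in sample and synth names (an illustration only, not code from the article):

    use_bpm 120

    live_loop :drums do
      sample :loop_amen, beat_stretch: 4   # fit the break to 4 beats
      sleep 4
    end

    live_loop :bass, sync: :drums do       # start in time with :drums
      play :e1, release: 0.25
      sleep 0.5
    end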


Personally, I always loved "patching"-style environments like Pure Data for fun and experimentation, only reaching for the likes of CSound or SuperCollider when I wanted extensibility or portability.

Of those more mature toolkits, ChucK is the one I've had the least experience with (I didn't even know it was still actively maintained or developed). However, I'd thoroughly recommend SuperCollider due to its emphasis on live coding, mature community, and integration with many languages.

Unfortunately I cannot speak to the qualities of Sonic Pi, this being the first I've heard of it, although I must say it looks great!


What would you say is a good live-coding platform for someone who actually has a lot of harmony and music theory training and would want to take advantage of it? I guess that would mean less looping and more motivic and thematic development.


You could do worse than to play around with Sam Aaron's other project, Overtone, which is built on top of SuperCollider.


I like playing with a bunch of them: Extempore, Pure Data, Grace, and Euterpea (Haskell-based) are my current toys. There are many more to choose from: OpenMusic, even Manx, a Forth system.

I prefer to have more control, and both Sonic Pi and Overtone are wrappers for SuperCollider. In Extempore you can create new ugens, whereas in Overtone and Sonic Pi I believe you only have available what SuperCollider provides. Granted, that is plenty for 99% of people looking to livecode.

I am biased towards Extempore because I like Lisp: it does visuals, music, and 'cyberphysical' programming, and it was truly built from the ground up for livecoding [1]. It has a Scheme language and a C-like language called xtlang, and it can be used for other things besides livecoding too. I am very excited about xtlang as a general-purpose programming language outside of livecoding. I believe Extempore is being courted by the HPC crowd too, after Ben Swift's and Andrew's work on it. Take a look at Andrew Sorensen here to see how Extempore can be applied to Western music in action [2]. Jason Levine has ported the code from Daniel Shiffman's book The Nature of Code to Extempore's xtlang [4] - a great way to learn Extempore.

AFAIK, Overtone is Clojure atop SuperCollider, and Sonic Pi is Ruby atop SuperCollider. They have nice interfaces, and you can do visuals in Overtone with Shadertone, a sort of ShaderToy for Overtone. I am not a Ruby fan, however, and I prefer a more traditional Lisp or Scheme to Clojure. If I need functional programming in a Lisp, I like Shen, but nobody has ported a livecoding environment to it.

I started with Fluxus, but on Windows it doesn't support livecoding audio, only livecoding visuals over the audio stream you feed it.

GRACE [3] is very easy to start with, and comes complete with built-in tutorials. It is Scheme-based, but has a more generic language called SAL. It is cross-platform and comes packaged as one self-contained file to download and execute. It is the quickest to start with, in my opinion.

Livecoding seems to be growing more and more with a lot of hardware toys to go along with it. Exciting times.

    [1] http://extempore.moso.com.au/
    [2] https://www.youtube.com/watch?v=xpSYWd_aIiI
    [3] http://commonmusic.sourceforge.net/
    [4] https://github.com/jasonlevine/The-Nature-of-Livecoding


Also check out the Heliosphan recreation in Sonic Pi on YouTube:

https://www.youtube.com/embed/bgPpyfRk3rw


Super awesome. Here's my take on Arcade Fire's The Suburbs: https://gist.github.com/hamin/1d3f45623b38504d72c8


I've been listening to a lot of Daft Punk lately and have been building some toy synths with the Web Audio API in JavaScript. I think I'll take the weekend to mess around with Sonic Pi.

Great work, this is super cool!


Awesome articles, thanks for writing them. I used to fool around with FL and Reason about 10 years ago but wasn't very good at all. I think I could have a lot of fun with this.

I especially liked how the author visualized sounds to be able to replicate them. I always assumed you just had to have a good ear.

Really impressive stuff, thanks.


I'll agree. The visualization helped me quite a bit. Never heard of Sonic Pi before this. It seems really cool. I also love that they have a dedicated "how to contribute" page with examples for non-technical stuff as well: https://github.com/samaaron/sonic-pi/blob/master/HOW-TO-CONT...

Pre-write some code, play around with it live... love it. Is there a curated list of people using this for live performances? I'd love to attend one of those.


For more on the live coding scene, check http://toplap.org


I just learned about Sonic Pi yesterday, and today this is first on HN... wtf, who's tracking me? :P


The other day I was searching for the best way to automatically detect patterns in data. Then I checked my email and I had received a "Quora Session recap" type of newsletter (I don't get or read those often). The content of the newsletter was a series of questions about machine learning with Pedro Domingos (I didn't know him). I was amazed by his knowledge of computer science. The next day I casually went to Amazon and it recommended Pedro Domingos' book 'The Master Algorithm'. I tried a sample and it blew me away: it's a great exposition of what machine learning is. I bought the book and kept reading. This is a bit meta, but I think machine learning helped me find information to understand machine learning better. It chose me :)


Baader & Meinhof, Private Eyes.


Does anyone know of similar programs where the output is MIDI notes that you can save to files or use directly in more traditional audio software like Ableton Live?


If you want to live-code directly in a DAW from a plugin, there is Lua Protoplug. It's completely centered on being a plugin, so your script processes blocks of sound/MIDI as requested by the host (here's a MIDI in/out example [1]). And yes, I know, I haven't pushed a commit in a few months. I still have many ideas and improvements for when I get back to it though!

[1]: http://www.osar.fr/protoplug/api/examples/midi-chordify.lua....


Sonic Pi can play the file out as MIDI; you would then just record the notes into Ableton Live.

Just looked at the cheat sheet. I would imagine all of these programs can play out MIDI.

http://www.cl.cam.ac.uk/projects/raspberrypi/sonicpi/media/s...


They seem to concentrate on generating sound, not MIDI, even if they can read MIDI or use it internally. I'd like to generate my melodies algorithmically and then use the MIDI output of these programs as input in Ableton Live, Logic etc.

Also check out alda: https://github.com/alda-lang/alda


SuperCollider has a series of classes called Patterns for sequencing and algorithmic composition. The output can be sent to the embedded server for live synthesis or to a MIDI output (or directly to Ableton with a virtual MIDI cable): http://doc.sccode.org/Tutorials/A-Practical-Guide/PG_Cookboo...


GRACE (Graphical Realtime Algorithmic Composition Environment), built on top of Common Music, can save its output to WAV or MIDI or send it via OSC, and it also interfaces nicely with LilyPond for a musical-score printout. A lot of the livecoding programs allow you to direct the output to a file rather than just audio out. I own Ableton, and you can use Max inside of it if you have the studio version; now CSound can be used too, but not for livecoding, if I understand correctly.


That's really cool. I know nothing about music composition; where can I learn the basic theory/concepts/etc.?


The BBC's How Music Works will make a nice basic introduction:

https://www.youtube.com/playlist?list=PLA7obRrq8OgRX3y0yO0PI...


Are you using your tool here, https://github.com/aimxhaisse/dummy-wav2pi, for this song?


Unfortunately no. I wrote that code while trying to reproduce the bell sound; I thought it'd be easier to reproduce by extracting the active frequencies of the original sample, but I could merely obtain the timbre of the instrument. I guess it needs more work (it doesn't take the envelope of the bell into account) and more tweaks.
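
The idea was roughly this (a hypothetical Sonic Pi sketch; the partial frequencies and amplitudes below are placeholders for illustration, not the values actually extracted):

    # resynthesize the bell by playing its loudest partials as
    # sine waves; placeholder freqs/amps, not real extracted data
    use_synth :sine
    partials = { 440 => 0.8, 1040 => 0.4, 1680 => 0.25, 2760 => 0.1 }
    partials.each do |freq, amp|
      play hz_to_midi(freq), amp: amp, attack: 0.01, release: 2
    end
    # with one shared envelope you get the raw timbre but not the
    # bell-like attack/decay: each partial needs its own envelope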


Move all the moving parts to a config (maybe even inside this file) to avoid jumps between parts of the code.
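
Something like this, perhaps (a hypothetical sketch; the names and values are made up):

    # "config" section at the top of the buffer
    bpm_value    = 123
    cutoff_value = 90

    use_bpm bpm_value
    live_loop :riff do
      synth :tb303, note: :e2, cutoff: cutoff_value, release: 0.2
      sleep 0.25
    end
    # re-running the buffer redefines :riff with the new values,
    # so tweaks happen in one place instead of inside each loop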

Very impressive - not only the work itself but also your feeling for rhythm.


The only thing that bothers me is that the bass track is off by a note. There's a phrase in there that is supposed to go down the scale, but instead it plays the same note three times. Otherwise, this is truly an amazing recreation of the song.

You should tweet this to Daft Punk and see what they think!


This is awesome!

Knowing nothing about this space: is there already a program that uses (wave-based) sound simulation to allow music programming in an arbitrary 3D environment? This could go beyond mono and stereo :)


Something like Ambisonics https://en.wikipedia.org/wiki/Ambisonics perhaps?

I once worked with a brilliant audio developer/wizard who used Ambisonics (among other things).

Watching him position his desk when we switched to a new (large and open) office space is something I'll never forget. He walked around in the middle-ish of the room, snapping his fingers and listening. Suddenly he said "This is it, I'll sit here", and so he did. :)


This one seems to be static relative to the listener's position and to require more hardware. Assuming only headphones are used, something like WAVE (https://www.youtube.com/watch?v=ibM3fz-P0Ac) could be used to simulate reflection, refraction, diffraction, and interference effects. If the head is accurately tracked, I could imagine 3D directional sound programming in VR. Enter the Matrix rave ;)


I'm not sure I get what you mean by arbitrary 3d environment, but I know BT's album "This Binary Universe" is composed for 5.1, and the first track "All That Makes Us Human Continues" is written entirely in CSound.


A language with which the programmer can specify the static or dynamic three-dimensional position of the sound sources, absolute or relative to the listener, in a specified environment. The geometry of the environment can affect reverberation, amplitude, etc.


SuperCollider has this kind of stuff.


For Daft Punk buffs like me, this is gold. Awesome stuff, kudos!


I totally love this! Great job man!


Awesome! Added to my playlist :)


Glad you like it! Feel free to ask questions!


Impressive!


Did he put out the source? I'd love to edit it.


Yes, it's available here at the end: https://mxs.sbrk.org/aerodynamic-everything-en.html


Nice!


Aerodynamic in 1 line of bash:

    firefox https://www.youtube.com/watch?v=Mjpu0-o9iek


That's not portable bash though. Could you maybe provide a Docker image with the dependencies?



