Hacker News

My question is why bother with learning a special 'synthesis' application?

Why can't folks make good sounding music from just code?

One of my favourite genres is EDM, but surely whatever Flume does in Ableton could be done in pure code?

This is like authoring a web page in Macromedia Dreamweaver vs. HTML/CSS in vim.




Why bother with learning a special "image editing" application?

Why can't folks make good looking pictures from just code?

Algorithms implemented in soft synths need to take certain things like aliasing into account, and if every musician had to bother with that, there would be much less EDM for you to enjoy.
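To make the aliasing point concrete, here is a minimal plain-Python sketch (not taken from any particular soft synth): a sawtooth sampled naively contains harmonics above the Nyquist frequency that fold back into the audible range, while an additive version simply stops adding harmonics before that limit. The function names and the 5 kHz test pitch are just illustrative choices.

```python
import math

SR = 44100  # sample rate in Hz

def naive_saw(freq, n):
    """Ideal sawtooth sampled directly: harmonics above SR/2 alias."""
    return [2.0 * ((i * freq / SR) % 1.0) - 1.0 for i in range(n)]

def bandlimited_saw(freq, n):
    """Additive sawtooth: only keep harmonics below the Nyquist limit."""
    nyquist = SR / 2
    k_max = int(nyquist // freq)  # highest harmonic that won't alias
    out = []
    for i in range(n):
        t = i / SR
        # Sawtooth Fourier series: -(2/pi) * sum of sin(2*pi*k*f*t)/k
        s = sum(math.sin(2 * math.pi * k * freq * t) / k
                for k in range(1, k_max + 1))
        out.append(-(2 / math.pi) * s)
    return out

# At 5 kHz only 4 harmonics fit under Nyquist; the naive version
# silently folds everything above that back into the audible band.
naive = naive_saw(5000, 64)
clean = bandlimited_saw(5000, 64)
print(len(naive), len(clean))
```

A knob-twiddling musician never has to think about any of this, because the soft-synth author already did.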


There is a scene for live coding music. I've always liked this example using SuperCollider and Common Lisp:

https://www.youtube.com/watch?v=xzTH_ZqaFKI

But against a synthesist with a nice hardware synth or computer synths, there's no way coding can compete with the speed and dynamic interaction of the interface. I think this is a nice example that shows just how fast crafting a simple tune can be with a dedicated interface such as the Elektron Analog Four:

https://www.youtube.com/watch?v=Mm0WuPltadM

Then there are things like AudioMulch that really blur the lines between coding, synthesis, and something else:

https://www.youtube.com/watch?v=7K_RTS73jSY

A lot of music and synthesis is about experimentation and just playing, and that's something that text-based code doesn't quite capture well.


People probably could make good sounding music from code, but it's harder. When you're making a new patch on your synth, you don't want to think about complicated matters like code-as-text; you want to be able to turn a knob and hear a difference, with each change getting closer and closer to the sound you want. Having to write even one line of code instead of sliding a slider up or down would instantly put me out of the flow when I'm creating new patches, no doubt. Changing values to some specific number instead of kind-of-here-sounds-good-to-me would slow down the entire flow as well. Switching between the computer keyboard and my synth keyboard just to make a small change would do the same.

All in all, if you know exactly what you want you could probably do it in code, but part of the fun in creating a patch, and what helps with creativity, is experimentation, and I feel like that would easily get lost when coding with strict values and other "computer" syntax, whereas fiddling with a real synth just flows together with the rest of the music-making process.

There is really no comparison between the UX of sitting with a computer keyboard in front of a screen, writing and editing syntax to hear changes, and hearing real-time sounds come out of something like a Novation Summit through a good pair of headphones, with zero screens involved.


Have a look at SuperCollider and TidalCycles.


You could do that, just like you could write a web app in assembly.

Stuff like https://sonic-pi.net/#examples does exist, but what you start to realize is that, even in the "code first" approaches to making music, you are really just generating configuration on the fly that's being processed by some other engine underneath; in this case it's SuperCollider.


Synthesizing audio, and by extension entire songs, with code is really hard; try it some time. Sonic Pi seems like a good tool for it, but it's still really difficult: you need lots of threading to make layers of different "instruments".

https://sonic-pi.net/
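The layering problem the comment describes can be sketched in plain Python (this is an illustration of the idea, not Sonic Pi's actual API, where `live_loop` threads handle the concurrency for you): each "instrument" produces its own sample buffer, and someone has to mix them sample by sample, handling the fact that layers start and stop at different times.

```python
import math

SR = 8000  # low sample rate keeps the example tiny

def tone(freq, seconds, amp=0.5):
    """One 'instrument' layer: a plain sine at the given pitch."""
    n = int(SR * seconds)
    return [amp * math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def mix(*layers):
    """Sum layers sample by sample, padding shorter ones with silence."""
    length = max(len(layer) for layer in layers)
    return [sum(layer[i] if i < len(layer) else 0.0 for layer in layers)
            for i in range(length)]

bass  = tone(110, 0.5)   # A2, half a second
lead  = tone(440, 0.25)  # A4, half as long
track = mix(bass, lead)
print(len(track))
```

Even this toy version has to decide how to align layers in time; a real piece multiplies that bookkeeping across every voice, which is exactly the tedium a DAW or hardware synth hides.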


The same reason people use Photoshop instead of command line ImageMagick.


Have you ever used Ableton?


no


The same reason OpenSCAD is less popular than graphical CAD software.


This is a tragedy!

Check out https://github.com/curv3d/curv


I want a slider I can change in realtime to hear the sound change.



