
It seems their goal is to make a full-fledged DAW in the browser, but they are not there yet (and far from it from what I can tell). How they would be able to fund all that for $2k is beyond me.

That said, it's a really clever idea to have all the source editable, since js is an interpreted language anyway. One can imagine a future where the community not only shares songs and sounds, but the DSP units themselves. Unlike desktop DAWs, which rely on dylibs (dlls) to supply external synths and effects, a js-based DAW can load, re-compile and hotswap units on the fly.
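
To make that concrete, here's a rough sketch of what hotswapping could look like, assuming a made-up per-sample convention of sample = unit(t, x); the project's actual plugin interface may well differ:

    // Hypothetical DSP unit: (time, input sample) -> output sample.
    var unit = function (t, x) { return x; }; // start as a pass-through

    // Recompile user-edited source text and swap it in without stopping audio.
    function hotswap(source) {
      var next = new Function('return (' + source + ');')(); // compile once
      next(0, 0);   // quick smoke test before committing
      unit = next;  // the audio callback picks this up on its next sample
    }

    // e.g. swap in a soft clipper while playback keeps running:
    hotswap('function (t, x) { return Math.tanh(4 * x); }');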



Realtime audio DSP seems like about the last thing that in-browser JavaScript was designed to do well.

Janky audio is unacceptable in a way that janky visuals aren't. It totally kills the experience.

Without very strong guarantees about GC latency and thread scheduling that are neither available nor on the horizon for in-browser js, it won't work for anything beyond fun hacks.


Are you aware of the Web Audio API? It's an API designed specifically to address those concerns, and it achieves low latency and high performance where it is available (Chrome, Firefox and some versions of Safari currently).

https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specifica...


Yes, and it works by mostly avoiding realtime DSP in javascript. It provides a high-level API for wiring together prebuilt audio processing nodes. Much like using browser animation APIs vs programmatic animation. My point is about the feasibility of JS signal processing, not the feasibility of using JS to glue together signal processors.
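
For concreteness, the "glue" style looks roughly like this (a sketch; native nodes do all the per-sample work, JS just wires the graph):

    var ctx = new AudioContext();          // webkitAudioContext in older builds
    var osc = ctx.createOscillator();      // native sawtooth source
    var filter = ctx.createBiquadFilter(); // native lowpass
    var gain = ctx.createGain();

    osc.type = 'sawtooth';
    osc.frequency.value = 110;
    filter.frequency.value = 800;
    gain.gain.value = 0.2;

    osc.connect(filter);
    filter.connect(gain);
    gain.connect(ctx.destination);
    osc.start();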

The WebAudio API does provide a ScriptProcessorNode for JS processing, but it appears to suffer from the problems I described. The spec seems to warn against using this for realtime processing, fwiw.
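
Whereas the ScriptProcessorNode path pulls every sample through a main-thread JS callback, something like:

    var ctx = new AudioContext();
    var node = ctx.createScriptProcessor(1024, 1, 1); // buffer size, in/out channels
    var phase = 0;

    node.onaudioprocess = function (e) {
      var out = e.outputBuffer.getChannelData(0);
      for (var i = 0; i < out.length; i++) {
        phase += 440 / ctx.sampleRate;
        out[i] = 0.2 * Math.sin(2 * Math.PI * phase); // every sample computed in JS
      }
    };
    node.connect(ctx.destination);

Any GC pause or layout work on that thread and you blow the deadline and glitch.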


"The WebAudio API does provide a ScriptProcessorNode for JS processing, but it appears to suffer from the problems I described."

Yup. That is being worked on by the W3C and the WebAudio committee as we speak. WebWorker-based JS processing is something being considered.

https://github.com/WebAudio/web-audio-api/issues/113

http://lists.w3.org/Archives/Public/public-audio/2013OctDec/...

I am expecting this to be resolved in a few months. (hopefully?)

If the threading issues are resolved, then real time audio rendering doesn't take that much processing power and can be easily done in JS.

In fact, if there is nothing else contending for the event loop (like in this WavPot example), the ScriptProcessorNode is able to meet the real-time guarantees pretty well while still doing decent processing.

Also, going forward, with technologies like SIMD.js and asm.js, more can be done within a single onaudioprocess callback.


Meeting real-time guarantees pretty well isn't good enough for a DAW. Not without a lot of buffering (and therefore latency), anyway.

The WebWorker proposal is interesting. If the worker has its own heap and realtime scheduling with a correctly configured GC, it would be a big improvement.
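
Just to sketch the shape of the idea with today's plain Worker API (the eventual spec may look nothing like this, and the file name and block protocol here are made up):

    // main thread: ask a worker for rendered blocks, queue them for the audio callback
    var worker = new Worker('dsp-worker.js');    // hypothetical worker script
    var queue = [];
    worker.onmessage = function (e) {
      queue.push(new Float32Array(e.data));      // rendered block arrives back
    };
    worker.postMessage(2048);                    // request a 2048-sample block

    // dsp-worker.js: do the DSP off the main thread, transfer the result back
    onmessage = function (e) {
      var block = new Float32Array(e.data);      // e.data is the requested length
      for (var i = 0; i < block.length; i++) {
        block[i] = Math.random() * 2 - 1;        // placeholder DSP: white noise
      }
      postMessage(block.buffer, [block.buffer]); // transferable, no copy
    };

You would still need something on the audio thread to consume the queue, and you would still be at the mercy of the worker's own GC, which is why the heap and scheduling guarantees matter.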

Getting the different implementations to work consistently enough will be challenging, though. Meeting realtime deadlines is difficult enough when working in C with a known OS and audio stack.


We're working on a test suite for implementations, which will hopefully help with some of the basics, but it will be challenging to get full coverage.

https://github.com/w3c/web-platform-tests/tree/master/webaud...


If you don't care about latency, and in this "DSP sandbox" application you probably don't, JavaScript is fine for real-time DSP. JS under V8 and similar engines can be quite fast.

You aren't going to digitize 100 MHz of RF spectrum and build an SDR in JavaScript, but for audio rendering work it'd be fine. Cool hack.


I'm assuming that a full-fledged DAW cares a lot about latency. Live performance becomes difficult when latency creeps much above 5ms, for example. A lot of things that you normally don't think about (like when to allocate a buffer of memory) become critically important with low latency requirements.
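
To put numbers on it, the block size alone sets a hard floor on latency:

    // latency floor in ms = 1000 * blockSize / sampleRate
    var ms256 = 1000 * 256 / 44100; // ~5.8 ms for a 256-sample buffer at 44.1 kHz
    var ms128 = 1000 * 128 / 44100; // ~2.9 ms, and that's before OS/driver overhead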

That isn't an issue for the demoed app, but it is for the hypothetical longer-term goal under discussion.


Yeah, I think they're making a mistake by using the term DAW. That term already means something, and this isn't it.


"You probably don't care about latency" — famous last words...


Seems $2k is just one of their milestones...

"development is split into milestones on which the features are discussed and decided upon in the mailing list with the help of the community.

a funding campaign is then setup for each next milestone which supports development and keeps the project up and alive."


This kind of thing has existed in desktop/native form for some years. Look up SuperCollider.


& max/msp even included js but it operated at control rate, not audio, so you couldn't write synths directly as js


Max now has gen[1], so you can write audio/sample-rate code.

[1]: http://cycling74.com/products/gen/


yea want to try it but i'm stuck on max 5... max upgrades are more expensive than they're worth & now it's splitting into multiple products (max4live, gen), so i'm kind of on an every-other-generation plan unfortunately.

especially annoying when features you like get deprecated (pluggo, ms. pinky), though i understand gen & max4live both kinda serve as pluggo replacements. all in all it's just hard for me to pull the trigger on a $250 upgrade for a few extra integration points on a tool that should be mostly open source anyway.

http://puredata.info/


And I should point out that I personally feel kinda guilty... I met Miller Puckette and he was so enthusiastic about PureData and how much more the community could grow. I realize the biggest reason Max is thriving is because of marketing and now integration with Ableton.

I feel like if something were done to prevent Pd from being wiped off the map we could have had an open source community by now rather than Cycling 74. hm... the media production world is complicated for open source, though.


> I feel like if something were done to prevent Pd from being wiped off the map we could have had an open source community by now rather than Cycling 74.

But we got another repeat of the scenario: the OSS project was unable to muster a decent UI, while the commercial variant was able to both fund the staff and draw out the needs from its user community to make continual UI improvements. (Speaking specifically to the vast improvements over time culminating in Max 6.) If I sound a bit testy here, it's because I've seen too many promising OSS projects killed by this same problem: terrible user experience.

> I realize the biggest reason Max is thriving is because of marketing

I strenuously disagree. It's because PD absolutely failed to deliver a sane user experience, period. Sure, marketing is important, but Cycling '74's work is what makes Max at all relevant today. About all that PD ever had going for it versus Max was being open source.

As someone who used Max/FTS back on the ISPW[1], I can say that even that old UI was vastly better than PD's. I enthusiastically gave PD a go when it first came out, even going back from time to time to check in on it. Every time I found it nigh unusable due to huge UI issues. Especially for what amounts to a visual programming language, this is the death knell. For comparison, it's not like early ISPW Max didn't have its pain points.

Max users owe Miller Puckette a substantial debt for his contributions to this excellent visual signal/event programming environment, so it's unfortunate that PD was never able to pull it together.

[1] https://en.wikipedia.org/wiki/ISPW


yeahhhh i mean i guess part of the "guilt" i was expressing is a regret that I was too young/inexperienced to be able to contribute, and that I too just handed money over to Max/MSP when they were still on version 4, which IIRC was not very different from Pd at the time.

I think when Max 5 introduced "Presentation Mode" I knew it was over, and then Max for Live was the nail in the coffin. But prior to that I'd seen most students using Max because they believed it was more feature-rich, even though most of our professors were using Pd to do more complex work than my Max-toting peers. Similar to Matlab/Octave... I've seen a few Octave users outpace Matlab users simply because they were free to experiment without the roadblocks of toolkit purchases.

Cycling 74 has done a good job over time, for sure. I'm just saying that as recently as 5-10 years ago there wasn't such a clear dichotomy, and the release of Max for Live really struck me because it was a proprietary integration, a huge departure from pluggo's VST philosophy.

I think I just have to get used to the fact that despite the great work being done on projects like Pd/Jack/Ardour, audio technology is probably drifting further away from OSS than towards it.

It's tough. I want to contribute to some audio projects, but most of the ones that are gaining traction seem to be putting up paywalls or are too rooted in platform-specific code. I'm toying with some audio stuff on the JVM, will see how that goes....


There's a reasonably active community around PD, but it has the same problem as a lot of other open source software: it's hideously ugly. IIRC the user interface is done with Tcl/Tk. Given the large number of competing products in the same space, people who are not heavily committed to open source have little incentive to use it, unfortunately. I'd say a visual/UI makeover is a higher priority right now than extending the PD library or other tasks.


PD and Max-MSP are from the same stable, and PD is the poor man's Max-MSP. I feel that PD is an excellent learning tool if you want to learn about synthesis or DSP more generally, but as a musician's tool it's just not ready.

A lot of the music made with PD is pure crap, or even a type of anti-music, or puts forward some kind of alternate musical theory of its own which nobody can understand. PD music is usually too detached from normal conventions such as 12-TET, harmonies and progressions to be listenable. Typically the PD user will produce something with weird bell sounds, stuttering percussive noises or massive pad washes going in and out. Not conventional music.

The reason is that PD doesn't provide enough built-in function shortcuts. If you want to make music, PD is a practically useless interface. By the time you've programmed a very basic synth, any inspiration has long gone. An instrument is something that should need no foreplay beyond just turning it on and picking it up, before actually playing it. If you want to make something recognizable as music with PD, you will need to first download someone else's synth patch or copy a subtractive or FM synth from a book. And then you'll find it has no patches you can modify (unless you make them yourself). The time between turning on the computer, setting up your synth and playing your piece is simply too long.


Sharing SuperCollider snippets was a thing for a while on Twitter. THE FUTURE IS NOW.


>a js-based DAW can load, re-compile and hotswap units on the fly.

REAPER does just this. It has its own audio scripting language, JesuSonic (also abbreviated JS, which can get confusing), which is interpreted. It's not incredibly popular, but people do share code and modules for it. There are some who sell VSTs/AUs/etc (dlls), but give away the same plugin in a JS format in order to support Reaper.



