This isn't really a DAW. For those unfamiliar, a DAW is a fully featured music creation environment including sound design, composition, recording, arrangement and mixing in one interface. Examples include Logic, Cubase and Ableton.
While I suppose this does allow for all of those things, as much as any programming language with access to an audio API does, this would be better described as a web-based DSP livecoding environment.
Apart from the naming quibbles, it looks excellent! I wonder what's generating the sound? I'm aware of the oscillator/filter primitives in the HTML5 audio API from the Minimoog Google doodle, but this seems more elaborate than that.
EDIT: for fun times, load "need more 303," scroll down to the bottom, and change some of the numbers around. Setting the slide() call to 1/1024 yields a nice FM-ish sound. You can even overdrive the filter. Reach for the lasers!
It seems their goal is to make a full-fledged DAW in the browser, but they are not there yet (and far from it from what I can tell). How they would be able to fund all that for $2k is beyond me.
That said, it's a really clever idea to have all the source editable, since js is an interpreted language anyway. One can imagine a future where the community not only shares songs and sounds, but the DSP units themselves. Unlike desktop DAWs, which rely on dylibs (dlls) to supply external synths and effects, a js-based DAW can load, re-compile and hotswap units on the fly.
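To sketch what I mean (illustrative only - I don't know wavepot's actual internals, and the dsp(t) convention here is an assumption):

    // a host keeps calling currentUnit(t) from its audio callback and
    // picks up new code the moment it's swapped in
    var currentUnit = function (t) { return 0; }; // silence until code loads

    function hotswap(source) {
      try {
        currentUnit = new Function('t', source); // recompile on the fly
      } catch (e) {
        // keep the previous unit running if the new code doesn't parse
      }
    }

    hotswap('return 0.2 * Math.sin(2 * Math.PI * 440 * t);');

No dylib host can do that without unloading and reloading the plugin.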
Realtime audio DSP seems like about the last thing that in-browser JavaScript was designed to do well.
Janky audio is unacceptable in a way that janky visuals aren't. It totally kills the experience.
Without very strong guarantees about GC latency and thread scheduling that are neither available nor on the horizon for in-browser js, it won't work for anything beyond fun hacks.
Are you aware of the Web Audio API? It's an API designed especially to address those concerns, and achieves low latency and high performance where it is available (Chrome, Firefox and some versions of Safari currently).
Yes, and it works by mostly avoiding realtime DSP in JavaScript. It provides a high-level API for wiring together prebuilt audio processing nodes, much like using browser animation APIs vs. programmatic animation. My point is about the feasibility of JS signal processing, not the feasibility of using JS to glue together signal processors.
The WebAudio API does provide a ScriptProcessorNode for JS processing, but it appears to suffer from the problems I described. The spec seems to warn against using this for realtime processing, fwiw.
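For reference, JS-side processing with a ScriptProcessorNode looks roughly like this (a minimal sketch; the buffer size and test tone are arbitrary):

    var ctx = new AudioContext();
    // 2048-frame buffers: larger values resist glitches, at the cost of latency
    var node = ctx.createScriptProcessor(2048, 0, 1);
    var phase = 0;
    node.onaudioprocess = function (e) {
      var out = e.outputBuffer.getChannelData(0);
      for (var i = 0; i < out.length; i++) {
        out[i] = 0.2 * Math.sin(phase); // 440 Hz test tone
        phase += 2 * Math.PI * 440 / ctx.sampleRate;
      }
    };
    node.connect(ctx.destination);

Every time that onaudioprocess callback misses its deadline - a GC pause, a busy event loop - you get an audible glitch, which is the problem in a nutshell.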
I expect this to be resolved in a few months (hopefully!).
If the threading issues are resolved, then real-time audio rendering doesn't take that much processing power and can easily be done in JS.
In fact, if there is nothing else contending for the event loop (as in this Wavepot example), the ScriptProcessorNode is able to meet the real-time guarantees pretty well while still doing decent processing.
Also, going forward, with technologies like SIMD.js and asm.js, more can be done within a single onaudioprocess callback.
Meeting real-time guarantees pretty well isn't good enough for a DAW. Not without a lot of buffering (and therefore latency), anyway.
The WebWorker proposal is interesting. If the worker has its own heap and realtime scheduling with a correctly configured GC, it would be a big improvement.
Getting the different implementations to work consistently enough will be challenging, though. Meeting realtime deadlines is difficult enough when working in C with a known OS and audio stack.
If you don't care about latency, and in this "DSP sandbox" application you probably don't, JavaScript is fine for real-time DSP. JS under V8 and similar engines can be quite fast.
You aren't going to digitize 100 MHz of RF spectrum and build an SDR in JavaScript, but for audio rendering work it'd be fine. Cool hack.
I'm assuming that a full-fledged DAW cares a lot about latency. Live performance becomes difficult when latency creeps much above 5ms, for example. A lot of things that you normally don't think about (like when to allocate a buffer of memory) become critically important with low latency requirements.
That isn't an issue for the demoed app, but it is for the hypothetical longer-term goal under discussion.
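Back of the envelope, since latency per buffer is just buffer size over sample rate (my numbers, not anything from the demo):

    var sampleRate = 44100;
    [256, 1024, 2048].forEach(function (frames) {
      console.log(frames + ' frames = ' +
        (frames / sampleRate * 1000).toFixed(1) + ' ms');
    });
    // 256 frames = 5.8 ms, 1024 = 23.2 ms, 2048 = 46.4 ms — per buffer,
    // before the OS audio stack adds its own output latency

So even the smallest ScriptProcessorNode buffer already eats the whole 5 ms budget.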
yea, want to try it but i'm stuck on max 5... max upgrades are more expensive than they're worth & now it's splitting into multiple products (max4live, gen) so i'm kind of on an every-other-generation plan unfortunately.
especially annoying when features you like get deprecated (pluggo, ms. pinky), though i understand gen & max4live both kinda serve as pluggo replacements. all in all it's just hard for me to pull the trigger on a $250 upgrade for a few extra integration points on a tool that should be mostly open source anyway.
And I should point out that I personally feel kinda guilty... I met Miller Puckette and he was so enthusiastic about PureData and how much more the community could grow. I realize the biggest reason Max is thriving is because of marketing and now integration with Ableton.
I feel like if something were done to prevent Pd from being wiped off the map we could have had an open source community by now rather than Cycling 74. hm.... the media production world is complicated for open source tho.
> I feel like if something were done to prevent Pd from being wiped off the map we could have had an open source community by now rather than Cycling 74.
But we got another repeat of the scenario: the OSS project was unable to muster decent UI while the commercial variant was able to both fund the staff and draw out the needs from its user community to make continual UI improvements. (Speaking specifically to the vast improvements over time culminating in Max 6.) If I sound a bit testy here, it's because I've seen too many promising OSS projects killed by this same problem: terrible user experience.
> I realize the biggest reason Max is thriving is because of marketing
I strenuously disagree. It's because PD absolutely failed to deliver a sane user experience, period. Sure, marketing is important, but Cycling '74's work is what makes Max at all relevant today. About all that PD ever had going for it versus Max was being open source.
As someone who used Max/FTS back on the ISPW[1], even that old UI was vastly better than PD's. I enthusiastically gave PD a go when it first came out, even going back from time to time to check in on it. Every time I found it nigh unusable due to huge UI issues. Especially for what amounts to a visual programming language, this is the death knell. For comparison, it's not like early ISPW Max didn't have its pain points.
Max users owe Miller Puckette a substantial debt for his contributions to this excellent visual signal/event programming environment, so it's unfortunate that PD was never able to pull it together.
yeahhhh i mean i guess part of the "guilt" i was expressing is a regret that I was too young/inexperienced to be able to contribute and I too just handed money over to Max/MSP when they were still on version 4, which IIRC was not very different from Pd at the time.
I think when Max 5 introduced "Presentation Mode" I knew it was over, and then Max for Live was the nail in the coffin, but prior to that I'd seen most students using Max because they believed it was more feature-rich even though most of our professors were using Pd to do more complex work than my Max-toting peers. Similar to Matlab/Octave... I've seen a few Octave users outpacing Matlab users simply because they were free to experiment without having the roadblocks of toolkit purchases.
Cycling 74 has done a good job over time, for sure; I'm just saying that as recently as 5-10 years ago there wasn't such a clear dichotomy, and the release of Max for Live really struck me because it was a proprietary integration, a huge departure from pluggo's VST philosophy.
I think I just have to get used to the fact that despite the great work being done on projects like Pd/Jack/Ardour, audio technology is probably drifting further away from OSS than towards it.
It's tough; I want to contribute to some audio projects, but most of the ones that are gaining traction seem to be putting up paywalls or are too rooted in platform-specific code. I'm toying with some audio stuff on the JVM, will see how that goes....
There's a reasonably active community around PD, but it has the same problem as a lot of other open source software - it's hideously ugly. IIRC the user interface is done with Tcl/Tk. Given the large number of competing products in the same space, people who are not heavily committed to open source have little incentive to use it, unfortunately. I'd say a visual/UI makeover is a higher priority right now than extending the PD library or other tasks.
PD and Max-MSP are from the same stable, and PD is the poor man's Max-MSP. I feel that PD is an excellent learning tool if you want to learn about synthesis or DSP more generally, but as a musician's tool it's just not ready.

A lot of the music made with PD is pure crap, or even a type of anti-music, or puts forward some kind of alternate musical theory of its own which nobody can understand. PD music is usually too detached from normal conventions such as 12-TET, harmonies and progressions to be listenable. Typically the PD user will produce something with weird bell sounds, stuttering percussive noises or massive pad washes going in and out. Not conventional music.

The reason is that PD doesn't provide enough built-in function shortcuts. If you want to make music, PD is a practically useless interface. By the time you've programmed a very basic synth, any inspiration has long gone. An instrument is something that should need no foreplay beyond just turning it on and picking it up before actually playing it. If you want to make something recognizable as music with PD, you will first need to download someone else's synth patch or copy a subtractive or FM synth from a book. And then you'll find it has no patches you can modify (unless you make them yourself). The time between turning on the computer, setting up your synth and playing your piece is simply too long.
>a js-based DAW can load, re-compile and hotswap units on the fly.
REAPER does just this. It has its own audio scripting language, Jesusonic (also abbreviated JS, which can get confusing), which is interpreted. It's not incredibly popular, but people do share code and modules for it. There are some who sell VSTs/AUs/etc. (dlls) but give away the same plugin in JS format in order to support REAPER.
In my head, a DAW is exactly what it says. A workstation for digital audio. That said, this has the potential to evolve to something more similar to existing professional DAWs and maybe even surpass them.
I'm glad you like it. The sound is generated purely by a JavaScript function returning sample values, which are then buffered and played through the Web Audio API. That's how you hear sound.
On edit: It's really easy to remix tracks that way, even small changes to values yield quite different results. It's fun!
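To illustrate, it's something like this - a function of time in seconds returning a sample in [-1, 1] (simplified from the real thing):

    function dsp(t) {
      // 220 Hz sine with a decay envelope retriggered twice a second
      return Math.sin(2 * Math.PI * 220 * t) * Math.exp(-8 * (t % 0.5));
    }

Change any number and you get a different sound immediately.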
Yes, that's a good delay filter, but it's also a good example of why this type of user-programmable synth apparatus, which looks similar to CSound, won't catch on. There is much more going on in the typical delay sound itself than just a delayed iteration of a sample. A pure delay is boring. There is the possibility to model all kinds of analog and hardware digital delays with additional coding, but people have been doing that for years, and it's no surprise really that commercial companies do it best (and they won't bother unless there's a way to protect proprietary code, such as VST).
There's more to digital audio processing than code? I don't get it. Once the collaborative module system is in place (see milestone I), it'll allow for more complex stuff, as you'd be joining components and moving upwards in the abstraction levels. I don't see how big companies can compete with that.
No, that's not the point I was making. I was saying that to make a good delay in software, it can't just be a delay. To sound good, it needs additional algorithms to carry out at least 2 other DSP functions on the return signal.
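For example (a rough sketch of my own, not any particular product's algorithm): a plain feedback delay starts to sound "analog" once you darken and saturate the return signal:

    // ~0.5 s delay at 44.1 kHz, with a one-pole lowpass and soft clipping
    // in the feedback path — the "at least 2 other DSP functions"
    var buf = new Float32Array(22050);
    var pos = 0, lp = 0;

    function delay(input, feedback) {
      var out = buf[pos];
      lp += 0.2 * (out - lp);                      // lowpass: darker repeats
      buf[pos] = Math.tanh(input + feedback * lp); // soft-clip the return
      pos = (pos + 1) % buf.length;
      return out;
    }

Drop either of those extra stages and the repeats sound sterile.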
> I'm aware of the oscillator/filter primitives in the HTML5 audio API from the minimoog google doodle, but this seems more elaborate than that.
An oscillator and a filter modulator are pretty much all you need to create this particular app. There's no reverb, and any delay effects are being "hard-coded" with simple degrading velocities. All synthesizers are built from these ultra-simple components; add maybe 2 or 3 more and you have everything you need to build a digital synth purely with web technologies.
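In Web Audio node terms, the oscillator-into-filter patch from the doodle is only a few lines (a sketch with arbitrary values):

    var ctx = new AudioContext();
    var osc = ctx.createOscillator();
    var filter = ctx.createBiquadFilter();
    osc.type = 'sawtooth';
    osc.frequency.value = 110;      // A2
    filter.type = 'lowpass';
    filter.frequency.value = 800;   // tame the saw's upper harmonics
    osc.connect(filter);
    filter.connect(ctx.destination);
    osc.start();

Wavepot computes its samples directly in JS instead, but the building blocks are the same.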
This is great! I teach high school math and science, and students often ask how sound waves can be turned into music. I give them a big-picture overview of sine waves and superposition, but I've never had a tool that lets students play with the math easily.
I look forward to showing this to students in the fall.
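Superposition in particular should demo nicely - summing two sines a couple of Hz apart makes the beating audible. Something like this (assuming wavepot's function-of-time convention, as I understand it):

    function dsp(t) {
      // 440 Hz + 442 Hz: students hear the 2 Hz beat frequency directly
      return 0.3 * (Math.sin(2 * Math.PI * 440 * t)
                  + Math.sin(2 * Math.PI * 442 * t));
    }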
> it is quite contrary to normal raves due to the fact that the music can be very stop-and-start and the ingestion of MDMA, ketamine or other illicit drugs does not usually happen
I'd love to! You guys have some amazing things going on there. I have some live performance features planned for wavepot, so yeah, not quite there yet! But keep an eye on it; I'm sure you'll enjoy what I have in mind.
If you like this, you might also be interested in a newsletter I write called Web Audio Weekly. I link to interesting projects that use the Web Audio and Web MIDI APIs as well as more general stuff of interest to musicians and developers.
This is such a great example of a great experience changing behavior. I haven't thought of making music in years, and I've seen other projects kinda similar to this in the past - but this reacts so quickly, so immediately to my inputs, that it's hard to step away from.
Cool site. You need more genre categories, however; Synth and/or Electronic would be a good start, since you are missing anything along those lines. Probably an EDM category as well; otherwise that's what Electronic and/or Synth will end up becoming.
Also, Find Musicians doesn't seem to be working, at least not in Chrome on OS X.
we do have some mixing support in beta - ping me if you want early access osi (at) getbandhub (dot) com
re: programming - we are trying to figure out what's the best model. One thing we are trying to avoid is complex timelines to make it easy for the average recreational musician.
This is really cool, really nice level of abstraction.
Recreating the 303 example in the environments in which I learned computer music fundamentals (PD, Max, CSound, SuperCollider) would require either much more understanding or third-party implementations of components to generate something similar. Most importantly, the sound is really nice.
Btw, the categorisation of it being a DAW depends on the user's perspective; I've created what I would consider DAW equivalents in Max/Pure Data that facilitate audio/MIDI recording and playback. Just because a toolkit doesn't do stuff for you automatically doesn't mean it isn't capable.
DAW in my opinion is just a marketing term not a descriptive one.
Thanks! I have a background in Music Technology, but I'm just starting with DSP myself. Most of the stuff I use is ported from code I found online, so I can't really take credit for that, though.
A modular piece of software like this can achieve pretty much the same as professional DAWs. Consider modules that add UI elements, automations, etc. It won't be long until the abstraction is high enough so everyone just assembles their own custom DAWs on the fly.
Is there a way to contribute without using bitcoin? This is awesome and I want to support it, but I'm not really interested in getting involved in bitcoin.
Paypal donations have been added to the crowdfund. Thank you, people, I appreciate your enthusiasm - this is a project I am in love with as well, and I want to see it going places.
It goes into a loop when I want to switch a file:
"You've made some edits!"
Also:
SyntaxError: Unexpected token *
at Function (native)
at t (blob:http%3A//wavepot.com/576aa761-8f7c-4918-b4e7-8bdd7c77ff39:2:16401)
at Function.<anonymous> (blob:http%3A//wavepot.com/576aa761-8f7c-4918-b4e7-8bdd7c77ff39:2:16779)
at Function.i.emit (blob:http%3A//wavepot.com/576aa761-8f7c-4918-b4e7-8bdd7c77ff39:1:2656)
at DedicatedWorkerGlobalScope.i.isMaster.self.onmessage (blob:http%3A//wavepot.com/576aa761-8f7c-4918-b4e7-8bdd7c77ff39:2:12723)
As a heavy Ableton user and coder, I would like a coding environment that also has powerful GUI libraries, so I can quickly implement faders and knobs and such when needed, but also be able to change code on the fly - kind of the best of both worlds for the ultimate music hacking environment.
This is something similar and very interesting but for visual stuff based on openFrameworks:
https://media.usfca.edu/app/plugin/embed.aspx?ID=mjWcGbE4DUK...
Yeah, I'd really like to see a DAW that incorporates SuperCollider in a transparent way along with traditional audio & MIDI tracks, a bit like how Max is integrated with Ableton.
It's the melody of the song 'Popcorn'.
While this might not be special, I implemented a first draft of a (going-to-be) multi-track sequencer.
</shameless self plug>
What a great idea. I want to make synthesized sound effects for a game I'm making, and building them algorithmically in JavaScript would be great. My alternatives have involved twiddling a zillion predefined knobs in an overwhelming UI, but as a developer this is far more appealing.
This is amazing, but I wish there were a clear way of knowing when an error is preventing your changes from being output.
I fiddled for a couple of minutes without any changes before realizing I had forgotten to initialize a variable. (The x checkmarks don't seem to work for unassigned variables.)
Hi stagas, great job on wavepot! Is there any way we could get in touch off of this thread? My startup is working on a very similar idea, and I think it would be great to know each other. If you'd like, I can write my email down in this thread. Hope to talk soon!
This is really, really impressive. Cool range of sounds. Have you thought about attaching a user-friendly, knobby interface to the modules? I could see it as a really easy way for non-programming musicians to learn how to use this and other music generation systems like it.
As someone who's completely ignorant to the concepts behind algorave, could anyone more experienced outline the helpful prerequisites to know before I experiment with a tool like Wavepot?
I assume it's mostly the (basic?) physics of sound waves that I'd need to understand?
Slightly related question: Are there any courses I can take specifically aimed at making music by programming? Lynda or similar preferred, articles / books also work as long as they have examples using some sort of software (not just algorithms)
Unfortunately the course materials aren't available any more, but you should watch for the next offering of this course by the California Institute of the Arts on Coursera: https://www.coursera.org/course/chuck101
This looks way more user-friendly than stuff like CSound. Would be awesome to see a real DAW-like GUI built on top of it... I've always wanted a DAW that would let me flip between GUI editing and code.
If you could use any help coding, please let me know.
I love it! I want to see something like this for the iPad. I could also imagine a Soundcloud + Spotify + Github hybrid playlist site where you could actually present your creation for people to play in a playlist format.
can you tell us more about how you coded the fundraiser modal? is it just checking the balances on those addresses, then converting them to USD? i'm sure it's a simple solution but i'd be happy to hear more about it - i was thinking of making a similar one for my own project, but it would be good if it displayed different addresses per visitor for better obfuscation... let's say, have a wallet full of addresses, then also check server-side whether the displayed one is valid
[edit: using something like coinbase would solve this, but how could you work it out without a 3rd party and any fees?]
It uses third-party APIs server-side to check the wallets' balances, converts DOGE to BTC, and then converts the BTC sum to USD. I like the obfuscation idea, but I'm not sure it's worth the complexity in this particular use case.
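The gist of the balance check, sketched (using blockchain.info's plain-text query API as one example of such a third-party service; the DOGE lookup and the exchange-rate source are omitted):

    // Node.js sketch: address balance in satoshis → BTC → USD
    var https = require('https');

    function balanceUSD(address, btcUsdRate, callback) {
      https.get('https://blockchain.info/q/addressbalance/' + address,
        function (res) {
          var body = '';
          res.on('data', function (chunk) { body += chunk; });
          res.on('end', function () {
            callback(Number(body) / 1e8 * btcUsdRate); // 1e8 satoshis per BTC
          });
        });
    }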
this is really cool... it is really hard to see how the math ends up making music, though. I almost wish there were an example of a scale or a 4-beat measure. very interesting stuff.
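Something like this, maybe (a rough sketch assuming the dsp(t) convention; note changes will click a bit since the phase isn't tracked):

    // one octave of a major scale, four notes per second
    var steps = [0, 2, 4, 5, 7, 9, 11, 12]; // semitones above the root

    function dsp(t) {
      var n = steps[Math.floor(t * 4) % steps.length];
      var freq = 220 * Math.pow(2, n / 12);  // equal-tempered pitch
      return 0.3 * Math.sin(2 * Math.PI * freq * t);
    }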
using super for the keyboard shortcuts is problematic, at least for me - super-enter opens a new terminal, and that appears to take precedence over the browser shortcut
Would someone kindly explain why I and two other users (msane and jrlocke) were downvoted for simply expressing delight? Is that somehow unwelcome? I don't see anything against that in the guidelines: http://ycombinator.com/newsguidelines.html
Note, this isn't a complaint, it's merely a request to understand why my comment is inappropriate and to understand how I can express my approval of a submission without losing karma.
Usually, when someone bothers to tell you, it's because the comment "doesn't contribute to the dialog" or some such reason - i.e., it's too obvious, or perhaps seems like pandering for karma.
Anyway, I didn't downvote you btw, just my 2 cents from what I've read in the past.
It's not against the guidelines, but it doesn't create conversation. Many people find that fairly useless, and prefer comments with substance that advance the community.
The best way to express your approval of a submission? Upvote it.
>The best way to express your approval of a submission? Upvote it. //
Except with hidden vote scores it doesn't express anything at all (except changing a number on the OP's interface; the OP doesn't even know that you upvoted them).
If you want to express your approval to anyone other than the parent/OP or have anyone know who is expressing approval [that is occasionally relevant] then you have to comment.
I once thanked someone for submitting something interesting after their article had been up for a few hours and nobody commented. Someone downvoted me and took the time to condescendingly explain why my comment was worthless. I really don't understand that attitude. :|