Audio output of modern computers is high precision. 16 bit resolution is pretty damned precise...even once converted to analog. I'm certain the mechanical parts being driven by it are a much, much larger source of error than any decent computer output would be.
That said, there are measurable and audible differences between a stock audio output and a high end (or even prosumer) audio output. A nice 24 bit audio box could, possibly, be a useful upgrade in the future...when the mechanics finally catch up. But, I suspect it'll be years before the mechanical side of this design is precise enough to even show the errors in a 16 bit output. If ever...
I'm not referring to 16 bit precision per sample, more to the 44.1 kHz sample rate and the potential for multi-millisecond audio glitches on most modern PC configurations. Glitch-free audio is still pretty tough.
That's not true at all, and hasn't been for a long time. Maybe if you're mixing more than 8 tracks on a low-power CPU, but even a Chromebook can do that sort of thing. Even my phone (several years old) can do that. Playing back stereo audio (which is all that's required here) is something that can be done from a device that fits on your keyring. What you're arguing hasn't been true since the 90s.
Certainly, it would be bad if this thing relied on every last available bit of audio bandwidth all the way up to 22.05 kHz, but there's no particular reason to do that.
I'm not talking about 'can you play back audio', I'm talking about 'can you play back audio with zero glitches'. This becomes a problem of scheduling precision: whether scheduling will 'stall' if a driver hangs while processing an IRQ, whether the graphics driver can hang during a rendering operation, and so on. These are all a reality on modern desktops, even if they're fairly rare (I don't think I see audio glitches on my desktop more than maybe once a month).
The people I talk to who do professional audio recording and mixing insist that it is still a difficult problem; it was back when I did mixing (early to mid 2000s, not 'the 90s'). The prevalence of custom APIs for glitch-free low-latency audio playback like JACK and ASIO suggests to me that you still can't rely on just feeding samples to the OS and having them come out of the sound card at the appropriate rate 100% of the time.
FWIW, Windows introduced an entirely new audio stack in Vista that was tuned specifically to address these problems. Their first iteration managed to cripple ethernet bandwidth because of the scheduling requirements imposed for glitch-free audio.
I'm an audio nerd; I went to school for it, and I'm back to working professionally in the field (after a 15 year hiatus where I did it as a hobby). Serious audio problems do still exist on Linux systems, and it is a source of huge annoyance for me that my system still causes my (very expensive) studio monitors to pop when switching between some sound sources in some situations. But, these are mostly to do with the sound server and switching sources rather than with glitches in actual audio.
I still can't use Linux as my primary OS for audiovisual work because there are still issues with getting many channels of low-latency, high-resolution audio playing and recording reliably.
But, the issues you describe as they apply to this task are mostly a long-solved problem, at least for very simple tasks, like reliably playing 16 bit, 44.1 kHz stereo audio. Honestly, there's never been a time that I can remember when I couldn't reliably play a 16 bit, 44.1 kHz stereo file back on a Linux system...going back to 1995, or so, when I first started using Linux. The problems come when you demand a little more of the system.
Once again, the errors that exist today in terms of latency, jitter, dropouts, etc. (which are completely inaudible even to trained ears in most cases) are so tiny and insignificant that the current hardware couldn't even reproduce them. The hardware is so much slower than the input signal that noise from the audio signal will be lost in the much larger noise of the hardware. You can see the hardware noise in the printed objects. It's vastly larger than anything the computer output is going to screw up, and it'll be many generations before that stops being true (and, by then, maybe Linux will finally have its act together on audio).
Windows, BTW, finally does have its act together on audio, and has for five-plus years. I can reliably play a couple dozen 96 kHz, 24 bit tracks while recording more on a slightly high-end laptop. Obviously Mac is also solid on this front.
It's only a problem if you need very low latency. This system doesn't, so you can just set a massive interface buffer size. Scheduling is a complete non-issue if you have several seconds' worth of output buffer.
When professional audio people talk about the difficulties of glitch-free playback, they're talking about running their CPU at near 100% utilisation with 64 or 128 samples of buffer. Playing back a clean and glitch-free audio stream is trivial and has been for years.
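To put numbers on the gap between those two situations, here's a quick back-of-the-envelope sketch in Python (the 3-second buffer is an illustrative assumption, not anything from the project itself):

    # Rough arithmetic contrasting a huge playback buffer for a job like this
    # printer with the tiny monitoring buffers pro-audio people fight with.
    fs = 44100                    # samples per second
    channels = 2
    bytes_per_sample = 2          # 16 bit

    # "Massive" buffer for a non-interactive job like driving the galvos:
    buffer_seconds = 3.0          # assumed value
    buffer_bytes = int(fs * channels * bytes_per_sample * buffer_seconds)
    print(buffer_bytes, "bytes")  # ~530 kB of RAM buys ~3 s of buffered audio
    print("tolerated OS stall: %.0f ms" % (buffer_seconds * 1000))

    # Typical low-latency monitoring buffers that pro-audio people care about:
    for frames in (64, 128):
        print("%d-sample buffer = %.2f ms of slack" % (frames, 1000.0 * frames / fs))

Half a megabyte of buffer tolerates a multi-second scheduling stall; a 64-sample buffer tolerates about 1.5 ms. Those are very different problems.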
I am a professional sound recordist for film and I also make electronic music; my office is a studio. I do not consider this a problem for this application.
Latency and glitching can become an issue when you're trying to record and monitor in real time while also operating plugins and multiple tracks. That's not the case here.
In passing, I'll tell you what is really a problem on current iterations of Windows: MIDI over USB. Audio is OK, MIDI is atrocious.
I've had good luck with my Focusrite Saffire USB device. Both Windows and Linux. I recall orneriness in older devices...my old Firewire Focusrite had problems.
But, it's sad that any MIDI device can possibly have problems anywhere so long after its invention. I have a MIDI interface on a Commodore 64 that works reliably! Likewise, I used to do sequencing on an Amiga...worked fine for MIDI. Multi-track 16 bit audio was more of a challenge on such a small machine (started doing digital multitrack work with an Amiga 2000 with a 7.14 MHz CPU).
I was having a conversation with some friends about this the other day - people still miss their Atari STs and decades-dead products like Opcode's StudioVision. Microsoft just does not seem to get MIDI, which is one reason (among several other non-MS-related ones) that I do almost all my sequencing in hardware.
There's still a large community of people making music on those old machines. Mostly chiptunes, rather than MIDI-oriented stuff (as far as I know)...but, some folks are driving their old machines via MIDI and simply using them as sound devices.
Boots Riley of the Coup talked about using an ST for many, many years after it was out of date. I think he was using it up until Party Music (released in 2001), but it might have been the prior record. Atari Teenage Riot also kept pounding away on their STs well past their prime.
I use a C64 and a Game Boy for music production these days; but I mostly play the 64 live (using the keyboard...not even a MIDI keyboard or piano overlay) and the Game Boy just exists as an independent music device. I've never tried combining its output with anything else. But, probably will eventually. If I were to find an Amiga 1200 or one of the latter-day STs, in really good shape for not a lot of money, I'd probably pick it up. But, there are so many collectors of the old machines now, and so few remain in working condition, that it's not common to find them. I need to acquire a backup C64, though, for sure. I don't think there are good resources for repairing them the way there were when I was a kid.
People making hardware focused on audio typically use real-time operating systems, because of the scheduling requirements imposed by audio sampling.
I disagree: audio glitches can and do happen on non-realtime operating systems due to the lack of guaranteed scheduling. For this 3D printer, it doesn't really matter, but for a lot of other things, I would feel uncomfortable trusting a desktop PC soundcard, OS, and drivers to perform glitch-free audio I/O. I would much rather send self-clocking commands from a PC to a hardware device which takes care of the realtime task on its own.
But wasn't he talking about controlling it with your laptop? On Linux, this'd be no problem. But on Win7 I have this weird issue that causes my audio to go BZZZZPPT every few minutes or so (I think it's something to do with USB driver interrupts, but I haven't been able to dig up the root cause). It's usually a small, not very noticeable glitch (though annoying enough that it makes me rewind videos sometimes), but that would probably make for some "interesting" glitch lines in 3D printed models.
It depends on how fast the resin cures. My impression is that it sends (and repeats) the same layer for quite a bit longer than a few milliseconds, so I don't see it mattering.
Perhaps, with slightly more hardware, you could rig it up to kill the power to the laser if the sound level dropped to zero (which is typically how computer audio hiccups manifest). Printing would stop for a moment, but you wouldn't have the laser curing anything it shouldn't.
This doesn't really work, though, because to the 3D printer, 0 volts is a valid signal in the domain of the input. Never use a value in the domain of your input as a null value. Additionally, it is difficult to define what the sound level of a continuous signal really is, since it is usually computed using an averaging, integrating, or low-pass filter (not sure which). Just like an old-fashioned VU meter, it takes a moment for the needle to fall down to 0. By the time this happens, the glitch is over.
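To make the "needle falls slowly" point concrete, here's a minimal Python sketch of a one-pole envelope follower, which is roughly what a digital level meter does; the 44.1 kHz rate and 50 ms release time are assumed numbers, not anything specific to this printer:

    import math

    fs = 44100.0          # sample rate (assumed)
    release_ms = 50.0     # assumed meter release time
    coeff = math.exp(-1.0 / (fs * release_ms / 1000.0))

    def track_level(samples):
        """Instant attack, exponential release: the digital VU needle."""
        env = 0.0
        for x in samples:
            rect = abs(x)
            env = rect if rect > env else coeff * env + (1.0 - coeff) * rect
            yield env

    # 10 ms of full-scale tone followed by a 5 ms dropout (all zeros):
    signal = [1.0] * 441 + [0.0] * 221
    levels = list(track_level(signal))
    print(levels[440], levels[-1])
    # -> 1.0 during the tone, still roughly 0.9 at the end of the 5 ms dropout;
    #    the glitch is long over before the meter gets anywhere near zero.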
Also, consider that not all audio glitches are of the type that cause zeroes to be emitted. Some audio glitches are known as buffer underruns, where the ring buffer holding audio is not filled up in time, and eventually runs out, causing the same few milliseconds of audio to play over and over again until more audio makes it into the buffer.
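A toy simulation of that underrun behaviour, with a deliberately tiny ring buffer so the wraparound is easy to see (every name and size here is invented for illustration):

    # The "sound card" reads from a ring buffer at a fixed rate no matter what;
    # if the application stalls and stops refilling it, the card wraps around
    # and replays stale samples instead of going silent.
    RING_SIZE = 8                  # absurdly small, just to make it visible
    ring = [0] * RING_SIZE
    write_pos = 0
    read_pos = 0

    def app_write(sample):
        """Application side: fills the ring when it gets scheduled."""
        global write_pos
        ring[write_pos % RING_SIZE] = sample
        write_pos += 1

    def card_read():
        """'Hardware' side: consumes at a fixed rate regardless."""
        global read_pos
        sample = ring[read_pos % RING_SIZE]
        read_pos += 1
        return sample

    # The application fills the buffer, then stalls (e.g. a driver hogs an IRQ).
    for s in range(RING_SIZE):
        app_write(s)

    played = [card_read() for _ in range(20)]
    print(played)
    # -> [0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, ...]  old samples repeat instead
    #    of silence, which is exactly what a level-based cutoff would miss.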
I'm thinking you would throw in a cheap microcontroller and basically use it as a modem to control the printer. If the carrier signal is lost, then it stops printing. This would retain most of the benefits of using the audio jack, including being able to pre-compute prints and play them back with something like an mp3 player instead of a proper computer.
This does make it a bit more complex, though, because the microcontroller, though simple, still needs a board and some kind of power supply, and a reasonably accurate DAC for controlling the galvanometers. The microcontroller would have to use the time between DAC timer interrupts to perform the "modem" function. Not too bad, since for this project the DAC only needs a sample rate of 8 kHz or less, but still mildly challenging. You could just use an RS-232 or USB interface instead of trying to create a modem, since that would be easier and perhaps more reliable.
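For what it's worth, the carrier-detection half of that "modem" could be as simple as a Goertzel filter gating the laser on the presence of a pilot tone. Here's a rough Python sketch of the idea (real firmware would do the same sums in fixed point in C); the 8 kHz rate, 1 kHz pilot, block size, and threshold are all assumed values:

    import math

    FS = 8000.0        # assumed ADC rate on the micro
    PILOT_HZ = 1000.0  # assumed pilot/carrier frequency
    BLOCK = 64         # samples per detection block (8 ms at 8 kHz)
    THRESHOLD = 0.01   # assumed normalized power threshold for "carrier present"

    def pilot_power(block):
        """Goertzel algorithm: power in one frequency bin over a block."""
        k = round(BLOCK * PILOT_HZ / FS)
        w = 2.0 * math.pi * k / BLOCK
        coeff = 2.0 * math.cos(w)
        s_prev = s_prev2 = 0.0
        for x in block:
            s = x + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return (s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2) / (BLOCK ** 2)

    def laser_enabled(block):
        return pilot_power(block) > THRESHOLD

    # Example: pilot tone present vs. a dropout full of zeros.
    tone = [0.5 * math.sin(2 * math.pi * PILOT_HZ * n / FS) for n in range(BLOCK)]
    silence = [0.0] * BLOCK
    print(laser_enabled(tone), laser_enabled(silence))   # True False

With an 8 ms detection block, a lost carrier would shut the laser off within roughly one block, which is in the same ballpark as the glitches being discussed.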
It may not require low-latency dynamic audio at all. You might be able to transfer many milliseconds of signal to the hardware in advance to ease the timing requirements.