michaelrmmiller's comments | Hacker News

I live nearby and drive past it nearly every day. It’s been fascinating watching its construction. They’ve done a very good job not impacting traffic and only closed down the highway late at night for a week or two.


Yeah! Seeing it for the first time was funny: a bridge with no roads. Excited for it to open. It's a bummer that it cost $100m.


At those costs, very very few will be built.

More modest projects should be considered. Like simple bridges, or large culverts running underneath the highway.


I wonder why they didn't opt for your suggestion of multiple, more modest projects.

The article doesn't mention it; it only says:

> National Park Service began a decades-long study of the region’s mountain lion population that the 101 freeway was deemed “the most significant barrier to the ecological health of the region.”


A big reason (this is from the project site):

> NEPA Completed 04/12/2018: Finding of No Significant Impact (FONSI), Liberty Canyon Wildlife Habitat Connectivity Project

> In compliance with CEQA, Caltrans held a 47-day public scoping period to allow the public and regulatory agencies an opportunity to comment on the scope of the IS/EA and to identify issues that should be addressed in the environmental document. A scoping report documents the issues and concerns expressed during the public scoping meetings held on January 14, 2016 and the written comments received from the public, community organizations, and governmental agencies during the public scoping period from December 14, 2015 through January 29, 2016. The release of the Final Environmental Document with responses to comments included was completed in the summer of 2016. NEPA/CEQA was completed in April 2018. A total of 8,859 comments were received in response to the draft Environmental Document, with only 15 opposed.

When any infrastructure project requires nearly a decade of preliminary work before shovels hit ground, said work becomes impossible to accomplish for smaller-scale builds. Even if ten smaller bridges would be a better solution, 10x-ing the review process would likely be impossible at current staffing / budget levels.

It's a big problem.


If it's too small or feels unsafe, you'll end up with wildlife either not using it or being funneled in for predators. The larger and more skittish the animals you want using it, the larger you're going to want to make it. Since they're calling out mountain lions, which are quite skittish, they're going to need a large (and probably partially covered) pathway for them to want to use it--as opposed to treating the freeway like the barrier it is to wildlife.

I'm sure there's more to it than that, but making it desirable, accessible, and safe-feeling for the target animal population is important if it's going to accomplish what it's intended to do.

There are smaller culverts all over the place for smaller wildlife. I've even seen some called out in small walking paths--there for the frogs and other amphibians.


That's truly an unbelievable amount. I suppose there is some breakdown of this available. Would love to hear from someone in the know about whether the costs are actually reasonable. From the sidelines, they seem exceptionally out of line.


Those offerings exist as well. See East West Composer Cloud (https://www.soundsonline.com/composercloud) and Musio (https://musio.com/) (full disclosure: I did some work on the latter). There are also related offerings like Splice for loops and Output Arcade (https://output.com/get-arcade/).


WASAPI has been available since Windows Vista. It isn't its own set of drivers but rather a unifying layer over the WDM driver model and the preceding mishmash of Windows audio APIs (MME, DirectSound, etc.). As I recall, WASAPI supports low-ish latencies in Exclusive Mode and something like 10ms of buffering in Shared Mode, where audio is routed through the Windows audio engine.

Put another way: any Windows audio device supports WASAPI unless it only ships with an ASIO driver, which is unlikely even in the pro audio space.
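
For the curious, a minimal sketch of opening a shared-mode stream through WASAPI looks roughly like this (C++/COM, error handling omitted; the exact buffer and period you actually get are up to the audio engine, so treat the 10ms request as illustrative):

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>

    int main() {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);

        // Grab the default render endpoint.
        IMMDeviceEnumerator* enumerator = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void**)&enumerator);
        IMMDevice* device = nullptr;
        enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

        // Every endpoint exposes IAudioClient, regardless of the underlying driver.
        IAudioClient* client = nullptr;
        device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void**)&client);

        // Shared mode must use the engine's mix format; Exclusive Mode would let us
        // pick a device-supported format directly and get lower latency.
        WAVEFORMATEX* mixFormat = nullptr;
        client->GetMixFormat(&mixFormat);

        const REFERENCE_TIME tenMs = 10 * 10000;  // REFERENCE_TIME is in 100ns units
        client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0, tenMs, 0, mixFormat, nullptr);
        // ...then create a render thread, GetService() an IAudioRenderClient, feed buffers...
        return 0;
    }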


Orchestration also means something different these days for most film, TV and game composers compared to the traditional definition. Traditionally, orchestration meant taking a piano or short score and expanding it to be played by an orchestra. Nowadays, almost all media composers first create full digital mock-ups of the music with the entire orchestra and then some. Orchestration is then mostly a process of transcription, typesetting and adjustment for the live ensemble so that the recording and performance accurately convey the intent. John Williams and Howard Shore are two of the old guard who write short scores with notes for the orchestrator about what to do. Nearly everyone else is writing for the full ensemble and orchestrating as they go. Then the orchestrators translate that into traditional music notation so it can be played.


Yeah

The actual act of orchestration (translating to the different instruments) is an exercise in "backwards compatibility" and musical knowledge. (Because every instrument has a range and a clef, and many sound in one range while reading in another, etc., etc., because of some weird things that happened in the 17th century.)
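
As a toy illustration of that written-vs-sounding translation (just a few common textbook transpositions; the names and helper function here are mine, not any notation software's API):

    #include <cstdio>
    #include <map>
    #include <string>

    // Semitone offset from written pitch to sounding pitch for a few
    // common transposing instruments (standard textbook values).
    const std::map<std::string, int> kTransposition = {
        {"Piccolo",        +12},  // sounds an octave higher than written
        {"Clarinet in Bb",  -2},  // sounds a major second lower
        {"Horn in F",       -7},  // sounds a perfect fifth lower
        {"Alto Sax in Eb",  -9},  // sounds a major sixth lower
        {"Double Bass",    -12},  // sounds an octave lower
    };

    // Written MIDI note -> sounding MIDI note (0 offset for non-transposing instruments).
    int soundingNote(const std::string& instrument, int writtenMidiNote) {
        auto it = kTransposition.find(instrument);
        return writtenMidiNote + (it != kTransposition.end() ? it->second : 0);
    }

    int main() {
        // A written middle C (MIDI 60) on a Bb clarinet sounds as Bb3 (MIDI 58).
        std::printf("%d\n", soundingNote("Clarinet in Bb", 60));
    }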


So who is actually writing the sheet music nowadays?

Is it a specialized company or a part-time contractor?


If you look carefully in the endless credits scroll you can see the credits for the orchestrator(s), if the composer doesn't do the orchestration themselves. Herbert Spencer, for instance, is credited for orchestration of Star Wars Ep. 4, A New Hope. He worked as orchestrator on a lot of other John Williams movies and was a film composer in his own right as well.


Generally, as with many things in music, orchestrators work freelance. Music is a low-paying field, so the pay isn't great, but it's not terrible.


If you have specialized in a sub-discipline, it can be an entry point without prior game dev experience or shipped titles. For me, my background in and love of audio and music programming made me desirable despite never having held a game dev role. Turns out there is a severe lack of audio programming talent across the entire industry (we are looking for more at Naughty Dog as well!). Other specializations within graphics, physics and AI may also be ways to bypass the typical, Catch-22-like requirements for prior game dev experience.

The old generalizations about lower salaries and longer hours are no longer true across the board, by the way. As the industry matures, a lot of that is changing, and the culture around things like crunch is improving.


I feel I might be exactly the type of individual the company is looking for. For someone like me, without a strong tech resume, but lots of time spent with audio/audio programming, an audio programming/production role would be amazing. Is this something you'd be willing to talk to me about?


Absolutely! My email is in my bio. Reach out and we can find a time to chat via whatever medium is comfortable.


That chip is responsible for much more than HRTFs too. It can handle a huge amount of 3D audio-related DSP effects and decoding which are all way more compute intensive than the HRTF, which is performed once at the very end of the signal chain for the headphones.


How would the HRTF be 'performed once at the very end of the signal chain'? Don't you have to transform every individual signal/position before mixing? On the other hand, I read somewhere that Atmos is encoded as an array of filters with positions, so decoding is merely a Fourier decomposition; I would love to learn more about that.


There are a few different models at play: surround sound like 5.1, 7.1, ambisonics and the 7.1.4 Atmos static bed; and object-based where mono point source sounds are attached to a location. The former traditional models can be interpreted as individual objects positioned at the speaker locations and folded down to stereo passing through the HRTF that way. It’s a mixed signal so it really is at the end of the chain. For object-based, those are more precisely located but have other downsides (e.g. they break our mixing concepts for things like compression and reverb) and each object would need to be upmixed to binaural stereo through the HRTF.

Higher order ambisonics strikes a pretty good balance in terms of spatial resolution while still being a mixed signal. You can then pair it with objects for specific highlights. Atmos is a 7.1.4 static bed plus dynamic objects, so similar idea. In either case, most of these 3D sound systems support very few dynamic objects. For example, Windows Sonic only supports 15 dynamic objects on Xbox: https://docs.microsoft.com/en-us/windows/win32/coreaudio/spa...
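
If it helps to see the shape of it, here's a rough sketch of folding an already-mixed speaker bed down to binaural: one fixed HRIR pair per virtual speaker position, applied once to the mixed channel signals (the structs, HRIR data and channel layout are placeholders, not any particular system's):

    #include <cstddef>
    #include <vector>

    // Hypothetical HRIR pair for one fixed virtual speaker direction.
    struct Hrir {
        std::vector<float> left;   // impulse response to the left ear
        std::vector<float> right;  // impulse response to the right ear
    };

    // Naive time-domain convolution (real implementations use partitioned FFT convolution).
    static std::vector<float> convolve(const std::vector<float>& x, const std::vector<float>& h) {
        std::vector<float> y(x.size() + h.size() - 1, 0.0f);
        for (size_t n = 0; n < x.size(); ++n)
            for (size_t k = 0; k < h.size(); ++k)
                y[n + k] += x[n] * h[k];
        return y;
    }

    // Fold an already-mixed N-channel bed (e.g. 7.1.4) down to binaural stereo.
    // The HRTF work happens once, here, at the end of the chain.
    void bedToBinaural(const std::vector<std::vector<float>>& bedChannels,
                       const std::vector<Hrir>& hrirs,
                       std::vector<float>& outLeft,
                       std::vector<float>& outRight) {
        for (size_t ch = 0; ch < bedChannels.size(); ++ch) {
            auto l = convolve(bedChannels[ch], hrirs[ch].left);
            auto r = convolve(bedChannels[ch], hrirs[ch].right);
            if (outLeft.size() < l.size())  outLeft.resize(l.size(), 0.0f);
            if (outRight.size() < r.size()) outRight.resize(r.size(), 0.0f);
            for (size_t i = 0; i < l.size(); ++i) outLeft[i] += l[i];
            for (size_t i = 0; i < r.size(); ++i) outRight[i] += r[i];
        }
    }
    // Dynamic objects, by contrast, each need their own HRIR (interpolated for their
    // exact position) before being summed into the same stereo output.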


Thank you. Do you know if Sony has publicly released any more technical documentation about it? I know Sony put out that video with Cerny around the time of the PS5’s release, but I don’t know if there has been anything else.


Nothing public I’m aware of, unfortunately. I wish they talked about it publicly in more technical detail.


Very true for PCs but it’s starting to shift with both consoles and receivers with Atmos decoders. For example, the PS5 has a custom audio DSP chip with 3D sound capabilities for reverberation, spatialization and more.


To add onto this very good explanation from Paul: in practice, audio processing graphs always rapidly decay into serial processing after starting out massively parallel. Most of the time, we are writing to a single output (i.e. speakers via your audio interface, or a file). On top of that, some of the heaviest-weight DSP happens on this final output stage (mastering chains with multiband compressors, linear-phase equalizers, limiters, etc.). So every 5.3ms (a 256-sample buffer @ 48kHz sample rate), we start out massively parallel, processing all the leaf nodes (audio tracks with sound files, virtual instruments and synths), and end up bottlenecking as the tree collapses into a line. Then we are stuck doing some of the most CPU-intensive work on a single core, since we can't process the limiter DSP plug-in until the EQ finishes its work, for example.
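
A stripped-down sketch of that shape (the names and structure are made up for illustration; a real engine uses a pre-built lock-free job system rather than std::async inside the audio callback):

    #include <algorithm>
    #include <future>
    #include <vector>

    struct Buffer { std::vector<float> samples; };

    // Hypothetical processor interface: a track or plug-in fills/transforms a buffer in place.
    struct Processor { virtual void process(Buffer&) = 0; virtual ~Processor() = default; };

    void renderBlock(std::vector<Processor*>& leafTracks,
                     std::vector<Processor*>& masterChain,
                     std::vector<Buffer>& trackBuffers,
                     Buffer& masterBuffer) {
        // Wide part: every leaf track (sampler, synth, audio clip) renders independently.
        std::vector<std::future<void>> jobs;
        for (size_t i = 0; i < leafTracks.size(); ++i)
            jobs.push_back(std::async(std::launch::async,
                                      [&, i] { leafTracks[i]->process(trackBuffers[i]); }));
        for (auto& j : jobs) j.wait();

        // Sum everything into the master bus.
        std::fill(masterBuffer.samples.begin(), masterBuffer.samples.end(), 0.0f);
        for (auto& tb : trackBuffers)
            for (size_t n = 0; n < masterBuffer.samples.size() && n < tb.samples.size(); ++n)
                masterBuffer.samples[n] += tb.samples[n];

        // Narrow part: the master chain (EQ -> multiband comp -> limiter) is inherently
        // serial, so it runs on one core and often dominates what's left of the 5.3ms budget.
        for (auto* p : masterChain)
            p->process(masterBuffer);
    }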

We need to meet our real-time deadline or risk dropping buffers and making nasty pops and clicks. That mastering stage can pretty easily be the limiting (hah) step that causes us to miss the deadline, even if we processed hundreds of tracks in parallel moments before in less time.

The plug-in APIs (Audio Units, VST, AAX), which are responsible for all the DSP and virtual instruments, are also designed to process synchronously. Some plug-ins implement their own threading under the hood, but this can often get in the way of the host application's real-time processing. On top of that, because the API isn't designed to be asynchronous, the host's processing thread is tied up waiting for the completed result from the plug-in before it can move on.

Add to that the fact that many DSP algorithms are time-dependent. You can't chop the sample buffer into N different parts and process them independently; the result for sample i+1 depends on processing sample i first.
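
Even a one-pole lowpass, about the simplest filter there is, carries state from sample to sample, so you can't split the buffer and process the pieces out of order (a toy sketch, not any particular plug-in's code):

    // y[n] = y[n-1] + a * (x[n] - y[n-1]); each output depends on the previous one.
    struct OnePoleLowpass {
        float a = 0.1f;   // smoothing coefficient
        float z = 0.0f;   // previous output, i.e. the state that serializes the work
        void process(float* buffer, int numSamples) {
            for (int n = 0; n < numSamples; ++n) {
                z += a * (buffer[n] - z);
                buffer[n] = z;
            }
        }
    };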


With good reason, too. A good portion of the Native Instruments software packages won't install because the installers themselves rely on 32-bit helpers. A savvy technical user can work around it by unpacking the installer and excising the problematic portions from the installer scripts.

On the plus side, I've been running Kontakt 5 along with lots of other audio software since 10.15 beta 3 every day and most everything is working alright.

Some other observations:

• Pro Tools' QuickTime video plug-in prevents it from launching because it's a 32-bit-only subprocess meant to interact with the now-defunct QuickTime framework. If you delete the plug-in from the application bundle, Pro Tools will launch and seems to work normally.

• EastWest has a nasty crash in PLAY 6 that can be worked around temporarily by removing their internal word builder plug-in.

• Pretty much every iZotope plug-in as of September had a 32-bit non-pkg installer. The software works fine if you copy all the relevant parts from an install on another computer.

• MOTU hardware drivers for anything but the latest Pro Audio line won't work as the drivers are 32-bit at the moment (MIDI interfaces, CueMix-based audio interfaces).


As a former MOTU engineer who worked on the software side of the new interfaces, the built-in web server was really about cross-device and multi-user access, not avoiding CoreAudio. We still had to write CoreAudio server plug-ins for Thunderbolt and proxy the web server through the driver as well.

I hear they are planning to update their 32-bit drivers for the older MIDI and audio interfaces with the exception of the PCIe devices. (There are no existing Macs that can run Catalina and the old PCIe interfaces with the exception of the new Mac Pros, but there will be no driver so...)


MOTU just released a statement indicating that they're only working on new drivers for the hybrid and newer devices. FireWire gear like the popular 828mk3 classic isn't getting drivers.

