
Just to note that at-least-once delivery is the best-case scenario if you configure Delayed::Job correctly; we just made it impossible to configure otherwise in our fork. The only alternative is at-most-once delivery, which puts you in Sidekiq territory with the potential for silent job loss. The semantics of exactly-once delivery can only be achieved by domain-aware code, and the way to do that with any form of background job system is to build jobs that are internally idempotent.
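
To make "internally idempotent" concrete, here's a minimal sketch of the shape such a job can take (the job class, User model, mailer, and welcome_email_sent_at column are all hypothetical, not part of Delayed::Job itself):

    # Safe to run more than once: at-least-once delivery can re-run it
    # without producing duplicate side effects.
    class SendWelcomeEmailJob
      def initialize(user_id)
        @user_id = user_id
      end

      def perform
        user = User.find(@user_id)
        return if user.welcome_email_sent_at.present?  # already done on a prior attempt

        UserMailer.welcome(user).deliver_now
        user.update!(welcome_email_sent_at: Time.current)
      end
    end

    # Delayed::Job can enqueue and work any object that responds to #perform:
    Delayed::Job.enqueue(SendWelcomeEmailJob.new(user.id))

The check-then-act above still has a small race window, so a unique constraint or row lock would tighten it further, but the point stands: re-delivering the job does no additional harm.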


As you say, it looks like redis-raft is capable of solving durability and consistency in a replicated environment as of 2020, which is welcome news: https://jepsen.io/analyses/redis-raft-1b3fbf6

At Betterment we use our OSS, mostly-compatible fork of Delayed::Job (referenced elsewhere in the comments) to enqueue and work millions of jobs a day, and we sleep much better at night with at-least-once delivery semantics that apply if and only if the related transaction commits, which you can't get without some form of integration with your primary database.
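
In case the database integration point isn't obvious, here's a rough sketch (Transfer and NotifyTransferJob are made-up names): because Delayed::Job's queue is a table in the same database, the job row is covered by the surrounding transaction and only becomes visible to workers if the business writes commit.

    ActiveRecord::Base.transaction do
      transfer = Transfer.create!(user: user, amount_cents: 10_000)
      Delayed::Job.enqueue(NotifyTransferJob.new(transfer.id))
      # If anything raises past this point, the transfer AND the job row
      # roll back together; a worker never sees a job for data that
      # doesn't exist.
    end

With a Redis-backed queue the enqueue escapes the transaction, so you either notify about transfers that were rolled back or race the commit.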


Because this research exists, there can't possibly be a market for a security product whose threat model might not include a malicious hardware manufacturer?


Admittedly it's less of a risk with your SSH keys than with, say, your bitcoin wallet. (There's a clear economic incentive there, and most bitcoin hardware wallets have pretty low levels of assurance.)

But nonetheless, the number of people who are security-conscious enough to lock their keys into their hardware, but not worried about malicious hardware, seems quite limited.


> But nonetheless, the number of people who are security-conscious enough to lock their keys into their hardware, but not worried about malicious hardware, seems quite limited.

Maybe I'm wrong, but it seems like you're misinterpreting these people. TouchID is an ease-of-use feature that you feel good about because you also get to improve your security (save for a malicious hardware manufacturer). It's very easy and it improves your security. You don't have to be excessively security-conscious to be interested in that. I like TouchID but I'm not a security-obsessed person (although I'm not quite on the same level as your average joe), and I'm pretty sure it's easy to sell this, and anything TouchID-like, to anyone regardless of how security-conscious they are, on the basis that using TouchID is even safer.

I just don't like your view that people who like TouchID must be obsessive about security and understand it inside and out. Most people do things regardless of how much they understand; you won't be an expert in everything.


But explaining something in simple terms is a necessary step in teaching the concept. Only once the concept is understood can you put a name to it and keep climbing the ladder of abstraction. This article doesn't devalue proper terminology; it values fluency with abstractions.


This. I tend to think a key to really understanding something is the ability to put that "something" into contexts at a wide variety of abstraction levels.

That's what I hated most about uni: some professors could write 10 blackboards full of formulae, but could hardly put together a few words to explain what we were actually doing. Unsurprisingly, I don't remember much from those lectures. Fortunately that's not all professors, but it worries me how large a percentage of them were like this.


> It's really cool hearing what they heard in the studio control room for the final mix. And often surprising.

But that's not quite what you're hearing - you're typically hearing what happens after the final mix is shipped to a mastering engineer who listened to the recording on a variety of intentionally flawed sound systems (probably including the "car test" - playing the tune on a car stereo with road noise, which is about as hostile an environment as people will expect to enjoy music in). Then the engineer threaded the needle to come up with the most pleasing sound they could muster for the intended market.

In the process the recording will have been compressed and EQed quite a bit, and likely will sound a good bit richer at a given loudness than it did when the mix was done - you should be able to "hear through" the mix better than before, unless the mastering engineer was simply going for loudness-at-all-costs, in which case, it might just be loud.

Anyway, not to take away from your point - good headphones, or even just headphones with different frequency response than you're used to, will open up different details of a mix, for sure, and flat response will give you the best chance to hear any details that weren't pushed to the fore intentionally, which can indeed be eye-opening.


As long as there's recorded audio up there, you might as well reproduce it well, and even standard CD audio (44.1 kHz sample rate, so a 22.05 kHz Nyquist limit) goes beyond 20 kHz.

Also there's plenty of individual variance in people's hearing. No need to fit to the lowest common denominator even if the majority can't hear it. As a tall person, I appreciate that airplane seats aren't pitched to cut your kneecaps off after 6'0" (ok, maybe that's generous to the airline industry, but you get the point).


I wonder if there's an effect where in-ear headphones are cheaper to produce but have advantages in accurate low-frequency response?

Of course all this is confounded by the fact that music will tend to sound best on speakers/headphones with a response curve most like the speakers/headphones that the mastering engineer used (or more accurately, the set of speakers/headphones that the engineer compromised among). You will probably tend to have the best experience listening to music with the popular devices within a given musical subculture, because mastering engineers will be targeting those devices.


I definitely don't see it as hypocritical. Humanely designed systems are thoughtful in where they deploy implicitness. Implicitness can cut down on boilerplate for experts while simultaneously cutting down on confusing minutiae for newcomers.

On baked-in conventions, the implied design of a thoughtfully designed framework likely has wisdom that you shouldn't too quickly dismiss, and should at least fully wrap your brain around (along with the complementary design decisions in the rest of the framework) before diverging. The most baked-in assumptions in a well-designed system are the ones that exist for the strongest reasons.



This is a great way to spin not having a query planner as a feature, but I'm glad to have one every day that I write and compose semantic bits of SQL that can and should have different execution plans depending on the context in which they're evaluated.
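
To make the composition point concrete, a hypothetical ActiveRecord example (the model and scopes are made up): each scope is a reusable semantic fragment of SQL, and the planner is free to execute the composed queries differently depending on indexes and table statistics.

    class Order < ApplicationRecord
      scope :recent,     -> { where("created_at > ?", 30.days.ago) }
      scope :high_value, -> { where("total_cents > ?", 100_000) }
      scope :for_user,   ->(user) { where(user_id: user.id) }
    end

    # Same fragments, very different optimal plans: the first is likely an
    # index scan on user_id plus a sort/limit; the second might be a
    # sequential scan feeding an aggregate. The planner decides per query.
    Order.for_user(user).recent.order(created_at: :desc).limit(10)
    Order.recent.high_value.count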


The whole article is like one long spin job. "Explicit" is somehow the happy medium between the query engine putting together a plan based on data statistics and... I'm not even sure what space this claims to be the middle of. Keeping the examples simple enough so that there is an unambiguously right order of operations - filter before you sort, that's genius! There's no way your granddad's RDBMS could have figured that one out! What if the optimal order depends on the values, or changes over time? Just rewrite and recompile?

