It's being posted as a gotcha because he fought against firearm control and he was killed with a firearm. His death, like many other firearm-related deaths, would have been significantly less likely if firearm possession were properly regulated and curbed, as it is in many other countries.
>I understand your point. But even if he said otherwise, would you still be posting this?
>Point is it just seems like a giant gotcha and it’s not fair
Who says life is fair? Was life fair for those school kids in Minnesota? The kids murdered in Uvalde? And on and on and on. Where's the fairness for them?
And why is it more important for Kirk to be treated fairly than those children? That's not a rhetorical question.
I'm not condoning murder. Full stop.
Whoever killed Kirk -- for whatever reason(s) -- should be prosecuted to the full extent of the law by the state of Utah.
To be clear, I didn't know Kirk or anyone in his family. I don't celebrate his death either.
But while it's sad, and even tragic, why is his death more important or relevant than the thousands of other deaths by gun in the US just this year?
All that said, there is a certain irony here -- he explicitly accepted exactly this kind of outcome as a price worth paying in support of the Second Amendment.
And if, as he explicitly said, a certain number of deaths are acceptable (I don't agree, BTW) in support of a broad interpretation of the Second Amendment, why isn't his death also an unfortunate but necessary offshoot of that?
One could argue that advocating against firearm control and regulation has significantly increased societal harm, which could also be seen as unfair, if not outright evil or hateful, especially by those who have directly suffered from it.
Of course two wrongs don't make a right, and people can be classier than this, but it's a totally understandable sentiment and response.
None of my claims disagree with what you just said. People posting the "gotcha" also likely don't disagree with you.
In fact, I suspect that most hate firearm-related violence and have worked to stop or curb it, and were opposed by Kirk, who, however unfairly, got a taste of his own medicine.
IMHO the incentives are disproportionately in favour of everyone doing something that hurts consumers (= "something that I don't like"), thus regulation in favour of consumer rights is appropriate.
There isn't a scenario where, at scale, someone can offer a product that respects consumer rights and still succeed, because it's simply too profitable not to respect those rights, just as it has been in many other cases.
I would be very surprised if bit flipping and ML were really used here, do you have any source?
While for sure there's a lot of signal and value in monitoring auth rates per BIN per payload, flipping bits can be extremely disruptive and counterproductive. From doing the wrong operation to being fined by the schemes, it's a lot of risk for not a lot of gain when these fields can be tuned ad-hoc for the few card issuers that deviate from the standard/norm.
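To make the monitoring half of that concrete, here's a minimal sketch in Python of aggregating approval rates per BIN and payload variant so that deviating issuers can be identified and tuned ad hoc rather than having bits flipped automatically. Field names like "payload_variant" and the thresholds are invented for illustration; this doesn't reflect any real processor's data model or pipeline.

```python
from collections import defaultdict

def auth_rates(transactions):
    """Approval rate per (BIN, payload variant) pair.

    Hypothetical sketch: each transaction is assumed to be a dict with
    "bin", "payload_variant", and a boolean "approved" field.
    """
    counts = defaultdict(lambda: [0, 0])            # key -> [approved, total]
    for tx in transactions:
        key = (tx["bin"], tx["payload_variant"])
        counts[key][1] += 1
        counts[key][0] += tx["approved"]            # True counts as 1
    return {k: approved / total for k, (approved, total) in counts.items()}

def deviating_issuers(rates, baseline=0.90):
    """BIN/variant pairs whose approval rate falls well below the norm --
    candidates for ad-hoc tuning rather than automated bit flipping."""
    return sorted(k for k, rate in rates.items() if rate < baseline)
```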
I find it strange to focus on what that article says when, 10 years ago, we were using CUDA in a professional context for real-world work and AMD didn't have anything competitive in the field until very recently.
If the tech were comparable maybe we could entertain the idea, but Nvidia was just so absurdly ahead of AMD in tooling that the better dev team won.
Yes, the article focuses on GPP, which is more on the gaming side than the compute side. CUDA was clearly ahead and I think AMD still hasn't quite caught up; however, call me old fashioned, but I don't like arbitrarily hardware-locked proprietary software frameworks like CUDA (and the same applies, IMHO, to all the other Nvidia stuff in the same category: RTX, DLSS, G-Sync, etc.).
For sure the better dev team won there, but in the long run, especially once CUDA becomes the only way to do "professional real-world work", I'd like the hardware company to sell the hardware and the software company to sell the software, to avoid a dominant market position that hurts consumers and an industry forced to pay premiums to monopolists.
I'm a bigger fan of the approach AMD has taken over the years: their software frameworks are open and hardware agnostic, which has resulted in improvements for everyone, not just their customers (e.g. Vulkan grew out of Mantle; games with FSR or TressFX run well on all hardware, while those with DLSS or Hairworks don't), and enables competition that brings prices down.
>"I want to be clear: best practice, ideologically-pure end-to-end apps like Signal absolutely face the same ratchet. What I’m mostly trying to understand here is why Telegram and Blackberry get more publicy targeted."
IMHO it's mainly due to the popularity of the service/product. The concentration of bad actors and the vastness of the audience/userbase make the difference. If Signal was used in the same way, it would get the same attention.
There are claims that Signal has already been compromised by the Five Eyes intelligence agencies, albeit through bribery rather than the overt coercion we see here. The key change is that Signal can no longer guarantee end-to-end encryption based on a passphrase tied to the app itself and known only to the user.
For a while I wanted Signal to get popular so I wouldn't have to use other less private and secure apps, but now... I use it with close friends and close family... and that's it. I don't even mention it to most... I fear that popularity would bring more attention to the app and, with it, political and legal issues.
No, although it used to (not sure if it still does) encourage people to enable backups. On Android I believe the default was Google Drive, so you'd have people sending their chats to Google in plain text.
iMessage is another example of a secure service that lets users "break" encryption. As soon as we enable cloud features so it works across devices, the key is uploaded to iCloud, essentially making the chats plain text to Apple.
The main "backdoor" to Signal is that having access to the phone can leak all of Signal's data. If the phone OS is backdoored, then Signal is already compromised. Anyway, the point is not to make it impossible to exfiltrate data, but to make it as hard as possible.
Yeah, I almost put in a sentence or two acknowledging that -- as well as the fact that Durov has far less protection from a state, geopolitically speaking. Would the French police arrest Mark Zuckerberg or another Facebook employee? It's not a hard-and-fast rule (Italian and Brazilian courts have both put out warrants for the arrest of executives at major foreign tech companies), but it surely factors into how much political capital one would burn to pursue the case.
I can't find a description of an arrest warrant, but the case I was thinking of was this one from 2010, where three Google execs were found guilty and given suspended jail sentences by an Italian court. https://www.theguardian.com/technology/2010/feb/24/google-vi...
It's quite handy that all the things that pass QA never fail in production. :)
On a serious note, we have no way of knowing whether their update passed some QA or not; likely it didn't, but we don't know. Regardless, the post you're replying to, IMHO, correctly makes the point that no matter how good your QA is, it will not catch everything. When something slips, you are going to need good observability and staggered, gradual, rollbackable rollouts.
Ultimately, unless it's a nuclear power plant or something mission critical with no redundancy, I don't care if it passes QA, I care that it doesn't cause damage in production.
Had this been halted after bricking 10, 100, 1,000, 10,000, heck, even 100,000 machines or a whopping 1,000,000 machines, it would barely have made it outside of tech news circles.
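As a purely illustrative sketch of that kind of gate (in Python; the tier sizes echo the numbers above, while deploy_to, rollback and failure_rate are stand-ins for a real deployment API and crash-telemetry signal -- nothing here describes CrowdStrike's actual setup):

```python
import time

TIERS = [10, 100, 1_000, 10_000, 100_000, 1_000_000]
MAX_FAILURE_RATE = 0.01   # halt if more than 1% of the cohort reports failures
SOAK_SECONDS = 600        # wait for telemetry before widening the rollout

def staged_rollout(update, deploy_to, rollback, failure_rate):
    """Widen the rollout tier by tier, halting and rolling back on bad telemetry."""
    deployed = 0
    for tier in TIERS:
        deploy_to(update, count=tier - deployed)   # push to the next cohort only
        deployed = tier
        time.sleep(SOAK_SECONDS)                   # let observability catch up
        if failure_rate(update) > MAX_FAILURE_RATE:
            rollback(update)                       # rollbackable by design
            return f"halted and rolled back at {deployed} machines"
    return "fully rolled out"
```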
> On a serious note, we have no way of knowing whether their update passed some QA or not
I think we can infer that it clearly did not go through any meaningful QA.
It is very possible for there to be edge-case configurations that get bricked regardless of how much QA was done. Yes, that happens.
That's not what happened here. They bricked a huge portion of internet-connected Windows machines. If not a single one of those machines was represented in their QA test bank, then either their QA is completely useless, or they ignored the results of QA, which is even worse.
There is no possible interpretation here that doesn't make Crowdstrike look completely incompetent.
If there had been a QA process, the kill rate could not have been as high as it was, because for the update to pass QA there would have to be at least one system configuration that isn't subject to the issue.
I agree that testing can reduce the probability of having huge problems, but there are still many ways in which a QA process can fail silently, or even pass properly, without giving a good indication of what will happen in production due to data inconsistencies or environmental differences.
Ultimately we don't know if they QA'd the changes at all, if this was data corruption in production, or anything really. What we know for sure is that they didn't have a good story for rollbacks and enforced staggered rollouts.
While I agree that under-engineering can be a problem, it's generally very easy to fix by doing what's now clearly identified as missing/needed. Fixing over-engineering is always a nightmare and rarely a success.
IMHO the experience, pragmatism, and soft skills needed for a successful and productive informal architectural meeting are too rare for this solution to work consistently.
Personally, I've abandoned all hope and interest in software architecture. While on paper it makes a lot of sense, in practice (at least in what I do) it just enables way too many people to throw imaginary problems into the mix, distracting everyone from what matters with concerns that may only ever become relevant once/if a system hits a top-1% level of criticality or scale.
Yes, this happens too easily. It's the crux of Ward Cunningham's original observation on tech debt, discussed recently [1]. He basically said: all of you who think you can use waterfall to figure it all out up front are deluded. By getting started right away, you make mistakes but you avoid working on non-problems; the mistakes can be fixed with refactoring, but you can't ever get wasted time back.
Most teams live in his world now. Few do too much up-front design; most suffer from piled-up tech debt.
I hope you give architecture another chance. Focus on the abstractions themselves [2] and divorce that from the process and team roles [3].
[3] JESA section 1.5, https://www.georgefairbanks.com/assets/jesa/Just_Enough_Soft... "Job titles, development processes, and engineering artifacts are separable, so it is important to avoid conflating the job title “architect,” the process of architecting a system, and the engineering artifact that is the software architecture."
>Moreover, I noticed that some merchants refuse my payment when I use e.g. Google Pay with my Amex instead of my MasterCard.
In my experience this is normally due either to how the card machine provider has set up the device or to a lack of certification of the mobile wallet functionality on the "acquirer" backend ("host") that talks to the card schemes.
It's annoyingly tricky to get the end-to-end transaction working properly across all schemes, all payment methods and devices. Different card schemes support different "payment kernel" parameters and have different certification requirements.
It could also be an attempt to save money on transaction fees; Amex is generally significantly more expensive for merchants.
Historically, Amex always required a separate retailer relationship and acted as its own acquirer. I don't know how true that is any more. They've just always been the awkward one, with higher fees and special relationships. They also used to use ANSI standards for some things where everyone else used ISO... but that's going back 20 years!
Yes, that is still true – AmEx own their own payment processing network, and they do not allow outsiders into it, not even the banks they have brand-sharing agreements with, hence the separate retailer relationship.
>Different card schemes support different "payment kernel" parameters and have different certification requirements.
Those certification requirements are one of the biggest hurdles because they can change quite often, and unless you are a high-volume gateway, there may be no leniency for you, making simply refusing the transactions cheaper than processing them and being fined.
The digital cryptogram requirements for Visa caused some major engineering expenditures for a few payment processors I'm aware of.
When I briefly had my own store I blocked Amex because of their ridiculous fees. And they're pretty merchant-hostile re: chargebacks too. The overhead and headache weren't worth dealing with them. That was a while ago, so maybe they've improved, but I still occasionally run into places that don't take them, so I guess not.
I think this bit in the baseline section applies to the Java one too:
>Note that that’s a best-of-five measurement, so I’m allowing the file to be cached. Who knows whether Linux will allow all 13GB to be kept in disk cache, though presumably it does, because the first time it took closer to 6 seconds.
Yeah, I assumed that. That makes the parallel version's improvements still interesting, but surely very artificial. You can't process all the data at the same time if you don't have it all yet.
I think that enforcing what you're suggesting is incredibly hard and I don't think it can scale. It's what PCI-DSS and similar standards are meant to tackle, and in my experience it really doesn't work.
This is a protocol/product problem: it's wild that to make a payment, all the crown jewels need to be put on the wire. It's about time that payment devices and the whole ecosystem adopted some sensible cryptography that, at minimum, allows signing payment requests and, ideally, keeps its keys private.
Although this whole problem is kind of already solved by 3DS2, albeit not in a great way.
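As a rough illustration of the "sign the request instead of shipping the secrets" idea above, here's a minimal sketch using ECDSA via Python's cryptography package. The field names are invented and this is not EMV, 3DS2, or any scheme's actual protocol; the only point is that the private key stays on the payment device, so the wire only ever carries a signed request rather than reusable credentials.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# On the device: key generated once; the private half never leaves secure storage.
device_key = ec.generate_private_key(ec.SECP256R1())
public_key = device_key.public_key()  # registered with the acquirer up front

def sign_payment_request(amount_minor: int, currency: str, merchant_id: str):
    """Build a payment request and sign it on the device; no PAN on the wire."""
    payload = f"{amount_minor}|{currency}|{merchant_id}".encode()
    signature = device_key.sign(payload, ec.ECDSA(hashes.SHA256()))
    return payload, signature

def acquirer_verify(payload: bytes, signature: bytes) -> bool:
    """Acquirer side: verify the request against the registered public key."""
    try:
        public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False
```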