But they didn't really stay single-primary. They moved a lot of load off to alternate database systems. So they did effectively shard, just to different database systems rather than to Postgres.
Quite possibly they would have been better off staying purely Postgres but with sharding. But impossible to know.
The short-lived requirement seems pretty reasonable for IP certs, as IP addresses are often rented and may bounce between users quickly. For example, if you buy a VM on a cloud provider, as soon as you release that VM or IP it may be given to another customer. Now you still hold a valid certificate for an IP you no longer control.
6 days actually seems like a long time for this situation!
Yes, in the same way that Fortran is faster than C due to stricter aliasing rules.
But in practice C, Rust and Fortran are not really distinguishable on their own in larger projects, where things like data structures and libraries are going to dominate over slightly different compiler optimizations. This is usually Rust's `std` vs `libc` type stuff, or whatever foundational libraries you pull in.
For most practical purposes, Rust, C, C++, Fortran and Zig have about the same performance. Then there is a notable jump to things like Go, C# and Java.
> In larger projects things like data structures and libraries are going to dominate over slightly different compiler optimizations.
At this level of abstraction you'll probably see on average an effect based on how easy it is to access/use better data structures and algorithms.
Both the ease of access to those (whether the language supports generics, how easy it is to use libraries/dependencies) and whether the available algorithms and data structures are up to date or decades old would have an impact.
This makes it better but not solved. Those tokens do unambiguously separate the prompt and untrusted data, but the LLM doesn't really process them differently; it is just reinforced to prefer following instructions from the prompt side. This is quite unlike SQL parameters, where it is impossible for them to ever affect the query structure.
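For contrast, here's a minimal sketch of the SQL side using node-postgres (the table, column names and helper are made up for illustration). The query text and the bound value travel to the server separately, so the value is never parsed as SQL, whereas an LLM's delimiter tokens are only a learned preference:

```typescript
import { Client } from "pg";

// Sketch only: assumes a reachable Postgres instance (connection settings
// taken from the standard PG* environment variables) and a `users` table.
async function findUser(untrustedName: string) {
  const client = new Client();
  await client.connect();
  // The value is sent as a bound parameter; the database never interprets it
  // as part of the query, so it cannot change the query's structure.
  const res = await client.query(
    "SELECT id, name FROM users WHERE name = $1",
    [untrustedName], // even "'; DROP TABLE users; --" stays plain data here
  );
  await client.end();
  return res.rows;
}
```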
The problem with SIMs is that they aren't just credentials and config. They are full applications. Imagine if you needed to run a custom program to connect to every WiFi network. That would be bonkers. It is absurdly complex and insecure.
A "SIM" should just be a keypair. The subscriber use it to access the network.
It’s more complicated because it has to include logic about which network to connect to and how to tunnel back to the original provider (or partner) while roaming.
So it’s more like: which network to connect to, keys, fallback network selection logic and tunnel logic to get authorisation on a non-home network
That's a good point. That is what I meant by "and config" in my first sentence.
IIUC, if the keypair were a certificate with a few other fields, foreign networks could give you some basic communication with your provider and decide whether you should be allowed to use the network and if/how to tunnel you back to the home network.
But the main point is that it should just be data that the user can port around to different devices as they see fit and that they can trust not to do malicious things.
It’s not just config though (unless you consider logic to be config). When you’re roaming, the SIM applet has to generate a path back to its home network based on requests/responses with the networks it can see and their partners (and their partners’ partners, etc.)
It’s effectively multi-hop peer discovery and I don’t think you can encode the general case logic for it as just config.
Edit: as a (rather niche) example, FirstNet SIMs run a different applet to AT&T SIMs despite nominally running on the same network, because they have special logic to use more networks if they are in an emergency area.
So for people who don't plan to roam, what's the point of a SIM card (embedded or not)? Credentials and a few lines of config should be enough. Do the carriers benefit when users use a SIM card?
Do you have any more details on this? I always thought that once the PDP context is established (which is based on the phone providing an APN and optional credentials, not the SIM), the "tunneling" (if any - local breakout is a thing apparently) is handled by the network and is completely transparent and invisible to the phone.
> It's certainly better than calling everything a div.
It's not. For semantic purposes <my-element> is the same as <div class=my-element>. So on the surface they are equivalent.
But if you are in the habit of using custom elements then you will likely continue to use them even when a more useful element is available (<my-aside> rather than <aside class=my-aside>), so in practice it is probably worse even if theoretically identical.
Basically divs with classes provide no semantic information but create a good pattern for using semantic elements when they fit. Using custom elements provides no semantic information and makes using semantic elements look different and unusual.
> But if you are in the habit of using custom elements then you will likely continue to use them even when a more useful element is available
This article is written for web developers. I’m not sure who you think you are addressing with this comment.
In any case, the argument is a weak one. To the extent people make the mistake you allege, they can make it with classed div and span tags as well, and I’ve seen this in practice.
That is a strawman. I never said everyone who uses classes uses semantic elements perfectly.
My point is that if you are using <div class=my-element>, you don't have to change your .my-element CSS selector or JS selection code when you improve your markup to <p class=my-element>. If you are using <my-element>, it is much more work to change your selectors, and now you have two ways of doing things depending on whether you are using a native semantic element or a div (either a tag selector or a class selector). You have made your styling code depend on your element choice, which makes it harder to change.
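To make the selector coupling concrete, a small DOM sketch (the element and class names are just placeholders):

```typescript
// With a class selector, the same code keeps working whether the markup is
// <div class="my-element"> or <p class="my-element">.
const byClass = document.querySelectorAll<HTMLElement>(".my-element");
byClass.forEach((el) => {
  el.dataset.enhanced = "true";
});

// With a custom-element tag selector, swapping the tag for a native semantic
// element means hunting down and rewriting every selector that names it.
const byTag = document.querySelectorAll<HTMLElement>("my-element");
byTag.forEach((el) => {
  el.dataset.enhanced = "true";
});
```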
When I was younger these types of machines were great for me. I usually used them at home, sometimes in my bedroom (aka office) and sometimes in the living room (group games, playing music, just watching TV with the roommates). I would also occasionally take them to school or to other people's houses (projects, LAN parties).
So it was used primarily like a desktop, and since it was my only system, having power was useful. But the fact that I could put it in my backpack and transport it was super valuable.
Now I do have a more portable laptop and a full desktop setup. But at the time that wasn't the best option.
BoringTun is unmaintained. There are various forks being developed.
I work at Obscura VPN. Faced with BoringTun bugs a few years ago, we evaluated a few of the forks and switched our client to be based on NepTUN (https://github.com/NordSecurity/NepTUN).
I am curious why Mullvad started their own fork rather than building on top of one of the existing ones. It would be nice if there could be reconsolidation somewhere.
I'm not sure that is necessarily a bad reason. You need to factor in a lot of concerns to determine what "too expensive" means.
But if you are going to spend billions of dollars to develop a drug that only treats about 2 people a year it is likely too expensive even if it is 100% effective. That money would be better spent on treatments that have wider applicability.
Of course this is not simple to measure. Costs aren't known upfront and the research may end up proving invaluable to more widely applicable treatments.
So it is a judgment call and not necessarily a bad reason.
Agreed it isn't necessarily a bad reason. In some cases it's a good reason for failure (like the one you describe).
In other cases it's a bad reason for failure: it's also incredibly expensive to prove your drug works even if it does work for a lot of people.
That's bad! It'd be better if it were cheaper.
Actually, counterintuitively, due to a weird drug approval and payor reimbursement policy arbitrage, pharma companies are highly incentivized to produce drugs for tiny populations.
One of my hobby horses is railing against this specific dynamic.
If it has had a five-year start and we've seen almost zero hardware shipping, that is a pretty bad sign.
IIRC AV1 decoding hardware started shipping within a year of the bitstream being finalized. (Encoding took quite a bit longer but that is pretty reasonable)
Yeah, that's... sparse uptake. A few smart TV SOCs have it, but aside from Intel it seems that none of the major computer or mobile vendors are bothering. AV2 next it is then!