I think the implication is that you should own multiple client devices capable of SSHing into things, each with its own SSH keypair, and every SSH host you interact with should have several of those devices' keys registered to it.
Tuna-Fish said that instead of backing up the keys from your devices, you should create a specific backup key that is only ever used in case you lose access to all your devices.
This is indeed best practice, because it lets you alert based on which key was used: if a machine sees a login with your backup key but you haven't lost your devices, you know the backup was compromised. If you instead back up your regular keys, noticing a problem is much harder.
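As a rough sketch of what that key-based alerting could look like (the file names, the journal unit, and the alert command are assumptions, not anything specific from this thread): sshd logs the fingerprint of the key it accepts, so you can watch the auth log for the backup key's fingerprint.

# Record the backup key's fingerprint once; sshd logs lines like
# "Accepted publickey for alice from ... ED25519 SHA256:..."
FP=$(ssh-keygen -lf backup_key.pub | awk '{print $2}')

# Follow the sshd journal (the unit is "sshd" on some distros) and alert on the backup key
journalctl -u ssh -f | grep --line-buffered "Accepted publickey.*$FP" | while read -r line; do
    # swap logger for mail, ntfy, a pager, etc.
    echo "backup SSH key was used: $line" | logger -t backup-key-alert
done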
Why should that change TSMC decision making even a little?
The reality is that TSMC has no competition capable of shipping an equivalent product. If AI fizzles out completely, the only way Apple can choose to not use TSMC is if they decide to ship an inferior product.
A world where TSMC drains all the venture capital out of the AI startups, using NVidia as an intermediary, and then the bubble pops and they all go under is a perfectly happy place for TSMC. In these market conditions they are asking for cash up front. The worst that can happen is that they overbuild capacity using other people's money that they don't have to pay back, leaving them in an even more dominant position in the crash that follows.
I believe this advantage is currently mostly theoretical, as the code ultimately gets compiled with LLVM, which does not fully exploit the additional optimization opportunities.
LLVM doesn't fully utilize all the power, but it does use an increasing amount every year. Flang and Rust have both given LLVM plenty of example code and a fair number of contributors who want to make LLVM work better for them.
Normal lithium-ion batteries have a liquid electrolyte. It's not water, but an organic carbonate solvent. During discharging and charging, ions travel between the electrodes through the electrolyte.
They definitely didn't. Apple was starting from scratch in the market, and the original iPhone did a whole bunch of things it shouldn't have, making it needlessly expensive to build for the hardware it included. This is partly why Nokia initially dismissed it: as soon as it was on the market, teardowns showed that it was basically an amateurish prototype pushed to production, internally much worse than you'd expect from a mature company used to building consumer electronics. The N95 could be sold for less because it was a legitimately much cheaper phone to build.
Then only a year later the iPhone 3G came out, and it was a rough wake-up for Nokia, because that one was actually a well-built, sane design.
That's a weird way to describe "enough democratic senators dissented from the party line to let a CR pass".
Unlike the republicans, the democrats have never been able to maintain that kind of tight control over their members. The CR didn't pass because "democrats" chose to let it. It passed because the republicans were able to individually influence 5 additional democrats to change their votes, in addition to the 2 who had always voted for it.
The kind of tight control the republican party has exercised recently is new; it hasn't really happened before in the US.
The ones who voted for it were, magically, all either not seeking re-election or not up for election next cycle.
This is a hell of a coincidence.
I don't mean to call out the Democrats as the only ones who do this (on HN you simultaneously can't point out a party for something, because then somehow you're being partisan, but you're also damned if you don't give an example, so it puts you in a tough spot). It's just the most recent example I've noticed.
Up until recently, even on HN, Schumer was nearly universally damned for letting it happen, or for being behind it in his capacity as minority leader. Perhaps without evidence, and perhaps baselessly. But it's telling that as soon as I point it out in a slightly different context, it suddenly becomes an opinion worthy of greying out.
>Senator Chuck Schumer, the minority leader, continued to face criticism from members of his own party after he reversed course and allowed the stopgap spending bill to come to a vote.
It's obviously not a coincidence. I don't see how it is any kind of evidence for taking orders from above. People who don't have to face their voters any time soon (or ever) obviously have more leeway on making deals they might not like.
Passing a CR has required 60 votes in the senate since 1974. Despite this, and 60-vote majorities being very rare, shutdowns remained rare and typically very short for a very long time. This was not because the parties got together and made a deal; it was because it was common for senators in both parties to make side deals across the aisle to support their own pet projects. Having the discipline to force the senators of a party to not make such deals is something that only the republicans have managed, and only very recently.
People are angry at the democrats for being weak and a mess, but that is the normal state of affairs in US party politics.
If you have overcommit on, that happens. But if you have it off, the kernel has to assume the worst case and account for the full allocation up front; otherwise the failure could happen later, when someone first writes to a page.
The fundamental problem is that your machine is running software from a thousand different projects or libraries just to provide the basic system, and most of them do not handle allocation failure gracefully. If program A allocates too much memory and overcommit is off, that doesn't necessarily mean that A gets an allocation failure. It might also mean that code in library B in background process C gets the failure, and fails in a way that puts the system in a state that's not easily recoverable, and is possibly very different every time it happens.
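For reference, "overcommit on/off" here maps to the vm.overcommit_memory sysctl; a minimal sketch of inspecting and switching it (the ratio value is just an example):

# 0 = heuristic overcommit (the default), 1 = always allow, 2 = strict accounting ("off")
cat /proc/sys/vm/overcommit_memory

# Strict accounting: the commit limit becomes swap + overcommit_ratio% of RAM
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=80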
For cleanly surfacing errors, overcommit=2 is a bad choice. For most servers, it's much better to leave overcommit on, but make the OOM killer always target your primary service/container, using oom_score_adj and/or memory.oom.group to take out the whole cgroup. This way, you get to cleanly combine your OOM condition handling with the general failure case and can restart everything from a known foundation, instead of trying to soldier on while possibly lacking some piece of support infrastructure that is necessary but usually invisible.
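A minimal sketch of that setup, assuming cgroup v2 and a hypothetical myservice.service ($MAIN_PID stands in for the service's main PID):

# Bias the OOM killer toward the primary service (range -1000..1000, higher = killed first)
echo 500 > /proc/$MAIN_PID/oom_score_adj

# When any process in the service's cgroup is OOM-killed, take out the whole group
echo 1 > /sys/fs/cgroup/system.slice/myservice.service/memory.oom.group

# Or declare the same thing in the unit file:
#   [Service]
#   OOMScoreAdjust=500
#   OOMPolicy=kill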
There's also cgroup resource controls to separately govern max memory and swap usage. Thanks to systemd and systemd-run, you can easily apply and adjust them on arbitrary processes. The manpages you want are systemd.resource-control and systemd.exec. I haven't found any other equivalent tools that expose these cgroup features to the extent that systemd does.
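For example, roughly (the unit name, limits, and job command are made up; run as root, or add --user for a user-level scope):

# Run a one-off job in a transient scope with hard memory and swap caps
systemd-run --scope --unit=batch-job -p MemoryMax=2G -p MemorySwapMax=512M ./crunch-data

# Tighten the limit on the already-running unit without restarting it
systemctl set-property batch-job.scope MemoryMax=1G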
I really dislike systemd and its monolithic mass of over-engineered, all-encompassing code. So I have to hang a comment here showing just how easy this is to manage in a simple startup script, because the kernel always exposes these features directly.
Taken from an SO post:
# Create a cgroup
mkdir /sys/fs/cgroup/memory/my_cgroup
# Add the process to it
echo $PID > /sys/fs/cgroup/memory/my_cgroup/cgroup.procs
# Set the limit to 40MB
echo $((40 * 1024 * 1024)) > /sys/fs/cgroup/memory/my_cgroup/memory.limit_in_bytes
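That snippet uses the older cgroup v1 memory controller; on a cgroup v2 (unified hierarchy) system the same idea looks roughly like this, assuming the memory controller is enabled in the parent's cgroup.subtree_control:

# cgroup v2 equivalent
mkdir /sys/fs/cgroup/my_cgroup
echo $PID > /sys/fs/cgroup/my_cgroup/cgroup.procs
echo $((40 * 1024 * 1024)) > /sys/fs/cgroup/my_cgroup/memory.max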
Linux is so beautiful. Unix is. Systemd is like a person with makeup plastered 1" thick all over their face. It detracts, obscures the natural beauty, and is just a lot of work for no reason.
This is a better explanation and fix than others I've seen. There will be differences between desktop and server uses, but misbehaving applications and libraries exist on both.