Every job on an HPC cluster should have a memory and CPU limit. Nearly every job should have a time limit as well. I/O throttling is a much trickier problem.
I wound up writing a script for users on a jump host that submitted an sbatch job that ran sshd as the user on a random high-numbered port and wrote the port to the job's output file. The output was available over NFS, so the script parsed out the port number and displayed the connection info to the user.
The user could then run a vscode server over ssh within the bounds of CPU/memory/time limits.
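A rough sketch of what that jump-host helper could look like (all names, paths, and limits here are made up; only the general flow follows the description above):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: submit a per-user sshd as a Slurm job, then
# recover the port from the job's output file on NFS.

submit_sshd_job() {
  # Pick a random high-numbered port and run sshd under the usual
  # CPU/memory/time limits enforced by the scheduler.
  local port=$(( (RANDOM % 20000) + 40000 ))
  sbatch --job-name=vscode-sshd \
         --cpus-per-task=2 --mem=4G --time=08:00:00 \
         --output="$HOME/sshd-%j.out" \
         --wrap="echo PORT=$port; /usr/sbin/sshd -D -p $port -f \$HOME/.ssh/sshd_config"
}

parse_port() {
  # The job's stdout lands on NFS, so the jump-host script can read it
  # back and pull out the PORT= line it printed.
  sed -n 's/^PORT=//p' "$1"
}
```

With the port in hand, the script just prints something like `ssh -p <port> user@<compute-node>` for the user to paste into their VS Code remote config.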
Arch user here. These things work much more nicely than any of the previous alternatives. Sure, kernel signing is a bit of a mess, but that's more a product of how low-level key signing works than anything else. Cryptsetup, cryptenroll, unified kernel images, and systemd-boot worked for me out of the box.
They very much did not for me. I beat things into shape with sbctl but it was very much an uphill battle.
I don't know why Arch seems allergic to packaging shim-signed (it's an AUR package; why would I trust such a key component to what is essentially a stranger?), but here we are, I guess.
You can inspect the PKGBUILD file very easily. It's the same idea as Alpine's abuild and the various other build-file formats from other distros. Don't just blindly build it.
> Frontend is a mess because all you people are a mess.
As a backend guy who considers himself extremely fortunate that nearly all of his users/customers are technical, this got an audible chuckle out of me.
> If systemd is the reason, there are several good distros without systemd
I totally get avoiding systemd; I don't avoid it myself, but I get it. The author, on the other hand, talks about the problems of doing this in a professional setting. That I do not get. As far as managing large fleets of servers goes, systemd is quite nice. Yeah, it's odd for some things, but as far as automation is concerned, it's the way to go.
With systemd, the same file syntax and management works for services, timers, mount points, networking, name resolution, lightweight containers, and virtual machines. You literally have to write one parser and serialize to INI. Then you get distribution-generic management. Upgrades? No problem. Moving from CentOS to Debian? Ubuntu? Arch? Whatever? No problem. It. Just. Works.
Yeah, if you're in the know you can do better for specific circumstances, but in this day and age OSes are throwaway, and automation you don't have to refactor is paramount. For professional work, this flame war is over.
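To illustrate the "same syntax everywhere" point: a service and the timer that triggers it are both plain `[Section]` / `Key=Value` files (the unit names and paths below are invented for the example):

```ini
# /etc/systemd/system/backup.service
[Unit]
Description=Nightly backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer -- same format, different section
[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Mount units, network config, and nspawn container settings all follow the same shape, which is what makes one template/parser go a long way.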
>With systemd the same file syntax and management works for services, timers, mount points, networking, name resolution, lightweight containers, virtual machines. You literally have to write one parser and serialize to ini.
There is no "syntax"; it's all just key=value pairs, and all the subsystems have their own set of keys/directives, and the values have their own mini-DSLs. Things that end in "Sec" (for "seconds") take duration labels. The only directives that are shared are the inter-unit dependency directives. Some keys/directives can be specified multiple times.
I don't know why you'd be parsing unit files or serializing something else to unit files. Just drop them into place. The hard part is knowing all the details of how the directives interact and what their values can be.
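Concretely, here's the kind of per-directive variation being described: the `*Sec` directives accept systemd's time-span notation, while other keys in the same section follow completely different value rules (this is an illustrative fragment, not a complete unit):

```ini
[Service]
# These all end in "Sec" and take the duration mini-DSL:
TimeoutStartSec=5min 20s
RestartSec=2s
WatchdogSec=1h
# ...while this one is a space-separated capability list,
# and knowing that is on you, not the file format:
AmbientCapabilities=CAP_NET_BIND_SERVICE
```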
> There is no "syntax", it's all just key=value pairs,
This, with the sections, is INI. Duplicate keys included. A loosely defined spec, but INI nonetheless.
> I don't know why you'd be parsing unit files or serializing something else to unit files. Just drop them into place
It's common to store the information in a DB, or in some other format that is easy to merge/override programmatically. Even configuration-management tools like Puppet, Salt, and Ansible do this with JSON/YAML.
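In the simplest case, "serialize to a unit file" is just templating: structured data in, INI out. A toy sketch of what a config-management template effectively does (the function name and field choices are made up):

```shell
#!/usr/bin/env bash
# Toy illustration: render a systemd service unit from structured
# inputs, the way an ansible/puppet template would.

render_service_unit() {
  # $1 = description, $2 = ExecStart line, $3 = memory cap
  cat <<EOF
[Unit]
Description=$1

[Service]
ExecStart=$2
MemoryMax=$3

[Install]
WantedBy=multi-user.target
EOF
}
```

Real tools add merging, overrides, and validation on top, but the output side really is just emitting key=value pairs under the right section headers.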
My Cousin Vinny is an example that still holds up. No current events, no racist jokes, just typical social interactions that are still relevant.