Running multiple game servers in Docker is a multi-tenant environment, and Docker is not a serious security boundary unless you're applying significant kernel hardening to your kconfig, to the tune of grsecurity patches or similar.
I found the Deluge (web?) UI became unusable after adding tens (or hundreds?) of thousands of torrents.
Not sure about the details, but a decade ago I used to seed all files below 100 MB on many private trackers for seed bonus points, and yeah, the Deluge UI (might have been the web UI, not sure) became very slow. :D
Same, Deluge and qBittorrent would start to have issues with very large torrents or lots of them. Ended up with Transmission with the trguiNG UI and it's handled everything. It's not perfect and often slow, but it hasn't crashed.
I ran into slowdowns in the remote control after just a few hundred. I switched to Transmission shortly after. I had a great time using Deluge for probably 6-7 years, but Transmission is more performant and has more tooling support.
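For what it's worth, a lot of that tooling support comes from Transmission's RPC API. Something like the sketch below (assuming the third-party transmission-rpc Python package and default daemon settings; the host/port here are just examples) lets you inspect or bulk-manage tens of thousands of torrents without touching a UI at all:

```python
# Bulk-inspect torrents over Transmission's RPC API instead of paging through
# a UI with tens of thousands of entries.
# Assumes the third-party `transmission-rpc` package and default daemon
# settings (localhost:9091, no auth) -- adjust for your setup.
from transmission_rpc import Client

client = Client(host="localhost", port=9091)

torrents = client.get_torrents()
print(f"{len(torrents)} torrents loaded")

# Example bulk query: everything fully downloaded and therefore seeding.
seeded = [t for t in torrents if t.progress == 100.0]
print(f"{len(seeded)} torrents at 100%")
```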
I get the argument, but if that is more than a strawman to you, I am bewildered. Making a network connection is infinitely less problematic than having root-level access to a kernel (translate to the Windows/NT equivalent).
The secondary effect is that businesses will stop using processes and chemicals which require them to carry this warning. You've effectively created a new market segment.
Are the labels annoying to the point of comedy? Sure, but it's not /your/ behavior we were trying to modify.
> encrypted sessions (and/or EK cert verification) without PIN are not much more than obfuscation
This is completely incorrect: encrypted sessions defeat TPM interposers when there is a factory burned-in, processor-side secret to use. Lol at calling it just "obfuscation" because you can spend $5M to decap the chip, fetch the key, and then put the processor back into working order for the attack.
That just requires a vertically integrated device instead of a part-swappable consumer PC.
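To make the mechanism concrete: the point of a salted, encrypted session is that the host encrypts a random salt to a public key whose private half never leaves the silicon, and both ends derive the session key from that salt, so a bus interposer only ever sees ciphertext. This is a conceptual sketch using the Python cryptography package, not a real TPM stack; every name in it is illustrative:

```python
# Conceptual sketch (not a real TPM stack): why a salted, encrypted session
# defeats a bus interposer when the device holds a burned-in private key.
# Uses the `cryptography` package; every name here is illustrative.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Stand-in for the factory-provisioned key pair; in the real scenario the
# private half never leaves the silicon.
device_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_pub = device_priv.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Host side: pick a random salt and encrypt it to the device's public key.
# The interposer on the bus only ever sees this ciphertext.
salt = os.urandom(32)
encrypted_salt = device_pub.encrypt(salt, oaep)

def derive_session_key(s: bytes) -> bytes:
    """Both ends run the same KDF over the salt to get the session key."""
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"illustrative-session").derive(s)

host_key = derive_session_key(salt)
device_key = derive_session_key(device_priv.decrypt(encrypted_salt, oaep))

# Subsequent command/response traffic is HMAC'd and encrypted under this key,
# which the interposer cannot recover without the on-die private key.
assert host_key == device_key
```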
What you are saying is sound, and I agree it could be done.
But there are multiple caveats:
- How do you hide the secret so that only "legitimate" operating systems can use it for establishing their sessions and not "Mate's bootleg totally not malware live USB"?
- And unfortunately current CPUs don't implement this.
- Additionally, don't be so smug as to think you need to decap a CPU to extract on-die secrets. Fault injection attacks are very effective and hard to defend against.
I agree the security of this can be somewhat improved, but if you are building a custom CPU anyhow, you might as well move the TPM on-die and avoid this problem entirely.
Before the popularity of ARM SoCs that contain everything on-die, there were far fewer choices for vertically integrated devices. It's a different segment.
If you look at Apple's vertically integrated devices, they chose a cryptography coprocessor that was not on-die originally. With a key accessible only to the trusted execution environments of both pieces of silicon, rather than to the operating system directly, encrypted comms are established in a similar fashion to the TPM 2.0 proposal.
> But what this simple experiment demonstrates is that Llama 3 basically can't stop itself from spouting inane and abhorrent text if induced to do so. It lacks the ability to self-reflect, to analyze what it has said as it is saying it.
> That seems like a pretty big issue.
What? Why? An LLM produces the next token based on the preceding tokens, nothing more. Even a Harvard student is confused about this?
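For anyone who hasn't seen it spelled out, the generation loop really is just "predict the next token from the prefix, append it, repeat." A minimal greedy-decoding sketch with Hugging Face transformers (gpt2 is just an example model, and the prompt is arbitrary) makes it obvious there is no separate self-review pass:

```python
# Minimal greedy-decoding loop with Hugging Face transformers. Each step
# predicts one next token from the prefix and appends it; there is no
# separate "review what I just said" pass anywhere in the loop.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # example model only
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits          # scores for every possible next token
    next_id = logits[:, -1, :].argmax(dim=-1)     # greedy: take the single most likely one
    input_ids = torch.cat([input_ids, next_id.unsqueeze(-1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```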
200 MB of data is not a large file, and Chromium tabs have a memory limit that is ridiculously low, so actually large 20-100 GB datasets render this useless.
This echoes my thoughts exactly. Right now, we're actually more limited by the JS UI, so a couple hundred MB is the most you can do in a browser before the UI becomes really slow. There's a lot of room for improvement - we're using React and that's causing a bunch of unneeded re-renders right now. We probably need to create our own DAG-based task management system and use Canvas to render everything - with all that, workflows on much larger files will hopefully become usable.
This is certainly true - I'm not saying "large file" in the colloquial "big data" sense, but rather a file you might want to open in Excel/Google Sheets. I've worked with actually large datasets before - upwards of 500 GB - pretty often, and I really wouldn't think about using my laptop for such a thing!
We are thinking of making data connectors to major DBs though, so you should be able to do a similar style of visual analysis while keeping the compute on your DB.
I looked up the limit, and as of 2021 tabs seem to have been limited to 16 GB, which is moderate for an in-memory dataset. However, I know WASM has a hard limit of 4 GB without Memory64. Data size is all relative.
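The 4 GB figure falls straight out of the wasm32 addressing scheme: linear memory grows in 64 KiB pages and a 32-bit index can only reach 2^16 of them. A quick back-of-the-envelope in Python:

```python
# Where the 4 GB figure comes from: wasm32 indexes linear memory with 32-bit
# offsets, and memory grows in 64 KiB pages, so the ceiling is
# 2**16 pages * 64 KiB = 4 GiB until Memory64 widens the index to 64 bits.
PAGE_SIZE = 64 * 1024      # bytes per WebAssembly page
MAX_PAGES = 2 ** 16        # pages reachable with a 32-bit byte offset

limit_bytes = PAGE_SIZE * MAX_PAGES
print(limit_bytes == 4 * 1024 ** 3)            # True: exactly 4 GiB
print(f"{limit_bytes / 1024 ** 3:.0f} GiB")    # "4 GiB"
```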
Authenticated sessions are practically useless on anything but a fully integrated device, because there is no guarantee of the SRK's identity; MITM is still possible.
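For context, the "EK cert verification" mentioned upthread is what would pin the session to a genuine TPM rather than an unauthenticated SRK: you check that the endorsement key certificate chains to the manufacturer's CA, then prove the SRK is resident in the TPM that holds that EK. A rough sketch of just the certificate check, using the Python cryptography package (the file paths, single-CA chain, and RSA assumption are all illustrative):

```python
# Rough sketch of the EK certificate check that ties a session to a genuine
# TPM rather than an unauthenticated SRK. Paths, the single-CA chain, and the
# RSA/PKCS#1 v1.5 signature are illustrative; some vendors use ECDSA, which
# needs a different verify call, and a real check walks the full chain and
# then proves the SRK is resident in the TPM that holds this EK.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

with open("ek_cert.der", "rb") as f:       # EK certificate, e.g. read from TPM NV
    ek_cert = x509.load_der_x509_certificate(f.read())
with open("vendor_ca.der", "rb") as f:     # TPM manufacturer's CA certificate
    ca_cert = x509.load_der_x509_certificate(f.read())

# Raises InvalidSignature if the EK cert was not signed by the vendor CA.
ca_cert.public_key().verify(
    ek_cert.signature,
    ek_cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    ek_cert.signature_hash_algorithm,
)
print("EK certificate chains to the vendor CA")
```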