This has been replaced with a permissions feature that still provides both delete and overwrite protections. The difference is that the underlying store needs to implement it, rather than relying on a running server that understands the permission differences. You can read more about this change here: https://github.com/borgbackup/borg/issues/8823#issuecomment-...
Isn't this "no-delete permission" just a made-up mode for testing the borg storage layer while simulating a lack of permissions for deleting and overwriting? In an actual deployment, whatever backing store is used must have the access-control primitives to implement such a restriction. I don't know how to do this on a POSIX filesystem, for example. Gemini gave me a convoluted solution that requires the client to change permissions after creating the files.
at first it was implemented to easily test permission-restricted storages (can't easily test on all sorts of cloud storages).
it was implemented for "file:" (which is also used for "ssh://" repos) and there are automated tests for how borg behaves on such restricted permissions repos.
after the last beta I also added cli flags to "borg serve", so it now also can be used via .ssh/authorized_keys more easily.
so it can now also be used for practical applications, not just for testing.
not for production yet though, borg2 is still in beta.
Currently, you can either provide the `BORG_REPO_PERMISSIONS` env var to borg [0] or `--permissions` flag to `borg serve` [1]. You can then enforce this as part of your `authorized_keys` command, for example.
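For example, a sketch of what that `authorized_keys` entry could look like; the permission value name here ("no-delete") is my assumption based on the linked issue, so check `borg serve --help` in the current beta for the exact accepted values:

```shell
# ~/.ssh/authorized_keys on the repo server. Forces every connection from
# this key through a restricted "borg serve"; the client cannot delete or
# overwrite, only append. "no-delete" and the key/comment are illustrative.
command="borg serve --permissions=no-delete",restrict ssh-ed25519 AAAA... backup-client
```

The `BORG_REPO_PERMISSIONS` env var form should work the same way for local `file:` repos.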
Ah, I was searching borgstore for no-delete, but it gets exploded into itemized permissions in borg. Documentation seems to be non-existent; the only mention is in the changelog, which suggests this exists only for testing. But I suppose it's not released yet.
The old append-only mode was a hack that wasn’t very useful in practice, because there were no tools to dissect changes in a repository, and the data structures wouldn’t have supported that anyway.
Making e.g. snapshots on the backing storage was always the better approach.
Thanks for that link.
That issue somehow didn't come up when I researched the removal of append-only.
The only hint I had was the vague "remove remainders of append-only and quota support" in the change log without any further information.
Thanks, I've come across sysbox before. But it seems it's become relatively quiet since it was acquired by Docker? Moreover, I've yet to hear of anyone using it in production.
We're actually fully open source and all development occurs in the open! Here's the repo: https://github.com/picosh/pico, and you can find us on Libera IRC.
Perhaps giving a bit more information, rather than throwing out random SSH-related acronyms, would be more fruitful in terms of responses.
What about TOFU and MITM would you like them to respond to? TOFU isn't inherently a bad thing. Neither is MITM. It depends on the threat model, the actors involved, etc.
Your comment (and the snarky followup) imply they're doing something wrong, but it's unclear what.
There is nothing that can be done beyond what they are doing?
You can receive their public keys out-of-band through an https-authenticated connection. Which means their approach to "the initial trust problem" is _not_ "trust on first use".
I don't know what other solutions there are to TOFU, but maybe it's nice if there's something like a standardised /.well-known/ssh-keys.json path for public ssh servers like github and pico.sh.
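Until something like that exists, the comparison step itself is simple; a self-contained sketch (the key is generated locally here as a stand-in, since in practice the "published" fingerprint would come from the provider's https page and the "observed" one from `ssh-keyscan` or the ssh prompt):

```shell
#!/bin/sh
# Sketch: verify an SSH host key against a fingerprint published out-of-band.
set -eu
tmp=$(mktemp -d)

# Stand-in for the server's host key (normally you never see the private half).
ssh-keygen -q -t ed25519 -N '' -f "$tmp/host_key"

# What you'd copy from the provider's https-served page:
published=$(ssh-keygen -lf "$tmp/host_key.pub" | awk '{print $2}')

# What you'd observe when connecting; in real life something like:
#   ssh-keyscan -t ed25519 pico.sh 2>/dev/null | ssh-keygen -lf - | awk '{print $2}'
observed=$(ssh-keygen -lf "$tmp/host_key.pub" | awk '{print $2}')

if [ "$published" = "$observed" ]; then
  echo "host key verified"
else
  echo "fingerprint mismatch, possible MITM" >&2
  exit 1
fi
rm -rf "$tmp"
```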
There’s SSHFP, but it’s off by default and assumes an attacker can’t modify DNS. Most MITMs would be executed via DNS anyway, and DNSSEC deployment is generally a disaster.
Currently their host key page is only linked once, at the bottom of their site, and isn’t referenced in any onboarding docs, so onboarding effectively encourages “yolo” trust. If users aren’t savvy, they’re likely putting other things at risk: whatever else their keys happen to have access to.
The other argument that comes up here is “well, MITMs are rare, so this doesn’t seem like a big problem in practice”. However, there are actually great targets here: for example, you go to a conference, hijack the WiFi, then spend your time in the hallway track advertising these services to your targets. This kind of thing has a high success rate.
The web improves on this problem with PKI. Similar phishing tactics still exist, such as encouraging people to sign up while explicitly guiding them to an incorrect domain, but the habit of typing searches into the address bar strongly helps resist that too.
SSH is terrible for this use case, no matter how it makes people feel.
I am not sure how you avoided collisions in the localhost port space (network namespaces?), but for things like this you would be better off forwarding to/from UNIX domain sockets, since local TCP sockets have several times the overhead. You would probably want to set StreamLocalBindUnlink yes and StreamLocalBindMask 0117 in sshd_config. Then use a UNIX group, with the setgid bit set on the directory where the socket is created, to allow multiple users access: the directory is owned by that group, and each user who needs access is added to it. This reduces network overhead and is highly secure. I recently used this trick to connect a bunch of machines to a remote service through a jump host.
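The directory setup can be sketched like this; the path is illustrative (the group-ownership step needs root, so it is shown as a comment), and it pairs with `StreamLocalBindUnlink yes` / `StreamLocalBindMask 0117` in sshd_config:

```shell
#!/bin/sh
# Shared socket directory: group access plus setgid, nothing for others.
set -eu
dir=$(mktemp -d)            # stand-in for e.g. /run/tunnels
# chgrp tunnelusers "$dir"  # real setup: a dedicated group owns the directory,
#                           # and each user needing access is added to it
chmod 2770 "$dir"           # rwx for owner+group, setgid bit, no world access
stat -c '%a' "$dir"         # GNU stat: prints 2770
rmdir "$dir"
```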
Also, take it from someone who has been running services over port forwards for years: you want to set ClientAliveInterval and ClientAliveCountMax in sshd_config on the server (if you have not already). Users should be encouraged to set ServerAliveInterval and ServerAliveCountMax in ssh_config on their machines. Furthermore, it would be best if the tunnels were run under daemontools and had ExitOnForwardFailure set as part of the command that is run. The ssh command used on the client side likely should also set -nNT. It is also good practice for the machines running ssh to have dedicated accounts for the tunnels, so that their daemontools run scripts are essentially two lines: a shebang followed by exec setuidgid user ssh -i ...
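Put together, such a run script might look like this; the account name, key path, port, host, and interval values are all illustrative:

```shell
#!/bin/sh
# daemontools ./run script for a dedicated "tunnel" account.
# Server-side sshd_config pairs with this:
#   ClientAliveInterval 30
#   ClientAliveCountMax 3
exec setuidgid tunnel ssh -i /home/tunnel/.ssh/id_ed25519 \
  -o ExitOnForwardFailure=yes \
  -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
  -nNT -R 8080:localhost:8080 tunnel@remote.example
```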
Finally, if people want a very low-overhead and highly secure setup, they should bind the services they reverse-forward to UNIX domain sockets locally, and reverse-forward those local sockets over ssh to remote UNIX domain sockets. Restrictive permissions and ownership on the parent directory make the local socket accessible only to the ssh command running under its own user, which locks things down locally fairly nicely: a typical process on the machine will not be able to talk to the reverse-forwarded service, thanks to the UNIX file permissions. Lastly, using ed25519 or ECDSA ssh keys makes the initial connection much quicker than RSA.
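A sketch of that socket-to-socket reverse forward; the paths and hostname are illustrative:

```shell
# The service binds /srv/app/local.sock on this machine; remote clients then
# connect to /srv/app/remote.sock on the other end. Keep both parent
# directories restrictive (e.g. mode 0750, dedicated group) so other local
# processes cannot reach the sockets.
ssh -nNT -o ExitOnForwardFailure=yes \
  -R /srv/app/remote.sock:/srv/app/local.sock tunnel@remote.example
```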
We’re actually using Unix sockets as the underlying transport layer for this. We’re also not using sshd; we custom-wrote our own daemon whose entire job is tunneling. If you’re curious about this, you can find the project here: https://github.com/antoniomika/sish
sish was actually my first foray into SSH apps. It was a lot of fun to write and pretty much implements tunnels with a routing system on top. It manages connectivity, routing, and reverse proxying all within user space. No namespaces required!
tuns can actually even tunnel UDP traffic over SSH, also entirely in user space. Docs for that can be found here: https://pico.sh/tuns#udp-tunneling
I'd actually highly recommend taking a look at vaxis (https://github.com/rockorager/vaxis). We've moved away from wish/bubbletea and have really enjoyed working with vaxis!