I'm currently stuck in an endless email loop because someone named Raymond used one of my Gmail addresses to register with State Farm. One of their agents even emails me directly when Raymond gets really behind on his payments, but won't do anything when I tell him it's the wrong email.
In the past when this happened I would usually reset the password and change the email to some anonymous throwaway, but I can't do that without Raymond's DOB (don't quote me on that, it's been a while since I tried).
This exact thing happened to me with a State Farm agent.
After a few months, I told them I was concerned about the privacy ramifications and would have to report it to their state insurance regulator, and it was very quickly fixed.
Environment variables are by far the most secure AND most practical way to provide configuration and secrets to apps.
Any other way is either less secure (files on disk, CLI arguments, a database, etc.) or about as secure but far more complex and convoluted. I've seen enterprise hosting with a (virtual) mount (NFS, etc.) that provides config files: read-only, tight permissions, served from a secure vault. That's a lot of indirection for getting secrets into an app that will still just read them in plain text. More secure than env vars? How?
Or some encrypted database/vault that the app can read from using... a shared secret provided as an env var or an on-disk config file.
Disagree; the best way to pass secrets is by using mount namespaces (systemd and Docker do this under /run/secrets/) so that the program can access the secrets as needed but they don't exist in the environment. The process is not complicated, and many systems already implement it. By keeping secrets out of env variables, you no longer have to worry about the entire environment getting written out during a crash or a debugging session and exposing them.
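To make the exposure concrete: on Linux, any process running as the same user (or root) can dump another process's environment straight out of procfs. A minimal sketch; the variable name and value are made up:

```shell
# Start a process with a secret in its environment (name is illustrative).
SECRET_TOKEN=hunter2 sleep 30 &
pid=$!

# The environment is readable by same-uid processes and root via procfs;
# entries are NUL-separated, so convert them to lines first.
tr '\0' '\n' < "/proc/$pid/environ" | grep '^SECRET_TOKEN='

kill "$pid"
```

A file under /run/secrets/ with tight permissions avoids this particular leak, though as the replies below note, the app itself can still divulge whatever it has read.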
How does a mounted secret (vault) protect against dumping secrets on crash or debugging?
The app still has it. It can dump it. It will dump it. Django, for example (not a security best practice in itself, btw), will indeed dump env vars, but it will also dump its settings.
The solution to this problem lies not in how you get the secrets into the app, but in prohibiting them getting out of it.
E.g. builds that remove or stub out tracing and dumping entirely, or proper logging and tracing layers that filter sensitive values.
There really is no difference, security-wise, between logger.debug(system.env) and logger.debug(app.conf).
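A crude sketch of the "filter at the logging layer" idea: scrub secret-looking values before a log line is persisted. The log line, key names, and patterns here are all made up; a real app would do this inside its logging framework rather than with sed in a pipeline:

```shell
# Pretend log line that leaked a config dump (values are made up):
line='DEBUG conf={DB_PASSWORD=hunter2, API_TOKEN=abc123, DEBUG=true}'

# Mask the values of keys that look like secrets: anything whose key
# contains PASSWORD, TOKEN, or SECRET gets redacted; the rest passes.
echo "$line" | sed -E 's/([A-Z_]*(PASSWORD|TOKEN|SECRET)[A-Z_]*)=[^,}]*/\1=[REDACTED]/g'
# → DEBUG conf={DB_PASSWORD=[REDACTED], API_TOKEN=[REDACTED], DEBUG=true}
```

The point being: the filter neither knows nor cares whether a value originally arrived via the environment or a config file.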
While this is a good feature, I fear most people aren't aware of git archive. Of the more basic CI tools I have looked at, I didn't notice any of them using git archive. Capistrano is the first I now know of that does this. Are there any others?
There is also export-subst, which git archive uses to create output similar to git describe directly in a file.
I'm not very familiar with deploy tools other than Capistrano, but I would think you also do not want to have the .git directory with your entire repo inside the working directory on the production server, so I assume some kind of "git export" must happen at some stage on most deploy tools? (Or perhaps they just rm -rf the .git directory?)
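For anyone curious what that "git export" step looks like: git archive writes only the committed tree, so the extracted result has no .git directory at all. A self-contained sketch using a throwaway repo (the mktemp paths and file names are illustrative):

```shell
# Build a throwaway repo so the sketch runs anywhere.
repo=$(mktemp -d) && dest=$(mktemp -d)
git -C "$repo" init -q
echo "hello" > "$repo/app.txt"
git -C "$repo" add app.txt
git -C "$repo" -c user.name=demo -c user.email=demo@example.com commit -qm init

# The "export": archive the committed tree and unpack it elsewhere.
# The destination gets the tracked files but no .git directory.
git -C "$repo" archive HEAD | tar -x -C "$dest"
ls -A "$dest"   # → app.txt
```

This also honors export-ignore attributes, so files marked that way in .gitattributes never reach the deploy target.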
Tangential, but deploys/builds that involve worktrees happen to neatly sidestep this, since then .git is just a pointer to the real one. I use this to avoid having to otherwise prevent Docker from wasting time reading the git info into the build context (especially important for latency when feeding local files into a remote image build).
The author makes a very common mistake of not reading the very first line of the documentation for .gitignore.
> A gitignore file specifies intentionally untracked files that Git should ignore. Files already tracked by Git are not affected; see the NOTES below for details.
You should never be putting "!.gitignore" in .gitignore. Just do `echo "*" > .gitignore; git add -f .gitignore`. Once a file is tracked any changes to it will be tracked without needing to use --force with git add.
The point of that line is to robustly survive a rename of the directory which won't be automatically tracked without that line. You have to read between the lines to see this: they complain about this problem with .gitkeep files.
The \n won't be interpreted specially by echo unless it gets the -e option.
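Both approaches from this thread, sketched end to end in a throwaway repo; using printf for the two-line file sidesteps the echo/-e portability question entirely. The directory names are made up:

```shell
repo=$(mktemp -d)
git -C "$repo" init -q

# Approach 1: self-excepting ignore file (survives a directory rename,
# since the exception travels with the directory). No --force needed:
# the "!.gitignore" line un-ignores the file itself.
mkdir "$repo/build"
printf '*\n!.gitignore\n' > "$repo/build/.gitignore"
git -C "$repo" add build/.gitignore

# Approach 2: ignore everything, force-add the ignore file once.
# After this one-time -f, later changes to it are tracked normally.
mkdir "$repo/cache"
echo '*' > "$repo/cache/.gitignore"
git -C "$repo" add -f cache/.gitignore

git -C "$repo" status --porcelain
# → A  build/.gitignore
#   A  cache/.gitignore
```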
Personally if I need a build directory I just have it mkdir itself in my Makefile and rm -rf it in `make clean`. With the article's scheme this would cause `git status` noise that a `/build/` line in a root .gitignore wouldn't. I'm not really sure there's a good tradeoff there.
If you have a project template or a tool that otherwise sets up a project but leaves it in the user's hands to create a git repo for it or commit the project into an existing repo, then it would be better for it to create a self-excepting .gitignore file than to have to instruct the user on special git commands to use later.
I think I'd prefer to have all ignores and un-ignores explicitly in the file and not have some of them defined implicitly because a file was added to tracking at some point.
But ignore files are only for untracked files anyway. Maybe you want them to specify what can be in the repo and what not, but this is not how Git works.
I did a full security system replacement for my previous employer in our data center. Replaced all the old IP cameras that connected directly to a small black box nvr with UniFi camera recording onto a UniFi Video server writing to a NAS cable locked to the rack in our locked data center. Two months later UniFi Video was discontinued and stopped receiving updates or support. If we wanted a supported platform we had to purchase a UniFi Protect NVR with less storage and less power/network redundancy than what I built. Plus all access to UniFi Protect would run through their cloud portal.
This makes me wonder if it's inevitable for every hardware/software provider to be tempted by the candy now. Makes me ask myself if I could even resist it if I had a customer base with sunk costs who I could take advantage of. My feeling is that I could resist it, on principle, but most people wouldn't. And this is leaving out pressure from investors.
So such a company selling these solutions as locally run widgets - which we understand are under not just pressure to increase revenue, but also relentless pressure from governments to share their data - would definitely need to be completely self-funded, immediately profitable, and the solutions they sold would have to be permanent and not susceptible to any external market or government forces.
Zero updates and zero tracking of installations would be the goal.
[edit] but this is also not that hard. All the company needs to provide is a piece of software that stitches together existing hardware. The only updates would be when hardware updates, and those would be included in the price. If "NEVER CLOUD" was the company's entire corporate identity, then preserving that ethos would be a mandate.
[edit2] nevercloud.com is currently on sale for $8350. I'd suggest building the prime directive into the name, but that much money has better uses.
>all access to UniFi Protect would run through their cloud portal.
I have a unvr and protect and nothing runs through their portal, I connect directly to the ip address of the unvr. You can cut internet access off on the vlan and everything works fine.
I have a fairly recent DS920+ and never had issues with containers - I have probably 10+ containers on it - grafana, victoriametrics/logs, jellyfin, immich with ML, my custom ubuntu toolboxes for net, media, ffmpeg builds, gluetun for vpn, homeassistant, wallabag,...
Edit: I just checked Grafana and cadvisor reports 23 containers.
Edit2: my kernel version is 4.4.302+ (2022); there may be specific tools that require more recent kernels, of course, but so far I've been lucky enough not to run into those.
While gluetun works great, there are other implementations of WireGuard that fail without the kernel modules. I've also run into issues with containers wanting the kernel modules for iptables-nft, but Synology only ships legacy iptables.
I know there are userspace implementations, but I can't remember the specifics right now and don't have my notes with me.
> kernel modules for iptables-nft
I think you meant nftables. The iptables-nft package provides the iptables interface on top of nftables for code that still expects it, afaik. I haven't run into that issue yet (knock on wood). According to the docs, nftables has been available since kernel 3.13, so in theory it might be possible to build the modules for Synology.
However, I don't think I will be buying another Synology in the future, mainly because of other issues: them restricting what RAM I can use or what I want to use the M.2 slots for, their recent experiment with trying to push their own drives only, etc. I might give TrueNAS a try if I'm not bored enough to just build one on top of a general-purpose OS...
I had to look it up, and I think it was a mix of user error and a bad container. At one point I had been trying to use the nicolaka/netshoot container as a sidecar to troubleshoot iptables on another container, and it is/was(?) missing the iptables-legacy package and so was unable to interact with the first container's iptables.
As great as containerization is, having the right kernel modules available goes a long way and I probably wouldn't have run into trouble like that if the first container hadn't fallen back to iptables because nftables was unavailable.
All of these NAS OSes that include Docker work great for the most popular containers, but once you get into the more complex ones, strange quirks start popping up.
Some Tesla models have backup manual releases on the inside that are hidden behind panels you have to remove. I believe one of the manuals even says you should inform all passengers of the location of the emergency manual releases, because, well, they are hidden and you wouldn't know where to find them without instructions.