E-banking security: I configured a dedicated hardware laptop with the default outgoing network policy set to deny. I manually allowed a very limited set of IPs for the banks' sites used (no DNS server allowed; static resolution in /etc/hosts) and for OS packages. The second step (or factor) is done on dedicated phone hardware too (no SIM card used). The browser starts automatically at session open with tabs open on the banks' websites.
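A minimal sketch of that lockdown, assuming nftables; the IP (from the documentation range 203.0.113.0/24) and the hostname are placeholders for a bank's real, published addresses:

```shell
# Default-deny all outgoing traffic (assumes nftables; run as root).
nft add table inet ebanking
nft add chain inet ebanking output '{ type filter hook output priority 0 ; policy drop ; }'

# Allow loopback, then only the bank's published IP over HTTPS.
nft add rule inet ebanking output oifname lo accept
nft add rule inet ebanking output ip daddr 203.0.113.10 tcp dport 443 accept

# No DNS at all: pin the bank's hostname statically instead.
echo '203.0.113.10 ebanking.example-bank.com' >> /etc/hosts
```

With no DNS egress allowed at all, even a compromised browser cannot be silently redirected by a poisoned resolver; the trade-off is that you must update /etc/hosts by hand if the bank renumbers.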
Been operational for a few years. Minimal maintenance. Great peace of mind.
Worth mentioning in the same area, regarding authorization engines with APIs, is OPA [1], which relies on a Datalog-inspired language: Rego.
I agree with you that authorization lacks a set of standards allowing interoperability. The only known practical one, XACML, has not seen wide adoption. OPA, through its design and APIs, enables useful features for enterprise use cases, for which Styra [2] (the company behind OPA) sells a solution built on those APIs.
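To give a flavor of Rego, here is a hypothetical policy (the package name and input fields are made up for illustration, not from any real deployment):

```
package httpapi.authz

# Deny by default.
default allow = false

# Allow users to read their own records.
allow {
    input.method == "GET"
    input.path == ["records", input.user]
}
```

A service then asks OPA "is this request allowed?" over its API and enforces the boolean answer, keeping the policy logic out of the application code.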
Your quote seems to be describing what Microsoft observed the attackers doing on machines compromised with SolarWinds Orion, not how the software was compromised in the first place.
> In actions observed at the Microsoft cloud, attackers have either gained administrative access using compromised privileged account credentials (e.g. stolen passwords) or by forging SAML tokens using compromised SAML token signing certificates.
> Although we do not know how the backdoor code made it into the library, from the recent campaigns, research indicates that the attackers might have compromised internal build or distribution systems of SolarWinds, embedding backdoor code into a legitimate SolarWinds library with the file name SolarWinds.Orion.Core.BusinessLayer.dll.
> the attackers might have compromised internal build or distribution systems of SolarWinds
It should probably become a requirement (for both open and closed source software) that updates be not just signed but also have their hashes published in a Binary Transparency log [0].
When you first install a piece of software, you might need to calculate the hash locally and manually search for it in a log's web interface, but after that, its software-update routine should check that the new version it is downloading has had its hash published in a known place. That way, software publishers can check an append-only independently-run log to see what has been signed with their keys.
I suppose there is a risk that an attacker could prevent users from receiving security updates by DoS'ing the transparency logs, but that should be harder than just DoS'ing the servers that host the software updates themselves. Large organisations could also maintain mirrors of these logs on their internal networks, which would help with privacy/latency/availability, and the logs should ideally be available as Tor hidden services too.
For non-critical updates, the log-checking routine should require that the update's hash had been in the log for a certain period of time, long enough for the software publisher to notice and raise the alarm to their users. Updates marked as critical should default to stopping the software from running until the necessary period had elapsed, for which the workaround would be a fresh install of the newer version by whoever has the admin privileges to do that.
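The client-side check described above could be sketched like this (a stand-in file is used; in reality the expected hash would be fetched from the transparency log over an authenticated channel, and the time-in-log requirement checked too):

```shell
# Stand-in for the downloaded update artifact.
printf 'pretend this is the downloaded update' > update.bin

# Hash published in the log (derived locally here, purely for illustration).
published="$(sha256sum update.bin | cut -d' ' -f1)"

# Hash of what we actually downloaded.
actual="$(sha256sum update.bin | cut -d' ' -f1)"

if [ "$actual" = "$published" ]; then
    echo "hash matches log entry - safe to apply"
else
    echo "hash mismatch - refusing update" >&2
    exit 1
fi
```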
Notary is a publisher-side signing scheme. It is an improvement over GPG signing, plus a better scheme for signaling to clients the next version to update to.
Asset Transparency doesn't require the publisher to be involved at all and can work on any publicly accessible URL on the internet. It is also complementary to signing schemes.
Here is the Asset Transparency CLI fetching and verifying the contents of a notary release, for example:

    tl get https://github.com/theupdateframework/notary/releases/download/v0.6.1/notary-Linux-amd64
Or, if you are curious, hit the service's lookup endpoint directly:
Thanks for the links. Do you know how this toolset helps to mitigate/prevent what the GitHub blog post calls "supply chain compromises"?
I quickly checked around and couldn't find anything that applies to the dependencies of applications/binaries before they land in the target runtime (e.g. k8s).
They walk through one of the workflows (end state is deploying to k8s).
Grafeas is a metadata store; Kritis is a policy engine that plugs into k8s as an admission controller, blessing the "admission" (running) of an image in a namespace.
There are existing tools for each language/runtime that produce known-vuln lists for individual artifacts in that language's ecosystem. These you feed into Grafeas. Your CI pipeline provides a manifest for each built image listing all upstream dependencies (produced by each app's build tool). Then at deploy time, Kritis checks the manifest on the image and, for each artifact in it, checks for vulns and determines whether any of them should keep the image from being deployed.
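For concreteness, a Kritis ImageSecurityPolicy looks roughly like this (field names recalled from the Kritis README; treat the whole fragment as a sketch, not an exact API):

```yaml
apiVersion: kritis.grafeas.io/v1beta1
kind: ImageSecurityPolicy
metadata:
  name: deny-high-severity   # hypothetical policy name
  namespace: prod
spec:
  packageVulnerabilityRequirements:
    # Refuse admission of images containing vulns above this severity...
    maximumSeverity: MEDIUM
    # ...except CVEs the team has explicitly reviewed and accepted.
    allowlistCVEs:
      - providers/goog-vulnz/notes/CVE-2017-1000082
```

The admission controller evaluates this policy against the vuln metadata stored in Grafeas for each artifact in the image's manifest.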
Hope that helps. There are many other workflows but that one is the most direct.
The point is that it's unsafe to allow tenants ClusterRole / admin on a shared cluster, but this is needed for many CRDs and Operators.
The Operator pattern is getting more and more popular, and most of them need a ClusterRole.
As the service provider (internal team, or SaaS provider), this is a liability. The aim, from reading the README.md, is to provide the ability for each tenant to be ClusterRole / admin within their own cluster, hosted in a larger real cluster.
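To make the liability concrete: a typical operator needs cluster-scoped rights along these lines just to register and watch its CRD, which on a shared cluster means granting tenants visibility beyond their own namespaces (illustrative RBAC; the operator and resource names are made up):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-operator   # hypothetical operator
rules:
  # Registering a CRD is inherently cluster-scoped.
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["create", "get", "list", "watch"]
  # Many operators also watch their custom resources across all namespaces.
  - apiGroups: ["example.com"]
    resources: ["widgets"]
    verbs: ["get", "list", "watch", "update"]
```

Giving each tenant a virtual cluster lets them hold this ClusterRole against their own API server without it meaning anything on the underlying shared cluster.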
The Gardener project's focus is the orchestration of multiple Kubernetes clusters on IaaS cloud providers, whereas k3v focuses on running a dedicated control plane on top of an existing Kubernetes cluster.
Some overlap, but different project goals.
Yeah, Gardener looks like it's meant to address one tenant on multiple k8s clusters, and k3v looks like it's meant to address multitenancy in a single k8s cluster.
You can find a lot of resources on his website, among which are lectures on the topic of origami folding and algorithms: http://courses.csail.mit.edu/6.849/fall20/lectures/