> "good small old red wooden English book" have to come in that order or it sounds very peculiar.
Interesting. As a native English speaker (from the US), I'd say that "good small old" felt a little awkward for me to say out loud. Personally, I'd probably say "good old small ...", but to your point, there isn't exactly a "right" answer, just one that sounds right. I'm assuming you're also a native English speaker from the UK, so maybe we've discovered a funky difference between the English in our two countries. It would be a fun study to give native English speakers a list of those adjectives, and the noun "book", and tell them to order them.
As a native English speaker from England, I'd always keep "good" and "old" together, and probably put them at the beginning of the sentence. I'd also use "little" rather than "small" in such a context: "my good old little red wooden English book." To me that would sound just right.
Yeah, but "good old" has an independent phrasal meaning, as in "good old Charlie Brown". That's fine if that's what you mean, or if you want to play with the ambiguity between the two interpretations - but if that's definitely not what you mean, then best use the standard phrasing.
I don't think it's independent at all. I think it assigns the quality of good oldness to things that are good but not old. Or it refers to things that are good and familiar.
Yup. That's probably a better way to say exactly what I meant. "Good old" can mean something that's good but not old. "Old, good" means both old, and good. Thank you.
I suppose a comma might disambiguate, within a list of qualities, but I think my point stands.
- Outbound internet access over port 53 is blocked for everything on the network, other than the Pi-Hole/Unbound server
- An iptables rule is in place to force all outbound traffic over port 53 to go through the Pi-Hole. This prevents devices from circumventing the Pi-Hole filtering by hard-coding public DNS servers
- A cron job polls http://public-dns.info/nameservers-all.txt regularly and updates an iptables rule to block all outbound internet traffic, over any port/protocol, to servers in that list. This is my attempt to block things that try to circumvent DNS filtering by doing DNS over HTTPS
- Unbound makes it possible to bypass DNSCrypt for specific zones, as needed. It's also configured to prefetch records before expiration, which generally eliminates the latency introduced by DNSCrypt
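The Unbound side of that setup can be sketched in a few lines of config. This is only an illustration: the zone name and upstream address are assumptions, not the commenter's actual values.

```
# unbound.conf sketch (zone name and forward-addr are made up for illustration)
server:
    prefetch: yes        # refresh popular cache entries before they expire
    prefetch-key: yes    # prefetch DNSKEY records too, for DNSSEC validation

# Bypass DNSCrypt for one zone by forwarding it straight to a chosen resolver
forward-zone:
    name: "internal.example.com"
    forward-addr: 10.0.0.53
```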
---
This is overkill, but I tried to address privacy concerns as well as ad-blocking with this setup, and it's also been fun to tinker with.
- Ubiquiti EdgeRouter X from ~2019. There's a bash script on the box for updating the blocklist; the rest of the configuration can be done in the GUI
- Pi-Hole and Unbound run in a VM on an old Intel NUC with an i5 and 18GB of RAM. The NUC runs Proxmox and is connected to the EdgeRouter over ethernet
- Separately, there's a Ubiquiti WAP and a standalone modem, but there's nothing special about their configuration
Their older stuff didn't really support it well..
You could do it, but only because the USG software was a fork of Vyatta that had a way of doing it, and Ubiquiti never put in the effort to block it..
So while there was a way of doing it, it was never really officially supported..
That's why, when it came time to upgrade my USG3, I chose to migrate to OPNsense (a pfSense fork) instead of upgrading to the latest Ubiquiti router.
Yes, the rewrites are done on a Ubiquiti EdgeRouter. The rewrite rules count the number of hits, as well as basic connection details like src port/address, dst port/address, and protocol. The biggest offender is the Roku, which tries to use 8.8.8.8
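On EdgeOS, a DNS-redirect rule like the one described is typically a destination NAT rule with logging enabled (the hit counters come from the rule stats). A rough sketch, where the interface name and the Pi-Hole's address are assumptions:

```
# EdgeOS CLI sketch — switch0 and 192.168.1.2 (the Pi-Hole) are assumptions
set service nat rule 1 description "redirect stray DNS to Pi-Hole"
set service nat rule 1 type destination
set service nat rule 1 inbound-interface switch0
set service nat rule 1 protocol tcp_udp
set service nat rule 1 destination port 53
set service nat rule 1 source address !192.168.1.2
set service nat rule 1 inside-address address 192.168.1.2
set service nat rule 1 inside-address port 53
set service nat rule 1 log enable
```

The `source address !…` exclusion keeps the Pi-Hole's own upstream queries from being looped back into itself.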
edit: to be honest though, I don't look at the logs often to see what else gets caught, or why
What's stopping you from running your own registry? Or keeping images on a build machine and moving them around with some file-sharing mechanism? You don't need a Docker account to pull public images from Docker Hub, and you don't _have_ to push your images to Docker Hub
Docker stopped publishing builds of their own registry many years ago. So if you want to run the official registry, you need to build it from source. This leads to a fun and exciting bootstrapping process; to launch the registry, you have to pull it from somewhere. Since your registry, which is where you'd like to store it, isn't running, you can't pull it from there. So you have to use some third-party registry to bootstrap. Or do what I did, and give up, and just watch their registry crash randomly when it receives input that confuses it.
People will make fun of me if I go into great detail about the workarounds it takes to make a DigitalOcean managed Kubernetes instance pull images from a registry hosted in the cluster. But it's fun, so here we go.

I use a DigitalOcean load balancer to get HTTP traffic into my cluster. (The IP addresses of Kubernetes nodes aren't stable on DigitalOcean, so there's really no way to convince the average browser to direct traffic to a node with any predictable SLA.) I configured the load balancer to use the PROXY protocol to inform my HTTP proxy of the user's IP address. (I don't use HTTP load balancing because I want HTTP/2, and I manage my own Let's Encrypt certificates with cert-manager, which isn't possible with their HTTP load balancer. So I have to terminate TLS inside my cluster.) Of course, the load balancer does not apply the PROXY protocol when the request comes from inside the cluster (though the connection does come from the load balancer's IP). Obviously you don't really want internal traffic going out to the load balancer, but registry images contain the DNS name of the registry in their own names.

The solution, of course, is split-horizon DNS. (They should seriously consider renaming "split-horizon DNS" to "production outage DNS", FWIW.) That is all very easy to set up with CoreDNS: make registry.jrock.us resolve to A 104.248.110.88 outside the cluster, and to CNAME registry.docker-registry.svc.cluster.local. inside the cluster. But! Kubernetes does not use cluster DNS for image pulls; it uses node DNS. Since I'm on managed Kubernetes, I can't control which DNS server the node uses, so the DNS lookup for my registry has to go through public DNS. I created an additional load balancer, for $5/month, that is only for the registry. That one doesn't have the PROXY protocol enabled, so when someone DoSes my registry, I have no way of knowing who's doing it.
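The in-cluster half of that split-horizon setup is commonly done with CoreDNS's `rewrite` plugin, which swaps the public name for the in-cluster service name before the `kubernetes` plugin answers. A sketch, using the names from the comment (the surrounding Corefile structure is an assumption about a typical setup, not the author's actual config):

```
# Corefile snippet (sketch) — inside the cluster, registry.jrock.us
# resolves to the in-cluster service; outside, public DNS answers with the A record
.:53 {
    rewrite name registry.jrock.us registry.docker-registry.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf
}
```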
But at least I can "docker push" to that DNS name and my cluster can pull from it. This is all fine and nice until you make a rookie mistake, like building a custom image of your front proxy and storing it inside your own registry. What happens when DigitalOcean shuts off every node in your cluster simultaneously? Eventually the nodes come back on and want to start containers. But your frontend proxy's image is stored in the registry, and to pull something from your registry, it has to be running. This results in your cluster serving no traffic for several hours until you happen to notice what happened and fix it. (I do have monitoring for this stuff, but I don't look at it often enough.)
And that's why I have 99.375% availability over the lifetime of my personal cluster. And why smart people do not self-host their own docker registry.
> building a custom image of your front proxy and storing it inside your own registry
But do the images have to be co-located with their registry?
Can't the images be somewhere else, and the registry replicated among the nodes, so any node can find its image through the registry and fetch from that location?
You seem to be a big proponent of ansible-pull. Are any of your use cases/implementations publicly available? I'm really interested to see how people are using ansible-pull in production.
I'd like to do something similar. Currently we use a mix of Ansible Tower (which I don't love) and ansible runs from local machines to manage the infrastructure. I'd rather it all be tied into Terraform, though, so that we have a single place to manage changes from
We don't have anything publicly available unfortunately, but we call ansible-pull in the instance userdata to configure the host on startup. IAM policies and Vault integration are used to grant the host access to certain secrets needed by the ansible run.
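The general pattern is small: userdata runs ansible-pull once at boot, and the host configures itself from a git repo. A minimal sketch, assuming cloud-init-style userdata; the repo URL and playbook name are made up, not the commenter's setup:

```
#cloud-config
# Sketch only — repo URL and playbook name are hypothetical
runcmd:
  - ansible-pull -U https://github.com/example/infra.git -i localhost, site.yml
```

Secrets don't live in the repo in this model; the instance's IAM role (and a Vault login based on it) supplies them at run time, as described above.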
> All this with a single command from my computer without the need of anything else! =P
This is my primary problem with ansible. I find that it's been really great for managing things from my local machine, but that model breaks down a little once you have a medium / largish fleet of machines in some cloud provider's space. On top of that, if you have strict security boundaries between different environments/resources, then running ansible scripts that touch a ton of machines becomes more of an exercise in key management than anything else. I know that there are tools out there like AWX and rundeck, which wrap a lot of ansible functionality, but I've found the push model to be a little hard to manage at scale.
We're using ansible almost exclusively for config-management tasks, and I'd like to find a way to make it work better for us, but the agent model used by Puppet/Chef/Salt sounds really appealing, especially when I want to roll a change out to a large set of machines
This is the first time I've heard of Airtable. It looks sort of neat. I'm curious whether it's something you use for personal organization or something you use for work?