America was under a fascist ruler, but not under a fascist system of government.
Trump tested American democracy by consolidating power and was not successful, so we avoided fascist rule.
The fear is that we might get to test democracy again, and most of America doesn't seem to mind that. Maybe it's due to lack of understanding, not caring, or genuinely wanting fascism, I don't know.
Neon seems really great to me, but I wish I could easily run it locally via Kubernetes. I know there are some projects out there[0] but they are all supported by 3rd parties and things like branching and other features don't appear to be supported.
I'd love to be able to just use a helm chart to have Neon in my homelab.
And if you're pairing your infra-as-code with a gitops model then you can help prevent these kinds of issues with PRs.
You can also use your git history to restore the infrastructure itself. You may lose some data, but it's also possible to have destroyed resources retain their data or back up before destruction.
The problem with infra-as-code and gitops is that it's often nearly impossible to tell what will actually happen with a PR without running it somewhere. Which is (1) expensive and (2) nearly impossible to get to mirror production.
Production and staging are about as far from a pure immutable environment as you can get. They carry state around all over the place. It's their entire reason for existing, in some sense.
This means that while git-ops can be helpful in some ways it can also be incredibly dangerous in others. I'm not entirely sure it doesn't all come out in the wash in the end.
GitOps is just like "DevOps" -- you don't really know what it means to a specific org until you talk to them, because people interpret it differently based on their own understanding (or if they have a horse in this race).
To me it always means describing the desired state of your infra in structured data, storing that in git, and running a controller to reconcile it against the actual infra.
If your GitOps engine has to compile/run the "code" to uncover the desired state, that defeats the purpose of GitOps and is no better than running your hand crafted release bash script in a CI/CD pipeline.
It should have never been called infra-as-code, but infra-as-data.
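The "infra-as-data plus a reconciling controller" model above can be sketched in a few lines. This is a minimal toy, not any real GitOps engine's API: `desired` stands in for the data you'd keep in git, `actual` for what the controller observes live, and the controller's job is just to diff the two and emit converging actions.

```python
# Toy sketch of the "infra-as-data" model: desired state is plain data
# checked into git; a controller diffs it against the live system and
# emits the actions needed to converge. All names are hypothetical.

desired = {  # what you'd store in git, e.g. as YAML
    "web": {"replicas": 3},
    "worker": {"replicas": 2},
}

actual = {  # what the controller observes in the live system
    "web": {"replicas": 1},
    "cache": {"replicas": 1},  # drift: exists live but not in git
}

def reconcile(desired, actual):
    """Return the actions needed to converge actual onto desired."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

for action in reconcile(desired, actual):
    print(action)
```

Note there's no "compile the code to discover the desired state" step here: the desired state is literally the data, which is the point being made above.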
This does not change my statement at all though? You fundamentally can't really predict the impact of some changes in a given environment until it's deployed. Just because you can obtain the current state of the environment and reconcile some stuff doesn't change this.
That's why you should call what you store in git the _desired_ state, not anything else. A git repository is not a live database. It's a collection of static text files that change less often than your live system. There will be bugs and misconfiguration, and sometimes the desired state is just technically not reachable, and that's fine. What the actual state is doesn't matter. Leave that to the controller. State drifting is a problem your gitops engine should detect, and should be fixed by the owner of controller code.
Some companies practice infra-as-code, point to their git repo, and tell me "this is the single source of truth for our infrastructure." And I have to tell them that statement is wrong.
This is correct. You need some kind of running check on the environment and, when possible, code that handles exceptional cases.
Sometimes that's as simple as a service that shoots other services in the head to restart them. Other times it's more complicated. But lots of places can't afford to get more complicated than "alert a human and have them look at it".
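That simplest pattern, restart on a failed health check and page a human when even that fails, can be sketched like this. `check_health`, `restart`, and `alert` are hypothetical stand-ins for whatever probe/restart/paging mechanism (HTTP ping, systemctl, PagerDuty, etc.) an org actually uses.

```python
# Toy watchdog: "shoot unhealthy services in the head", and fall back
# to alerting a human when the restart itself fails. The callbacks are
# hypothetical stand-ins for real probe/restart/paging mechanisms.

def make_watchdog(check_health, restart, alert):
    def tick(services):
        for svc in services:
            if check_health(svc):
                continue
            try:
                restart(svc)
            except Exception:
                # Restart failed; can't get fancier, so page a human.
                alert(svc)
    return tick

# Example run with fake probes:
healthy = {"api": True, "queue": False}
restarted, paged = [], []
tick = make_watchdog(
    check_health=lambda s: healthy[s],
    restart=restarted.append,
    alert=paged.append,
)
tick(["api", "queue"])
print(restarted)  # ['queue']
```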
> Probably even better is to ship a controller and a CRD for the config.
But how do you package the controller + CRD? The two leading choices are `kubectl apply -f` on a URL or Helm, and as soon as you need any customization to the controller itself you end up needing a tool like Helm.
Agree. I'd recommend starting with static YAML though. Use kustomize for the very few customisations required for, say, different environments. Keep them to a minimum - there's no reason for a controller's deployment to vary too much - they're usually deployed once per cluster.
> You don’t need to use NAT. Which means you have to set up a firewall on the router correctly. Default-deny, while still allowing ALL ICMP traffic through, as ICMP is kinda vital for IPv6 because it’s used to communicate error conditions.
I do think using NAT in the form of NPTv6 is awesome for home use because it allows you to have a consistent address regardless of your ISP prefix assignment.
Think of NPTv6 as a kind of "stateless NAT" where the external prefix is mapped 1:1 to your internal prefix. This means if your ISP changes your prefix, you only need to update your external DNS instead of all of your devices.
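The 1:1 prefix mapping can be illustrated with Python's `ipaddress` module. This only shows the core idea, swapping the ISP's prefix for a stable internal one while keeping the host bits; real NPTv6 (RFC 6296) also applies a checksum-neutral adjustment that this sketch omits, and the prefixes here are made up.

```python
import ipaddress

# Illustration of the NPTv6 idea: the router swaps the ISP-assigned
# external prefix for a stable internal prefix 1:1, leaving the host
# bits untouched. Real NPTv6 (RFC 6296) also makes the mapping
# checksum-neutral; this sketch only shows the prefix swap itself.

def translate(addr, from_net, to_net):
    """Map addr from one prefix to the other, keeping host bits."""
    from_net = ipaddress.IPv6Network(from_net)
    to_net = ipaddress.IPv6Network(to_net)
    assert from_net.prefixlen == to_net.prefixlen
    host_bits = int(ipaddress.IPv6Address(addr)) & int(from_net.hostmask)
    return ipaddress.IPv6Address(int(to_net.network_address) | host_bits)

# ISP-assigned prefix on the outside, stable ULA prefix on the inside:
inside = translate("2001:db8:1:42::10", "2001:db8:1::/48", "fd00:abcd:1::/48")
print(inside)  # fd00:abcd:1:42::10
```

Because the mapping is stateless and symmetric, running `translate` with the prefixes swapped takes you back to the external address, which is why your internal addressing can survive an ISP prefix change.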
I really wish Oxide had a homelab/consumer-centric offering!
Spec-wise, some low-power systems like an Intel NUC, LattePanda Sigma, or ZimaBoard. You could fit three or four of them in a single 1U with a shared power supply. They could even offer a full 1U with desktop-grade chips on the same sleds.
I have thought about building one myself, but it's a large investment of time that I can't seem to find lately.
It would be great if Oxide had something like Canonical's "Orange Box"/cloud-in-a-box for homelabs, evaluation, training (in the management bits) - and hobby work loads!
I'd imagine they'll get to that eventually; these types of companies generally start at the enterprise level because that's the most profitable and requires closing smaller numbers of deals. Once the product is proven and their support infrastructure is in place they can go for other market segments to try and maximize revenue.
It's not just about maximizing revenue, it's also about getting it into developer hands early (homelabs, side projects, college students, etc) so they can become familiar with it, and become an advocate for it within their company. Cloudflare is a good example of this.
Even just a medium business offering would be great. I'd love to not have to use Dell or HP gear-- anything to get away from the cobbled-together stack of legacy IBM PC compatibility and third-party ODM/OEM stuff glue-and-taped together by the vendor.
On prem. Reliable and inexpensive network connectivity that has any resemblance to a 10G LAN doesn't exist where I am.
I work with some businesses who need very, very reliable, high-bandwidth, and low-latency connectivity to their data. The amortized cost of on-prem beats the cost of any off-prem offering as soon as the cost of the necessary connectivity is factored in.
AWS Outposts is the solution. I like Oxide but people seem to be blind to the actual competition when they focus on Dell as the competitor. AWS has been shipping Outposts racks for years. All prices are public on their website and you can order it today. Nearly every configuration is sub-$500k. Fully managed and AWS supports the entire stack; no buck-passing among vendors, same as Oxide.
I’m not sure where his customers are, but Outpost up/downlinks are supposed to be at least 1 Gbit, and they don’t behave well in situations where the latency to the paired region is high. EBS lazy-loading blocks is great in-region but awful when your ping is 300ms.
I'm talking shops who spend $200-$500K on servers and storage, not north of $1M (which is where this Oxide gear lives). Something like a 1/4 scale Oxide rack, perhaps.
I work at SoftIron, another startup in this space. Our HyperCloud product might be interesting for you. I'm not in sales, so I can't comment on the prices, but I'd guess we're much more competitive since you don't actually need to buy an entire rack of our gear at a time.
That said, where this product-space gets tough is actually scaling it down. It's pretty challenging to create something that is remotely stable/functional in a homelab (space/power/money) budget. Three servers and a switch would probably be the bare minimum. We (and I'm sure Oxide :) scale up like a dream.
This all has me wondering, if I just want to play with stuff in this space as an individual homelabber who earns a tech salary and wants a nicely designed rack-mounted alternative to a mess of unorganized NUCs and cables and whatnot, what are my best options?
If you're willing to spend money on rack-mounted gear you definitely have options, and what you get sort of depends on what you're interested in playing with.
A lot of homelabbers (and even some small businesses) go for Proxmox as a virtualization distribution. I don't use it myself, but IIUC it's effectively a Debian distro packaged to run KVM/LXC, with support for things like ZFS, Ceph, etc. It has some form of HA, an API used by standard open source devops tools, handles live migration, etc.
So buy some used rack-servers on Ebay (or new, if you're ballin'). A lot of businesses sell their old stuff, so you can pick up a generation or two out of date for a good price. If you want to do fancy stuff like K8s, Ceph, etc you'll probably want at least three nodes, ideally more, and a bunch of disks in them. Networking gear is a sort of pick your poison thing. A lot of people love Ubiquiti gear; a lot of people hate it. TP-Link is another that's good and budget friendly. StarTech sells smallish racks (including on Amazon), if you want to start there.
It won't look exactly like SoftIron's HyperCloud or Oxide's Cloud Computer, but you can certainly get pretty sophisticated.
Not sure if this answers your question, but other great spaces to explore are the 2.5 Admins and Self-Hosted podcasts.
I'm really thinking mostly about the hardware part here, and maybe just enough layers of the stack to feel like an integrated hardware setup. Let the nerds play with whatever software they want above that.
To go ahead and dream a bit:
I'd hope for an online configurator like the one SoftIron's HyperCloud has [1] but instead of "talk to a sales rep", show a price for what you just configured, like you're configuring a macbook.
Relatedly, there should be a standard rack form factor in the size category of NUCs and Mac Minis, rather than having to go all the way to the 19 inch monster racks that medium to large businesses use. If it were nailed down to the point of being able to blind mate (just learned that term from Oxide's article here!) gear into it, that would be kind of perfect.
Unfortunately they are not planning home lab things anytime soon, per a recent podcast episode [0].
If you want to play around with their Hubris OS: “You wanna buy an STM32H753 eval board. You can download Hubris, and then you’ve got – you’ve got an Oxide computer. You have it for 20 bucks.”
Not 1U but perhaps a box design that isn’t noisy like a pizza box server.
Don’t know if Oxide would want to or be able to compete in the low-cost market, but a bigger, more expensive desktop/workstation as a mini homelab cloud could be a great option to get people trained on the Oxide platform.
I agree with you, but I think the idea is that the underlying storage engine affords you functionality that you can't mimic yourself / elsewhere and therefore your capabilities to leave are limited.
You're always "locked in" to some degree, and guarding against it is an almost worthless thing to invest in if you never actually need to migrate off.
I really wish there was a device like this that could be used as a type of "blade server" that could be inserted into a standard 1U/2U short-depth rack-mounted chassis. That would let me re-use my existing 8U wall-mounted rack instead of a shelf with zip-ties.