geoctl's comments

With modern CXL/PCIe, I guess it's not that far-fetched to claim that RAM/the memory controller is slowly becoming I/O in its own right.


IBM's old term for RAM was "storage."


I wonder whether the current huge funding of AI will ever lead to a revolution in computer architecture. Modern PCIe/CXL is already starting to blur the difference between memory and I/O. Maybe in the future CPUs, RAM, storage devices, GPUs and other devices will directly address one another like a mesh network. Maybe the entire virtual memory model will change so that everything is addressed via a unified virtual memory space from a process/CPU perspective, with simple read/write syscalls that translate into network packets flowing between the CPU and the destination (e.g. RAM/GPU/NVMe) and vice versa.


Don't we already almost (but not quite) have that? PCIe devices can talk directly to each other (still centralized AFAIK, though), and from the programmer's perspective everything you mentioned is mapped into a single unified address space prior to use (admittedly piecemeal, via mmap, for peripherals).

Technically there's nothing stopping me from mmapping an entire multi-terabyte NVMe drive at the block level, except for the part where I don't want to reimplement a filesystem from scratch, in addition to needing to share it between lots of different programs.
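As a minimal sketch of what that block-level mmap looks like in Python — using an ordinary file as a stand-in for a real device node like /dev/nvme0n1, since mapping the device itself would need root, while the mmap call is identical:

```python
import mmap
import os

# Stand-in for a block device such as /dev/nvme0n1 (mapping the real
# device node needs root; the mmap call itself is the same).
path = "fake_blockdev.img"
with open(path, "wb") as f:
    f.truncate(1 << 20)  # pretend this is a 1 MiB "device"

fd = os.open(path, os.O_RDWR)
try:
    # Map the whole "device" into the process address space.
    mem = mmap.mmap(fd, 0)
    mem[0:4] = b"RIFF"   # write to "block" 0 with plain memory ops
    print(mem[0:4])      # reads go through the same mapping
    mem.flush()          # msync dirty pages back to the storage
    mem.close()
finally:
    os.close(fd)
    os.remove(path)
```

From here on, "reimplementing a filesystem" is exactly the problem of deciding what bytes at which offsets mean — the mapping gives you only a flat address range.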


Is it? I honestly kinda believe that etcd is probably the weakest point in vanilla k8s. It is simply unsuitable for write-heavy environments and causes lots of consistency problems under heavy write loads; it's generally slow, it has value-size constraints, it offers very primitive querying, etc. Why not replace etcd altogether with something like Postgres + Redis/NATS?


That touches on what I consider the dichotomy of k8s: it's a really scalable system that also makes it easy to spin up a cluster locally on your laptop and interact with the full API just like in prod. So it's a super scalable system with a dense array of features. But paradoxically, most shops won't ever need the vast majority of k8s features, and by the time they scale to where they do need a ton of distributed-init features, they're extremely close to the point where they'd be better served by a bespoke system conceived from scratch in house that solves problems very specific to the business in question. If you have many thousands of k8s nodes, you're probably in the gray area of whether using k8s is worth it, because the control loop of k8s will never be as fast as a centralized push control plane compared to the k8s pull/watch control plane. And naturally, at scale that problem will only compound.


but it's also standard, you can hire for it, outsource it, etc.

and it's pretty modular too, so it can even serve as the host for the bespoke whatever that's needed

though I remember reading the fly.io blog post about their custom scheduler/allocator, which illustrates nicely how much of a difference a custom in-house solution makes if it works well


The other draw: Because k8s is open, you can easily hire employees, contractors, consultants and vendors and have them immediately solve problems within the k8s ecosystem. If you run a bespoke system, you have to train engineers on the system before they can make large contributions.


> Why not replace etcd altogether with something like Postgres + Redis/NATS?

The holy Raft protocol is the blockchain of the cloud.


You can do leader election without etcd. The thing etcd buys you is you can have clusters of 3, 5, 7 or 9 DB nodes and lose up to 1, 2, 3, or 4 nodes respectively. But honestly, the vast majority of k8s users would be fine with a single SQL instance backing each k8s cluster and just running two or more k8s clusters for HA.

k3s doesn't require etcd, I'm pretty sure GKE uses Spanner and Azure uses Cosmos under the hood.
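The fault-tolerance numbers in the comment above are just majority math; a quick sketch:

```python
# Raft-style quorum: a cluster of n voting members tolerates
# f = (n - 1) // 2 member failures, because a strict majority
# of the original n must remain reachable to commit writes.
def tolerated_failures(n: int) -> int:
    return (n - 1) // 2

for n in (3, 5, 7, 9):
    print(f"{n} nodes -> lose up to {tolerated_failures(n)}")
# 3 nodes -> lose up to 1
# 5 nodes -> lose up to 2
# 7 nodes -> lose up to 3
# 9 nodes -> lose up to 4
```

This is also why even-sized clusters buy nothing: 4 nodes tolerate the same single failure as 3, while adding one more machine that can fail.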


While WireGuard makes every sense for an FPGA due to its minimal design, I wonder why there isn't much interest in using QUIC as a modern tunneling protocol, especially for corporate use cases. QUIC already provides an almost complete WireGuard alternative via its datagrams, which can easily be combined with TUN devices and custom authentication schemes (e.g. mTLS, bearer tokens obtained via OAuth2 and OIDC authentication, etc.) to build your own VPN. I am not sure about performance, at least compared to kernel-mode WireGuard, since QUIC is obviously a more complex state machine running in userspace, and it depends on the implementation and the optimizations offered by the OS (e.g. GRO/GSO). But QUIC isn't just yet another tunneling protocol; it actually offers lots of benefits: it works well with dynamic endpoints addressed via DNS instead of just static IP addrs; it uses modern TLSv1.3 and is therefore FIPS-compliant, for example; it uses AES, which can be accelerated by the underlying hardware (e.g. AES-NI); it currently has implementations in almost every major programming language; it can work well in the future with proxies and load balancers; you can bring your own custom, more fine-grained authentication scheme (e.g. bearer tokens, mTLS, etc.); it masquerades as just more of the QUIC/HTTP3 traffic used by almost all major websites now, and is therefore less susceptible to being dropped by nodes in between; and it has other, less obvious benefits such as congestion control and PMTUD.
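The datagram-tunnel idea above can be sketched schematically (the names here are hypothetical stand-ins, not any real QUIC library's API): IP packets read from a TUN device are carried one-to-one inside QUIC DATAGRAM frames (RFC 9221), so none of QUIC's stream/retransmit machinery touches the tunneled traffic:

```python
from dataclasses import dataclass

# Hypothetical stand-in types: a real VPN would read ip_packet from a
# TUN file descriptor and send it via a QUIC library that supports
# unreliable datagrams (RFC 9221).
@dataclass
class QuicDatagram:
    payload: bytes  # one whole IP packet; the tunnel never fragments it

def encapsulate(ip_packet: bytes, max_datagram: int) -> QuicDatagram:
    # Datagram frames are unreliable and size-limited; a packet that
    # doesn't fit must be rejected so PMTUD can shrink the inner MTU.
    if len(ip_packet) > max_datagram:
        raise ValueError("inner packet exceeds QUIC datagram size")
    return QuicDatagram(ip_packet)

def decapsulate(dgram: QuicDatagram) -> bytes:
    # On the far side this payload is written straight to the TUN device.
    return dgram.payload

pkt = b"\x45\x00" + b"\x00" * 38  # fake 40-byte IPv4 packet
assert decapsulate(encapsulate(pkt, max_datagram=1350)) == pkt
```

Authentication (mTLS, bearer tokens) then rides on the QUIC handshake and streams, entirely outside this per-packet path.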


Why would anyone want to use a complex kludge like QUIC and be at the mercy of broken TLS libraries, when Wireguard implementations are ~ 5k LOC and easily auditable?

Have all the bugs in OpenSSL over the years taught us nothing?


FWIW QUIC enforces TLS 1.3 and modern crypto. That's a much smaller surface area and far fewer foot-guns. Combined with memory-safe TLS implementations in Go and Rust, I think it's fair to say things have changed since the heartbleed days.


> I think it's fair to say things have changed since the heartbleed days.

The Linux Foundation is still funding OpenSSL development after scathing review of the codebase[1], so I think it's fair to say things haven't changed a bit.

1: https://www.openbsd.org/papers/bsdcan14-libressl/


Wireguard uses "modern crypto"


QUIC allows identities to be signing keys, which are used to build public key infrastructure. You need to be able to sign things to do web-of-trust, or make arbitrary attestations.

Wireguard has a concept of identity as long term key pairs, but since the algorithm is based on Diffie-Hellman, and arriving at a shared secret ephemeral key, it's only useful for establishing active connections. The post-quantum version of Wireguard would use KEMs, which also don't work for general purpose PKI.

What we really need is a signature based handshake and simple VPN solution (like what Wireguard does for the Noise Protocol Framework), that a stream multiplexing protocol can be layered on top of. QUIC gets the layers right, in the right order (first encrypt, then deal with transport features), but annoyingly none of the QUIC implementations make it easy to take one layer without the other.


"Have all the bugs in OpenSSL over the years taught us nothing?"

TweetNaCL to the rescue.


MASQUE[0] is the protocol for this. Cloudflare already uses MASQUE instead of WireGuard in their WARP VPN.

[0]https://datatracker.ietf.org/wg/masque/about/


i was curious about this and did some digging around for an open source implementation. this is what i found: https://github.com/iselt/masque-vpn


I've recently spent a bunch of time working on a mesh networking project that employs CONNECT-IP over QUIC [1].

There are a lot of benefits for sure, mTLS being a huge one (particularly when combined with ACME). For general-purpose, hub-and-spoke VPNs, tunneling over QUIC is a no-brainer. It's trivial to combine with JWT bearer tokens etc. It's a neat solution that should be used more widely.

However, there are downsides, and those downsides are primarily performance related, for a bunch of reasons: some are just poorly optimized library code, others are the relatively high message parsing/framing/coalescing/fragmenting costs and userspace UDP overheads. On fat pipes today you'll struggle to get more than a few Gbit/s of throughput at 1500 MTU (which is plenty for internet browsing, for sure).

For fat pipes and hardware/FPGA acceleration use cases, Google probably has the most mature approach here with their datacenter transport PSP [2]: basically a stripped-down, per-flow IPsec. In-kernel IPsec has gotten a lot faster and more scalable in recent years with multicore/multiqueue support [3]. Internal benchmarking still shows IPsec on Linux absolutely dominating performance benchmarks (throughput and latency).

For the mesh project we ended up pivoting to a custom offload friendly, kernel bypass (AF_XDP) dataplane inspired by IPsec/PSP/Geneve.

I'm available for hire btw, if you've got an interesting networking project and need a remote Go/Rust developer (contract/freelance) feel free to reach out!

1. https://www.rfc-editor.org/rfc/rfc9484.html

2. https://cloud.google.com/blog/products/identity-security/ann...

3. https://netdevconf.info/0x17/docs/netdev-0x17-paper54-talk-s...


Is QUIC related to Chrome's WebTransport implementation? Seems pretty cool to have that as a browser API.


Now that's an interesting, and wild, idea.

I don't believe you could implement RFC 9484 directly in the browser (the missing capsule APIs would make upgrading the connection impossible). WebTransport does support datagrams, though, so you could very well implement something custom.


The purpose of Wireguard is to be simple. The purpose of QUIC is to be compatible with legacy web junk. You don't use the second one unless you need the second one.


QUIC isn't really about the web, it's more of a TCP+TLS replacement on top of UDP. You can build your own custom L7 on top of QUIC.


QUIC uses Web PKI and TLS. TLS is not a simple protocol and the main reason to use it over something simpler is if you need it to be compatible with something else that already uses it, like HTTPS.


The main reason to use TLS is that you can get a bunch of off-the-shelf implementations that are (post-Heartbleed) the most heavily scrutinized public cryptographic implementations in existence. Plus if anyone finds a practical exploit of TLS (or a major implementation), they’re more likely to go steal credit card numbers being typed into Amazon than to attack your particular use of it. Noise is cool but if you don’t need the same flexibility that Wireguard does (or have the expertise to implement a concrete protocol on top of it correctly), something built on TLS 1.3 is a better bet.


I'm not even convinced that a random TLS library would get non-trivially more scrutiny than Wireguard does, and on top of that it would need more scrutiny because it's significantly more complicated which is a synonym for attack surface.

And the "more valuable targets" argument is self-defeating because if there aren't as many high value targets using something then there aren't as many attackers looking for vulnerabilities in it either. Moreover, if someone finds one in TLS (or anything) then they can launch exploits against multiple targets simultaneously rather than waiting to move on to the second target until after the first investigates the attack and publishes a patch for everyone else to use.


Sure, they’ll get every credit card typed into Walmart’s website too. Cisco’s IKE implementation has had vulnerabilities (definitely still more widely deployed than Wireguard unfortunately), but almost nobody has heard about those. I don’t think they even had a cutesy name!

My point isn’t that Wireguard should’ve used TLS/QUIC. Is that if you want a connection oriented transport encryption, you should almost certainly use TLS 1.3 in some fashion even if web compatibility isn’t a concern.


You can build a custom L7 on top of anything, really. I think my favorite was tcp/ip over printers and webcams.

The question is what does QUIC get you that UDP alone does not? I don't know the answer to that. Is it because firewalls understand it better than native wireguard over UDP packets?


Mostly because WireGuard (intentionally) didn't bother with obfuscation https://www.wireguard.com/known-limitations/

> WireGuard does not focus on obfuscation. Obfuscation, rather, should happen at a layer above WireGuard, with WireGuard focused on providing solid crypto with a simple implementation. It is quite possible to plug in various forms of obfuscation, however.

This comment https://news.ycombinator.com/item?id=45562302 goes into a practical example of QUIC being that "layer above WireGuard" which gets plugged in. Once you have that, one may naturally wonder "why not also have an alternative tunnelling protocol with <the additional things built into QUIC originally listed> without the need to also layer Wireguard under it?".

Many of QUIC's design decisions are in direct opposition to WireGuard's. E.g. WireGuard intentionally has no AES and no user-selectable ciphers; QUIC has both. WireGuard has no obfuscation built in; QUIC does (plus the happy fact that when you obfuscate traffic by using it, it looks like standard web traffic). WireGuard doesn't support custom authentication schemes; QUIC does. Both are reasonable tunneling protocol designs, just with different goals.


I think maybe it's easier for an adversarial network admin to block QUIC altogether.


The hope with QUIC is that encrypted tunnels which look and smell like standard web traffic are first on the list of allowed traffic tunneling methods. It works (surprisingly) a lot more often than hoping an adversarial network/security admin doesn't block known VPN protocols (even when they are put on 443). It also doesn't hurt that "normal" users (unknowingly) try to generate this traffic, so opening a QUIC connection on 443 and getting a failure makes you look like "every other user with a browser" instead of "an interesting note in the log".

I.e. the advantage here is any% + QUIC%, where QUIC% is the additional chances of getting through by looking and smelling like actual web traffic, not a promise of 100%.


QUIC could be allowed, but only if it can be decrypted by the adversarial admin.

If the data can't be decrypted (or doesn't look like plain text web traffic) by the adversarial network admin, the QUIC connection can just be blocked.

Work laptops typically have a root cert installed allowing the company to snoop on traffic. It's not unfeasible for a nation state to require one for all devices either.


Are you arguing "QUIC has no more of a chance of getting through than Wireguard" or "QUIC doesn't stop all forms of blocking from working"? Nobody will disagree with the latter, regardless of protocol, but I'm not sure I follow on what these points have to do with the former.


If you work in a highly monitored environment, all HTTPS transactions are decoded -- because typically there's a root cert installed. That is one form of an adversarial admin, say. You can limit HTTPS traffic to port 443, and only allow it if the firewall can see the full TLS handshake. You can maybe see China doing this, e.g.

The next step is to block all connections that can't be decoded by the root cert. That's not really that far off when you think about it. If it's not typical HTTPS/HTML traffic, the adversarial network admin can simply drop packets and connections.

A similar thing is happening today in Spain when a soccer game is on. If anything looks suspicious they pretty much block the subnet, because it's easier to block entire subnets than to figure out how to block the protocols that transmit the pirate stream. This is acceptable in Spain, I guess. I'm not sure why.

I'm arguing if an adversarial network admin decides to nix QUIC on the network because they can't detect a VPN, don't be surprised when it suddenly happens worldwide until QUIC helps them (or Broadcom, e.g.) figure out which streams are VPNs and which aren't.


Blocking QUIC blocks a sizeable fraction of the web


Encryption and reliable transport.


You really don't want reliable transport as a feature of the tunnel unless you are _intimately_ familiar with what all of the tunneled traffic is already doing for reliable transport.

The net result of two reliable transports which are unaware of each other is awful.


I probably should have clarified that question.

What does QUIC get you that TCP over Wireguard over UDP does not?


Where is DNS on top of QUIC? Asking unironically.


There is actually. A way more interesting re-implementation of a popular L7 is SSH over QUIC. SSH has to implement its own mutual authentication and transport embedded in the protocol implementation since it operates on top of plaintext TCP, but with QUIC you can just offload the authentication (e.g. JWT bearer tokens issued by IdPs verified at L7 or automatically via mTLS x509 certs) and transport parts to QUIC and therefore have a much more minimal implementation.


“Offloading” authentication onto complex web tech isn’t really a feature unless you already need to be operating in the web space for some other reason.


I feel like fans of `mosh` would run with this.


It is already there. It is called DNS over HTTP/3 (DoH3).


That's DoQ, RFC 9250.


What legacy junk is QUIC compatible with? It doesn’t include anything HTTP-related at all. It’s just an encrypted transport layer.


It’s multi stream, reliable connections. WireGuard’s encryption over UDP is none of those things. WireGuard encryption is simpler and far more flexible, but also less capable.


I’m not advocating WireGuard’s transport be replaced with QUIC (they’re solutions for very different problems), but that doesn’t mean QUIC is saddled with legacy junk. Most applications want protocols that are connection-based and optionally offer retransmit - that’s not legacy junk, that’s just what is called for in most cases. L3 encryption is an unusual application in that it doesn’t call for these properties.


Mullvad offers exactly this combination of WireGuard in QUIC, for obfuscation and to make traffic look like HTTPS -- https://mullvad.net/en/blog/introducing-quic-obfuscation-for...


WireGuard-over-QUIC doesn't make any sense to me; it lowers performance and possibly shrinks the inner WireGuard MTU. You can just replace WireGuard with QUIC altogether if all you want is obfuscation.


It's not about performance, of course. It's about looking like HTTPS, being impenetrable, separating the ad-hoc transport encryption from the WireGuard encryption (which also works as authentication between endpoints), and also not being TCP inside TCP.


You can just do that by using QUIC-based tunneling directly instead of using WireGuard-over-QUIC and basically stacking 2 state machines on top of one another.


TCP over WireGuard is two state machines stacked on each other. QUIC over WireGuard is the same thing. Yet both seem to work pretty well.

I think I see your argument, in that it's similar to what sshuttle does to eliminate TCP over TCP through ssh. sshuttle doesn't prevent HOL blocking though.


TCP over WireGuard is unavoidable; that's the whole point of tunneling. But TCP over WireGuard over QUIC just doesn't make any sense, from either a performance or a security perspective. Not to mention that with every additional tunneling layer you need to reduce the MTU of all inner tunnels (already a very restricted sub-1500 value even without tunneling).
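A back-of-the-envelope sketch of that MTU shrinkage (the overhead numbers are approximate; QUIC's per-packet cost varies with header size and AEAD choice, and real WireGuard deployments default to an inner MTU of 1420 to also cover IPv6 outer headers):

```python
# Approximate per-layer overheads, in bytes, on an IPv4 path.
ETH_MTU = 1500
IPV4, UDP = 20, 8
WG_OVERHEAD = IPV4 + UDP + 32    # WG data header (16) + AEAD tag (16)
QUIC_OVERHEAD = IPV4 + UDP + 28  # rough: short header + datagram frame + tag

inner_mtu_wg = ETH_MTU - WG_OVERHEAD                   # plain WireGuard tunnel
inner_mtu_wg_over_quic = inner_mtu_wg - QUIC_OVERHEAD  # WG nested inside QUIC

print(inner_mtu_wg)            # 1440 on this arithmetic
print(inner_mtu_wg_over_quic)  # 1384: each nesting layer eats more headroom
```

Every extra layer subtracts its own headers from what the innermost TCP connection can carry per packet, which is why deep nesting hurts both throughput and fragmentation behavior.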


> But TCP over WireGuard over QUIC just doesn't make any sense

Agreed, but that wasn't what I was saying. Read it carefully next time before downvoting.

If the argument is that WireGuard is a state machine, well, TCP over WireGuard works just fine. And that's exactly what I said.


Probably simplifies their clients and backends I'd imagine?


See also Obscura's approach of QUIC bridges to Mullvad as a privacy layer: https://obscura.net/blog/bootstrapping-trust/


That assumed mentality of "being flexible" is the very thing WireGuard was created to fight against in the first place; otherwise why bother? IPsec is already standardized, flexible, and has widespread hardware implementations (both FPGA and ASIC).


I think standards operate according to punctuated equilibrium so the market will only accept one new standard every ten years or so. I could imagine something like PQC causing a shift to QUIC in the future.


Why are you taking from people their will to experiment and design new stuff? Are they using your money or time? Is this just out of grumpiness, envy, condescension or what?


QUIC is a corporate-supported black hole. Corporations are anti-human. It's a wonder that there is still some freedom to make useful protocols on the internet and that people are nice enough to do that.


[flagged]


Do you have examples of stable systems?


CompCert would be a good example, but everything I have done professionally is also stable; with exclusively people like me, bug tracking systems would not need to exist.

I also have made some software that is proven (meaning from a small 500 line proof kernel) to be correct relative to a trivial implementation (and yes, full correctness is difficult to achieve).


> When was the last time that the author of "grep" was recognized as a great programmer? Never.

Ken Thompson wrote grep, and he is definitely recognised as such.


man -T grep | grep 'Free Soft\|Thom'

  (Cop)108 348 Q(yright 1998-2000, 2002, 2005-2023 Free Softw)-.1 E(are F)
Sure, he wrote _a_ version of grep, and probably the first, but who cares? "The" current version of grep (sure, you might run some BSD grep) certainly doesn't credit him.


No, he wrote grep. Before he wrote it there was no grep. And yes, he's recognized as a great programmer. With Multics, Unix, B, C, UTF-8 Plan9, Inferno and grep to his name (and probably others that I forgot) he has more than deserved that.

Future grep versions, including the FSF one, were all re-implementations.

Your statement in the GP is nonsensical.


I do not agree he was a great programmer. All of his programs are trivial from a computer science perspective.

In fact, you can quite easily check this by trying to let an LLM generate a program like grep. It can do that. Now, there also exist programs for which LLMs can't generate code, because it's too complex.


You have absolutely no idea what you are talking about. And that's fine but it is kind of adding a lot of noise and zero signal.


[flagged]


Yes, so you say, and I'm the pope on alternate Sundays. Appeals to authority are meaningless without identity. Meanwhile, I highly doubt you are qualified to polish Thompson's shoes, all I see is an AC novelty account making dumb claims with hindsight. Anyway, enough with you, off to the ignore list.


Not sure what kind of idiotic website this is where people more qualified than the average idiot here are "flagged". Thompson is completely irrelevant to computer science. Any idiot can write a program, but only some people can make an actual contribution. Knuth actually did something useful in comparison. Also, Knuth was able to articulate.

You are the nobody here.


I'm just saying this is incorrect:

> When was the last time that the author of "grep" was recognized as a great programmer? Never.

He is recognised as that. Your opinion on him is nothing to do with anything.


I am working on Octelium https://github.com/octelium/octelium a FOSS unified zero trust secure access platform that is flexible enough to operate as a modern zero-config remote access VPN, a Zero Trust Network Access (ZTNA)/BeyondCorp platform, an API/AI/LLM gateway, an infrastructure for MCP gateways and agentic AI architectures/meshes, a PaaS-like platform, ngrok alternative, and even as a homelab infrastructure. It is basically a unified, generic, Kubernetes-like, zero trust architecture (ZTA) for secure access and deployment, that can operate in many human-to-workload, workload-to-workload, and hybrid environments.

I actually did a Show HN exactly 3 months ago and received lots of invaluable critique regarding how dense, overwhelming and unreadable the docs and repo README were. I've spent a lot of time trying to improve the quality of the docs and README since then. I'd love to receive any feedback, negative included, regarding the current overall quality of the docs and README from whoever is interested in that space.


Can you communicate the value of Octelium in 25 words or less?


Thank you really for your detailed comment. I will try to answer your questions and please don't hesitate to ask in the Slack/Discord channels or contact emails later if the answers here weren't clear enough to you.

1. Octelium Services and Namespaces are not really related or tied to Kubernetes services and namespaces. Octelium resources in general are defined in protobuf3 and compiled to Golang types, and they are stored as serialized JSON in the Postgres main data store, simply as JSONB. That said, Octelium Services in particular are actually deployed on the underlying k8s cluster as k8s services/deployments. Octelium resources visually look like k8s resources (i.e. they both have the same metadata, spec, status structure), but Octelium resources are completely independent of the underlying k8s cluster; they aren't k8s CRDs as you might initially guess. Also, Octelium has its own API server which does REST-y gRPC-based operations on the different Octelium resources against Postgres, via an intermediate component called the rscServer. As I said, Octelium and k8s resources are completely separate regardless of the visual YAML resemblance. As for managed containers, you don't really have to use them; they're an optional feature. You can deploy your own k8s services via kubectl/helm and use their hostnames as upstreams for Octelium Services to be protected like any other upstream. Managed containers are meant to automate the entire process of spinning containers up, scaling them up and down, and eventually cleaning up the underlying k8s pods and deployments once you're done with the owning Octelium Service.

2. Secret management in Octelium is by default stored in plaintext. That's a conscious and deliberate decision as declared in the docs because there isn't any one standard way to encrypt Secrets at rest. Mainline Kubernetes itself does exactly the same and provides a gRPC interface for interceptors to implement their own secret management (e.g. HashiCorp Vault, AWS KMS/GCP/Azure offerings, directly to software/hardware-based HSMs, some vault/secret management vendor that I've never heard of, etc...). There is simply no one way to do that, every company has its own regulatory rules, vendors and standards when it comes to secret management at rest. I provided a similar gRPC interface for everybody to intercept such REST operations and implement their own secret management according to their own needs and requirements.

3. Octelium has Session resources https://octelium.com/docs/octelium/latest/management/core/se... Every User can have one or more Sessions, where every Session is represented by an opaque JWT-like access token. These are used internally by the octelium/octeliumctl clients after a successful login, they are also set as HTTPOnly cookies for BeyondCorp browser-based Sessions, and they are used directly as bearer tokens by WORKLOAD Users for client-less access to HTTP-based resources. You can actually set different permissions for different Users, and also different permissions for different Sessions of the exact same User, via the owning Credentials or even via your Policies. The OAuth2 client credentials flow is only intended for WORKLOAD Users. Human Users don't really use OAuth2 client credentials at all; they just log in via OIDC/SAML via the web Portal, or manually via an issued authentication token, which is not generally recommended for HUMAN Users. OAuth2 is meant for WORKLOAD Users to securely access HTTP-based Services without using any clients or SDKs. OAuth2 scopes are not really related to zero trust at all, as mentioned in the docs. OAuth2 scopes are just an additional way for applications to further restrict their own permissions, not to add new ones beyond those already set by your own Policies.

4. An Octelium Cluster runs on top of a k8s cluster. In a typical multi-node production k8s cluster, you should use at least one node for the control plane and another separate node for the data plane, and scale up if your needs grow. Knowledge of Kubernetes isn't really required to manage an Octelium Cluster. As I mentioned above, Octelium resources and k8s resources are completely separate. The one exception where you directly have to deal with the underlying k8s cluster is setting/updating the Cluster TLS certificate; this cert needs to be fed to the Octelium Cluster via a specific k8s secret, as mentioned in the docs. Apart from that, you don't really need to understand anything about the underlying k8s cluster.

5. To explain Octelium more technically from a control plane/data plane perspective: Octelium does with identity-aware proxies something similar to what Kubernetes itself does with containers. It builds a whole control plane around the identity-aware proxies, which themselves represent and implement the Octelium Services: it automatically deploys them whenever you create an Octelium Service via `octeliumctl apply`, scales them up and down whenever you change the Octelium Service replicas, and eventually cleans them up whenever you delete the Octelium Service. As I said, it's similar to what k8s itself does with containers, where your interactions with the k8s API server, whether via kubectl or programmatically, are enough for the k8s cluster to spin up the containers and do all the CRI/CNI work for you automatically, without your even having to know or care which node actually runs a specific container.

As for the suggestions:

1. I am not really interested in SaaS offerings myself and you can clearly see that the architecture is meant for self-hosting as opposed to having a separate cloud-based control plane or whatever. I do, however, offer both free and professional support for companies that need to self-host Octelium on their own infrastructure according to their own needs.

2. I am not sure I understand this one. But if I understood you correctly, then as I said above, Octelium resources are completely different and separate from k8s resources, regardless of the visual YAML resemblance. Octelium has its own API server, rscServer and data store, and it does not use CRDs or mess with the k8s data store, whether it be etcd or something else.


Thank you really for your kind comment. Most of the links regarding how Octelium works, the quick management and installation guides, the examples (e.g. API/AI/MCP gateways, etc...) were actually included in the README itself. However, most of the criticism was supposedly coming from the terms used in the README. I was already assuming that the users are somewhat familiar with zero trust and zero trust architectures. Maybe that was the problem.


No, Octelium does not use a hub-and-spoke model. It's a distributed system with a horizontally scalable architecture on top of Kubernetes. This design is meant to provide seamless horizontal scalability and availability, among other things.


Octelium's author here. You don't give me access to anything. The project is 100% open source and designed specifically for self-hosting. I don't even know whether you're using the project or not since there isn't usage telemetry to begin with. As for the SSH part of your weird comment, I wonder whether you even understand what embedded passwordless SSH means in the first place.


Thank you. I'd advise you to read how it works https://octelium.com/docs/octelium/latest/overview/how-octel... and the quick management guide https://octelium.com/docs/octelium/latest/overview/managemen... to get a clearer idea about how it works and how it's managed. The docs are full of examples for specific use cases too such as https://octelium.com/docs/octelium/latest/management/guide/s... https://octelium.com/docs/octelium/latest/management/guide/s...


One more thing regarding the CRDs. Octelium resources and k8s resources look similar from a YAML perspective. However, Octelium actually uses protobuf: all the resources are defined in proto3 and compiled to Go, then the Golang resources are serialized to JSON and stored as JSONB in the Postgres data store of the Cluster. I guess that's another reason you might think Octelium resources are CRDs, but they actually are not.

