This only works if your environment is bare metal to begin with. You may be in an environment where many applications are proprietary, virtualized, or otherwise constrained — one where you don't get to pick and choose how everything is set up. In that case, a distinct small application that fits as "an OS" in a virtualized environment is often a much better fit than trying to retrofit a bare-metal containerized setup into a large enterprise datacenter where everything is already running Windows servers and proprietary services, managed by people other than yourself.
> While recording the versions of all packages used is a step in the right direction, an even more comprehensive solution is to package up the entire environment using something like Docker, or to use online computation, such as Google’s colaboratory notebooks.
An even better way would be to describe the environment with Guix channels (or something similar) accompanying the code, or with a Nix flake, or any comparable environment description with a fully fixed dependency chain. Docker can be _forced_ to use fixed versions, but a single `apt update` in the Dockerfile ruins that completely; Nix and Guix, on top of providing environments that run your code with the exact same set and versions of tools, can also generate container images that can be shared.
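As a sketch of what that looks like (a hypothetical `flake.nix`; the package set here is just an example), a flake pins its inputs to exact revisions in `flake.lock`, so `nix develop` reproduces the same toolchain on any machine:

```nix
# Hypothetical flake.nix for a small analysis project. The nixpkgs input is
# locked to an exact commit in flake.lock, so everyone gets identical tools.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux; in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        # Illustrative package choices, not a recommendation.
        packages = [ pkgs.python3 pkgs.python3Packages.numpy ];
      };
    };
}
```

And `nix build` on such a setup can emit an OCI image for the people who still want a container.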
If you want some of the ergonomics and DX of Rust, but in a GC'd language, OCaml (also the language the early Rust compilers were implemented in) might be the path to take. Great tooling, sensible error handling, and pattern matching let you move business logic faster and focus on shaping data rather than transforming its bytes.
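For a concrete taste — a made-up `parse_port` function, not from any real codebase — this is roughly what `Result`-based error handling plus pattern matching looks like:

```ocaml
(* Hypothetical example: validate a port number, returning a typed Result
   instead of raising. The compiler forces callers to handle both arms. *)
let parse_port (s : string) : (int, string) result =
  match int_of_string_opt s with
  | Some p when p > 0 && p < 65536 -> Ok p
  | Some p -> Error (Printf.sprintf "port %d out of range" p)
  | None -> Error (Printf.sprintf "%S is not a number" s)

let () =
  match parse_port "8080" with
  | Ok p -> Printf.printf "listening on %d\n" p
  | Error e -> prerr_endline e
```

The `match` on the `Some p when …` guard is the kind of "shape the data" style the parent comment is talking about.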
The article doesn't make sense. This can already technically work with any of a bunch of dynamic DNS systems. An IPv6 address isn't fixed to your device the way a MAC address is.
Even assuming it would be doable, it would be a security nightmare, and we'd end up relying on some centralised systems anyway lest we burden the internet with absolutely insane amounts of continual p2p discoveries.
If you want a personal website, why not use a service like Neocities? It's free, and just lets you go ham with static content. Don't feel like writing HTML pages manually? Make a TiddlyWiki and upload that.
To be clear, I am all for self hosting stuff, but it needs to be in a proper, affordable, standardized package that can be kept secure and useful. A phone is NOT that.
Why not? I download updates to apps on my phone every day. In fact, my old phone that long ago stopped receiving updates still runs the latest browsers just fine. The problem isn't keeping the applications up to date.
There's a risk of kernel exploits, but I can't remember the last time the Android kernel had a bug that could be triggered by simply sending packets to it. Privilege escalation might work, but getting root on Android is a lot harder than on most Linux servers because of the very strict and isolated SELinux contexts.
I've installed Termux on my phone, and I can install nginx with a single command. Downloading a Debian chroot and launching a full, maintained Linux distro is two commands away.
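For the curious, assuming current Termux package names (`proot-distro` is the package Termux ships for managing chroots), the setup looks like this — run inside Termux on the phone, not on a desktop:

```
# One command for a web server:
pkg install nginx

# Two commands for a full Debian userland:
pkg install proot-distro
proot-distro install debian

# ...then drop into it whenever you like:
proot-distro login debian
```

No root required; proot fakes the chroot in userspace.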
Until remotely triggered Android kernel exploits become a thing, I don't think the updates are the problem here.
That seems exceptionally reasonable, nomad. That would be pretty easy to anonymize as well. It's a pleasure to meet you.
I'll add that Resilio Sync + a single-file TiddlyWiki (I think most people would be surprised by what TW can accomplish) from a phone is quite workable (a filewatcher with ratox or maybe toxic, or IPFS, would do as well, but they aren't as performant or turnkey). You can automatically push with custom conditions (or manually do so) to those listening (the burden has to be shifted away from the phone to some degree). If you have persistent seeders in the mutable torrent swarm, it's even better. That would serve a very large number of people on the planet pretty well, imho. This is harder to anonymize on a phone, but also doable.
It's reasonable to do both, too.
Add a proper bootable USB, and it would often be easy enough to walk up to a random machine when you need more than a phone to work on your TiddlyWiki or other infrastructure. I admire trying to find ways to make sure almost anyone can participate in The Great Conversation with minimal material; it's an important problem.
Yeah I can't make sense of this article either. How is IPv6 supposed to solve this? IP is for routing which is tied to geography, not identity. And wouldn't dynamic DNS make your website unresolvable for at least a minute or two every time your IP changes?
Yeah but I'm asking how does that help? As the phone hops around, the IP address will change, whether it's v6 or v4. Are they expecting IPv6 will be stable no matter where in the world you travel?
> A DDNS service seems like a trivial add-on to make this work.
I responded to this earlier: wouldn't dynamic DNS make your website unresolvable for at least a minute or two (or much longer if your TTL is longer) every time your IP changes?
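To put rough numbers on it (illustrative values, not measurements): a resolver that cached your record just before the IP change serves the stale address for up to the full TTL, on top of however long the DDNS client takes to notice the change and push the update:

```python
# Back-of-envelope worst case for a DDNS-fronted site after an IP change.
# A resolver that cached the old record right before the change keeps
# serving it until the TTL expires, and the update itself only happens
# once the DDNS client's next check fires. Numbers below are illustrative.

def worst_case_outage(ttl_seconds: int, update_interval_seconds: int) -> int:
    """Stale-record window: detection/update lag plus the cached TTL."""
    return update_interval_seconds + ttl_seconds

# Typical hobbyist setup: 60 s TTL, DDNS client polls every 5 minutes.
print(worst_case_outage(ttl_seconds=60, update_interval_seconds=300))    # 360
# A "safe-looking" 1 h TTL makes it over an hour of unreachability.
print(worst_case_outage(ttl_seconds=3600, update_interval_seconds=300))  # 3900
```

So every handoff to a new network is minutes of downtime at best, and the TTL can't be cranked down for free either, since a tiny TTL means every visit pays a fresh lookup.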
> I think the bigger issue is the number of mobile devices behind CGNAT.
Yeah I agree on that, they already mentioned that part.
SMTP is as much a protocol for sending as it is for receiving. IMAP isn't a "receive emails" protocol so much as a "manage mail on a server" protocol.
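To illustrate (hostnames and addresses made up): the receiving server speaks SMTP too — this is the same dialogue whether the client is your mail app submitting a message or another server delivering one:

```
S: 220 mail.example.org ESMTP
C: EHLO sender.example.com
S: 250 mail.example.org
C: MAIL FROM:<alice@example.com>
S: 250 OK
C: RCPT TO:<bob@example.org>
S: 250 OK
C: DATA
S: 354 End data with <CR><LF>.<CR><LF>
C: (message headers and body)
C: .
S: 250 OK: queued
C: QUIT
S: 221 Bye
```

Nothing in that exchange fetches mail for a user; getting it back out of the mailbox is IMAP's job.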
I'm not sure if people working with these protocols have built applications implementing their specifications before, but they should. It affords you a much better picture of why these protocols are designed as they are, good and bad.
SMTP and XMPP do not exactly accomplish the same goal, nor have the same intentions. Sure, they both overlap in areas, but they're both interfaces (protocols) for implementation of tools. Tools that are suited for certain jobs.
If I were hiring a network technician, I definitely wouldn't hire someone who didn't know what ARP is. It's too easy and too fundamental to the field. It took all of like 3 weeks at the trade school I attended to cover IPv4, MAC addresses, ARP, basic routing protocols, TCP, and UDP, and we were definitely chilling. Understanding those things isn't complicated. You just need to know what computers are, and what "networks" are, then it all very easily clicks into place.
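ARP really is that small an idea — here's a toy model of it (addresses and hostnames made up; real ARP is Ethernet frames, not Python): to send an IPv4 packet to a neighbour, the host needs that neighbour's MAC, so it checks its ARP cache and broadcasts a "who-has" request on a miss:

```python
# Toy ARP resolution. `segment` stands in for the hosts on the local
# link that would answer a broadcast; the cache avoids re-asking.

BROADCAST_MAC = "ff:ff:ff:ff:ff:ff"  # where who-has requests are sent

# The "wire": which MAC each on-link IP would reply with (made up).
segment = {
    "192.168.1.1": "aa:bb:cc:00:00:01",   # gateway
    "192.168.1.20": "aa:bb:cc:00:00:14",  # a neighbour
}

arp_cache: dict[str, str] = {}

def resolve(ip: str):
    """Map an on-link IP to a MAC: cache first, broadcast on a miss."""
    if ip in arp_cache:
        return arp_cache[ip], "cache hit"
    # Miss: broadcast "who-has <ip>?" to ff:ff:ff:ff:ff:ff and wait.
    mac = segment.get(ip)
    if mac is None:
        return None, "no reply (host down or off-link)"
    arp_cache[ip] = mac  # learn the answer for next time
    return mac, "learned via broadcast"

print(resolve("192.168.1.1"))  # first lookup goes out as a broadcast
print(resolve("192.168.1.1"))  # second one is answered from the cache
```

That lookup-then-broadcast loop is essentially the whole protocol; the rest (timeouts, gratuitous ARP, cache expiry) is housekeeping on top.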