
The hardware is there (RPi + USB storage). The server software is there (NextCloud, Plex, n8n, etc). What isn't there is the plumbing. The next logical step after this blog post is making your services accessible to your phone over the public net. You'll immediately find yourself mired in domain name registration, VPS management, TLS cert management, dyndns, port forwarding, hole punching, etc etc.

There are lots of great tools that solve some of these problems. I have yet to find one that solves all of them.

I think we need something like Namecheap + CloudFlare + ngrok, designed and marketed for self-hosters and federators. You simply register a domain and run a client tool on each of your machines that talks to a central server which tunnels HTTPS connections securely to the clients.

Mapping X subdomain to Y port on Z machine should take a couple clicks from a web interface.



> The next logical step after this blog post is making your services accessible to your phone over the public net. You'll immediately find yourself mired in domain name registration, VPS management, TLS cert management, dyndns, port forwarding, hole punching, etc etc.

You don't need any of that with Onion Services. Tor doesn't just anonymize; it also offers easily configurable services with NAT punching, a .onion domain, and end-to-end crypto for free. And setting them up is easy enough: https://community.torproject.org/onion-services/setup/

You'll just need Tor or the Tor Browser to access those services, but that shouldn't be a problem for many self-hosting setups.
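For context, the server-side part of that guide boils down to two lines in torrc; the directory and ports here are examples:

```
# /etc/tor/torrc: publish a local web server on port 8080 as an onion service
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 80 127.0.0.1:8080
```

After restarting Tor, the generated .onion hostname appears in /var/lib/tor/my_service/hostname.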


Isn't tor exceptionally slow though? I haven't used it in a few years - has anything changed?


Most slowness comes from exit nodes, which you don't need to access onion services. There are also single-hop onion services, which should be a bit faster while sacrificing server anonymity, if I understand correctly.


Is there a safe service I can try out to see how fast it can potentially be?


Depends on your definition of "safe", but there's an official Onion service for Facebook for example: http://facebookcorewwwi.onion


You can watch videos on it fine these days - though I wouldn't recommend, for social ethical reasons, using it for streaming movies/music off your personal server.

Unless of course, you are running a node.


It is. It's not practical for this use-case.


It’s slow, with NSA hunting at exit nodes.


Exit nodes aren't used if one connects to an onion service.

And for all we know, the NSA snoops traffic at all major internet exchanges, so Tor exit nodes might get extra attention, but so do e.g. people whose search history suggests they might be sysadmins (if I remember the reports on XKeyscore selectors correctly).


If this idea sounds great but the Tor part is a bit much for you, have a look at tailscale.com

It sets up WireGuard (also featuring NAT hole punching) in a mesh between your devices. You can static-route things to it using standard firewalling/iptables/etc. if you feel the need to.

It's basically having a LAN but you're on the LAN even when you're not at home.
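The setup really is short. On Linux it's roughly the following (a Debian-ish sketch using Tailscale's install script; package and output details vary by distro):

```shell
# Install the client and join the mesh; 'tailscale up' prints a login URL
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
# Show this machine's stable in-mesh IPv4 address (100.x.y.z)
tailscale ip -4
```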

Edit: Hahaha. I discovered Tailscale myself through this thread, left the tab open..


Wow, that is interesting. I never thought about this use case for Tor before; that looks like a fun project to figure out.


> I think we need something like Namecheap + CloudFlare + ngrok, designed and marketed for self-hosters and federators. You simply register a domain and run a client tool on each of your machines that talks to a central server which tunnels HTTPS connections securely to the clients.

PageKite (https://www.pagekite.net) is what it sounds like you're looking for. It'll set you up with a url, SSL, and a tunnel in about 30 seconds. Highly configurable for your own domain if you'd like, multiple ports etc.


That PageKite pricing slider is really great. https://pagekite.net/signup/?more=bw


It looks like it could be good, but doesn’t slide on Safari on my iPhone.


and there is inlets too https://github.com/inlets/inletsctl


With a couple of fiber providers, I've been lucky enough (it wasn't luck, I chose housing based on ISP availability) to get business-class gigabit with a static IPv4 address and IPv6 for ~$100/mo, which solves lots of problems.

Plex does the plumbing for you, I think NextCloud might too.

Doing DNS just seems like another thing to set up, which is fine. Pay Namecheap, then pay a big-boy DNS provider (I like DNS Made Easy), then register some domains.

I don't really want a solves-everything tool, because I don't see a way for it not to be really opinionated and hide everything behind its own abstraction, which isn't really any better than the interface it is hiding. Maybe a series of how-to whitepapers to build up the requisite knowledge to figure these things out.

I'm not a fan of the old-school configuration hell where you have to spend hours/days/weeks trying to figure out the correct set of software and config options to do something right, but I'm equally not a fan of completely canned solutions that hide everything in favor of a single button to push. I'm not a technician; I don't need to have everything done for me, but I do appreciate tools where the right configuration interface is provided. That is, sane defaults, well-documented options, meaningful errors and sanity checking, and options presented in the right way.


This is exactly why containers took off for self-hosters. Installing Plex went from a blog post to `docker run plex` (ok there is a little more nuance than that but you get my point).

Docker allowed me to be far more competent on a Linux box than my skill set should have permitted at the time. I no longer needed to know how an application ran, just that it did. Provide some persistent storage and you’ll probably never have to configure that app again. Amazing.
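To make the `docker run plex` point concrete, a typical invocation with persistent storage might look like this (using the linuxserver.io image as an example; the image name, paths, and timezone are illustrative):

```shell
# Config persists in /opt/plex/config across upgrades; media is mounted read-only
docker run -d --name=plex \
  --network=host \
  -e TZ=Europe/Berlin \
  -v /opt/plex/config:/config \
  -v /mnt/media:/media:ro \
  --restart unless-stopped \
  lscr.io/linuxserver/plex
```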


But if you have a package manager, 'yum install plex', 'apt install plex', etc. should all give the same experience. (Actually, the correct incantation of Docker to get Plex running has taken hours of my life away.) There are indeed many blog posts about getting Plex to run in Docker.

The problem is that package managers are bad (well, apt is bad, rpm is pretty OK, FreeBSD ports are pretty good, there are many others) and package maintainers are bad; it always seems like the job they give the intern to figure out, instead of making it a cornerstone of usability.

Seriously, spend a day setting things up on FreeBSD with the packaging it has and it will be a breath of fresh air. Nearly everything you can think of, all put together in packages in one place, and most of them start working in an expected way with zero configuration fiddling: install and start.

If package management were better, docker might not have existed at all, people keep confusing it with decent package management.


I've got IPv6 from my ISP - I gave my pi a static v6 IP, set up cloudflare and told it that v6 IP, and cloudflare handles making it available to the v4 internet. Has worked pretty well so far, just for tinkering stuff, nothing 'production'.


Most home routers have functionality for port forwarding and DynDNS. Use Certbot for TLS certs. Just get a free domain from the DynDNS service. No VPS needed if you've got your own hardware/Raspberry Pi. Make sure you have backups. Just don't expect a $30 PC to be without issues.
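As a sketch of the Certbot step, assuming ports 80/443 are already forwarded to the box and using a made-up DynDNS hostname:

```shell
# Obtain a certificate via certbot's built-in standalone web server.
# Replace myhome.duckdns.org with your own free DynDNS domain.
sudo certbot certonly --standalone -d myhome.duckdns.org
```

On most distros a systemd timer handles renewal automatically after that.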


Of course they do, but a lot of people are behind CGNAT and do not have a public IP address assigned to their CPE, and as such reaching a device in the LAN from the outside - without the device reaching out to you first - is impossible.

And, of course, it can't reach out to your phone because it's also behind CGNAT. So, you need a VPS to act as a bridge between your phone and the device, which would connect to the VPS on boot and route traffic through a tunnel.


CGNAT is a pain but if you have that type of internet connection you should just forget about hosting anything from home. Maybe ipv6 works if you're lucky, but your ISP clearly won't make it easy for you.

A VPS with a VPN works, but there are alternatives. Some online services provide free port forwards, or port forwards for a price (ngrok style), or, if push comes to shove and you only need access yourself, you can probably host your services on a Tor hidden service and bookmark it.

I don't think I've seen that many ISPs do CGNAT though. Even mobile carriers often expose some ports to the outside world for IoT crap with a SIM card. Maybe CGNAT is more prevalent in other countries but that doesn't mean other people can make use of these guides.


In general in the Balkans there are cable ISPs (the biggest one owned by United Group, for example) that give out CGNAT IP addresses to all residential cable users. You can buy a static IP address for 5€/mo, but it's a painful procedure with bureaucracy for some reason.

On mobile networks, you are assigned a CGNAT IP address per cell base station per device, and they are then all mixed into a few public IPs. There are no open ports, and they cannot be opened, because it is not possible to assign ports to a mobile device and have the user know which ports to use.

Because hundreds or more users share a single IP address, you'd have to randomly assign them ports and keep track of devices entering and leaving the service area to delete the port mappings, which is not economical.

Ironically, one mobile carrier, Telenor, has a "feature" where, on 3G only and with a certain APN, they assign a public IPv4 address to your mobile broadband interface. The only catch is that it is reachable from Telenor's network only, except on ports >10000.


Do those ISPs with CGNAT also provide IPv6?


No, and they have no excuse.

The cable ISP recently switched to a DOCSIS 3.1 network in certain areas, and almost all customers have a modem/gateway that has IPv6 support perfected (IPv6 is quite an old thing, would be weird for it not to work properly now - on the CPE side), but nope, they don't want to.

They don't like their users being able to host content, given the fact they make it extremely difficult to pay for a static IP and get a modem or have your gateway switched into bridge mode (the newest models have had that removed from firmware, and the ISP downgrades you to 3.0 speeds if you want bridge and pay for a static IP). I am not sure why, but the whole company is extremely antagonistic to the idea of a user having a public IP address of any kind.

Mobile carriers do not support it either, and have no excuse at all, given the modern LTE Advanced networks they have deployed, with VoLTE, and the modernized core infrastructure by Huawei to the highest standards of the 4th generation of networks. Except IPv6 of course.

This is the same in most Balkan countries and in other parts of the world.


That is odd. I'm not involved in networking or ISPs, but supposedly a big motivation for IPv6 is to reduce load on CGNAT systems -- they're expensive, and problems with them generate support calls.

e.g. https://www.retevia.net/prisoner/


It is absolutely not logical, I know. That's the weirdest thing. I just don't know WHY, but the whole thing is oriented against letting the user host anything, AT ALL COSTS.


> behind CGNAT and do not have a public IP address assigned to their CPE, and as such reaching a device in the LAN from the outside - without the device reaching out to you first - is impossible.

It’s pretty easy and free to get IPv6 from HE’s TunnelBroker.


I believe TunnelBroker can't work with CGNAT.


Huh? Why can’t you assign a port/s to the CPE? You can even implement a port knocking scheme if you’re worried about some service/s on your home network being wide open to the world.


The point is that outside traffic isn't even reaching anything you even have control over, because you don't have a public IP (i.e. the ISP won't set up port forwarding for you). Let's say you wanted to directly send a packet to my phone. There's no way we could make that happen even with both of our cooperation because my phone doesn't get a publicly addressable IP.


NATed IPs are a PITA, but any decent ISP will give you a public IP if you ask. We are running out of IPv4 addresses.


> if you ask

And pay ;)

Getting a static IP on an internet plan here in Australia will typically cost around $5 a month, and not all ISPs offer it on residential plans.


I hope that goes away with ipv6


For me it's not the $30 PC that has issues but the $10 microSD card the OS is on. I've had to replace it three times in about 6 years now.

But this is not something new, of course; everybody will tell you to either use good uSD cards or put the OS somewhere else, like an external HDD.


Many Raspberry Pi distros spend all the card's TBW writing /var/log stuff for no reason. It's a known issue, often attributed to "cheap microSD cards".


Yes, just to explain why:

microSD cards (and all flash storage) have a limited number of writes. If you let your RPi write logs, you will soon run out of writable space, and if you are lucky you end up with a card that is read-only. If unlucky, it just stops working altogether.
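One common mitigation, if you can live with logs vanishing on reboot, is to keep /var/log in RAM via an fstab entry like this (the size is an arbitrary example):

```
# /etc/fstab: hold logs in a small tmpfs instead of wearing the SD card
tmpfs  /var/log  tmpfs  defaults,noatime,size=64m  0  0
```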


This is purely an anecdote, but I had to replace a few SD cards in my Pi until I changed the power supply. It's not even the official one, just one that's rated for 5.0 V, 2.5 A. I've used the same card for a few years now.


It's possible to boot from USB with the Pi4; I wouldn't run anything important from the microSD.


Even if direct USB boot does not work out you can just use the SD card to boot the kernel and have the root fs on USB.

With the Pi4 you can probably even get better performance from using a USB storage device, as it has USB 3.0 ports.
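That kernel-on-SD/root-on-USB split needs only one change on the card: point root= in cmdline.txt at the USB device (the device name and filesystem here are assumptions; check yours with lsblk):

```
# /boot/cmdline.txt (single line): root moved from the SD card to USB
console=serial0,115200 console=tty1 root=/dev/sda2 rootfstype=ext4 fsck.repair=yes rootwait
```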


A common setup is to pull the OS over the LAN and just use usb storage for all your actual stuff. Skip the sd card entirely except maybe as an OS fallback.


Go with the high-endurance microSD cards. They are only a tiny bit more expensive and last a lot longer. If you've got a Pi with USB3 ports, I'd also recommend using a USB3 SSD flash drive for the majority of your writes.


Like the Samsung Edge series for Dashcams/IoT devices? I wish I could get some.


May I ask what fs you used? Maybe F2FS would extend the life of the flash memory.


Use IPv6.

It doesn't have the address exhaustion that drove providers to implement CGNAT and dynamic IPv4 addresses.

No need for VPS management, DynDNS, port forwarding, or hole punching. You still need public DNS, but you can use your public DNS zone internally as well (no need for split DNS). You also still need PKI, so maybe set up a reverse proxy for TLS termination with a wildcard certificate.
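As an illustration of that terminating reverse proxy, a Caddyfile could look like this (hostnames and backend addresses are examples; a true wildcard cert additionally needs a DNS-provider plugin for the ACME DNS challenge):

```
# Caddyfile: TLS termination in one place, plain HTTP to the LAN backends;
# Caddy obtains per-hostname certificates automatically
cloud.example.net {
    reverse_proxy [2001:db8::10]:8080
}
media.example.net {
    reverse_proxy [2001:db8::11]:32400
}
```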


I look forward to the day everyone can have IPv6. For now, many of us still have to deal with NAT, sadly - especially if we're open-sourcing our software for users to deploy on any network.


Everyone can have IPv6 today by using a tunnelbroker. I used the free tunnel from https://www.he.net/ in the past, when I didn't have native v6. Today I don't need it anymore.


There's a comment above that indicates tunnel brokering can't handle NAT situations (at least CGNAT).

RFC3053[0] seems to indicate this can be a problem as well:

> 3. Known limitations
>
> This mechanism may not work if the user is using private IPv4 addresses behind a NAT box.

Are you saying it works even behind a NAT?

EDIT: According to HE's own FAQ[1]:

> If you are using a NAT (Network Address Translation) appliance, please make sure it allows and forwards IP protocol 41.

That doesn't sound like something most ISPs are likely to support. Not sure about home routers but if it has to be configured manually we're back to square one.

[0]: https://tools.ietf.org/html/rfc3053

[1]: https://ipv6.he.net/certification/faq.php


I don't know exactly anymore, because I'm now with a different ISP which natively supports v6. So can't reproduce.

I mean I (probably) could, but don't want to, because now I have IPv4 via CGNAT, but not with a private IP; a public dynamic one, probably shared with who knows how many others.

But I can use IPSEC/OpenVPN/Wireguard to somewhere else with that. Though my CPE supports GRE.

Anyways, there are large implementation differences in CGNAT from ISP to ISP and even different access technologies within the same.


Wow, am I getting this right? It handles NAT traversal for you behind the IPv6 address, for free?


What do you mean by that exactly? Initially it's just an outgoing tunnel to one of their many exits, to reach any site which is reachable via v6. How you integrate that into your setup is up to you. Since they are (one of?) the pioneers you have many scripts available on many platforms which support that.

If you mean an incoming tunnel, it's no different from the many dynamic DNS solutions, where it's again up to you to integrate it. But even for this they have something:

https://dns.he.net/


Yeah, dynamic DNS but for an IPv6 address is what I meant. Very interesting.


Have fun. It's cool to have. If only to get acquainted with that v6y stuff.


Or just set up Tailscale, which takes about two minutes.


Wow, yeah Tailscale looks like it basically does everything you'd want for this: https://tailscale.com/blog/how-tailscale-works/

I didn't even realize this was possible: https://tailscale.com/blog/how-nat-traversal-works/

I had seen some of the people working there comment on twitter, but I don't think those blog posts were written when I last looked them up and I didn't understand what they were actually doing.

This looks like the answer for most people if you don't need to give public access to the stuff you're hosting.

If you do though, I'm still not sure what the thing to do is. If I wanted to host my blog from home instead of via github pages or digital ocean, what's the right way to do that? Is there a reason nobody does this?


When I was young I served my websites off my home network. Dynamic DNS would update my A record if my IP changed, but I managed to trick my ISP into effectively giving me a static IP. DMZ'd a host on my network and set up a firewall, and you're off to the races.

Nowadays I just pay for a $5 VPS somewhere -- my uptime is significantly better this way!


Do you use the $5 VPS as like a reverse proxy and you're still self-hosting at home? Or did you move your self-hosted applications to the VPS?

I am setting up a self-hosted lab and looking at (securely) setting up remote access. Was leaning towards OpenVPN as pfSense supported it, but have been considering a locked down VPS remote proxy too (at least for some services) and happy to hear thoughts.


[Tailscale founder] One thing you can do here is use tailscale to connect all your devices together, including that VPS, and then set up a reverse proxy on the VPS that forwards queries to your various devices over tailscale.
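A minimal nginx server block on the VPS for that pattern might be (the 100.x address is the home device's Tailscale IP; all names and paths are placeholders):

```
# /etc/nginx/conf.d/blog.conf: public HTTPS in front, Tailscale behind
server {
    listen 443 ssl;
    server_name blog.example.com;
    ssl_certificate     /etc/letsencrypt/live/blog.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.example.com/privkey.pem;
    location / {
        proxy_pass http://100.101.102.103:8080;  # the RPi's Tailscale IP
    }
}
```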


Tailscale runs on WireGuard and therefore requires elevated permissions on each client device. That shouldn't be required for simply proxying a local port.

Does Tailscale offer domain registration and TLS certs?

Also, is there any way to allow public access to certain ports on certain machines, ie if you wanted to run your personal blog on your RPi?


I mean I suppose it requires elevated permissions but frankly it doesn't require any more permissions than most software, so this feels like a weird point to pick on. You need elevated permissions to bind 80 and 443, etc., right?

You mentioned accessing your own devices from anywhere, and that's what I use Tailscale for. It was a dream to set up, and for my own services, I don't need TLS or custom domains, really. I have a few shortcuts on my phone that work everywhere, Tailscale IPs are static.

> Also, is there any way to allow public access to certain ports on certain machines, ie if you wanted to run your personal blog on your RPi?

This is sorta outside the scope of what Tailscale aims to solve, but one of the cool things you could do is just run a proxy somewhere publicly accessible and route requests to your RPi.


> I mean I suppose it requires elevated permissions but frankly it doesn't require any more permissions than most software, so this feels like a weird point to pick on. You need elevated permissions to bind 80 and 443, etc., right?

I think maybe you're misunderstanding what my goal is. If I have a local webserver running on my laptop on port 8080, I want to expose that via HTTPS on a public domain. The server that terminates the HTTPS connection needs root to run on port 443, but my laptop doesn't need root to start the upstream webserver on 8080, and it shouldn't need root to tunnel it to the public server either.
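For what it's worth, plain SSH already covers that specific case without root on the laptop, assuming you control the public server terminating HTTPS (hostname and ports are placeholders):

```shell
# Reverse tunnel: the server's localhost:8080 now reaches the laptop's 8080.
# The HTTPS frontend on the server (which does need root for 443) proxies to
# localhost:8080; nothing on the laptop runs with elevated permissions.
ssh -N -R 8080:localhost:8080 user@vps.example.com
```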


[Tailscale founder here] If you're using a mac, you can just install Tailscale from the app store, which does not require root (thanks to the "magic" of Apple's extension signing).

Another experiment we're doing is integrating a completely userspace network stack, which could someday be good for this: https://twitter.com/bradfitz/status/1301937179636068352


I don't use mac.

I haven't dug into the WireGuard spec yet, so this might be an ignorant question: Do you think it would be possible to create a client that can talk with WG servers normally, but on the local side it forwards to a specific port, rather than a network interface? That would avoid the root requirement. I'm guessing the answer is no since it sounds like you guys are working on integrating a custom non-WG solution.


I think there is a userspace version written in Go that shouldn't need root access.


Unless I'm mistaken, wireguard-go[0] only runs the WireGuard protocol code in userspace rather than the kernel. It still requires configuring network interfaces which requires root.

[0]: https://github.com/WireGuard/wireguard-go


My RPi 4 has been running Tailscale at home for some time, forwarding to my home network. Works great and very stable.

I think somebody even compiled Tailscale to run natively on my Synology NAS.


I tried running Nextcloud on an RPi. It just doesn't cut it. I had the 4GB model, and Nextcloud runs, but it's a horrible experience. You go on the web UI and click a photo and it takes 10 seconds to load. I moved my server to a Ryzen 5 based setup and now everything is instant. I'm not sure what the limiting factor on the RPi was, because the RAM and CPU usage were low. Perhaps it was memory or storage speed.


I have a Pi4, 4GB model, running my Nextcloud instance in a Docker container, along with Pi-hole and Home Assistant in another folder.

It’s always run perfectly fine for me and my needs, and I even tested having shared video calling in NextCloud and it continued to work great.

I’m not sure your configuration, but it might be worth trying on a Pi4?


I was using the Pi4 with 4GB RAM. Were you booting off an SD card? That might have been my issue.


Booting off an SD card. I need to change that, but I’m lazy.

I do treat it like I would a Dropbox. I store photos I want to save, documents I want to save, etc. I was using it for recording trips for a brief bit, as well.

I’ve used it to share pictures with friends from our hikes, and I’m on a very fast internet connection.

My usage loads might be sufficiently low to not be a problem. I’m not constantly streaming from it like LTT does their NAS. For major software projects, I might use it as a remote git repo.

I probably have a high speed SD card.

There are times when it’s slow, but not too often. I forget what I’ve done to resolve that.

Also, running the NextCloud app on my computer has never been slow and that’s my normal use case for file management on it.


On an SD card, or an SSD in a USB enclosure with UASP?


And if USB, make sure to test the speed. Some controllers need quirks enabled[1] to get full speed, including a lot of popular JMicron ones. Mine went from ~20MB/s to 300MB/s for a Samsung 850 SSD.

[1]: https://www.raspberrypi.org/forums/viewtopic.php?t=245931
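The fix from that thread is a kernel parameter prepended to /boot/cmdline.txt; the vendor:product ID below is one common JMicron bridge, and yours comes from `lsusb`:

```
# Prepend to the single line in /boot/cmdline.txt, then reboot
usb-storage.quirks=152d:0578:u
```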


This was using an sd card.


I moved all my home cloud stuff to an old dell small form factor computer that I bought for 40 bucks from a company selling all their old inventory. It is an i5 4500-something that beats the pi4 in everything except 4k output. It also has an SSD and 8gb ram.


Agree, the pi is way too low power to handle nextcloud comfortably. In the same price range, it's much better to buy a used NUC (even with a celeron!) compared to the beefiest Raspberry Pi 4 out there.


TBF I think this says more about Nextcloud than the Raspberry Pi. I also suspect it can be helped with some setup - I imagine the difference in disk I/O performance is going to be greater than the CPU difference if you're comparing a Pi4 to a recent-ish PC.


I'm doing the same, my disks are much faster than what should be necessary. Exactly the same setup and observations.


I had the same experience with a pi 3 and didn't even try a pi 4. I set up a modest intel box and it was interactive.

The 4 might have a better chance with gigabit internet and faster USB; I was still using a fast Samsung Fit drive, but apparently not fast enough.

To be honest though - the intel box was modest, but still 4x the price. Additionally it's hard to beat a pi for installation - just insert the sd card. (there is always configuration)


There are a lot of solutions sibling comments have already brought up, but I don't know if it should be this automagical. Keeping services up to date requires effort, money, or a big reduction in freedom of what you can do with your server.

There's a fully automatic mail server program, Mail-in-a-Box, that tries to be this instant "just make it work" system. The result is that the host OS was severely outdated for years, because upgrading configuration automatically is difficult, and because the system manages DNS for you, adding a new subdomain to your server is more of a challenge than it should be.

Similarly, automatic service install and management tools like Plesk, cPanel, and ISPConfig have been around forever, but they always come with some limitation. I think Sandstorm.io is a quite recent tool of this sort that runs Docker, so you have a bit more control.

All of these still require occasional maintenance though. If you can't figure out how to point a DNS name and a wildcard at your IP, then I'm not sure you should be exposing services on the internet like that. If you don't update for a while, your nice, powerful Raspberry Pi server might suddenly be DDoS'ing random websites without you even knowing about it, and all you can do to prevent that is to keep your (limited) software stack updated.

All attempts to make this easy for the general public have so far shown that people don't like to press the update button; even rebooting Windows is a risk some people just aren't willing to take, which is why Microsoft had to force reboots in Windows 10. With that kind of risk out there, freely connecting whatever to the web and forgetting about it, I'm glad there's some technical requirements before you can host something.


Sandstorm(.io) is very cool, and it does make managing your self-hosted web apps very easy. But it does not run Docker containers and it only runs on Linux x86-64. (There have been some attempts at running Docker containers with Sandstorm, but they are not easy to use.) Instead, the web applications must be specifically packaged for Sandstorm.


Oh, I suppose I was mistaken. Perhaps I confused it with one of its competitors I can't remember the name of right now.


I've been going down the rabbit hole looking through different software in this space. I started an awesome-list to track what I've learned:

https://github.com/anderspitman/awesome-tunneling


https://cloudron.io can do most of this, minus the hole punching. The port forwarding is very router-specific. I think maybe there is some UPnP interface for this, but I'm not sure how widely it is supported.



Caddy is great, and it'll take care of managing the TLS certs. There's a lot left on my wishlist above...


Cloudflare argo tunnels are exactly what you're looking for https://developers.cloudflare.com/argo-tunnel/quickstart


I'm aware of argo tunnels. Unfortunately:

* Argo smart routing is 5 USD/mo + 0.1 USD/GB. The 5/mo is fine, but the data charges could add up quickly for something like Plex.

* CloudFlare doesn't sell domains.


> CloudFlare doesn't sell domains

They do have a domain registrar intermediary [1], announced two years ago [2]. It's in cooperation with dount.domains and has competitive pricing. You could count that as them selling domains.

Not affiliated with them in any way.

[1]: https://www.cloudflare.com/products/registrar/ [2]: https://blog.cloudflare.com/cloudflare-registrar/


Looks like you can only port over domains you already have. Announced two years ago and still no general availability isn't a great sign.


Check out KubeSail! Not affiliated in any way. They make it super easy to do the plumbing and networking, and to have a Kubernetes cluster on a Raspberry Pi.

If you ever wanted to learn k8s without spending $80/month on a cluster, this is the best way to learn it!


Thanks for the shout-out! I wanted to post "we do exactly this!" but didn't want to be an advertisement, so I appreciate it :P (co-founder of KubeSail, if anyone has any questions!)


Cool tech. Does KubeSail integrate domain purchasing? Why should I be required to learn kubernetes just to tunnel a local webserver to a public domain name?


We offer free built-in (kubesail managed) domains, but don't offer domains for purchase - that would be nice to add eventually but we're a small company, so holding off on that for now!

Ideally, you don't need to learn Kubernetes any more than you need to learn Linux in that example - our Repo Builder will do its best to guess how to host your app on Kubernetes - and ideally the UI makes the rest feel like any other cloud platform. The benefit of not being locked in, and of learning open-source tech instead of walled gardens, is hard to express!


> Mapping X subdomain to Y port on Z machine should take a couple clicks from a web interface.

Route 53 can work like that; it also has a CLI version. (But you can't get the domain there.)


> (But you can't get the domain there)

There's Amazon Domains now.

Additionally, https://github.com/crazy-max/ddns-route53 works well as a dynamic DNS configurator for Route 53.

For most home users, a Docker-supporting server is the best option.

Traefik has ACME and labels-based configuration for Docker hosts. It is a good choice for multiplexing HTTPS services by subdomain names.
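Traefik's labels-based approach looks roughly like this in a compose file (the domain, resolver name, and demo service are examples, not a drop-in config):

```yaml
services:
  whoami:
    image: traefik/whoami   # tiny demo HTTP service
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
```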

In my opinion the biggest limitation is that there is no universal API for network routing appliances, whether it is your $30 home combo WiFi/router or your $20,000 Cisco device.

An access-key-authorized version of UPnP would be sufficient for the vast majority of users. Or even iptables commands over public key authenticated SSH.

But giant corporations - Google, Microsoft, Apple, Amazon, Facebook - they are in the cloud business, Microsoft doesn't ship a home server technology really anymore.

The most popular home server software, like Plex, is really purposefully disruptive to giant software and media companies. By contrast you're going to have a bad time running your own Dropbox competitor from home, because that sort of technology is engineered around cloud computing.


Can it tunnel to local devices like a RPi or just AWS VMs?


You have to own the IP, and map the RPi to the standard ports (80/443; you'll likely have to set that up from the router). Alternatively, just do x.com:yyyy if you don't mind (though you probably do for an external-facing website).


No, it's just DNS. It doesn't provide any additional routing.


I'm currently thinking of using a reverse proxy through a WireGuard tunnel. That should also work for non-static home IP addresses. (I already have the domain and VPS.)
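The home side of that WireGuard tunnel is short; because the home peer initiates the connection, the non-static home IP doesn't matter (keys and addresses are placeholders):

```ini
# /etc/wireguard/wg0.conf on the home server
[Interface]
PrivateKey = <home-server-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.0.0.1/32
# Keep the NAT mapping open so the VPS can reach back in
PersistentKeepalive = 25
```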


> Mapping X subdomain to Y port on Z machine should take a couple clicks from a web interface.

This is already the case. Most routers have easy web interfaces nowadays, and the same goes for the DNS options at any domain-name provider. What people need is a bit of knowledge. It takes me a few mouse clicks in a web interface to do this because I know what I am doing. Yeah, you could dumb anything down to a single button, but I don't think we should want that.


I tunnel everything through WebRTC. It's a bit exotic, but it gets you a direct bidirectional data connection to the self-hosted device. You can put all users' self-hosted content through a single domain name & SSL cert, or you could have subdomains automatically provisioned for each device.

I'm using this WebRTC method for 3D printers at https://tegapp.io


Can you provide more details on what software you're using for WebRTC tunneling?


Sure, right now I'm using a Node.js WebRTC data channels implementation, but there's an up-and-coming Rust implementation which I'm quite excited to try:

- NodeJS DataChannels: https://github.com/node-webrtc/node-webrtc

- Rust DataChannels: https://github.com/lerouxrgd/datachannel-rs


HomeDrive ( https://www.homedrive.io ) is plumbing exactly this! We are currently only hosting Nextcloud, but we plan to support more apps and custom Docker containers. It is as easy as plugging the box into the home router.

There are still many features to implement, but we are working towards "easy self-hosting at home", and looking for early adopters.


How would you compare yourselves to KubeSail, mentioned above?

Why limit it to specific software, rather than simply port mapping?


HomeDrive is plug-and-use, with no system maintenance required. We target not only developers/hackers, but also end users who would like to have a small server at home to host their own data and services for their digital lives. As a result, HomeDrive also maintains the OS to keep it reliable and secure, which is why we picked the hardware. We are investigating support for more hardware, such as Raspberry Pis.

KubeSail feels to me like a more accessible way to run k8s + Docker apps. I am not sure what security model KubeSail assumes for the operating system it runs on (or is the OS out of scope?). Also, KubeSail seems to target mostly developers/hackers.


Cloudflare -> Router (only allows 80/443 traffic from Cloudflare IPs) -> nginx -> all self-hosted services (wiki, hass...)

The problem with the "easy" one-click options is that they tend not to be very secure if they're supposed to be publicly accessible. Plex uses their cloud to secure access, and Synology does too.
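A sketch of the nginx hop in that chain, with hypothetical hostnames and upstream ports:

```nginx
server {
    listen 443 ssl;
    server_name wiki.example.com;
    # Cloudflare terminates the public edge; this cert can be a
    # Cloudflare origin certificate or a Let's Encrypt one.
    ssl_certificate     /etc/ssl/origin.pem;
    ssl_certificate_key /etc/ssl/origin.key;

    location / {
        proxy_pass http://127.0.0.1:3000;  # e.g. the wiki container
        proxy_set_header Host $host;
    }
}
```

One such `server` block per subdomain, with the router only accepting 80/443 from Cloudflare's published IP ranges.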


I need this.

Bought a Namecheap domain for a small NFP.

Been swamped with how-to’s and learning things just to learn what to search for...

I could use something else that does it all...

But I want a level of authority none of those offer... and I lack the technical insight to know whether "this is everything/enough".


I can relate. I've thought about setting up a Caddy server to route to the different services (nginx would also be fine). I have to try it out, and will probably make a list of services in an HTML document served on ports 80/443.
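A sketch of what that Caddyfile could look like - Caddy provisions TLS certificates automatically, and the hostnames, ports, and paths here are placeholders:

```
cloud.example.com {
    reverse_proxy 127.0.0.1:8080
}

wiki.example.com {
    reverse_proxy 127.0.0.1:3000
}

example.com {
    root * /srv/index   # the HTML list of services
    file_server
}
```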


This is a good option if your ISP doesn't block ports 80/443 and you don't mind setting up port forwarding.

EDIT: Oh and if you don't mind managing your DNS records manually, including dynamic DNS.


It's against Cloudflare's ToS to serve mostly video/images on the free plan. It would be a great idea, since it would solve the problem of IP exposure, but if it gained any traction it'd have to be shut down.


In my experience, you have to be way above 1 TB per day to get banned - I know because I pushed around 5,000 simultaneous HD streams on my account back when I ran a pirate streaming service (Google my name if interested; it's on The Verge). It lasted for a few days before I shut it down due to the MPAA, but I didn't get banned by Cloudflare. I still have the account I did that on - it's from 2011 and still used for a few of my sites.


Start by having a look at subspace https://github.com/subspacecloud/subspace


The original repo is receiving very few updates and little maintenance, if any. I'd recommend the community fork at https://github.com/subspacecommunity/subspace instead.


I use a VPN + static IP or DDNS to get access to my home cloud/server (both the VPN and DDNS can be setup on my router). Also, there are free DDNS providers.


You can run zerotier and have your services on your own private network accessible from anywhere.


ZeroTier might solve some of this.


For internal only things you can use Wireguard VPN and any dynamic DNS provider.


Doesn't ngrok handle all of this?


Everything except domain registration AFAIK. Unfortunately it's closed-source and the pricing is confusing. It's not clear to me whether it would be a good choice for exposing a Plex server to your family and friends, for example.



