That reminds me of the entertaining "I just want to serve 5 terabytes. Why is this so difficult?" video that someone made inside Google. It satirizes the difficulty of getting things done at production scale.
Nothing in that video is about scale. Or the difficulty of serving 5TB. It's about the difficulty of implementing n+1 redundancy with graceful failover inside cloud providers.
User: "I want to serve 5TB."
Guru: "Throw it in a GKE PV and put nginx in front of it."
Congratulations, you are already serving 5TB at production scale.
The interesting thing is there are also paradoxes of large scale: things that get more difficult with increasing size.
Medium- and smaller-scale operations can often be more flexible because they don't have to incur the pain that nonuniformity brings at large scale. While they may not be able to afford the optimizations or discounts that come with larger, standardized purchases, they can provide personalized service that large-scale operations cannot hope to match.
On a related note, providers that run independent instances for each customer (so no multi-tenancy) typically get about three more nines than, say, AWS. On-prem enterprise software is a typical example of this, and it is still used in safety-critical industries for this reason.
Eventually, all remaining outages are black swan events. If you have 1000 independent instances (i.e., 1000 customers), then when the unexpected thing hits, you're still 99.9% available across the fleet while the one impacted instance is down.
Also, you can probably put a permanent fix in place before the same black swan hits another instance.
Depends on what exactly you want to do with it. Hetzner has very cheap Storage boxes (10TB for $20/month with unlimited traffic), but those are closer to FTP boxes with a 10-connection limit. They are also down semi-regularly for maintenance.
For rock-solid public hosting, Cloudflare is probably a much better bet, but you're also paying seven times the price. That's more than a dedicated server to host the files, but you get more on other metrics.
> Hetzner has very cheap Storage boxes (10TB for $20/month with unlimited traffic)
* based on fair use
at 250 TB/mo:
> In order to continue hosting your servers with us, the traffic use will need to be drastically reduced. Please check your servers and confirm what is using so much traffic, making sure it is nothing abusive, and then find ways of reducing it.
Backblaze B2 egress is only free to Bandwidth Alliance partners; otherwise it's $0.01/GB, and individual transactions will also cost you, depending on how much you use.
That's if you use their CDN. Cloudflare R2 doesn't charge for egress bandwidth. If you have 100TB/mo to serve, try it and see what happens. I haven't heard of anyone being kicked off of R2 for using too much egress bandwidth yet.
At scale, you'll pay a couple thousand dollars for Class B operations on R2, and another bunch for storing the 10 TB in the first place, but that's relatively cheap compared to other offerings where you'd pay for metered egress bandwidth.
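If you want to sanity-check those numbers, here's a minimal cost sketch in Python. The rates are assumptions based on R2's published pricing at the time of writing (roughly $0.015 per GB-month of storage and $0.36 per million Class B reads, with no egress charge), so verify current pricing and plug in your own request volume:

```python
# Back-of-the-envelope R2 cost sketch. Rates are assumptions based on
# R2's published pricing at the time of writing; check current numbers.
STORAGE_PER_GB_MONTH = 0.015      # USD per GB stored per month
CLASS_B_PER_MILLION = 0.36        # USD per million read (Class B) operations

storage_gb = 10_000               # ~10 TB stored
reads_per_month = 500_000_000     # hypothetical request volume

storage_cost = storage_gb * STORAGE_PER_GB_MONTH
ops_cost = reads_per_month / 1_000_000 * CLASS_B_PER_MILLION
print(f"storage ~${storage_cost:,.0f}/mo, Class B ops ~${ops_cost:,.0f}/mo, egress $0")
```

With those assumed rates the operations bill only climbs into the thousands if you're serving billions of (presumably small) objects per month; for large files the storage line tends to dominate.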
CF is not particularly fond of non-Enterprise customers serving more than a few TB/mo. Source: $corp serves 150 TB/mo via CF and pays somewhere north of $50k/yr for it.
The whole point of R2 is to do away with the predatory practice of egregious egress charges, and if they somehow went back on that promise, it would be very bad PR.
I don't know about any special offers, but looking at standard pricing on rsync.net it would cost me $15/month for 1TB, while on Hetzner the same would cost me €3.94/month.
I'd suggest looking into "seedboxes" which are intended for torrenting.
I suspect the storage will be a bigger concern.
Seedhost.eu has dedicated boxes with 8TB storage and 100TB bandwidth for €30/month. Perhaps you could have that and a lower spec one to make up the space.
Prices are negotiable so you can always see if they can meet your needs for cheaper than two separate boxes.
> I'd suggest looking into "seedboxes" which are intended for torrenting.
Though be aware that many (most?) seedbox arrangements have no redundancy; in fact some are running off RAID0 arrays or similar. If the host has a problem like a dead drive, bang goes your data. Some are very open about this (after all, for the main use case, cheap space is worth the risk), some far less so…
Of course, if the data is well backed up elsewhere, or otherwise easy to reproduce or re-obtain, this may not be a massive issue and you've just got restore time to worry about (unless one of your backups can quickly be made primary, in which case restoring is as little as a bit of DNS and other configuration work).
Yep, resellers of dedicated machines rent servers in bulk so you can often get boxes for way cheaper than you would directly from the host. Take a look at https://hostingby.design as an example.
I've been using a HostingBy.Design seedbox (formerly Seedbox.io) to distribute content to my patrons for three years. They have excellent uptime and their customer service is knowledgeable.
It's impossible to answer this question without more information. What is the use profile of your system? How many clients, how often, what's the burst rate, what kind of reliability do you need? These all change the answer.
"Impossible", yet many others have succeeded commendably... explore what they can do but you cannot. Or else offer examples wherein your constraints exist and drive another solution. "No solution without more info" is a cop-out.
I'm sorry, let me clarify since you seem to be very pedantic. It's impossible to answer well without a bunch more information. Yes, there are other answers in this thread, but I would argue they aren't particularly helpful to either OP or any other reader.
It's kind of like someone going to a group of doctors and saying "I'm in pain", and then the doctors start throwing out reasons the person may be in pain and solutions to that pain.
Sure, there may be some interesting ideas there, but it doesn't really do OP any good without describing where the pain is, when it started, if they have any other known conditions, etc. etc.
I know you think you were helping with this comment, but you really weren't.
The comment that is unhelpful is the one that has to be voiced but refuses to participate. You're just creating a clamor where a conversation used to be by adding your noise. If you aren't going to participate in the answer beyond saying "I'm not going to answer," then just don't.
If we are talking about serving files publicly, I'd go with the €40 server for flexibility (the storage boxes are kind of limited), but still get a €20 Storage Box to have a backup of the data. Then add more servers as bandwidth and redundancy require.
But if splitting your traffic across multiple servers is possible, you can also get the €20 Storage Box and put a couple of Hetzner Cloud servers with a caching reverse proxy in front (that's like 10 lines of Nginx config). The cheapest Hetzner Cloud option is the CAX11 with 4GB RAM, 40GB SSD and 20TB traffic for €3.79. Six of those plus the Storage Box gives you the traffic you need, lots of bandwidth for usage peaks, SSD cache for frequently requested files, and easily upgradable storage in the Storage Box, all for about €42. It also scales well at €3.79 for every additional 20TB of traffic, or €1/TB if you forget and pay the excess-traffic fee instead.
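For anyone checking the arithmetic on that setup, here's a quick sketch using the prices quoted above (they're the numbers from this thread, not current Hetzner pricing, so verify before relying on them):

```python
# Rough capacity/cost check for the CAX11-cache + Storage Box setup above.
# Prices are the ones quoted in this thread, not authoritative.
CACHE_NODE_EUR = 3.79        # CAX11 per month
CACHE_NODE_TRAFFIC_TB = 20   # included traffic per node
STORAGE_BOX_EUR = 20.00      # 10 TB Storage Box per month

nodes = 6
included_traffic_tb = nodes * CACHE_NODE_TRAFFIC_TB      # 120 TB/mo of headroom
monthly_cost_eur = nodes * CACHE_NODE_EUR + STORAGE_BOX_EUR
print(f"{included_traffic_tb} TB/mo included for ~EUR {monthly_cost_eur:.2f}")
```

Six nodes buys you 120 TB/mo of included traffic for roughly €42.74, which is where the ~€42 figure and the 20%-ish headroom over 100 TB/mo come from.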
You will be babysitting this more than the $150/month cloudflare solution, but even if you factor in the cost of your time you should come out ahead.
Exactly, and also you get to actually understand how it all works together, unlike a bunch of proprietary APIs that only tie you to their particular platform.
(for those not on the same page, I’m talking from a position of substantial experience with all 3 major clouds)
Plus, these days the maintenance burden of the OS layer is really heavily overstated. With certain self-updating open-source container OSes one doesn’t even really have to think about patches and all that ancient crap.
The real appeal of the big players in my mind is only one use case: scale. If you need 10k servers for heavy "big" data processing like in genomics or 'AI' (whatever that means), only then do they start to be indispensable. Otherwise, the considerable burden of training all personnel on proprietary APIs is just not worth it: it literally costs less to buy and configure your own system (or a traditional VPS or dedicated server). Cloud architects ain't cheap!
> even if you factor in the cost of your time you should come out ahead
There is always the hidden cost of not spending time on activities that are core to your business (if this is indeed for a business) that would make multiples of the money CF costs you.
That, and also NixOS - I’m discovering it for myself now, and it’s been a revelation! Configuring absolutely everything declaratively from scratch, even the disk partitions - a dream for reproducibility. It even has configurable “micro-VMs”, which would not be as easy to do via Proxmox (not counting LXC), since they would have to be built manually. Though Proxmox does have some nice benefits over it as well, especially considering their ecosystem with PBS, mail server etc
Thanks for the intro to NixOS. I was trying to remember one I had seen & forgotten and I think this may have been it.
I have been playing more and more with UTM in the Mac world and it's encouraging how mature it seems already and hopefully can be picked up into NixOS, Sandstorm, etc.
I like Proxmox more personally, but I've changed my stance recently: NixOS and Sandstorm could just run in a Proxmox VM, with Proxmox providing more of an IaaS role. The newer versions of Proxmox are even easier, and they were already pretty OK over the past 5-7 years.
I think it’s more about peace of mind, unlimited really means I won’t wake up tomorrow with a $10k bill, as it happened many times (not to me) on AWS and the like. That is the disgusting practice the big cloud providers like to impose, for no apparent reason but to keep you in their roach motel and pay up. Disgusting!
I mean to say: for general self-hosting of services and apps, HDDs seem to have that performance and latency problem, which could lead to a negative experience?
Consider storing the data on Backblaze B2 ($0.005/GB/month) and serving content via Cloudflare (egress from B2 to Cloudflare is free through their Bandwidth Alliance).
(No affiliation with either; just a happy customer for a tiny personal project)
Man, thanks so much for this. I’m using Wasabi with a Yarkon front end right now and it’s great, but Backblaze/Cloudflare is looking like a serious contender.
That is exactly the use case: hosting the files on B2 (not CDN-capable) and caching+serving from Cloudflare. Unless the files in question are webpages or static webpage content (doubtful), it would definitely be exactly the target of these new TOS updates.
BuyVM has been around a long time and have a good reputation. I’ve used them on and off for quite a while.
They have very reasonably priced KVM instances with unmetered 1G (10G for long-standing customers) bandwidth that you can attach “storage slabs” up to 10TB ($5 per TB/mo). Doubt you will find better value than this for block storage.
Honestly haven’t hosted anything important enough for me to track that. There is an unofficial site that tracks their uptime apparently:
https://www.buyvmstatus.com/
At some point you still need a seed for that 10TB of data with some level of reliability. WebTorrent only solves the monthly bandwidth iff you've got some high capacity seeds (your servers or long-term peers).
And they just added TCP client sockets in Workers. We are just one step away from being able to serve literally anything on their amazing platform (listener sockets).
Only client sockets are available. So what you can do is build a worker that receives HTTP requests and then uses TCP sockets to fetch data from wherever, returning it over HTTP somehow.
It may depend on the makeup of data or something. They "requested" one of my prior projects go on the enterprise plan after about 50TB, granted the overwhelming majority of transfer was for distributing binary executables so I was in pretty blatant violation of their policy. This was 2015ish, so the limit could also have gone up over time as bandwidth gets cheaper too.
They don't really maintain the regular Sync client anymore, only the expensive enterprise Connect option. My wife and I used Resilio Sync for years, but had to migrate away, since it had bugs and issues with newer OS versions, but they didn't care to fix them. Let alone develop new features.
The 100TB was just an example. They don't want you using more bandwidth than your storage. If you're storing 500GB, then your bandwidth usage should be less than 500GB.
Wasabi isn't meant for scenarios where you're going to be transferring more than you're storing.
If price is a consideration, you might consider two 10 TB hard drives on machines on two home gbps Internet connections. It's highly unlikely that both would go down at the same time, unless they were in the same area, on the same ISP.
Just use two A records for the one DNS name, and let the clients choose.
The other way is to have two names, like dl1 and dl2, and have your download web page offer alternating links, depending on how the downloads are handled.
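A minimal sketch of that second approach, with dl1/dl2.example.com as placeholder hostnames for the two home connections: the download page just alternates which mirror it hands out.

```python
# Hand out download links that alternate between two mirrors.
# dl1/dl2.example.com are placeholders for the two home-connection hosts.
from itertools import cycle

MIRRORS = cycle(["https://dl1.example.com", "https://dl2.example.com"])

def download_link(path: str) -> str:
    """Return a link pointing at the next mirror in round-robin order."""
    return f"{next(MIRRORS)}/{path.lstrip('/')}"

print(download_link("data/archive-part1.tar"))   # first mirror
print(download_link("data/archive-part2.tar"))   # second mirror
```

The two-A-records approach needs no code at all; clients pick one of the records themselves, though failover behaviour varies by client.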
You very rarely can do multi-ISP bonding, often not even with multiple lines from the same ISP, unfortunately.
I would also like to ask everyone about suggestions for deep storage of personal data, media etc. 10TB with no need for access unless in case of emergency data loss. I'm currently using S3 intelligent tiering.
I like to use rsync.net for backups. You can use something like borg, rsync, or just an sftp/sshfs mount. It's not as cheap as something like S3 Deep Archive (in terms of storage) but it is pretty convenient. The owner is an absolute machine and frequently visits HN too.
S3 is tough to beat on storage price. Another plus is that the business model is transparent, i.e., you don't need to worry about the pricing being a teaser rate or something.
Of course the downside is that, if you need to download that 10TB, you'll be out $900! If you're only worried about recovering specific files, this isn't as big an issue.
Wasabi is the best option for you. 10TB would be around $60/month, and they offer free egress up to the amount you store. So you can download up to 10TB per month.
Glacier Deep Archive is exactly what you want for this, that would be something like $11/month ongoing, then about $90/TB in the event of retrieval download. Works well except for tiny (<150KB) files.
Note that there is Glacier and Glacier Deep Archive. The latter is cheaper but has a longer minimum storage period. You can use it via a lifecycle rule.
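A minimal boto3 sketch of such a lifecycle rule; the bucket name and prefix are placeholders, and it assumes your AWS credentials are already configured:

```python
# Transition everything under a prefix to Glacier Deep Archive via a
# lifecycle rule. Bucket and prefix are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-deep-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [
                    # Days=0 transitions objects as soon as possible after upload.
                    {"Days": 0, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)
```

If you don't want to wait for the transition, you can also upload objects directly with StorageClass="DEEP_ARCHIVE" on put_object.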
I think they'll only charge me once my monthly statement is large enough to be worth charging. Pretty sure I've never been charged so far, with my monthly statement being something like €0.02.
Some tens of gigabytes at this point? It's definitely not a lot. Mostly just some stuff that doesn't make sense to keep locally but I still want to have a copy in case a disaster strikes.
I helped run a wireless research data archive for a while. We made smaller data sets available via internet download but for the larger data sets we asked people to send us a hard drive to get a copy. Sneakernet can be faster and cheaper than using the internet. Even if you wanted to distribute 10TB of _new_ data every month, mailing hard drives would probably be faster and cheaper, unless all your customers are on Internet2 or unlimited fiber.
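To put rough numbers on that, here's the back-of-the-envelope comparison (the size, link speeds, and shipping times are illustrative assumptions):

```python
# Sneakernet vs. internet, roughly. All numbers are illustrative.
SIZE_TB = 10
size_bits = SIZE_TB * 1e12 * 8

for label, mbps in [("100 Mbit/s", 100), ("1 Gbit/s", 1_000)]:
    days = size_bits / (mbps * 1e6) / 86_400
    print(f"{SIZE_TB} TB over a saturated {label} link: ~{days:.1f} days")

print("vs. roughly 1-3 days for a hard drive in the mail, plus copy time")
```

That works out to about nine days at a saturated 100 Mbit/s and about a day at a saturated 1 Gbit/s, and most recipients can't saturate their link for days on end.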
The answer to this question depends entirely on the details of the use case. For example, if we're talking about an HTTP server where a small number of files are more popular and are accessed significantly more frequently than most others, you can get a bunch of cheap VPS with low storage/specs but a lot of cheap bandwidth to use as cache servers to significantly reduce the bandwidth usage on your backend.
I always assumed having a Raspberry Pi with a couple of HDDs in RAID1 with IPFS or torrent would be the best way to do this.
Giving another one of these RAID1 RPis to a friend could make it reasonably available.
I am very interested to know if there are good tools around this though, such as a good way to serve a filesystem (NFS-like, for example) via torrent/IPFS, and whether the directories could be password-protected in different ways, like with an ACL. That would be the revolutionary tech to replace huggingface/dockerhub, or Dropbox, etc.
If you just want to be able to sync a directory between multiple devices with encryption options, I'd recommend Syncthing. It's dead easy to set up; I've currently got it on an RPi backing up all my photos from my phone while syncing my Obsidian vault between my phone and desktop.
Yeah that's a good suggestion for that use case. I was thinking a bit more along the lines of 2 other use cases:
1) You have a file locally and want to send a link to a friend/family member (yourself, or even some random person on the internet) for a 1TB or 1MB file to download, optionally password-protected.
2) You want to set up a package/script that automatically downloads a file when started (NN weights, for example), and for the download to be retrieved, IPFS/torrent style, from everyone who has that file (i.e., is running the package).
The system in (2) works OK for downloading a Dockerfile that points to an IPFS file if you put the link there; however, a considerable number of things don't fit the suggestions in (2), such as not automatically becoming a seeder of that file when it is downloaded or when running the package. There is also a great amount of opportunity in making the process of uploading files to IPFS much simpler. One example for the code idea would be something like git hooks, such that any time a major version is tagged in git, a set of files is added to IPFS for this type of distribution.
Ultimately a 'plug-n-play' package added in a specified way (e.g., via setup.py) would be the best way to get something like that going. Then perhaps a simple program like Syncthing or miniserve operating on top of that functionality would allow for something more like (1).
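To make the git-hook idea concrete, here's a very rough sketch (it assumes the `ipfs` CLI is installed with its daemon running; the tag convention and the `dist/` path are made up):

```python
#!/usr/bin/env python3
# Hypothetical post-tag hook: when a major version tag is made,
# add the release artifacts to IPFS and print the resulting CID.
import re
import subprocess

tag = subprocess.run(
    ["git", "describe", "--tags", "--exact-match"],
    capture_output=True, text=True,
).stdout.strip()

# Only publish on major version tags like v2.0.0 (made-up convention).
if re.fullmatch(r"v\d+\.0\.0", tag):
    cid = subprocess.run(
        ["ipfs", "add", "-r", "-Q", "dist/"],   # -Q prints only the final CID
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"published {tag} to IPFS as {cid}")
```

The missing piece, as noted above, is that downloaders don't automatically become seeders; something in the package itself would have to pin and reprovide the CID.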
The biggest problem I'm aware of is sync conflicts, which just make things a little difficult. If they're text files it's not so bad, since vimdiff can easily merge them. But if they're encrypted or more complex formats ... :/
Hell, make me a fair offer and I'll throw it up on ye olde garage cluster. That thing has battery backup, a dedicated 5 Gbps pipe, and about 40 TB free space on Ceph. I'll even toss in free incident response if your URL fails to resolve. But it'll probably be your fault, cause I haven't needed a maintenance window on that thing in like three years.
Spend some time on https://www.webhostingtalk.com/ and you will find a lot of info. For example, https://www.fdcservers.net/ can give you 10TB storage and 100GB bw for around $300... but keep in mind the lower the price you pay, the lower the quality, just like with any other product.
OVH is probably your best bet and should be the cheapest both for hosting and serving the files. You'd be hard pressed to beat the value there without buying your own servers and colocating in eastern Europe.
Most of their storage servers have 1gbps unmetered public bandwidth options and that should be sufficient to serve ~4TB per day, reliably.
Unless it's 100TB/mo of pure HTML/CSS/JS (lol), Cloudflare will demand you be on an Enterprise plan long before 100TB/mo. The fine print makes it near useless for any significant volume.
Surprised no one has said Cloudflare Pages. It might not work depending on your requirements, since there's a max of 20,000 files of no more than 25 MB each per project. But if you can fit under that, it's basically free. If your requirements let you break it up by domain, you can split your data across multiple projects too. Latency is amazing as well since all the data is on their CDN.
Smaller VPS providers are a good value for this. I'm currently using ServaRICA for a 2TB box, $7/mo. I use it for some hosting, but mostly for incremental ZFS backups. Storage speed isn't amazing, but it suits my use case.
I'm using Cloudflare R2 for a couple hundred GB, where I needed something faster.
I think 2x 1 Gb/s symmetric home fibers + SuperMicro 12x SATA Atom Mini-ITX with Samsung drives can solve this fairly cheaply and durably depending on write intensity.
That said, above 80 TB this starts to look hard to sustain indefinitely, unless you can provide backup power and endure the noise of spinning drives.
You could do this for about $1k/mo with Linode and Wasabi.
For FastComments we store assets in Wasabi and have services in Linode that act as an in-memory+on disk LRU cache.
We have terabytes of data but only pay $6/mo for Wasabi, because the cache hit ratio is high and Wasabi doesn't charge for egress until your egress is more than your storage or something like that.
The rest of the cost is egress on Linode.
The nice thing about this is we get lots of storage and downloads are fairly fast - most assets are served from memory in userspace.
Following thread to look for even cheaper options without using cloudflare lol
Well, for us it's actually really cheap because we really just want the compute. The bandwidth is just a bonus.
Actually, since the Akamai acquisition it would be even cheaper.
$800/mo to serve 100TB with fairly high bandwidth and low latency from cold storage is a good deal IMO. I know companies paying millions a year to serve less than a third of that through AWS when you include compute, DB, and storage.
Fine, but now you’re changing the comparison. Spending millions on compute with low bandwidth requirements doesn’t make it stupid. It probably still is, but that’s a different conversation.
Hetzner has excellent connectivity: https://www.hetzner.com/unternehmen/rechenzentrum/
They are always working to increase their connectivity. I'd even go so far as to claim that in many parts of the world they outperform certain hyperscalers.
I used to have a dedicated server there, and what happened to me is that my uploads were fast but my downloads were slow. Looking at an MTR trace, it was clear that the route back to me was different (perhaps cheaper?). With Google Drive, for example, I could always max out my gigabit connection. Same with rsync.net.
Also, I know that some cheaper home ISPs cheap out on peering.
Now, this was some time ago, so things might have changed, just as you suggested.
Sounds like you could find someone with a 1Gbps symmetric fiber net connection, and pay them for it and colo. I have 1Gbps and push that bandwidth every month. You know, for yar har har.
And that's only about 309 Mbit/s (or 39 MB/s).
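For reference, that figure is just 100 TB spread evenly over a 30-day month:

```python
# 100 TB/month served at a constant rate over a 30-day month.
tb = 100
seconds = 30 * 86_400
print(f"{tb * 1e12 * 8 / seconds / 1e6:.0f} Mbit/s "
      f"({tb * 1e12 / seconds / 1e6:.0f} MB/s) sustained")
```

Real traffic is bursty, of course, so you'd want headroom above that average.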
And with a used/refurbished server you can easily get loads of RAM, cores out the wazoo, and dozens of TBs for under $1000. You'll need a rack, router, switch, and battery backup; that shouldn't cost much more than $2000 all in.
I once had a Hetzner dedicated server that held about 1 TB of content and did some terabytes of traffic per month (record being 1 TB/24 hours). Hetzner charged me 25€/month for that server and S3 would've been like $90/day at peak traffic.
You can definitely do this at home on the cheap. As long as you have a decent internet connection, that is ;)
10TB+ hard disks are not expensive; you can put them in an old enclosure together with a small industrial or NUC PC in your basement.
I currently have 45 WUH721414ALE6L4 drives in a Supermicro JBOD SC847E26 (SAS2 is way cheaper than SAS3) connected to an LSI 9206-16e controller (HCL reasons) via hybrid Mini SAS2 to Mini SAS3 cables. The SAS expanders in the JBOD are also LSI and qualified for the card. The hard drives are also qualified for the SAS expanders.
I tried this using Pine ROCKPro64 to possibly install Ceph across 2-5 RAID1 NAS enclosures. The problem is I can't get any of their dusty Linux forks to recognize the storage controller, so they're $200 paperweights.
I wrote a SATA HDD "top" utility that brings in data from SMART, mdadm, lvm, xfs, and the Linux SCSI layer. I set monitoring to look for elevated temperature, seek errors, scan errors, reallocation counts, offline reallocation, and probational count.
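A stripped-down sketch of that kind of check (it assumes smartmontools is installed and typically needs root; the watched attribute IDs and thresholds here are illustrative, not what the actual utility uses):

```python
# Minimal SMART check: shell out to smartctl and flag a few attributes.
# Attribute IDs and thresholds are illustrative only; run as root.
import subprocess

WATCHED = {
    5:   ("Reallocated_Sector_Ct", 0),
    194: ("Temperature_Celsius", 45),
    197: ("Current_Pending_Sector", 0),
}

def check(dev: str) -> None:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[0].isdigit() and int(fields[0]) in WATCHED:
            name, limit = WATCHED[int(fields[0])]
            try:
                raw = int(fields[9])   # first token of the RAW_VALUE column
            except ValueError:
                continue
            if raw > limit:
                print(f"{dev}: {name} = {raw} (limit {limit})")

check("/dev/sda")
```

A real tool would of course also pull in mdadm/lvm/xfs state and track trends over time rather than single snapshots.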
> If your monthly egress data transfer is less than or equal to your active storage volume, then your storage use case is a good fit for Wasabi’s free egress policy
> If your monthly egress data transfer is greater than your active storage volume, then your storage use case is not a good fit for Wasabi’s free egress policy.
10TB storage + 100TB bandwidth on S3 will easily be 1000+ USD per month, while there are solutions out there that are fast and secure with unrestricted bandwidth for less than 100 USD per month. An order of magnitude cheaper, with the same grade of "enterprisey".
Well, I did say: if you store small amounts of data. For large data, sure, prohibitively expensive!
I don’t think many other solutions are equally fast and secure.
AWS operation is pretty transparent, documented, audited, and used by governments. You can lock it down heavily with IAM and a customer-managed KMS key (CMK), and audit the repository. The physical security is also pretty tight, and there is location redundancy.
Even hetzner doesn’t have proper redundancy in place. Other major providers in France burned down (apparently with with data loss), or had security problems with hard drives stolen in transport.
I don’t work for AWS, don’t have much data in there, just saying. GCP and Azure are probably also good.
https://www.youtube.com/watch?v=3t6L-FlfeaI