As a Hetzner bandwidth enjoyer affected by this: this is why (HN cough) multi-cloud/dedi k3s is great. If you get rug-pulled, you just migrate to another provider with better prices.
That said, $1/TB for bandwidth overage seems pretty fair. I empathize with the complaining but if the new price is such a ripoff everyone should be recommending what cloud VM provider they're migrating to for a better deal.
I use OVH (VPSes specifically), which offers unlimited bandwidth. In my experience they've been both reliable and affordable, which is a rarity. I run a few applications that require high amounts of bandwidth, so silly caps like the ones Hetzner is imposing are a non-starter for me.
That's what a friend said also. If you look beyond the marketing material, however, the ToS says:
> OVHcloud reserves the right to restrict the VPS Service bandwidth to 1 Mbps (1 Megabit per second) until the end of the current billing period in cases of excessive use by the Client
but it advertises with "unmetered"... so is a meter attached by which they can tell whether your bandwidth use is excessive or not? Would they eat those costs for you?
I checked out some numbers. Quoting myself from chat history:
> it begs the question: what's "excessive"? I dunno, but they charge $5/month for the VPS and, while AWS may be ~1/3rd cheaper [than some other thing], the same bandwidth would still cost on the order of $70/month there. And AWS has insane economies of scale working for them; maybe their cost price is $7/month if they don't need to have a competitive price, but that's still a loss then
> I bet you'd win the lawsuit where [OVH] falsely advertised with unmetered 500 Mbps and a terms of service saying "excessive": when you transfer 2 TB/day on a connection advertised as capable of 500 Mbps × 24 h = 5.4 TB/day, that's reasonable, right? But then you're having a lawsuit over a $5/month VPS
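For anyone who wants to sanity-check that arithmetic, here's the back-of-envelope version (decimal units throughout, since that's how link speeds are advertised):

```python
# Back-of-envelope for the figures above (1 Mbps = 1e6 bits/s, 1 TB = 1e12 bytes).
link_mbps = 500
seconds_per_day = 24 * 60 * 60  # 86,400

bytes_per_day = link_mbps * 1e6 / 8 * seconds_per_day
print(f"{link_mbps} Mbps flat out: {bytes_per_day / 1e12:.1f} TB/day")
# -> 500 Mbps flat out: 5.4 TB/day

# So the 2 TB/day in question is only ~37% of what the advertised
# link could physically move.
print(f"2 TB/day is {2e12 / bytes_per_day:.0%} of capacity")
```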
Yes, of course. Having flow data (or monitoring ports/interfaces) for traffic engineering and management is pretty essential, not least for determining when capacity upgrades are needed.
I understand both sides of the argument here. The idea of offering "unlimited" is appealing because most users of a typical 2GB RAM virtual machine (as an example) consume less than 1TB of bandwidth per month. Offering unlimited bandwidth removes the hassle of overage charges/billing queries and eases customer concern/friction. Both sides benefit from this.
However, on the other hand, is it reasonable for a $5/month virtual machine customer to use 1Gbps 24/7/365, potentially consuming $100–$200 worth of bandwidth?
Should providers avoid offering unlimited bandwidth unless it's truly unlimited? From an engineer's perspective, yes, I agree. But this stance also risks degrading the experience for the 99.5% of "normal" customers—those who don’t exploit this simplification of "free bandwidth"—just to address a handful of users who take full advantage of it.
It's tough, so IME most such providers leave something in their terms that allows them to intervene in extreme cases but typically exercise restraint in doing so, usually only acting manually when they notice that 'extreme' usage is damaging other users' experience, e.g. serious and prolonged usage.
It’s also reasonable for OVH to not do that, as most of their customers don’t understand 95th percentile billing, which is the model that they’re being charged at by their transit suppliers.
It’s also reasonable for OVH to not do that, as most of their customers don’t understand that transit costs blend with port costs depending on destination: some destinations are effectively ‘free’ to send/receive from (fixed port costs only, no marginal costs), and other destinations are not (marginal costs associated with transit supplier fees).
The billing model consumers want is a simple BW-used calculation, without facing the reality that consuming their entire BW allowance as quickly as possible incurs an order of magnitude higher costs than consuming it at a trickle over the whole month.
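To make that concrete, here's a toy 95th-percentile calculator. The 5-minute sampling interval and the $0.20/Mbps commit price are my illustrative assumptions (both are common, but real contracts vary):

```python
import math

def p95_bill(samples_mbps, usd_per_mbps=0.20):
    """Sort the month's 5-minute samples, throw away the top 5%,
    and bill the highest remaining sample times the unit price."""
    ordered = sorted(samples_mbps)
    billable = ordered[math.ceil(len(ordered) * 0.95) - 1]
    return billable, billable * usd_per_mbps

samples = 30 * 24 * 12  # 8,640 five-minute samples in a 30-day month

# Trickle: a steady 10 Mbps all month (~3.2 TB moved).
print(p95_bill([10] * samples))  # -> (10, 2.0)

# Burst: idle, except 1 Gbps for ~6% of the month (~19 TB moved).
idle = int(samples * 0.94)
print(p95_bill([0] * idle + [1000] * (samples - idle)))  # -> (1000, 200.0)

# The biller never sees total bytes, only the 95th-percentile sample:
# the bursty user costs 100x more despite a vaguely comparable total.
```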
It’s worth going back to the start of this thread and seeing that this all started with someone complaining that their provider had reduced the BW allowance and wanting somewhere with more generous allowances. When the provider sells things for what it actually costs, the customer gets upset and looks for someone selling a subsidised product in a misleading way. Leading to other people getting upset about being misled. Those people should go to the original provider, who is doing exactly what they asked for!
Kind of true, except if you've got thousands of customers it all evens out and your traffic profile is actually quite smooth. If you've got, say, 6× 100G transits, 4× 100G IXP ports and 5× 100G PNIs, then the impact of an individual 1G customer is not even noticeable, honestly. We can work to 1 Mbps 95%ile being about 250 GB of total transfer at scale.
I completely agree. The product being sold has only a loose connection to the cost incurred by the provider. This is why the product being sold is being sold in a vague / loose-ish way, because for the overwhelming majority of customers, the product being sold can be sold for a profit.
Doing billing like this is awkward as you just don't know what you are going to get. And, in a world where most people are in fact using small amounts of bandwidth, what you have done is caused people who use small amounts of bandwidth to pay MORE than they should, as they are effectively paying the price of the bandwidth for the average user: if you use less than the average, you are subsidizing the people who use more than the average.
Meanwhile, in an ecosystem where everyone isn't already being ripped off with overly-expensive bandwidth, if an ecosystem-level event happens that causes the average user to suddenly use more bandwidth, the service either has to raise rates for everyone or they have to start claiming some uses of bandwidth are "egregious".
The result is then that, to defend the small-scale user from paying even more than the too much you are already charging them (as they are subsidizing the larger users), you suddenly start doing traffic analysis with price discrimination by use case, and network neutrality goes out the window :/.
The real reason any of this works is just that people in fact aren't being charged fair prices most of the time, and these unlimited plans let the provider hide that from all involved. If everyone were charged a fair price, not only would heavy users pay a lot and light users pay LESS than they often do today, but everyone would be paying little enough that this idea that it is a big customer "concern" goes away, the same as it is for electricity or water: except in extreme circumstances, no one frets over sudden utility overages.
This is a for-profit business, not a limited common resource being shared.
What you are missing here is that the adjustment is not low usage users subsidising high usage users, it’s OVH margins. Nobody is being subsidised. Low usage users just make OVH more money than high usage users. OVH doesn’t mind because per-user costs are actually low and they are already competitive at that price without adding more complexity to their product mix. Users who would lead to an actual loss are rate limited.
> What you are missing here is that the adjustment is not low usage users subsidising high usage users, it’s OVH margins.
I did not miss this, and it was part of my point: the only reason this makes any sense at all is because these providers are ripping people off on bandwidth, which is how they have a margin so large that they feel a need to hide it from people under this kind of ridiculous pricing abnormality.
What is awkward is just accepting that and helping to make it worse by advocating for making it easier to kind of hide that fact: bandwidth is a commodity product, and these pricing games aren't pro-consumer because they somehow help people not have to worry about one month getting ripped off too much... they are anti-consumer because they enable the perpetuation of the state of affairs wherein people get ripped off in the first place.
The bandwidth providers know this, but they--of course ;P--like their excessive margins... but, if you just stopped claiming this was pro-consumer and realized what was actually going on here, you'd see that a margin so excessive it can essentially make the median user's usage irrelevant indicates a nigh-unto-ridiculous level of market distortion.
Like, we shouldn't sit around and just tolerate these margins. And that this particular pricing trick makes these margins a bit easier for people to stomach really sucks! And in some sense I get that it does make it easier to stomach... but... only because I think people are just buying into the idea that this must be a reasonable price :(.
And--even then--it doesn't fix the other problem I talked about (which I explicitly hedged as being in the world where the price wasn't set up to gouge everyone): when FaceTime came out, overnight it was going to cause everyone with an iPhone to suddenly need more bandwidth, and so network providers temporarily needed to ban it or charge more for it; we see the same thing with the step up from basic web browsing to video streaming services, leading to providers feeling a need to zero-rate.
The reality is that bandwidth IS a limited common resource being shared at that provider--the same as any other product where the price isn't being distorted: this is the whole reason we use markets for this stuff in the first place--and the pricing of it at different providers should encounter market forces to drive it down closer to cost... except we are trapped in a local minimum here by people who refuse to understand that unlimited schemes cost more, not less.
You don't actually tolerate high seller margins. Hosting is a competitive market.
If there were significant gains to be made by being more aggressive on the low end of the market, providers would already be doing it (and they are - OVH's $5 offer is quite aggressive). There is a reason nobody actually offers a better deal.
If you pay $5 and incur $100 of bandwidth costs, you are in fact being subsidized by other users, not by margins. We don't know what OVH pays for bandwidth, though.
But nobody can do that. OVH doesn't let you be a large net negative.
What happens is roughly this:
- You are costing $1 (bandwidth, etc.). You make OVH $4. They are happy. Nobody offers you a cheaper alternative, so you are stuck paying $5 anyway.
- You are costing $4 (bandwidth, etc.). You make OVH $1. They are happy, as marginal costs are low anyway.
- You are costing more than $5. OVH severely rate-limits your bandwidth to cut their costs and waits for you to leave, because the service is now useless to you.
If I order some shoes from Amazon, I find them uncomfortable, and I return them for a full refund causing Amazon to incur a loss - have I been "subsidised" by other customers?
Personally I would say if Amazon makes a profit selling you a book and makes a loss shipping me some shoes which I return, the loss was paid by Amazon, not by you.
The comparison is more apt if you gained something (because the bandwidth user gets a product out of it), say by having worn the shoes for a day and doing this every day so you get free shoes for life. Then, yes, it's pretty clear the paying customers are the ones footing your bill
View this as insurance and it suddenly all makes perfect sense. You pay a little more so that if your usage spikes this month, you won't get a surprise invoice you didn't budget for, and you won't get cut off either. At worst, you'll get rate-limited. This price stability is valuable, and paying extra to get it isn't being charged an unfair price. Of course, if you don't find it beneficial, you should choose another offering.
This is only relevant because the cost of bandwidth is excessively high--much higher than it should be--and so people essentially need to pay for this gouging-insurance.
So, the "fair price" for internet bandwidth in Europe/NA is typically between a tenth and a quarter of a single cent per GB transferred in the heaviest direction.
So you prefer to pay $4.50 for your VM + 47.12126 cents for 460 GB of data transfer, rather than $5 for your VM with unmetered data transfer?
I think, by the way, that the sensible answer is what DO/Linode etc. do, which is allocate some included data transfer per VM and pool it across your account. That's honestly a very sensible balance from my viewpoint, but they then charge you quite a lot for overage, around 1-2c per GB, which is ~10X the "fair price".
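Putting numbers on that markup, a quick sketch using the ~$1/TB "fair price" from upthread and the low end of the DO/Linode-style overage range:

```python
transfer_gb = 460

fair_usd_per_gb    = 0.001  # ~$1/TB, the "fair price" figure upthread
overage_usd_per_gb = 0.01   # low end of the 1-2c/GB overage range

print(f"fair:    ${transfer_gb * fair_usd_per_gb:.2f}")        # $0.46
print(f"overage: ${transfer_gb * overage_usd_per_gb:.2f}")     # $4.60
print(f"markup:  {overage_usd_per_gb / fair_usd_per_gb:.0f}x") # 10x
```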
So it's not unmetered as advertised or am I misunderstanding that word?
> this stance also risks degrading the experience for the 99.5% of "normal" customers—those who don’t exploit this simplification of "free bandwidth"
How so? If they want to be relaxed about it, the terms can say that you can burst more (e.g. "you can use 500GB/month, and burst to 5TB for two months of every two-year period; we'll send you a notification email whenever this happens so you're not caught by surprise"). If they don't want to be flexible, they can state the hard limit they are actually going to enforce, rather than calling it unlimited without an asterisk. Either way, the buyer would know what they can actually use and doesn't have to guess.
“Unmetered” means “You will not be charged under normal circumstances based on the measurement of the data you use.” It does not mean that your traffic is literally not measured.
They don’t put a specific hard limit because doing so both limits their own flexibility as a service provider and creates a target for abuse by users.
For some definition of "normal circumstances". Being a bigger user should fall within it or that's not accurate advertising.
Some places will offer a choice between faster metered and slower unmetered. That seems like a good compromise to me. A nice big link should cost the host a single digit number of dollars per 100Mbps, so it's not hard to find an option where everyone is happy with the speed and pricing.
If you want a contract that has every term and circumstance negotiated up front, you’re going to need to speak with Hetzner’s business development team. You’ll also need to be a bigger fish than a single hobby developer.
> However, on the other hand, is it reasonable for a $5/month virtual machine customer to use 1Gbps 24/7/365, potentially consuming $100–$200 worth of bandwidth?
Irrelevant. If you sell a vCPU with enough bandwidth to feed your 1 Gbps 24/7/365 needs, and you charge $5/month for it, then your personal notion of reasonable doesn't matter at all. What matters is the service plan offered by the cloud provider and the performance indicators they are contractually obligated to meet.
> What matters is the service plan offered by the cloud provider and the performance indicators they are contractually obligated to meet
Indeed, and those indicators are specified in the contract, not in the headline product description. There are a lot of people unhappy that those indicators in this contract are not specific enough. Those people shouldn’t buy these contracts.
(Also, if you use your 1Gbps port at full speed at the most peak time for bandwidth utilisation, for 37 hours in a month, and not at all outside of that, assuming 20 cents a megabit with 95th percentile billing, the costs you’ve incurred to your provider are $200. Also it doesn’t matter at all what you do after those 37 hours, the costs to the provider are the same. You doing 300TB in a month costs the same as you doing 16TB, if you do the 16TB the ‘wrong’ way.)
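For anyone checking that arithmetic, the sketch below uses the same assumptions as above (1 Gbps port, $0.20 per Mbps billed at the 95th percentile):

```python
hours_in_month = 730  # average month
burst_hours = 37
port_mbps = 1000
usd_per_mbps = 0.20   # assumed transit price, 95%ile billed

# 95th percentile billing discards the top 5% of samples...
print(f"5% of the month: {hours_in_month * 0.05:.1f} h")  # 36.5 h
# ...so 37 hours at full rate guarantees the billable sample is 1000 Mbps.
print(f"provider cost: ${port_mbps * usd_per_mbps:.0f}")  # $200

# Data actually moved during those 37 hours:
tb = port_mbps * 1e6 / 8 * burst_hours * 3600 / 1e12
print(f"transfer: {tb:.1f} TB")                           # 16.7 TB
```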
That's only true if all your customers choose the exact same 37 hours (and the same/similar destinations). Back in the real world that's very, very unlikely, and so the 95%ile "issue" is a bit misleading unless an individual customer has the ability to use more than a couple of percent of your overall capacity (rare-ish at scale).
Probably not by default, but if your usage starts to saturate their network switches they’ll add one, to figure out who’s disrupting everyone else’s QoS.
Forgot to mention, OVH makes the current topology and traffic saturation of their network switches public: http://weathermap.ovh.net/
(I think they mostly do this so that customers can see and verify that any DC-level peering relationships, or per-customer site peering contracts [a.k.a. "OVHCloud Connect"] are being taken advantage of to flow the customer's traffic. But it's convenient for other things, too.)
OVH is the most set and forget experience I've ever had. They email me maintenance notices 3 or 4x a year, but I don't think I've ever had any downtime. It just stays happily humming along for years. I think I pay something like 60 a year for it.
OVH and reliable, in the same sentence? They're cheap, so suitable for projects you don't mind going poof.
Personal anecdote. A few years ago, I lost a lot of sleep on a domain renewal at OVH. Their incompetence was mind-boggling. A less common tld was the only slightly challenging bit. After a week of calling and emailing, and on the verge of the domain lapsing, I gave up and sent someone to the tld registry with cash.
Also, do search for OVH SBG2 should you have missed that.
The pain begins when you need support. Just like you, I have lost a lot of sleep over domains held hostage by their incompetence (for almost a year in one instance). Lesson learned, never use OVH for domains.
The support for their dedicated servers is just as bad, mind you, but short of a hardware failure you really don't need them. I have several years of uptime on all my current services.
So for personal projects their vps/dedicated is still a fantastic value.
> OVH's infrastructure is absolutely very reliable.
Well except that time one of their datacenters burned down, likely due to insufficient fire suppression, and the data backups were also lost because they kept them in the same building as the originals.
These reports criticize OVHcloud for having no fire prevention system and no power cut-off on the site, for using wooden floors, and for a free-cooling design that created airflows that spread the fire. The reports also say that water was detected near electrical systems before the fire broke out.
It takes quite a while to regain trust after shitting the bed that badly.
Better to be honest about having weak backups so users can plan accordingly, than to lie about how safe the data is and lull users into a false sense of security...
the OVH contract relating to automatic backup stipulates that a backup of the VPS server is scheduled daily, exported, and then replicated three times before being available in the customer space, and that the storage space allocated to the backup option is "physically isolated from the infrastructure in which the VPS server is set up."
They can be terrible. But it is dirt cheap. So if you use their stuff in a way that you can recover somewhere else if they have issues, you save a lot.
As you mentioned I would stay away from them for things like domain hosting. Just use them for cheap compute, etc.
I keep my whois and NS entries elsewhere and my nameservers sufficiently distributed that I find the risk acceptable, but both Hetzner and OVH are firmly in the "I have always felt I got a very cost-effective 'exactly what I paid for' - and in the case of my rare interactions with their support, more than I'd hoped for" category for me.
Neither has ever caused me a problem that didn't feel like "potentially having this level of problem occasionally is entirely in keeping with how little I'm paying" basically.
Would you recommend vultr for dedicated metal? I’m lazily shopping around for a decent dedicated metal setup, but I need something reasonably reliable.
From personal experience a few years back, OVH support was one of the worst I have ever experienced in my career. Technical incompetence at multiple levels of the chain (e.g. lack of understanding of how DNS works). I would never recommend it to anyone, not even my worst enemy.
I had packet loss on my server. They asked me several times to reboot my server into rescue mode and leave it there for 10+ hours until their senior technician could look into it at an unspecified time of day.
After a month of doing this 3-4 times, they finally admitted that their switch was oversubscribed and there was no ETA for a fix. This problem happened in 2 locations.
Also had a problem with the failover ip failing to move. Again they told me to reboot into rescue mode and leave it like that for hours. No fix.
I've left OVH entirely after being a customer of theirs for over 10 years.
The systems *I* currently have at hetzner are, so far as I'm aware, on a "we don't charge for bandwidth but if you use a shitload you'll get throttled for the rest of the month" plan just like my ovh boxen.
But I only pull dedis from hetzner; my VPSen are all ovh based. So please nobody expect my experience to generalise without triple checking the terms just like I did in the process of signing up for those systems.
I've only ever been on OVH and was surprised to discover a few years ago that at most other hosting companies (including cloud ones), bandwidth is not unlimited and is in fact costly.
Yeah, I also later noticed they charge $46/month for 4 amps whereas the Raspberry Pi 5 requires a 5V, 5A power supply to take full advantage of its processing capabilities and features.
It's based on wattage: 4A, yes, but 4A multiplied by the voltage. The Pi 5 is 5A, but multiply that 5A by 24V DC and you get 120 watts :) so yes, you can definitely colo a Pi 5, my friend :D
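The back-of-envelope, with the caveat that the actual feed voltage is my guess (it wasn't stated; DC feeds around 12-48V and AC feeds at 120/230V are both common in colos):

```python
def watts(volts, amps):
    return volts * amps  # P = V * I

pi5_w = watts(5, 5)  # the Pi 5's own 5V/5A supply, worst case
print(f"Pi 5 worst case: {pi5_w} W")  # 25 W

# The colo's 4A allowance at two guessed feed voltages (assumptions,
# since the listing only says "4 amps"):
for feed_v in (24, 120):
    budget_w = watts(feed_v, 4)
    print(f"4A @ {feed_v}V = {budget_w} W ({budget_w // pi5_w} Pi 5s)")
# 4A @ 24V  =  96 W (3 Pi 5s)
# 4A @ 120V = 480 W (19 Pi 5s)
```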
I see terms for hosting and domain registration and managed databases, but not colocation. Do you know if they really don't care whether you actually use that connection to the stated allowance?
Exactly! I had a three node k3s cluster hosted on OVH. When I decided to switch to Contabo, it was as easy as adding the three Contabo nodes to the cluster, and then removing the three OVH nodes (plus updating some DNS rules). It was the easiest and simplest migration I'd ever done. All my services and data just moved automatically and mostly with zero downtime. The only service that experienced a little downtime was Plex, which as I understand does not support high availability. If I ever find a cheaper host, I'll simply switch over. No hassle and no vendor lock in.
Longhorn volumes automatically replicate to multiple nodes (configurable) and automatically move to the nodes whose pods need it.
A postgres database running on a single node will experience some downtime during (re)deployments or when moving across nodes, but it should be pretty quick depending on the size of the database. For a true HA database, CockroachDB is supposed to be compatible with postgres, but I haven't had a chance to play with it.
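If anyone wants to kick the tires on that compatibility claim: CockroachDB speaks the Postgres wire protocol, so a stock Postgres driver connects to it. A minimal sketch with psycopg2 (host, user and database names are placeholders):

```python
import psycopg2  # ordinary PostgreSQL driver; no CockroachDB-specific client

# CockroachDB's SQL port defaults to 26257 and accepts Postgres connections.
conn = psycopg2.connect(
    "postgresql://myuser@my-cockroach-host:26257/mydb?sslmode=require"
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])  # prints a CockroachDB version string
conn.close()
```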
That's indeed the pain point. Distributing a stateless app is relatively easy. Distributing the shared file system and database over a remote, higher latency, cross-cloud setup is hard.
The OVH Eco line (Kimsufi, So You Start...) is incredibly good and affordable. I have a few beefy servers, which I use to create my own VPS service on top of Proxmox. I'm not a sysadmin and it's still simple to set up and maintain.
Eco is clever, because they reuse good hardware to assemble new servers instead of throwing it out as garbage...
Have been a very happy user with several servers for quite some time.
As someone who's already using the dedicated server for a ton of things, I have been really grateful. But now, I have a new question, are they going to do this to their dedicated servers as well?
When someone runs a dedicated server these days, does this mean a one-off linux install? Or is this more likely to be a docker install so that it's portable?
It's an actual entire machine given to you. I remember there were a few options to choose from, from Ubuntu and Debian to Red Hat, but all of them would also have preconfigured system users and some level of administration done by the provider.
But other than that, it's an actual bare metal machine and I installed Ubuntu on it and threw in a giant heap of services that have been running on it for more than a year now.
If you could rewind the clock, would you have started setting it up any differently, like in a container?
I am just curious what your options would be now if you wanted to migrate. Would you just copy your bash history to a local text file for reference, and then repeat the steps on a new server?
No, I wouldn't have started differently and I like the performance and the dedicated hardware I get for the money I spend. I have a custom backup solution that will upload daily backups of all my data to remote drives and I should be able to restore the setup on another machine without much problem.
Generally even in containerized deployments, you run one container per service/process. You wouldn't run everything you’d run on one box in one container.
I definitely recommend using docker compose or similar even in a one node deployment versus just installing and running things on the host linux system like it’s still 1998. Having a single directory to back up and a single file defining all of the services that can easily be redeployed is very convenient.
What is the performance impact? Going one page into Google results, I found this paper. Is there a better reference?
> At light workload levels, the native host performs better than Docker. However, as the workload increases, both Docker and the native host show similar performance, with the difference getting smaller
Replying here as your other question is at max thread depth:
A non virtualized Linux install isn't more locked in than a docker install, as for a bare metal server you are choosing your own OS. I have done the docker thing on a bare metal server, but that's because I wanted to run multiple services on it and isolate them operationally.
> A non virtualized Linux install isn't more locked in than a docker install
Again, sorry for my ignorance here, but if not virtualized, how does one move hosting providers otherwise? My experience is limited to either running all the bash commands in an install readme, or installing a docker image.
So there must be something in-between, to recreate a linux install elsewhere?
> Replying here as your other question is at max thread depth:
btw, you can click on the time of the post, and reply there when there is no reply link in the main thread.
Using dedicated servers doesn't mean you're not using virtualization - it just means you're the one managing it. You control the hypervisor and the vms running on top of it.
Because of that, you're actually less tied to a specific hosting provider since you're not reliant on their APIs to set up and manage your infrastructure.
Even if you're not using virtualization there are still plenty of ways to migrate your servers.
One of the most common approaches (which was the thing before docker took over) is managing servers with an IaC approach using tools like chef, puppet, ansible, saltstack etc.
With IaC you define your entire infrastructure in configuration files and deploy those configs to your host.
It's a bit like docker swarm but for managing physical and/or virtual servers instead of containers.
Another popular option, often paired with IaC, is to create your own pre-configured *nix images tailored to your needs.
For example, you might have specific images set up for your load balancers, db servers, file hosts, or other roles in your stack.
I've worked at a company where we handled migrations using dd.
Technically that's also an option.
Wouldn't recommend it tho.
If the server hoster supports it (Hetzner apparently does), you can enable KVM and install a previously prepared image.
If the server hoster & hardware supports it, you can login remotely to the server management interface (like HP iLO) and install an image this way.
If you don't have above options or simply don't want to do it this way, you can also bootstrap via SSH. But instead of manually typing in shell commands, you will automate it in some way with custom scripts and/or tools like Ansible.
You'd buy a computer, plug an install USB drive in, and install ubuntu.
Then you'd connect to it via SSH, configure it, maybe install docker and set up your docker containers, etc.
A dedicated server is very similar.
The server is sitting in a datacenter at hetzner, and you usually install an OS with a button in the management UI, sure.
But everything afterwards is the same. You just connect via SSH, install docker or k8s and your services, maybe an nginx, etc.
You also have an option to request KVM access. That allows you to control the server as if you had connected a keyboard and monitor to it. You can even enter the BIOS to diagnose issues, if you'd like.
Personally I've got an install script that automates everything and sets up kubernetes and automated encrypted backups. Then I just deploy everything else with k8s.
There's no lock-in possible. It's a bare-metal Linux machine; you do whatever you want with it. You can replace it with the PC under your desk if you want.
If you want to run k3s, k8s or docker, you can, but personally I find those too complicated. NixOS is much easier to deal with, and achieves the same result.
Yeah, it's a one-off install. In my case, I ran Proxmox[1] for a while with VMs and LXCs, with some of my VMs and unprivileged LXCs running Docker (Compose) too, because it made the installation of said software easier. It's great, but I'm now switching from Proxmox to Debian with Incus[2]. Just for fun, mostly.
CSPs aren't your cellular provider. You don't get better pricing by telling them you'd switch, because both the AE and you (assuming you ever did the math) know it's not a viable option.
Trying to be multicloud by choice, unless you have a very unique use case, which you probably don't, is simply admitting you are incapable of calculating the cost of being multicloud. This would get you horrible pricing, as you just showed your hand.
It does feel like a case of the Costco hot dog going up to $2, followed by "grrrr. That's it! I'm..... going to keep buying it because it is still damn cheap!"
I remember reading something about the Costco hot dog story; quite funny IMO. Here's what I just found, from 2018:
"I came to (Jim Sinegal) once and I said, ‘Jim, we can’t sell this hot dog for a buck fifty. We are losing our rear ends.’ And he said, ‘If you raise the effing hot dog, I will kill you. Figure it out.’ That’s all I really needed. By the way, if you raised (the price) to $1.75, it would not be that big of a deal. People would still buy (it). But it’s the mindset that when you think of Costco, you think of the $1.50 hot dog (and soda)." [1].
Turns out Costco has a new CEO this year, and again the hot dog topic came to light apparently, lol. This article is from 2024:
"'To clear up some recent media speculation, I also want to confirm the $1.50 hot dog price is safe,' Millerchip said." [2].
Think about it. Imagine they sell 100 hot dogs per hour. A $0.25 difference means $25/hr in a store doing many orders of magnitude more revenue each hour.
It’s nothing. The way they play it up in the media gets a lot of attention and builds goodwill, but it’s entirely meaningless to their bottom line.
It’s amazing that people eat these stories up, though. I’ve heard so many people repeating this story as if it’s some amazing secret.
When the price does increase, you'll know that there's a new CEO who's lost all connection to reality (in the same way that always happens when you put a person in front of an abstraction without obvious leaks).
It's not a loss leader. They still make a profit on the combo because the ingredients are cheap and it takes almost no labor to prepare on a per-dog basis.
And $1.50 is cheap now but was relatively expensive when the combo launched in 1985 (for comparison, a Big Mac combo with fries and a drink was $2.59, and a KFC combo was around $3).
Is it an option for you to just move the servers to one of Hetzner's European locations? I guess a lot of high-bandwidth applications don't require low latency.
Can vouch for HiVelocity, under the previous owners anyway; not sure what's going on over there now. My company's launch was bandwidth- and compute-intensive and they handled it well.
Being cloud-agnostic is highly valuable. It also lets you proactively decide where your needs are served best, independent of service plan changes like this.