Oftentimes the backup provider is the hosting provider, whom you have to trust. (This extends all the way from big clouds like AWS and GCE to small providers like Linode and DO.) Having an external backup can be unreasonably expensive due to ridiculous egress costs.
If your business can't afford external backups then you don't have a viable business in the first place. And of course egress costs have to be considered when choosing a hosting provider.
Not always an option. For instance, I use Linode’s backup service and it can only back up to the same data center (although it is said to live on a separate system).
You can, and should, back up your irreplaceable data elsewhere using a custom solution. Unless it's a service that doesn't let you export your data at all, it's always an option, even if it's inconvenient.
Coming from a Linode employee, I can confirm this is true. Linode's backups live in the same data center as the server, but the systems are separated so that they don't directly affect one another.
Do they have separate power supplies? Have steps been taken to ensure that fire can’t spread from one room to the next? What would happen if there was an explosion?
In all seriousness, these are good points. I'm not a data center expert by any means, but here's what I know: the data center hardware has failsafes present by design, but they aren't disaster-proof, since they're in the same building.
To answer your questions: Yes, the backup storage box is in a separate chassis from the host machine that the Linode lives on; they have separate power supplies. The DCs themselves also have some sort of fire suppression. I don't know what would happen if there was an explosion.
They could mean using regular data transfer (i.e., using something like rsync instead of the provider's backup service). Maybe egress costs between servers from the same provider are reduced or waived.
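For example, a minimal sketch of that approach in Python (the private IP, paths, and "backup" user are hypothetical; it assumes rsync and SSH key auth are already set up between the two servers):

    import subprocess

    # Hypothetical private IP of the backup server in the same data center;
    # traffic over the private interface doesn't count against the quota.
    BACKUP_HOST = "192.168.139.10"
    SOURCE_DIR = "/var/www/"     # data to back up (hypothetical path)
    DEST_DIR = "/backups/www/"   # destination on the backup box

    # Incremental copy over the private network via SSH.
    subprocess.run(
        ["rsync", "-az", "--delete",
         SOURCE_DIR, f"backup@{BACKUP_HOST}:{DEST_DIR}"],
        check=True,
    )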
From [1]:
> Traffic over the private network does not count against your monthly quota.
I wonder how private addresses are set up by Linode.
Each data center has an internal private network with a pool of private IPs available for assignment. If a private IP is assigned to a server, it then has access to the private network.
This becomes very difficult as your data grows. If you live in the AWS world, imagine periodic snapshotting of EBS, S3, RDS (and other data stores), EFS, etc. For most people, a different DC of the same cloud provider should be enough. If you have to put this into a different cloud provider, it is a big cost drain and difficult to manage, let alone if you want to keep your own physical backups.
AWS has tools around this (Data Lifecycle Manager) that you can easily leverage for simple site backups. Or you can roll your own; honestly, it is not that hard to take rolling snapshots.
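Roughly, a hedged sketch of rolling your own with boto3 (the region, volume ID, and 7-day retention are assumptions, not anything AWS prescribes):

    import boto3
    from datetime import datetime, timedelta, timezone

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption
    VOLUME_ID = "vol-0123456789abcdef0"                 # hypothetical EBS volume
    RETENTION = timedelta(days=7)                       # keep a week of snapshots

    # Take today's snapshot and tag it so it can be found later.
    ec2.create_snapshot(
        VolumeId=VOLUME_ID,
        Description="nightly backup",
        TagSpecifications=[{
            "ResourceType": "snapshot",
            "Tags": [{"Key": "managed-by", "Value": "rolling-backup"}],
        }],
    )

    # Prune snapshots of this volume older than the retention window.
    cutoff = datetime.now(timezone.utc) - RETENTION
    snaps = ec2.describe_snapshots(
        OwnerIds=["self"],
        Filters=[{"Name": "volume-id", "Values": [VOLUME_ID]}],
    )["Snapshots"]
    for snap in snaps:
        if snap["StartTime"] < cutoff:
            ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])

Run it from cron or a scheduled Lambda on whatever cadence you need.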
Obviously hosting providers do not make it easy to extract your data, because that's their vendor lock-in.
Also, always test your backups by restoring them to a non-production environment and verifying that customer-facing services still work.
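One way to exercise that on AWS, continuing the rolling-snapshot sketch above (the AZ and tag values are assumptions): restore the newest snapshot to a throwaway volume in a non-production AZ, attach and mount it, and run whatever service checks you care about before deleting it.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # Find the most recent snapshot produced by the rolling-backup job.
    snaps = ec2.describe_snapshots(
        OwnerIds=["self"],
        Filters=[{"Name": "tag:managed-by", "Values": ["rolling-backup"]}],
    )["Snapshots"]
    latest = max(snaps, key=lambda s: s["StartTime"])

    # Restore it into a throwaway, clearly tagged volume for testing.
    volume = ec2.create_volume(
        SnapshotId=latest["SnapshotId"],
        AvailabilityZone="us-east-1a",                  # hypothetical test AZ
        TagSpecifications=[{
            "ResourceType": "volume",
            "Tags": [{"Key": "purpose", "Value": "restore-test"}],
        }],
    )
    print("restore-test volume:", volume["VolumeId"])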
Gandi never explicitly said they had no backups of their own, just that they don't offer backups as a service. It's entirely possible that they did have backups but couldn't recover/restore them.