But if the data is stored, the clock keeps ticking until you delete it. If I have a TB of data stored, and I hit my $1K limit (or whatever limit) on April 15, the only way I don't get hit with a >$1K bill for the month is if AWS deletes everything I have stored on the service. (Or at least holds it hostage until I pay up for the overage.)
You can easily calculate what the bill will be at the end of the month if no new data is stored or deleted between now and then. So if you need a hard cutoff for storage, use that.
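That projection is simple arithmetic. A minimal sketch, where `projected_bill` and the flat $0.023/GB-month rate are my own illustrative choices (storage is billed per GB-month, prorated over the month; check your region's actual pricing):

```python
import calendar
from datetime import date

def projected_bill(accrued_usd, stored_gb, today, rate_per_gb_month=0.023):
    """Project the month-end storage bill, assuming nothing new is
    stored and nothing is deleted from `today` onward."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    remaining_fraction = (days_in_month - today.day) / days_in_month
    # Charges already accrued this month, plus the current footprint
    # billed (prorated) for the rest of the month.
    return accrued_usd + stored_gb * rate_per_gb_month * remaining_fraction

# E.g. 1 TB (1000 GB) held through the rest of April, $10 already accrued:
projected_bill(10.0, 1000, date(2024, 4, 15))
```

If that number exceeds your cap, the provider can refuse new writes now instead of surprising you on the invoice.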
There are enough workflows where I know I'm going to delete data later that allowing configuration would be valuable. (Maybe I can set a timed expiration at the moment of storage, instead of having to store first and separately delete later? That would keep end-of-month predictions accurate.) But it isn't difficult to set the hard cutoff.
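S3 already gets partway there with lifecycle rules, which can expire objects matched by a tag. A sketch of such a rule as a config fragment (the rule ID, tag key/value, and 30-day window are all hypothetical):

```python
# Hypothetical S3 lifecycle rule: any object uploaded with the tag
# expire=30d is deleted automatically 30 days after creation.
lifecycle_rule = {
    "ID": "expire-tagged-uploads",
    "Status": "Enabled",
    "Filter": {"Tag": {"Key": "expire", "Value": "30d"}},
    "Expiration": {"Days": 30},
}

# Applied once per bucket with boto3 (assumes configured credentials):
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-bucket",
#       LifecycleConfiguration={"Rules": [lifecycle_rule]})
# Then tag each upload that should expire:
#   s3.put_object(Bucket="my-bucket", Key="data.bin",
#                 Body=b"...", Tagging="expire=30d")
```

Since the expiration date is known at upload time, a billing-cap feature could fold it into the month-end projection.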
>> My pet project cannot involve the risk of possibly costing me thousands or more because I made a mistake or it got super popular over night. I'd rather my site just 503.
Intuitively, if you're capping out your S3 storage, the hard cutoff should look like "don't allow me to store any additional data".
If you're capping out retrieval, then "don't serve the data anymore".