
Well, let's say I have a TokenFactory running somewhere (my servers, AppEngine, whatever) and my app is called Shminstegram - when my users log in and get a token that's good for, say, 1 hour, I probably don't want them uploading 10,000 photos in that time period. That's probably a bot.

Now I could manage this myself with a background task that crunches S3 logs at its own pace and, when it notices abuse, reports it to my TokenFactory, so the next time that user asks for a fresh token they get denied... but it would be great if S3 was a tad smarter about such things on its own. :)
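
For what it's worth, the TokenFactory side of that could be pretty small. Here's a minimal sketch, assuming a hypothetical in-memory abuse_flags store that the log-crunching task writes to (a real setup would use a shared database):

    import time
    import secrets

    # Hypothetical in-memory store; a real TokenFactory would share this
    # with the log-crunching background task via a database.
    abuse_flags = set()
    TOKEN_TTL_SECONDS = 3600  # tokens are good for one hour

    def report_abuse(user_id):
        # Called by the background task when the S3 logs show abuse.
        abuse_flags.add(user_id)

    def issue_token(user_id):
        # Hand out a fresh one-hour token, unless the user is flagged.
        if user_id in abuse_flags:
            return None  # denied until someone (or something) unflags them
        return {
            "token": secrets.token_urlsafe(32),
            "user": user_id,
            "expires": time.time() + TOKEN_TTL_SECONDS,
        }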

Does that sort of make sense? In other words, a token should have finer-grained permissions than just time. Maybe... "Accept this token for the next 24 hours, or 200 POSTs, whichever comes first, and no more than 20 POSTs in the last hour." That sort of thing.
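
If S3 (or the TokenFactory) did enforce something like that, the per-token bookkeeping might look like this sketch - a lifetime counter plus a one-hour sliding window (none of this exists in S3; it's just the shape of the policy above):

    import time
    from collections import deque

    class TokenPolicy:
        # "24 hours, or 200 POSTs, whichever comes first,
        #  and no more than 20 POSTs in the last hour."
        def __init__(self):
            self.expires = time.time() + 24 * 3600
            self.total = 0         # lifetime POST count
            self.recent = deque()  # POST timestamps within the last hour

        def allow_post(self):
            now = time.time()
            if now > self.expires or self.total >= 200:
                return False
            # drop timestamps that have slid out of the one-hour window
            while self.recent and now - self.recent[0] > 3600:
                self.recent.popleft()
            if len(self.recent) >= 20:
                return False
            self.recent.append(now)
            self.total += 1
            return True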




In this context bandwidth throttling is also a consideration - perhaps a per-token or per-bucket max object size?
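
Max object size aside, the throttling half of that is usually done with a token bucket. A rough sketch of what a per-token bandwidth cap might look like if S3 enforced one (the names and rates here are made up):

    import time

    class BandwidthThrottle:
        # Token bucket: sustain rate_bps bytes/sec, burst up to burst_bytes.
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps
            self.capacity = burst_bytes
            self.level = burst_bytes   # start with a full bucket
            self.last = time.monotonic()

        def try_consume(self, nbytes):
            now = time.monotonic()
            # refill the bucket for the time elapsed since the last check
            self.level = min(self.capacity,
                             self.level + (now - self.last) * self.rate)
            self.last = now
            if nbytes > self.level:
                return False           # over the cap: reject or delay
            self.level -= nbytes
            return True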


You can limit objects by size with the existing ACLs already. But you can't specify how many objects overall a user may upload in some time frame, or their accumulated size.

So you can have a token for: upload as many objects as you want of size less than 1 MB in the next 1 hour.

You can't have a token for: upload no more than 1000 objects of size less than 1 MB in the next 1 hour, and cut them off after 200 MB total.
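
The "can" case maps onto S3's POST policy support. For example, with boto3 (bucket and key prefix are made-up names for the Shminstegram scenario):

    import boto3

    s3 = boto3.client("s3")

    # One-hour policy: uploads under photos/, each object at most 1 MB.
    presigned = s3.generate_presigned_post(
        Bucket="shminstegram-uploads",
        Key="photos/${filename}",
        Conditions=[
            ["content-length-range", 0, 1024 * 1024],
        ],
        ExpiresIn=3600,
    )

    # presigned["url"] and presigned["fields"] go into the upload form.

Nothing in the policy language says "at most 1000 objects" or "stop after 200 MB total", though - that counting is exactly the missing piece.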



