
"Each request must be signed with your AWS credentials. .. Our web identify federation feature to authenticate the users of your application. By incorporating WIF into your application, you can use a public identity provider (Facebook, Google, or Login with Amazon) to initiate the creation of a set of temporary security credentials." http://media.amazonwebservices.com/blog/2013/iam_web_identit...

I rather disagree with getting your authorization tokens by the grace of Google and Facebook.

It seems simple enough to roll your own; it's actually a perfect use case for App Engine:

              (gets temp S3 keys)
    User <----------------------> Logs in to your site (App Engine)
     |
     v
    S3
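
Roughly, the "gets temp S3 keys" step could be a scoped-down STS GetFederationToken call on your backend; a sketch with boto3, where the bucket name, per-user prefix and one-hour lifetime are just assumptions for illustration:

    import json
    import boto3

    # Runs on your backend (App Engine or anything else) using your own IAM credentials.
    sts = boto3.client('sts')

    def temp_s3_keys_for(user_id):
        # Scope the temporary credentials down to this user's prefix in one bucket.
        policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:GetObject"],
                "Resource": "arn:aws:s3:::my-app-uploads/%s/*" % user_id,
            }],
        }
        resp = sts.get_federation_token(
            Name=user_id[:32],          # federated user names max out at 32 chars
            Policy=json.dumps(policy),
            DurationSeconds=3600,       # good for one hour
        )
        return resp['Credentials']      # hand these to the client for direct S3 access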

I just wish Amazon offered better rate limiting options and integration behind the scenes. The 'Each request' phrasing can be misleading: you don't need to sign each request; you can give the client-side app a token that will last for an hour or a week. (But it's on you to refresh it when it expires and to keep track of how it's being used so there's no abuse.)



What kind of rate limiting do you need? What are you trying to guard against?


Well, let's say I have a TokenFactory running somewhere (my servers, App Engine, whatever) and my app is called Shminstegram. When my users log in and get a token that's good for, say, 1 hour, I probably don't want them uploading 10,000 photos in that time period. That's probably a bot.

Now I could manage this myself with a background task that crunches S3 logs at its own pace, and when it notices abuse it reports it to my TokenFactory so the next time that user asks for a fresh token they get denied... but it would be great if S3 were a tad smarter about such things on its own. :)

Does that sort of make sense? In other words, a token should have finer-grained permissions than just time. Maybe... "Accept this token for the next 24 hours, or 200 POSTs, whichever comes first, and no more than 20 POSTs in the last hour." That sort of thing.
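
Lacking that, the deny-at-the-TokenFactory check might look something like this sketch; the usage store and threshold are placeholders that a real log-crunching task would fill in:

    import time

    # Placeholder usage store -- in practice it would be filled in by the
    # background task that crunches the S3 access logs.
    recent_uploads = {}         # user_id -> list of upload timestamps

    MAX_UPLOADS_PER_HOUR = 20   # made-up threshold

    def allow_fresh_token(user_id):
        # Deny a new token if this user already uploaded too much in the last hour.
        now = time.time()
        last_hour = [t for t in recent_uploads.get(user_id, []) if now - t < 3600]
        return len(last_hour) <= MAX_UPLOADS_PER_HOUR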


In this context, bandwidth throttling is also a consideration - perhaps a per-token or per-bucket max object size?


You can already limit objects by size with the existing ACLs. But you can't specify how many objects overall a user may upload in some time frame, or their accumulated size.

So you can have a token for: upload as many objects as you want of size less than 1MB in the next 1 hour.

You can't have a token for: upload no more than 1000 objects of size less than 1MB, in the next 1 hour, and cut them off after 200MB total.
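
The "size less than 1MB in the next 1 hour" half maps onto the content-length-range condition of a presigned POST policy; a boto3 sketch with placeholder bucket and key names, and note there's no analogous condition for object count or total bytes:

    import boto3

    s3 = boto3.client('s3')

    # Presigned POST policy: any single object under 1 MB, valid for one hour.
    post = s3.generate_presigned_post(
        Bucket='my-app-uploads',                 # placeholder bucket
        Key='uploads/photo.jpg',                 # placeholder key
        Conditions=[["content-length-range", 0, 1024 * 1024]],
        ExpiresIn=3600,
    )
    # post['url'] and post['fields'] go to the client, which POSTs the file directly.
    # There's no condition for "no more than 1000 objects" or "200MB total" --
    # that bookkeeping has to happen on your side.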



