
Rant time: this isn’t directed at you. I am just replying to your comment because you said something that triggered me.

Also the “you” below is the generic you - not you personally.

Disclaimer: I work at AWS in Professional Services, all rants are my own.

Now with that out of the way, I hate the fact that there are way too many code samples floating around on the internet that have you explicitly put your access key and secret key in the initialization code for the AWS SDK.

    s3 = boto3.resource('s3', aws_access_key_id=ccxx, aws_secret_access_key=cccc)
Even if you put the access keys in a separate config file in your repo, this is wrong, unnecessary, and can easily lead to checking credentials in.

When all they have to do is

    s3 = boto3.resource('s3')

All of the SDKs will automatically find the credentials that "aws configure" writes to the ~/.aws/credentials and ~/.aws/config files in your home directory.

But really, you shouldn't even do that; you should use temporary access keys.

When you do get ready to run on AWS, the SDK will automatically get the credentials from the attached role.
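To make that concrete, here's a rough boto3 sketch of what "temporary" means (the role ARN and session name below are made up for illustration). An STS AssumeRole call hands back a short-lived key pair plus a session token that expire on their own; when you run on AWS with an attached role, the SDK does this exchange for you behind the scenes and you never touch the keys at all.

    import boto3

    # Hypothetical role ARN, purely for illustration.
    ROLE_ARN = "arn:aws:iam::123456789012:role/my-dev-role"

    # Exchange whatever identity you already have for short-lived credentials.
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=ROLE_ARN,
        RoleSessionName="local-testing",
        DurationSeconds=3600,  # the keys stop working after an hour
    )["Credentials"]

    # Temporary keys always come with a session token and an expiry.
    print(creds["Expiration"])

    session = boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    s3 = session.resource("s3")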

Even when I’m integrating AWS with Azure DevOps, Microsoft provides a separate secure store that you can attach to your pipeline for your AWS credentials.



Hindsight is 20/20, but this is definitely one of those places where flat out giving the credentials should not even be an option (or it should be made artificially tedious and/or explicitly clear that it's a bad idea, e.g. by naming the param _this_is_a_bad_idea_use_credentials_file_instead_secret_key or so). Of course there are always edge cases in the vein of running notebooks in containers (probably not an optimal example, but some edge case like that) where you might need the escape hatch of embedding the credentials straight into the code.

But yeah, if the wrong thing is easier or more straightforward than the right way, people tend to follow it when they have a deadline to meet. To end on a positive note, at least cli v2 makes bootstrapping the credentials to a workstation a tad easier!


I remember a Rust AWS library that worked like you describe (an old version of rusoto, I think; it's deprecated now).

I wasn't familiar with how AWS credentials are usually managed so I was very confused why I had to make my own struct and implement the `CredentialSource` trait on it. It felt like I was missing something... because I was. You're not supposed to enter the credentials directly, you're supposed to use the built-in EnvCredentialSource or whatever.


> at least cli v2 makes bootstrapping the credentials to a workstation a tad easier!

I know I should know this seeing that I work in ProServe at AWS, but what do you mean?

I’m going to say there is never a use case for embedding credentials just so I can invoke Cunningham’s Law on purpose.

But when I need to test something in Docker locally I do

    docker run -e AWS_ACCESS_KEY_ID=<your_access_key> -e AWS_SECRET_ACCESS_KEY=<your_secret_key> -e AWS_DEFAULT_REGION=<aws_region> <docker_image_name>
And since you should be using temporary access keys anyway, ones you can copy and paste from your standard Control Tower interface, it's easy to pass them to the container as environment variables.
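The code inside the container then stays credential-free; something like this (just a sketch, bucket name made up) picks the keys up from those environment variables through the normal provider chain:

    import boto3

    # No keys in code: boto3 reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
    # and AWS_DEFAULT_REGION from the environment that docker run injected.
    s3 = boto3.resource("s3")

    # Sanity check which identity those variables resolve to.
    print(boto3.client("sts").get_caller_identity()["Arn"])

    # Example-only bucket name.
    for obj in s3.Bucket("my-example-bucket").objects.limit(5):
        print(obj.key)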


I meant the "aws configure import" command they added: point it at the credentials CSV and the CLI handles adding the entry to the credentials file.

Sometimes you might need to use stuff that for some reason fails to pick up the env vars; I think I've bumped into things that read S3 via self-rolled HTTP calls. Dunno if it was to avoid having boto as a dependency, but those things are usually engineered so simply that there's no logic for figuring out the other, smarter ways to handle the keys. Here are the parameter slots, enter keys to continue.


> I hate the fact that there are way too many code samples floating around on the internet that have you explicitly put your access key and secret key in the initialization code for the AWS SDK.

See, I thought that was a big strength of a lot of the AWS documentation over Google Cloud.

An AWS example for, say, S3 would show you where to insert the secrets, and it would work.

The Google Cloud Storage examples, though? It didn't seem to have occurred to them that someone reading a "how to create a bucket" example might not have their credentials set up.

And when the example didn't work - well, it was like the auth documentation was written by a completely different team, and they'd never considered a developer might simply want to access their own account. Instead the documentation was a mess of complicated-ass use cases like your users granting your application access to their google account; sign-in-with-google for your mobile app; and so on.

Google's documentation is better than it once was - but I've always wondered how much of the dominance of AWS arose from the fact their example code actually worked.


> See, I thought that was a big strength of a lot of the AWS documentation over Google Cloud.

Just to clarify, I’ve never seen a code sample published by AWS that has you explicitly specifying your credentials. (Now I await 15 replies showing me samples hosted on Amazon)


For Java they used to demonstrate putting a .properties file in among your source code [1], although admittedly not literally hardcoding a string. The PHP examples suggested putting your credentials into a config PHP include [2] (although they did also suggest putting them in your home directory).

But I can't overstate how important it was that the AWS getting started guides said "Go to this URL, copy these values into this file" while Google's examples and getting started guides... didn't.

[1] https://web.archive.org/web/20120521060506/http://aws.amazon... [2] https://github.com/amazonwebservices/aws-sdk-for-php/blob/ma...


Wow, that’s some old code :).

But here is the newest documentation for PHP

https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/s...


I deal with this by having a directory in my development tree, named "doNotCheckThisIntoSourceControl", and I add a wildcard of it to my global .gitignore.

I'll put things like server secrets and whatnot there.

Of course, I need to make sure the local directory is backed up, on this end, since it is not stored in git.

Works a treat.


That’s really not a great idea…


...and why?

I am serious. If there is a better way, I'd use it.

Remember that I don't do online/server-based stuff. Most of my projects are for full compilation/linking, and rendering into host-executable, binary apps. There's a bunch of stuff in my development process that never needs to see a server.


A super simple way is to have a script in your home directory, far away from your repos, that sets environment variables that you read in your configuration.
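The same pattern works in any language; here's a Python sketch purely to show the shape of it (the script and variable names are made up):

    import os

    # Set by a script outside the repo, e.g. ~/set_dev_secrets.sh doing
    #   export MYAPP_API_TOKEN="..."        (hypothetical names)
    # so nothing secret ever lives in the source tree.
    token = os.environ.get("MYAPP_API_TOKEN")
    if token is None:
        raise RuntimeError("MYAPP_API_TOKEN not set; source your secrets script first")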


That makes sense. I could do something like that.

[UPDATE] I ended up doing something even simpler. I have issues with running scripts during the build process, unless really necessary (I have done it, and will again).

Since this is Xcode, I simply stored the file in a directory (still with the globally ignored name) far outside my dev tree and dragged it into the IDE.


That’s basically the idea - get your credentials out of your dev tree. I’m not dogmatic about how it’s done.


Try AWS Secrets Manager.


100% agree. We always keep all tokens (not just AWS secret keys) in a separate file that is never checked into the repo, and pass them into the CloudFormation template at deployment. (The error in this case was a new repo hastily pushed, and .gitignore wasn't properly updated to exclude the file with the keys.) But we've since switched to using AWS Secrets Manager, which is a much better solution.


Yeah that’s not good either. Your keys never need to be in a local file. Just put them in Parameter Store/Secrets Manager and you can reference those values in CF.
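For example (the secret name and JSON key here are made up), the code then only holds a reference to the secret, and the value itself never sits in a local file or a template:

    import json

    import boto3

    # Hypothetical secret name; the actual value lives only in Secrets Manager.
    SECRET_ID = "prod/myapp/third-party-token"

    client = boto3.client("secretsmanager")
    secret = json.loads(client.get_secret_value(SecretId=SECRET_ID)["SecretString"])

    token = secret["api_token"]  # example key inside the secret's JSON payload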


Yeah, that's what we do now


Yeah I just learned the role-based access approach last year. No keys ever hit the box so there's nothing for attackers to exfiltrate.



