If you use aws-sdk in a Lambda runtime, you don't need to provide credentials to the SDK, because it gets them automatically from the execution role of that Lambda function.
I'm curious how this works under the hood. Does the SDK read the credentials from some environment variables? How does it actually get the credentials from the Lambda runtime?
Does the SDK read the credentials from some env variables?
Yes. They are taken from the runtime environment variables, which include:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
which come from:
The access keys obtained from the function's execution role.
In parallel, also check whether you are using these keywords as key names in your own environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_REGION
These are reserved key names (see https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-runtime), so use other names instead.
In my case this was the issue.
Inside AWS Lambda you can also add your own environment variables, which your function/application can access.
As I understand it, the boto3 module has to be configured (to specify aws_access_key_id and aws_secret_access_key) before I can use it to access any AWS service.
According to the documentation, the three ways of configuring it are:
1. A Config object that's created and passed as the config parameter when creating a client
2. Environment variables
3. The ~/.aws/config file
However, in the examples I have read, there is no need to configure anything when writing directly on AWS Lambda. Moreover, there are no environment variables, and I could not find the config file. How is boto3 configured on AWS Lambda?
there are no environment variables
Yes, there are. They are listed here. Each function has access to many environment variables, including:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN – The access keys obtained from the function's execution role.
So boto3 takes its credentials from these environment variables, and the variables are populated from the execution role that your function assumes.
When you create an AWS Lambda function, you select an IAM Role that the function will use.
Your code within the function will be automatically supplied the credentials associated with that IAM Role. There is no need to provide any credentials. (Think of it as being the same way that software running on an Amazon EC2 instance receives credentials from an IAM Role.)
I usually run my AWS CLI commands after setting a profile via the AWS_PROFILE environment variable, backed by the ~/.aws/credentials file. This works.
What I'm currently trying to do is set up access via environment variables. To do so, I'm setting those variables in my .bash_profile file: I literally copied the aws_access_key_id and aws_secret_access_key entries from the credentials file and put them in my .bash_profile, under the names AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
The environment variables are being correctly set, and yet, when I try to access AWS resources (in this case, running an s3 ls command over a bucket, so the region doesn't matter), I get the message
An error occurred (InvalidAccessKeyId) when calling the ListObjectsV2 operation: The AWS Access Key Id you provided does not exist in our records
which is very weird to me, since the keys are exactly the same. To confirm this, I switched back to my credentials profile with the AWS_PROFILE environment variable, and then the command worked normally.
I suspected that, somehow, I was setting the wrong environment variables, or something like that. Then I read this AWS guide and ran aws configure list, which, in the first case (with environment variables only), returned
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key ****************AAAA env
secret_key ****************AAAA env
region us-east-1 env ['AWS_REGION', 'AWS_DEFAULT_REGION']
For the second case (with the profile set), it returned
Name Value Type Location
---- ----- ---- --------
profile dev-staging manual --profile
access_key ****************AAAA shared-credentials-file
secret_key ****************AAAA shared-credentials-file
region us-east-1 env ['AWS_REGION', 'AWS_DEFAULT_REGION']
In other words, the environment variables are being correctly set, the AWS CLI acknowledges them, their values are the same as when they are set via the credentials file, and, yet, for some reason, it doesn't work that way.
I thought it could be due to the aws_session_token, which I also tried to set as an environment variable, to no avail.
I need to access AWS resources this way to simulate the environment in which my code will run, and I don't see why this would not work the way I'm intending.
Any ideas on how to solve it are appreciated.
You need to edit your ~/.aws/config file when you want to use credentials from environment variables instead of the credentials file.
With AWS access keys in the credentials file, your profile is probably set up like this (or there is no source_profile setting for any profile):
[default]
source_profile = default
However, when you would like to use the credentials set in your environment variables or .bash_profile, change/add this setting in every profile in your config file:
[default]
credential_source = Environment
With this change, it should work with your environment variables as well.
If you have multiple profiles in your ~/.aws/config file, just replace/add source_profile = <profile-name> with credential_source = Environment in each one.
In case someone stumbles on this, a possible culprit might be the AWS_SESSION_TOKEN and AWS_SECURITY_TOKEN environment variables.
If you were using different AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables before, then after the first AWS CLI command (run directly or indirectly) authenticates, those two token variables get set. If you then overwrite AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with new values, the older token variables remain as they are; the AWS CLI has no explicit check for whether the access/secret keys were updated, so it keeps using the older tokens, which means the older keys are used internally until the tokens expire.
aws configure list will continue to show the new access keys, but internally the older access keys are being used because of the cached tokens.
So if you want to keep using environment variables in such scenarios, unset the two token variables after setting the new access/secret keys:
unset AWS_SESSION_TOKEN
unset AWS_SECURITY_TOKEN
This behavior is one of the reasons people prefer to set up separate profiles (via aws configure or by editing the ~/.aws/* files) and to specify them explicitly with --profile in commands, instead of using environment variables.
Per the AWS CLI configuration precedence order, the ~/.aws/config file is at the top of the list of places the AWS CLI picks up credentials from, so it overrides the token environment variables, which is why it works in your case.
Using sam local start-api, it doesn't appear to correctly pick up my exported $AWS_PROFILE via ~/.aws/credentials.
Nor does it seem to take on the role defined in the template.yml.
I have two questions. First, how do I confirm my PHP project has assumed a role? It's not clear from reading https://aws.amazon.com/sdk-for-php/
Secondly, is it even possible for local development to assume my $AWS_PROFILE? Or am I supposed to hard-code the AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY in my app?
Using the boto3 library, I uploaded and downloaded a file from AWS S3 successfully.
But after a few hours, the same code suddenly fails with InvalidAccessKeyId.
What I have done:
Set ~/.aws/credentials
Set environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
I tried the following solutions, but the error still happens:
adding quotes on config values
Am I missing anything? Thanks for your help.
You do not need to configure both .aws/credentials AND environment variables.
From Credentials — Boto 3 documentation:
The order in which Boto3 searches for credentials is:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
The fact that your credentials stopped working after a period of time suggests that they were temporary credentials created via the AWS Security Token Service, with an expiry time.
If you have the credentials in ~/.aws/credentials, there is no need to set the environment variables AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY.
Environment variables are valid only for a session.
If you are using boto3, you can specify the credentials while creating the client itself, as sketched below.
The best way to configure AWS credentials is to install the AWS Command-Line Interface (CLI) and run aws configure from the bash console:
~/.aws/credentials format
[default]
aws_access_key_id = ***********
aws_secret_access_key = ************
I found this article about the same issue.
Amazon suggests generating a new key, and I did.
Then it worked, but we don't know the root cause.
I suggest doing the same; it can save a lot of time when you hit this problem.
Django-Storages provides an S3 file storage backend for Django. It lists
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as required settings. If I am using an AWS Instance Profile to provide S3 access instead of a key pair, how do I configure Django-Storages?
You simply omit these parameters from your settings.
The Django-Storages documentation now explains this:
If AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not set, boto3 internally looks up IAM credentials.
The way this works under the hood is that if you do not provide them, Django-Storages passes None to boto3, which uses the machine's privileges instead of a key pair. If the machine has an associated Instance Profile, that is what gets used. (See the boto3 docs for more on boto3's credential hierarchy.)
Thanks to @ChrisShenton for pointing out that the Django-Storages docs have been updated. The Django-Storages docs previously listed these configuration parameters as required, which was incorrect.
The docs now explain this:
If AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not set, boto3 internally looks up IAM credentials.