I usually run my AWS CLI commands after selecting a profile via the AWS_PROFILE environment variable, with the credentials themselves stored in the ~/.aws/credentials file. This works.
What I'm currently trying to do is set up access via environment variables. To do so, I'm setting those variables in my .bash_profile file - I literally copied the aws_access_key_id and aws_secret_access_key values from the credentials file and exported them in .bash_profile under the names AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
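For reference, the entries in my .bash_profile look roughly like this (placeholder values):
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx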
The environment variables are being set correctly, and yet, when I try to access AWS resources (in this case I'm running an aws s3 ls command against a bucket, so the region doesn't matter), I get the message
An error occurred (InvalidAccessKeyId) when calling the ListObjectsV2 operation: The AWS Access Key Id you provided does not exist in our records
which is very weird to me, since the keys are exactly the same. To confirm this, I switch to my credential profile with the AWS_PROFILE environment variable, and then the command works normally.
I suspected I was somehow setting the wrong environment variables, or something like that. Then I read this AWS guide and ran aws configure list, which, in the first case (environment variables only), returned
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key ****************AAAA env
secret_key ****************AAAA env
region us-east-1 env ['AWS_REGION', 'AWS_DEFAULT_REGION']
For the second case (with the profile set), it returned
Name Value Type Location
---- ----- ---- --------
profile dev-staging manual --profile
access_key ****************AAAA shared-credentials-file
secret_key ****************AAAA shared-credentials-file
region us-east-1 env ['AWS_REGION', 'AWS_DEFAULT_REGION']
In other words, the environment variables are being set correctly, the AWS CLI acknowledges them, their values are identical to the ones in the credentials file, and yet, for some reason, it doesn't work that way.
I thought it could be due to the aws_session_token, which I also tried to set as an environment variable, to no avail.
I need to access AWS resources this way to simulate the environment in which my code will run, and I don't see why this would not work the way I'm intending.
Any ideas on how to solve it are appreciated.
You need to edit your ~/.aws/config file when you want the CLI to take its credentials from environment variables instead of the credentials file.
With AWS access keys in the credentials file, your profile is probably set up like this (or there is no source_profile setting for any profile at all):
[default]
source_profile = default
However, when you want to use the credentials set in your environment variables (or .bash_profile), change or add this setting for every profile in your config file:
[default]
credential_source = Environment
With this change, it should work with your environment variables as well.
If you have multiple profiles in your ~/.aws/config file, just replace source_profile = <profile-name> with credential_source = Environment (or add it) for each profile.
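For example, a named profile such as the dev-staging one from your output would then look like this in ~/.aws/config:
[profile dev-staging]
credential_source = Environment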
In case someone stumbles on this, a possible culprit might be the AWS_SESSION_TOKEN and AWS_SECURITY_TOKEN environment variables.
If you were using different AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables before, and an AWS CLI command was run (directly or indirectly), then these two token variables get set after the first authentication. If you later overwrite AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with new values, the old token variables remain as they are; the AWS CLI has no explicit check for whether the access/secret keys were updated, so it keeps using the old tokens, which means the old keys are used internally until those tokens expire.
aws configure list will continue to show the new access keys, but internally the CLI will be using the old access keys because of the cached tokens.
So if you want to keep using environment variables in such scenarios, you need to unset the two token variables; in your case, add an unset for both of them after setting the new access/secret keys in the environment:
unset AWS_SESSION_TOKEN
unset AWS_SECURITY_TOKEN
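So, after refreshing the keys, the full sequence would be something like (placeholder values):
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
unset AWS_SESSION_TOKEN
unset AWS_SECURITY_TOKEN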
This behavior is one of the reasons people prefer to use separate profiles (either via aws configure or by editing the ~/.aws/* files) and specify them explicitly with --profile in commands, instead of using environment variables.
Per the AWS CLI configuration precedence order, explicit profile selection on the command line sits above the environment variables in the order in which the AWS CLI picks up the credentials to use, so it overrides the stale token variables, which is why it works in your case.
I have seen related questions for this issue.
But my problem in particular is that I am assuming a role to retrieve temporary credentials. It works just fine; however, when running CDK Bootstrap, I receive: Need to perform AWS calls for account <account_id>, but no credentials have been configured.
I have tried using CDK Deploy as well and I receive: Unable to resolve AWS account to use. It must be either configured when you define your CDK Stack, or through the environment
How can I specify the AWS account environment in CDK when using temporary credentials or assuming a role?
You need to define the account you are attempting to bootstrap to.
For bootstrap you can pass it as part of the command: cdk bootstrap aws://ACCOUNT-NUMBER-1/REGION-1 aws://ACCOUNT-NUMBER-2/REGION-2 ... Do note that a given account/region only ever has to be bootstrapped ONCE in its lifetime, no matter how many CDK stacks are going to be deployed there.
AND your credentials need to be part of the default profile. If you are assuming credentials through some sort of enterprise script, check that it stores them as part of the default profile. If not, run at least aws configure in your bash terminal and get your temporary assumed credentials in there.
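With temporary assumed credentials, the [default] section of ~/.aws/credentials would look roughly like this (placeholder values):
[default]
aws_access_key_id = xxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxx
aws_session_token = xxxxxxxxxxx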
For cdk deploy you need to make sure that you have the env defined in your app.py:
env_EU = cdk.Environment(account="8373873873", region="eu-west-1")
env_USA = cdk.Environment(account="2383838383", region="us-west-2")
MyFirstStack(app, "first-stack-us", env=env_USA)
MyFirstStack(app, "first-stack-eu", env=env_EU)
I would however recommend against hardcoding your account numbers ;)
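If you want to avoid hardcoding, one common alternative is to read the account and region from the CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION environment variables that the CDK CLI populates from the credentials it resolves (a sketch, assuming CDK v2 imports):
import os
import aws_cdk as cdk

# the CDK CLI sets these from whatever credentials/profile it resolves
env = cdk.Environment(
    account=os.environ["CDK_DEFAULT_ACCOUNT"],
    region=os.environ["CDK_DEFAULT_REGION"],
)
MyFirstStack(app, "first-stack", env=env)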
The profile name in your credentials and config files should be the same, like:
In ~/.aws/credentials:
[cdk]
aws_access_key_id = xxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxx
In ~/.aws/config:
[cdk]
region = "us-east-1"
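With the same profile name in both files, you can then point the CDK CLI at it explicitly, for example:
cdk deploy --profile cdk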
As I understand it, the boto3 module has to be configured (to specify aws_access_key_id and aws_secret_access_key) before I can use it to access any AWS service.
As per the documentation, the three ways of configuration are:
1. A Config object that's created and passed as the config parameter when creating a client
2. Environment variables
3. The ~/.aws/config file
However, from the examples I have read, there is no need to configure anything when writing code directly on AWS Lambda. Moreover, there are no environment variables and I could not find the config file. How is boto3 configured on AWS Lambda?
there are no environment variables
Yes, there are. They are listed here. Each function has access to many env variables, including:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN – The access keys obtained from the function's execution role.
So boto3 takes its credentials from these env variables, and these variables are populated from the execution role that your function assumes.
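A minimal sketch of a handler that relies on this (the bucket name is just a placeholder), with no credentials passed anywhere:
import boto3

# boto3 picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN,
# which the Lambda runtime injects from the function's execution role
s3 = boto3.client("s3")

def lambda_handler(event, context):
    # works as long as the execution role allows listing this bucket
    response = s3.list_objects_v2(Bucket="my-example-bucket")
    return [obj["Key"] for obj in response.get("Contents", [])]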
When you create an AWS Lambda function, you select an IAM Role that the function will use.
Your code within the function will be automatically supplied the credentials associated with that IAM Role. There is no need to provide any credentials. (Think of it as being the same way that software running on an Amazon EC2 instance receives credentials from an IAM Role.)
If you use aws-sdk in a Lambda runtime, you don't need to provide the credentials to the SDK because it gets them automatically from the execution role of that Lambda function.
I'm curious how this works under the hood. Does the SDK read the credentials from some env variables? How does it actually get the credentials from the Lambda runtime?
Does the SDK read the credentials from some env variables?
Yes. They are taken from the runtime environment variables, which include:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
which come from:
The access keys obtained from the function's execution role.
In parallel, also check whether you are using these keywords as key names in your env:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_REGION
https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-runtime
These are reserved key names, so use other names instead.
In my case this was the issue.
Inside AWS Lambda you can also add your own environment variables, which your function/application can access.
I'm creating an AwsCredentialsProvider (API docs) instance from:
awscrt.auth.AwsCredentialsProvider.new_default_chain(client_bootstrap)
I get an error AWS_ERROR_MQTT_UNEXPECTED_HANGUP which I believe occurs because my AWS credentials are under a non default profile in ~/.aws/credentials (based on this git issue).
But I can't see any way to create an AwsCredentialsProvider with a specified profile.
For a custom credentials file path, set the environment variables AWS_CONFIG_FILE and AWS_CREDENTIAL_FILE.
For the default profile, set the environment variable AWS_PROFILE to the AWS profile name you would like to select as the default. To be usable at runtime, this profile name must be present in your AWS credentials file with a valid configuration.
If you use only one AWS region, you can also set the environment variable AWS_DEFAULT_REGION. At times it saves a few lines of code where you would otherwise need to specify an AWS region.
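Based on that, a sketch of selecting a non-default profile for the default chain (the profile name here is hypothetical, and AWS_PROFILE must be set before the provider is created):
import os
from awscrt import auth, io

os.environ["AWS_PROFILE"] = "my-nondefault-profile"  # hypothetical profile name from ~/.aws/credentials

# standard bootstrap setup, as in the awscrt samples
event_loop_group = io.EventLoopGroup(1)
host_resolver = io.DefaultHostResolver(event_loop_group)
client_bootstrap = io.ClientBootstrap(event_loop_group, host_resolver)

credentials_provider = auth.AwsCredentialsProvider.new_default_chain(client_bootstrap)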
Using the boto3 library, I uploaded and downloaded files from AWS S3 successfully.
But after a few hours, the same code suddenly shows InvalidAccessKeyId.
What I have done:
set ~/.aws/credentials
Set environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
I tried the following solutions, but the error still happens.
adding quotes on config values
Am I missing anything? Thanks for your help.
You do not need to configure both .aws/credentials AND environment variables.
From Credentials — Boto 3 documentation:
The order in which Boto3 searches for credentials is:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
The fact that your credentials stopped working after a period of time suggests that they were temporary credentials created via the AWS Security Token Service, with an expiry time.
If you have the credentials in ~/.aws/credentials there is no need to set environment variables AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY.
Environment variables are valid only for a session.
If you are using boto3, you can specify the credentials while creating the client itself.
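For example (placeholder values; in real code, load these from a secure source rather than hardcoding them):
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIAXXXXXXXXXXXXXXXX",      # placeholder
    aws_secret_access_key="xxxxxxxxxxxxxxxxxxxx",  # placeholder
    aws_session_token="xxxxxxxxxxxxxxxxxxxx",      # only needed for temporary credentials
)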
The best way to configure AWS credentials is to install the AWS Command-Line Interface (CLI) and run aws configure from the bash console:
~/.aws/credentials format
[default]
aws_access_key_id = ***********
aws_secret_access_key = ************
I found this article for the same issue.
Amazon suggests generating a new key, and I did.
Then it worked, but we don't know the root cause.
I suggest doing the same to save a lot of time if you run into the same problem.