boto3 AWS client ignoring arguments? - amazon-web-services

My library needs to connect to arbitrary AWS-IoT accounts, so I'm passing the required credentials when I instantiate the AWS client:
client = boto3.client('iot',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    ...)
According to the boto3 docs (excerpt below), credentials passed in this way should override all other credentials. However, this call generates the following log message from Boto3:
"Found credentials in shared credentials file: ~/.aws/credentials"
I really don't want boto3 picking up whatever credentials a user may happen to have configured on their system - I want it to use only the ones I'm passing to boto3.client(). Any ideas how to ensure this?
boto3 docs excerpt:
The order in which Boto3 searches for credentials is:
1. Passing credentials as parameters in the boto3.client() method
2. Passing credentials as parameters when creating a Session object
3. Environment variables
4. Shared credential file (~/.aws/credentials)
5. AWS config file (~/.aws/config)
6. Assume Role provider
7. Boto2 config file (/etc/boto.cfg and ~/.boto)
8. Instance metadata service on an Amazon EC2 instance that has an IAM role configured.

From my testing, when passing the aws_access_key_id & aws_secret_access_key parameters to boto3.client(...), it accepts them even if they are incorrect and does not load any other credentials from file.
When an API call is then made, it returns a NoCredentialsError exception.
However, when the above parameters are not passed, i.e. boto3.client('iot'), AND a default profile exists on the system, I can see "Found credentials in shared credentials file: ~/.aws/credentials".
Meaning, as long as you explicitly provide credential parameters, you should be protected from calls executed on the wrong account.

Related

How boto3 is configured on AWS lambda

As I understand it, the boto3 module has to be configured (to specify aws_access_key_id and
aws_secret_access_key) before I can use it to access any AWS service.
According to the documentation, the three ways of configuration are:
1. A Config object that's created and passed as the config parameter when creating a client
2. Environment variables
3. The ~/.aws/config file
However, from the examples I have read, there is no need to configure anything when writing code directly on AWS Lambda. Moreover, there are no environment variables and I could not find the config file. How is boto3 configured on AWS Lambda?
there are no environment variables
Yes, there are. They are listed here. Each function has access to many environment variables, including:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN – The access keys obtained from the function's execution role.
So boto3 takes its credentials from these environment variables, which are populated from the execution role that your function assumes.
When you create an AWS Lambda function, you select an IAM Role that the function will use.
Your code within the function will be automatically supplied the credentials associated with that IAM Role. There is no need to provide any credentials. (Think of it as being the same way that software running on an Amazon EC2 instance receives credentials from an IAM Role.)
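A sketch of what that looks like from inside a handler. These variables are injected by the Lambda runtime; outside Lambda they will simply be unset:

```python
import os

def current_role_credentials():
    """Read the credentials the Lambda runtime injects from the
    function's execution role. You never set these yourself."""
    return {
        "access_key": os.environ.get("AWS_ACCESS_KEY_ID"),
        "secret_key": os.environ.get("AWS_SECRET_ACCESS_KEY"),
        "session_token": os.environ.get("AWS_SESSION_TOKEN"),
    }

# Inside a handler, boto3 clients pick these up with no explicit
# configuration at all:
#     s3 = boto3.client('s3')
print(current_role_credentials())
```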

How does AWS SDK know the credentials without specifying it?

I am curious how the AWS SDK can access services such as S3 locally without explicitly being given credentials. For example, this Python code is only provided with a bucket name and key name but can still fetch the file from S3 on my local machine:
import boto3

def s3():
    bucket = "my-bucket"
    file_name = "folder1/sample.json"
    s3 = boto3.client('s3')
    obj = s3.get_object(Bucket=bucket, Key=file_name)
    file_content = obj["Body"].read().decode('utf-8')
    return file_content
Where did the AWS SDK get the credentials? Does it use the profile configured via the aws configure CLI command? And if you provide an explicit access key and secret key, which takes priority?
All of the Amazon SDKs follow a similar pattern. For boto3, the credential search order is documented here, but for completeness it is:
1. Passing credentials as parameters in the boto3.client() method
2. Passing credentials as parameters when creating a Session object
3. Environment variables
4. Shared credential file (~/.aws/credentials)
5. AWS config file (~/.aws/config)
6. Assume Role provider
7. Boto2 config file (/etc/boto.cfg and ~/.boto)
8. Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
It depends on how your environment is configured but it sounds like you have a ~/.aws/credentials file.

Configuring AWS Session with boto3 in production mode

I am running something that uses AWS services on a production server. The most commonly suggested ways of providing credentials to a session are:
from boto3 import Session
session = Session(profile_name='my_aws_profile')
OR
from boto3 import Session
session = Session(
    aws_access_key_id="AWS_ACCESS_KEY",
    aws_secret_access_key="AWS_SECRET_ACCESS_KEY"
)
What are my options so that I can
share the code without sharing my credentials, and
specify the path of my aws credentials file instead of assuming that it has to be ~/.aws/credentials?
The documentation lists all the ways Boto can find AWS credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#configuring-credentials
In particular, the best practice would be to put credentials in environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Boto will pick those up automatically.
Save your credentials as environment variables.
If you want your credentials in another location, put them in a credentials file at your desired path and set the AWS_SHARED_CREDENTIALS_FILE environment variable to that path (AWS_CONFIG_FILE does the same for the config file). If you have credentials stored in both the credentials file and the config file, the ones in the credentials file take precedence.

Boto s3 permission issue

I've come across a very weird permission issue. I'm trying to upload a file to S3; here's my function:
import boto3

def UploadFile(FileName, S3FileName):
    session = boto3.session.Session()
    s3 = session.resource('s3')
    s3.meta.client.upload_file(FileName, "MyBucketName", S3FileName)
I did configure aws-cli on the server. This function works fine when I log into the server and launch a Python interpreter, but it fails when called from my Django REST API with:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
No idea why the same function works when called from the interpreter and fails when called from Django. Both are in the same virtual environment. Any suggestions?
According to the boto3 docs, boto3 is looking for credentials in the following places:
1. Passing credentials as parameters in the boto3.client() method
2. Passing credentials as parameters when creating a Session object
3. Environment variables
4. Shared credential file (~/.aws/credentials)
5. AWS config file (~/.aws/config)
6. Assume Role provider
7. Boto2 config file (/etc/boto.cfg and ~/.boto)
8. Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
Note that many of these places are paths with "~" in them. "~" refers to the current user's home directory. Most likely, your REST API is running under a different system user than you are using to test your code.
The proper solution is to use IAM roles, as this allows your server to have S3 access without you needing to give it IAM credentials. However, if that doesn't work for your setup, you should put the IAM credentials in the /etc/boto.cfg file as that is user agnostic.
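A quick stdlib-only diagnostic for the different-user theory: print what "~" expands to in each context. Run it once from your interactive shell and once from inside the Django process; if the two outputs differ, each is resolving ~/.aws/credentials to a different file.

```python
import os

# Which home directory does this process see, and does a shared
# credentials file exist there?
creds_file = os.path.expanduser("~/.aws/credentials")
print("home:", os.path.expanduser("~"))
print("credentials file:", creds_file)
print("exists:", os.path.exists(creds_file))
```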

The AWS Access Key Id you provided does not exist in our records, but credentials were already set

Through boto3 library, I uploaded and downloaded file from AWS s3 successfully.
But after a few hours, it suddenly shows InvalidAccessKeyId for the same code.
What I have done:
set ~/.aws/credentials
Set environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
I tried the following solutions, but the error still happens.
adding quotes on config values
Do I miss anything? Thanks for your help.
You do not need to configure both .aws/credentials AND environment variables.
From Credentials — Boto 3 documentation:
The order in which Boto3 searches for credentials is:
1. Passing credentials as parameters in the boto3.client() method
2. Passing credentials as parameters when creating a Session object
3. Environment variables
4. Shared credential file (~/.aws/credentials)
5. AWS config file (~/.aws/config)
6. Assume Role provider
7. Boto2 config file (/etc/boto.cfg and ~/.boto)
8. Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
The fact that your credentials stopped working after a period of time suggests that they were temporary credentials created via the AWS Security Token Service, with an expiry time.
If you have the credentials in ~/.aws/credentials, there is no need to set the environment variables AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY.
Environment variables are valid only for the current shell session.
If you are using boto3, you can also specify the credentials while creating the client itself.
The best way to configure AWS credentials is to install the AWS Command-Line Interface (CLI) and run aws configure from the bash console:
~/.aws/credentials format
[default]
aws_access_key_id = ***********
aws_secret_access_key = ************
I found this article for the same issue.
Amazon suggests generating a new key, which I did.
After that it worked, although we never found the root cause.
I suggest doing the same; it can save a lot of time when you hit this problem.