How is boto3 configured on AWS Lambda?

As I understand it, the boto3 module has to be configured (to specify aws_access_key_id and
aws_secret_access_key) before I can use it to access any AWS service.
According to the documentation, the three ways of configuring it are:
1. A Config object that's created and passed as the config parameter when creating a client
2. Environment variables
3. The ~/.aws/config file
However, in the examples I have read, there is no need to configure anything when writing code directly on AWS Lambda. Moreover, there are no environment variables, and I could not find the config file. How is boto3 configured on AWS Lambda?

there are no environment variables
Yes, there are. They are listed here. Each function has access to many environment variables, including:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN – The access keys obtained from the function's execution role.
So boto3 takes its credentials from these environment variables, which are populated from the execution role that your function assumes.
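A minimal sketch of how this looks inside a handler, assuming the execution role allows s3:ListAllMyBuckets (the handler name is arbitrary):

import os
import boto3

def handler(event, context):
    # No credentials are passed anywhere: boto3 reads AWS_ACCESS_KEY_ID,
    # AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN, which the Lambda runtime
    # injected from the execution role before this code ran.
    print(os.environ['AWS_ACCESS_KEY_ID'][:4] + '...')  # the variables do exist
    s3 = boto3.client('s3')
    return [b['Name'] for b in s3.list_buckets()['Buckets']]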

When you create an AWS Lambda function, you select an IAM Role that the function will use.
Your code within the function will automatically be supplied with the credentials associated with that IAM Role. There is no need to provide any credentials. (Think of it as the same way that software running on an Amazon EC2 instance receives credentials from an IAM Role.)

Related

Traefik + Let's Encrypt on AWS Lightsail

I'm currently using Traefik and Lego in order to have HTTPS connections for my Docker containers (as mentioned here).
The documentation mentions that I need to use the Lightsail provider to do the DNS challenge.
But I get this error:
AccessDeniedException: User: arn:aws:sts::<USER_ID>:assumed-role/AmazonLightsailInstanceRole/<AN_ID> is not authorized to perform: lightsail:CreateDomainEntry on resource: arn:aws:lightsail:us-east-1:<INSTANCE_ID>:*
and a similar error for DeleteDomainEntry, even though I have a lightsail:* on Resource: * permission on the IAM user used for configuration.
If I understand correctly, Lightsail is managed separately from the other AWS services, and thus we need to use STS to connect to it (tell me if I'm wrong). So my question is: how can I set the permissions for the temporary token so that it can perform CreateDomainEntry and DeleteDomainEntry?
Further information
- My instance's region is eu-west-3 (I tried changing the region in the Lego config; it doesn't work).
- The <USER_ID> seen in the error does not correspond to the ID found in the ARN of the domain. It corresponds to the first number in the supportCode returned by aws lightsail get-domains --region us-east-1 in the CLI.
- Lego and Traefik do not call AssumeRole directly and do not create the temporary token (I checked the source code).
- I'm using AWS_ACCESS_KEY_ID_FILE and AWS_SECRET_ACCESS_KEY_FILE in the Traefik environment configuration.
The error message shows that Lego made the request using the IAM role assigned to your Lightsail instance. I suspect your instance's role lacks the permissions to modify DNS settings for Lightsail.
You should create a new user in AWS IAM and enable programmatic access in order to obtain AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values.
Then, pass those values as environment variables to your containers running Lego. Lego will use those env vars to authenticate with the Lightsail APIs in us-east-1. [1]
My instance's region is eu-west-3 (I tried changing the region in Lego config, doesn't work)
Your Lego instance must call AWS APIs in us-east-1, see [2][3].
Lego and Traefik do not call the AssumeRole directly and do not create the temporary token
I guess Traefik/Lego assume the Lightsail instance role automatically via the EC2 instance metadata service; see [4]:
For applications, AWS CLI, and Tools for Windows PowerShell commands that run on the instance, you do not have to explicitly get the temporary security credentials—the AWS SDKs, AWS CLI, and Tools for Windows PowerShell automatically get the credentials from the EC2 instance metadata service and use them. To make a call outside of the instance using temporary security credentials (for example, to test IAM policies), you must provide the access key, secret key, and the session token.
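To make the quoted mechanism concrete, here is a rough sketch that fetches the role credentials from the instance metadata service by hand (IMDSv1 shown for brevity; instances that enforce IMDSv2 need an extra token request first):

import json
import urllib.request

BASE = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/'

# The first request returns the name of the role attached to the instance
role = urllib.request.urlopen(BASE, timeout=2).read().decode()

# The second request returns temporary AccessKeyId/SecretAccessKey/Token
creds = json.loads(urllib.request.urlopen(BASE + role, timeout=2).read())
print(creds['AccessKeyId'], creds['Expiration'])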
I'm using AWS_ACCESS_KEY_ID_FILE and AWS_SECRET_ACCESS_KEY_FILE in Traefik environment configuration.
I could not find those env vars in the Lego source code [1]. Make sure that Lego is actually using your configured AWS credentials. The error message posted above suggests it is not using them and is falling back to the instance profile instead.
[1] https://github.com/go-acme/lego/blob/master/providers/dns/lightsail/lightsail.go#L81
[2] https://docs.aws.amazon.com/cli/latest/reference/lightsail/create-domain-entry.html#examples
[3] https://github.com/go-acme/lego/blob/master/providers/dns/lightsail/lightsail.go#L69
[4] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials
There were too many confusing things. Martin Löper's answer and the answer on the GitHub issue I opened helped me clear things up.
Here is what was confusing:
Lego's Lightsail provider documentation lists the environment variables and then says that the variable names can be suffixed by _FILE to reference a file instead of a value. It turns out Lego's code never calls its getOrFile method on the credentials. Furthermore, the AWS SDK does not check variables with the _FILE suffix for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
The error message from AWS is a little misleading. I thought all that time that it was a permission problem, but in fact it was an authentication problem (a little different, in my opinion).
And here is how I solved it (slightly different from what was proposed):
I use the AWS_SHARED_CREDENTIALS_FILE environment variable (mentioned here) so that I can use Docker secrets, by pointing it at the /run/secrets/aws_shared_credentials file. This is more secure (more info here). The AWS SDK automatically detects this environment variable and initializes the new session correctly.
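For what it's worth, boto3 honors the same variable, so the mechanism can be verified from Python too (the secret path is the one from this answer):

import os
import boto3

# Point the SDK at a credentials file mounted as a Docker secret
os.environ['AWS_SHARED_CREDENTIALS_FILE'] = '/run/secrets/aws_shared_credentials'

# A fresh session picks the file up automatically; get_credentials()
# returns None if the file is missing or empty
session = boto3.Session()
creds = session.get_credentials()
print(creds.access_key[:4] + '...' if creds else 'no credentials found')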

How does aws-sdk in Lambda get the credentials automagically?

If you use aws-sdk in a Lambda runtime, you don't need to provide credentials to the SDK because it gets them automatically from the execution role of that Lambda function.
I'm curious how this works under the hood. Does the SDK read the credentials from some environment variables? How does it actually get the credentials from the Lambda runtime?
Does the SDK read the credentials from some env variables?
Yes. They are taken from the runtime environment variables, which include:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
which come from:
The access keys obtained from the function's execution role.
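You can watch this resolution happen by asking botocore which provider supplied the credentials; a small sketch (inside a Lambda runtime this prints 'env'):

import boto3

creds = boto3.Session().get_credentials()
# 'env' means the keys came from environment variables, which in Lambda
# are the execution-role credentials injected by the runtime
print(creds.method)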
Also check whether you are using these reserved keywords as key names in your own environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_REGION
https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-runtime
These are reserved key names, so use other names instead.
In my case this was the issue.
Inside AWS Lambda you can also add your own environment variables, which your function/application can access.

The AWS Access Key Id you provided does not exist in our records, but credentials were already set

Using the boto3 library, I uploaded and downloaded files from AWS S3 successfully.
But after a few hours, the same code suddenly fails with InvalidAccessKeyId.
What I have done:
set ~/.aws/credentials
set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
I tried the following solution, but the error still happens:
adding quotes on config values
Am I missing anything? Thanks for your help.
You do not need to configure both .aws/credentials AND environment variables.
From Credentials — Boto 3 documentation:
The order in which Boto3 searches for credentials is:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
The fact that your credentials stopped working after a period of time suggests that they were temporary credentials created via the AWS Security Token Service, with an expiry time.
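If you suspect temporary credentials, one quick check is that temporary access keys start with ASIA while long-term IAM user keys start with AKIA; you can also ask STS who the current keys belong to. A small sketch:

import boto3

sts = boto3.client('sts')
# Succeeds with any valid credentials and fails once temporary
# credentials have expired
print(sts.get_caller_identity()['Arn'])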
If you have the credentials in ~/.aws/credentials, there is no need to set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Environment variables are valid only for a session.
If you are using boto3, you can specify the credentials while creating the client itself.
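A minimal sketch of specifying credentials at client creation (the key values are placeholders):

import boto3

s3 = boto3.client(
    's3',
    aws_access_key_id='AKIA...',          # placeholder
    aws_secret_access_key='...',          # placeholder
    # aws_session_token='...',            # only needed for temporary credentials
)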
The best way to configure AWS credentials is to install the AWS Command-Line Interface (CLI) and run aws configure from the bash console:
~/.aws/credentials format
[default]
aws_access_key_id = ***********
aws_secret_access_key = ************
I found this article about the same issue.
Amazon suggests generating a new key, and I did.
Then it worked, but we don't know the root cause.
I suggest doing the same to save a lot of time when you have this problem.

How to manage multiple IAM users on a single EC2 instance?

For different AWS services, I need different IAM users to secure the access control. Sometimes I even need to use different IAM user credentials within a single project on an EC2 instance. What's the proper way to manage this, and how can I deploy/attach these IAM user credentials to a single EC2 instance?
While I fully agree with the accepted answer that using static credentials is one way of solving this problem, I would like to suggest some improvements over it (and over the proposed Secrets Manager).
As an architectural step forward that achieves full isolation of credentials, keeps them dynamic, and avoids storing them in a central place (the Secrets Manager proposed above), I would advise dockerizing the application and running it on AWS Elastic Container Service (ECS). This way you can assign a different IAM role to each ECS task.
Benefits over the Secrets Manager solution:
- The scenario of someone tampering with credentials in Secrets Manager is fully avoided, as the credentials are dynamic (temporary, and automatically assumed through the SDKs).
- Credentials are managed on the AWS side for you.
- Only the ECS service can assume this IAM role, meaning an actual person cannot steal the credentials, and a developer cannot connect to the production environment from their local machine with them.
AWS Official Documentation for Task Roles
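As a sketch of how this looks with boto3, a task definition can point each task at its own role (all names and ARNs below are made up):

import boto3

ecs = boto3.client('ecs')
ecs.register_task_definition(
    family='billing-service',  # hypothetical task family
    # Containers in this task automatically assume this role via the
    # container credentials endpoint; no static keys anywhere
    taskRoleArn='arn:aws:iam::123456789012:role/billing-task-role',
    containerDefinitions=[{
        'name': 'app',
        'image': '123456789012.dkr.ecr.us-east-1.amazonaws.com/billing:latest',
        'memory': 512,
    }],
)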
The normal way to provide credentials to applications running on an Amazon EC2 instance is to assign an IAM Role to the instance. Temporary credentials associated with the role will then be provided via the Instance Metadata service. The AWS SDKs will automatically use these credentials.
However, this only works for one set of credentials. If you wish to use more than one credential, you will need to provide the credentials in a credentials file.
The AWS credentials file can contain multiple profiles, e.g.:
[default]
aws_access_key_id = AKIAaaaaa
aws_secret_access_key = abcdefg
[user2]
aws_access_key_id = AKIAbbbb
aws_secret_access_key = xyzzzy
As a convenience, this can also be configured via the AWS CLI:
$ aws configure --profile user2
AWS Access Key ID [None]: AKIAbbbb
AWS Secret Access Key [None]: xyzzy
Default region name [None]: us-east-1
Default output format [None]: text
The profile to use can be set via an Environment Variable:
Linux: export AWS_PROFILE="user2"
Windows: set AWS_PROFILE=user2
Alternatively, when calling AWS services via an SDK, simply specify the Profile to use. Here is an example with Python from Credentials — Boto 3 documentation:
import boto3

session = boto3.Session(profile_name='user2')
# Any clients created from this session will use credentials
# from the [user2] section of ~/.aws/credentials.
dev_s3_client = session.client('s3')
There is an equivalent capability in the SDKs for other languages, too.

What credentials does Boto3 use when running in AWS CodeBuild?

So I've written a set of deployment scripts that run in CodeBuild and use Boto3 to deploy some dockerised apps to ECS. The problem I'm having is when I want to deploy to our separate production account.
If I'm running the CodeBuild project from the dev account but want to create resources in the production account, it's my understanding that I should set up a role in the target account, allow the CodeBuild role to assume it, and then call:
import boto3

sts_client = boto3.client('sts')
response = sts_client.assume_role(
    RoleArn=arn_of_a_role_I_set_up,
    RoleSessionName=some_name,
)
This returns an access key, secret key, and session token. This works and returns what I'd expect.
Then what I want to do is just assign those values to these environment variables:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
This is because, according to the documentation here: http://boto3.readthedocs.io/en/latest/guide/configuration.html, Boto3 should fall back to those variables if you don't explicitly set credentials in the client or session methods.
However, when I do this the resources still get created in the same dev account.
Also, if I call printenv in the first part of my buildspec.yml before my scripts attempt to set the environment variables, those AWS key/secret/token variables aren't present at all.
So when it's running in CodeBuild, where is Boto3 getting its credentials from?
Is the solution just going to be to pass in a key/secret/token to every boto3.client() call to be perfectly sure?
The credentials in the CodeBuild environment come from the service role associated with your CodeBuild project. Boto3 and botocore automatically use the container credentials provider ("ContainerProvider") to grab those credentials in the CodeBuild environment.
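One way to make the production-account credentials take effect is to build a dedicated session from the assume_role response instead of relying on environment variables. A minimal sketch (the role ARN and session name are placeholders):

import boto3

sts = boto3.client('sts')
resp = sts.assume_role(
    RoleArn='arn:aws:iam::PROD_ACCOUNT_ID:role/deploy-role',  # placeholder
    RoleSessionName='codebuild-deploy',
)
creds = resp['Credentials']

# Clients created from this session use the assumed role's temporary keys,
# regardless of what the CodeBuild container credentials say
prod_session = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
ecs = prod_session.client('ecs')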