Pass AWS credentials (IAM role credentials) to code running in Docker container - amazon-web-services

When running code on an EC2 instance, the SDK I use to access AWS resources automagically talks to a link-local web server at 169.254.169.254 and gets that instance's AWS credentials (access_key, secret) that are needed to talk to other AWS services.
There are also other options, like setting the credentials in environment variables or passing them as command-line args.
What is the best practice here? I would really prefer to let the container access 169.254.169.254 (by routing the requests), or even better, run a proxy container that mimics the behavior of the real server at 169.254.169.254.
Is there already a solution out there?

The EC2 metadata service will usually be available from within Docker (unless you use a more custom networking setup - see this answer on a similar question).
If your Docker network setup prevents the metadata service from being reached, you might set the credentials with the ENV directive in your Dockerfile or pass them directly during docker run, but keep in mind that credentials from IAM roles are automatically rotated by AWS, so any static copy will eventually expire.
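As a quick check (a minimal sketch, assuming you are on an EC2 host with an instance profile attached and use an image that ships curl, such as amazonlinux:2), you can query the metadata endpoint from inside a container:
# lists the instance-profile role name if the metadata service is reachable;
# appending the role name to the URL returns the temporary credentials themselves
docker run --rm amazonlinux:2 \
  curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
If this prints nothing, check your network mode and, with IMDSv2 enforced, the instance's metadata hop limit (the default of 1 blocks containers on a bridge network).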

Amazon does have mechanisms for allowing containers to access IAM roles via the SDK, either by routing/forwarding requests through the ECS agent container or through the host. There is way too much to copy and paste here, but note that using --net host is the LEAST recommended option, because without additional filters it gives your container full access to anything its host has permission to do.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
declare -a ENVVARS
declare AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
get_aws_creds_local () {
# Use this to get secrets on a non AWS host assuming you've set credentials via some mechanism in the past, and then don't pass in a profile to gitlab-runner because it doesn't see the ~/.aws/credentials file where it would look up profiles
awsProfile=${AWS_PROFILE:-default}
AWS_ACCESS_KEY_ID=$(aws --profile $awsProfile configure get aws_access_key_id)
AWS_SECRET_ACCESS_KEY=$(aws --profile $awsProfile configure get aws_secret_access_key)
AWS_SESSION_TOKEN=$(aws --profile $awsProfile configure get aws_session_token)
}
get_aws_creds_iam () {
TEMP_ROLE=$(aws sts assume-role --role-arn "arn:aws:iam::123456789012:role/example-role" --role-session-name AWSCLI-Session)
AWS_ACCESS_KEY_ID=$(echo $TEMP_ROLE | jq -r .Credentials.AccessKeyId)
AWS_SECRET_ACCESS_KEY=$(echo $TEMP_ROLE | jq -r .Credentials.SecretAccessKey)
AWS_SESSION_TOKEN=$(echo $TEMP_ROLE | jq -r .Credentials.SessionToken)
}
# call whichever of the two matches where this script is running
get_aws_creds_local    # on a non-AWS host with an ~/.aws/credentials file
# get_aws_creds_iam    # on an AWS host that is allowed to assume the role
ENVVARS=("AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" "AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" "AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN")
# passing creds into GitLab runner
gitlab-runner exec docker stepName $(printf " --env %s" "${ENVVARS[@]}")
# using creds with a docker container
docker run -it --rm $(printf " --env %s" "${ENVVARS[@]}") amazon/aws-cli sts get-caller-identity

Related

How do you specify AWS credentials when running AWS CLI from a Dockerfile in an AWS SAM pipeline?

I have an app using:
SAM
AWS S3
AWS Lambda based on Docker
AWS SAM pipeline
GitHub Actions
In the Dockerfile I have:
RUN aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz
Resulting in the error message:
Step 6/8 : RUN aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz
---> Running in 786873b916db
fatal error: Unable to locate credentials
Error: InferenceFunction failed to build: The command '/bin/sh -c aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz' returned a non-zero code: 1
I need to find a way to store the credential in a secured manner. Is it possible with GitHub secrets or something?
Thanks
My solution may be a bit longer, but I feel it solves your problem, and:
It does not expose any secrets
It does not require any manual work
It is easy to change your AWS keys later if required.
Steps:
You can add the environment variables in GitHub Actions (since you already mentioned GitHub Actions) as secrets.
In your GitHub CI/CD flow, when you build the Dockerfile, you can first create an AWS credentials file:
- name: Configure AWS credentials
  run: |
    echo "
    [default]
    aws_access_key_id = $ACCESS_KEY
    aws_secret_access_key = $SECRET_ACCESS_KEY
    " > credentials
  env:
    ACCESS_KEY: ${{ secrets.AWS_ACCESS_KEY_ID }}
    SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
In your Dockerfile, you can add instructions to COPY this credentials file and store it under ~/.aws:
COPY credentials credentials
RUN mkdir ~/.aws
RUN mv credentials ~/.aws/credentials
Changing your credentials later just requires changing your GitHub Actions secrets.
Docker by default does not have access to the .aws folder on the host machine. You could either pass the AWS credentials as environment variables to the Docker image:
ENV AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
ENV AWS_SECRET_ACCESS_KEY=...
Keep in mind, hardcoding AWS credentials in a Dockerfile is bad practice. To avoid this, you can pass the environment variables at runtime using docker run -e MYVAR1 or docker run --env MYVAR2=foo arguments. Another solution would be to use an .env file for the environment variables.
A more involved solution would be to map the ~/.aws folder from the host machine as a volume into the container.
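For instance, a minimal sketch of both run-time approaches (the bucket name is a placeholder), using the amazon/aws-cli image:
# pass credentials from the host environment at run time (nothing baked into the image)
docker run --rm -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
  amazon/aws-cli s3 ls s3://my-bucket
# or mount the host's ~/.aws folder read-only into the container instead
docker run --rm -v "$HOME/.aws:/root/.aws:ro" \
  amazon/aws-cli s3 ls s3://my-bucket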

Dockerfile copy files from amazon s3 or another source that needs credentials

I am trying to build a Docker image and I need to copy some files from S3 to the image.
Inside the Dockerfile I am using:
Dockerfile
FROM library/ubuntu:16.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
# Copy files from S3 inside docker
RUN aws s3 cp s3://filepath_on_s3 /tmp/
However, aws requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
I know I can probably pass them using ARG. But, is it a bad idea to pass them to the image at build time?
How can I achieve this without storing the secret keys in the image?
In my opinion, IAM Roles are the best way to delegate S3 permissions to Docker containers.
Create the role from IAM -> Roles -> Create Role -> choose the service that will use this role, select EC2 -> Next -> select the S3 policies, and the role should be created.
Attach the role to a running/stopped instance from Actions -> Instance Settings -> Attach/Replace IAM Role.
This worked successfully in Dockerfile:
RUN aws s3 cp s3://bucketname/favicons /var/www/html/favicons --recursive
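Roughly the same console steps can also be scripted with the CLI - a hedged sketch with hypothetical role, profile, and instance names, assuming the role itself already exists with an EC2 trust policy:
# attach an S3 policy to the role, wrap it in an instance profile, and attach it to the instance
aws iam attach-role-policy --role-name s3-read-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam create-instance-profile --instance-profile-name s3-read-profile
aws iam add-role-to-instance-profile --instance-profile-name s3-read-profile --role-name s3-read-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=s3-read-profile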
I wanted to build upon @Ankita Dhandha's answer.
In the case of Docker you are probably looking to use ECS.
IAM Roles are absolutely the way to go.
When running locally, use a locally tailored Dockerfile and mount your AWS CLI ~/.aws directory to the root user's ~/.aws directory in the container (this allows it to use your, or a custom IAM user's, CLI credentials to mock the ECS behavior for local testing).
# local system
# install the AWS CLI v2 on your machine if you don't have it yet
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
# run your container (any image, e.g. one FROM ubuntu:latest) with ~/.aws mounted
docker run --mount type=bind,source="$HOME/.aws",target=/root/.aws <your-image>
Role Types
EC2 Instance Roles define the global actions any instance can perform. An example would be having access to S3 to download ecs.config to /etc/ecs/ecs.config during your custom user-data.sh setup.
Use the ECS Task Definition to define a Task Role and a Task Execution Role.
Task Roles are used for a running container. An example would be a live web app that is moving files in and out of S3.
Task Execution Roles are for deploying the task. An example would be downloading the ECR image and deploying it to ECS, downloading an environment file from S3 and exporting it to the Docker container.
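The sketch below (hypothetical ARNs, family name, and container-definitions file) shows where those two roles are declared on a task definition:
# the Task Role and Task Execution Role are both set on the task definition
aws ecs register-task-definition \
  --family my-web-app \
  --task-role-arn arn:aws:iam::123456789012:role/my-task-role \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --container-definitions file://container-definitions.json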
General Role Propagation
The C# (.NET) SDK, for example, documents an ordered list of locations it will check to obtain credentials. Not every SDK behaves exactly like this, but many do, so you have to research it for your situation.
reference: https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/creds-assign.html
Plain text credentials fed into either the target system or environment variables.
CLI AWS credentials and a profile set in the AWS_PROFILE environment variable.
Task Execution Role used to deploy the docker task.
The running task will use the Task Role.
When the running task has no permissions for the current action it will attempt to elevate into the EC2 instance role.
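A quick way to see which of these credential sources a container actually picked up (a one-line sketch, assuming only that the AWS CLI is available inside the container):
# prints the account and the user/assumed-role ARN of whatever credentials were resolved
aws sts get-caller-identity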
Blocking EC2 instance role access
Because the EC2 instance role commonly needs access for custom system setup, such as configuring ECS, it's often desirable to block your tasks from accessing this role. This is done by blocking the tasks' access to the EC2 metadata endpoints, which are well-known link-local endpoints in any AWS VPC.
reference: https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-ec2-metadata/
AWS VPC Network Mode
# ecs.config
ECS_AWSVPC_BLOCK_IMDS=true
Bind Network Mode
# ec2-userdata.sh
# install dependencies
yum install -y aws-cli iptables-services
# setup ECS dependencies
aws s3 cp s3://my-bucket/ecs.config /etc/ecs/ecs.config
# setup IPTABLES
iptables --insert FORWARD 1 -i docker+ --destination 169.254.169.254/32 --jump DROP
iptables --append INPUT -i docker+ --destination 127.0.0.1/32 -p tcp --dport 51679 -j ACCEPT
service iptables save
Many people pass in the details through the args, which I see as being fine and the way I would personally do it. I think you can over-engineer certain processes, and this is one of them.
Example docker with args
docker run -e AWS_ACCESS_KEY_ID=123 -e AWS_SECRET_ACCESS_KEY=1234
That said, I can see why some companies want to hide this away and fetch it from a private API or something. This is why AWS created IAM roles - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html.
With a role, the credentials are retrieved from the instance metadata service at a private address that only the instance itself can reach, meaning you would never have to store your credentials in your image at all.
Personally I think it's overkill for what you are trying to do: if someone hacks your image they can echo the credentials out and still get access to those details. Passing them in as args is safe as long as you protect yourself as you should anyway.
You should configure your credentials in the ~/.aws/credentials file:
~$ cat .aws/credentials
[default]
aws_access_key_id = AAAAAAAAAAAAAAAAAAAAAAAAAAAAa
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBB

UnrecognizedClientException error when authenticating on aws-cli

When I pull a clean Alpine Linux Docker image, install aws-cli on it and try to authenticate myself with aws ecr get-authorization-token --region eu-central-1 I keep getting the following error:
An error occurred (UnrecognizedClientException) when calling the
GetAuthorizationToken operation: The security token included in the
request is invalid.
I've already checked the timezone, which seems to be okay, and the command works properly on my local machine.
These are the commands I run to set up aws-cli:
apk add --update python python-dev py-pip
pip install awscli --upgrade
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Is there something obvious I'm missing?
You don't have permission to access those resources until you give the aws-cli valid credentials; for that you can use the steps below.
Log into your AWS account, click on your account name, select My Security Credentials, click on Access keys and download the credentials.
Open your PowerShell as administrator and follow the commands.
$ aws configure
AWS Access Key ID [****************E5TA]: xxxxxxxxxx
AWS Secret Access Key [****************7gNT]: xxxxxxxxxxxxxx
It was an access issue after all! Turns out that if you create a new IAM user with full admin access it can't by default access the ECR registry you created using a different account. Using the IAM credentials from that other account resolved the issue.
In my case, my ~/.aws/credentials file had an old aws_session_token that was not updated by the aws configure CLI command. Once I opened the file with vi ~/.aws/credentials and deleted the aws_session_token entry, I no longer encountered the UnrecognizedClientException. I'm guessing that the AWS CLI first gives priority to the aws_session_token over the aws access key id and aws secret access key when running AWS CLI commands, if aws_session_token is present in the ~/.aws/credentials file.
Create a new account with AmazonEC2ContainerRegistryFullAccess permission.
Add this account to the .credentials file like this:
[ecr-user]
aws_access_key_id = XXX
aws_secret_access_key = XXX
Then use the following command:
aws ecr get-login-password --profile ecr-user
What worked for me is:
in the first part of the pipe, add the param --profile <your-profile-name>,
and after that you need to provide that parameter in every ECR command.
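For example (placeholder profile name, account id, and region):
aws ecr get-login-password --profile your-profile-name --region eu-central-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-central-1.amazonaws.com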
My issue was caused by the fact that I had inactivated my access key in the AWS IAM Management Console earlier as part of an exercise I was doing. Once I reactivated it, the problem was resolved.
(Make sure you're in the right AWS region, too.)
I had the same error message; however, I was using session-based AWS access. The solution is to add all the keys given by AWS, including the session token:
aws_access_key_id="your-key-id"
aws_secret_access_key="your-secret-access-key"
aws_session_token="your-session-token"
Add them to ~/.aws/credentials for the profile you are using.
After a couple of hours, this is my conclusion:
If you want to use AWS_PROFILE, make sure that the rest of the AWS env vars are unset (not merely empty - they MUST be unset).
profile=$AWS_PROFILE
unset $(printenv |grep AWS_ | cut -f1 -d"=");
export AWS_PROFILE=${profile};
Then :
# with aws cli >= 1.x
$(aws ecr get-login --no-include-email --region ${aws_region})
# with aws cli >= 2.x
registry=${aws_account_id}.dkr.ecr.${aws_region}.amazonaws.com
aws ecr get-login-password --region ${aws_region} | docker login --username AWS --password-stdin ${registry}
Resolved the issue after following the steps below:
Go to AWS IAM Management Console
Generate credential in section "Access keys (access key ID and secret access key)"
Run aws configure and enter the same downloaded credentials (they are stored in C:\Users\{user}\.aws\credentials)
It wasn't working for me. Out of sheer desperation, I copied the lines starting with export and pasted them into the terminal and pressed enter.
Thereafter I wrote aws configure and filled in the details from https://MYCOMPANY.awsapps.com/start#/ >> Account >> Clicked "Command line or programmatic access".
Default region name: eu-north-1
Default output format: text
And then the login succeeded. Don't ask me why.
open the file ~/.aws/credentials (or c:\Users\{user}\.aws\credentials on Windows)
It might look something like the following:
[default]
aws_access_key_id = XXXXX
aws_secret_access_key = XXXXX
aws_session_token = XXXXX
Update the aws_access_key_id and aws_secret_access_key with new values and remove the aws_session_token. You can also update aws_access_key_id and aws_secret_access_key via the aws configure command, but this doesn't remove the session token.
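A sketch of doing that from the command line (assumes the default profile and the example keys used elsewhere on this page; the sed line is GNU sed - on macOS use sed -i ''):
aws configure set aws_access_key_id AKIAIOSFODNN7EXAMPLE
aws configure set aws_secret_access_key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# aws configure set does not delete keys, so strip the stale session token by editing the file
sed -i '/aws_session_token/d' ~/.aws/credentials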
Try running echo $varname to see if the environment variables are set correctly:
echo $AWS_ACCESS_KEY_ID
echo $AWS_SECRET_ACCESS_KEY
echo $AWS_DEFAULT_REGION
If they are incorrectly set, run unset varname:
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_DEFAULT_REGION
In my case, the region I wanted to use was not enabled. Addressed by enabling it at Account > AWS Regions -> enable (and wait patiently for some minutes).
An update: adding --profile is what solved this for me.

'aws configure' in docker container will not use environment variables or config files

So I have a Docker container running Jenkins and an ECR registry on AWS. I would like to have Jenkins push containers back to the ECR registry.
To do this, I would like to be able to automate the aws configure and get-login steps on container startup. I figured that I would be able to
export AWS_ACCESS_KEY_ID=*
export AWS_SECRET_ACCESS_KEY=*
export AWS_DEFAULT_REGION=us-east-1
export AWS_DEFAULT_OUTPUT=json
Which I expected to cause aws configure to complete automatically, but that did not work. I then tried creating configs as per the AWS docs and repeating the process, which also did not work. I then tried using aws configure set also with no luck.
I'm going bonkers here, what am I doing wrong?
There is no real need to issue aws configure; as long as you populate the env vars
export AWS_ACCESS_KEY_ID=aaaa
export AWS_SECRET_ACCESS_KEY=bbbb
... also export zone and region
then issue
aws ecr get-login --region ${AWS_REGION}
you will achieve the same desired aws login status ... as far as troubleshooting goes, I suggest you remote-login into your running container instance using
docker exec -ti CONTAINER_ID_HERE bash
then manually issue the above aws-related commands interactively to confirm they run OK before putting them into your Dockerfile
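A rough startup sketch along those lines with a current CLI (the AWS_ACCOUNT_ID variable is hypothetical; it assumes the region and account id are also passed in with docker run -e):
#!/bin/sh
# log in to ECR at container startup using the env vars passed in via docker run -e
aws ecr get-login-password --region "${AWS_DEFAULT_REGION}" \
  | docker login --username AWS --password-stdin "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"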

How to use multiple AWS accounts from the command line?

I've got two different apps that I am hosting (well the second one is about to go up) on Amazon EC2.
How can I work with both accounts at the command line (Mac OS X) but keep the EC2 keys & certificates separate? Do I need to change my environment variables before each ec2-* command?
Would using an alias that sets the environment in-line work? Something like: alias ec2-describe-instances1='export EC2_PRIVATE_KEY=/path; ec2-describe-instances'
You can work with two accounts by creating two profiles on the aws command line.
It will prompt you for your AWS Access Key ID, AWS Secret Access Key and desired region, so have them ready.
Examples:
$ aws configure --profile account1
$ aws configure --profile account2
You can then switch between the accounts by passing the profile on the command.
$ aws dynamodb list-tables --profile account1
$ aws s3 ls --profile account2
Note:
If you name a profile default, it becomes the default profile, i.e. the one used when there is no --profile param in the command.
More on default profile
If you spend more time using account1, you can make it the default by setting the AWS_DEFAULT_PROFILE environment variable. When that environment variable is set, you do not need to specify the profile on each command.
Linux, OS X Example:
$ export AWS_DEFAULT_PROFILE=account1
$ aws dynamodb list-tables
Windows Example:
$ set AWS_DEFAULT_PROFILE=account1
$ aws s3 ls
How to set up multiple AWS accounts "manually"?
1) Get an access key
AWS Console > Identity and Access Management (IAM) > Your Security Credentials > Access Keys
2) Set up the credentials file and its content
~/.aws/credentials
[default]
aws_access_key_id={{aws_access_key_id}}
aws_secret_access_key={{aws_secret_access_key}}
[{{profile_name}}]
aws_access_key_id={{aws_access_key_id}}
aws_secret_access_key={{aws_secret_access_key}}
3) Set up the config file and its content
~/.aws/config
[default]
region={{region}}
output={{output:"json||text"}}
[profile {{profile_name}}]
region={{region}}
output={{output:"json||text"}}
4) Run commands with the profile param
Install the AWS CLI and use it, for example for the EC2 service:
aws ec2 describe-instances                               # default profile
aws ec2 describe-instances --profile {{profile_name}}    # [{{profile_name}}] profile
Ref
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
IMHO, the easiest way is to edit .aws/credentials and .aws/config files manually.
It's easy and it works for Linux, Mac and Windows. Just read this for more detail (1 minute read).
.aws/credentials file:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
.aws/config file:
[default]
region=us-west-2
output=json
[profile user1] <-- 'profile' in front of 'profile_name' (not for default)!!
region=us-east-1
output=text
You should be able to use the following command-options in lieu of the EC2_PRIVATE_KEY (and even EC2_CERT) environment variables:
-K <private key>
-C <certificate>
You can put these inside aliases, e.g.
alias ec2-describe-instances1='ec2-describe-instances -K /path/to/key.pem'
Create or edit this file:
vim ~/.aws/credentials
List as many key pairs as you like:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Set an environment variable to select the pair of keys you want to use:
export AWS_PROFILE=user1
Do what you like:
aws s3api list-buckets # any aws cli command now using user1 pair of keys
You can also do it command by command by including --profile user1 with each command:
aws s3api list-buckets --profile user1
# any aws cli command now using user1 pair of keys
More details: Named profiles for the AWS CLI
The new aws tools now support multiple profiles.
If you configure access with the tools, it automatically creates a default in ~/.aws/config.
You can then add additional profiles - more details at: Getting started with the AWS CLI
I created a simple tool, aaws, to switch between AWS accounts.
It works by setting the AWS_DEFAULT_PROFILE in your shell. Just make sure you have some entries in your ~/.aws/credentials file and it will easily switch between multiple accounts.
/tmp
$ aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
/tmp
$ aaws luk3
[luk3] 🔐 /tmp
$ aws s3 ls
2013-11-05 21:40:04 luk3thomas.com
I wrote a toolkit to switch default AWS profile.
The mechanism is physically moving the profile key to the default section in config and credentials files.
The better solution today should be one of the following ways:
Use aws command option --profile.
Use environment variable AWS_PROFILE.
I don't remember why I didn't use the --profile solution; maybe I wasn't aware of its existence at the time.
However, the toolkit can still be useful for other things. I'll add a soft-switch flag using the AWS_PROFILE approach in the future.
$ xsh list aws/cfg
[functions] aws/cfg/move
[functions] aws/cfg/set
[functions] aws/cfg/activate
[functions] aws/cfg/get
[functions] aws/cfg/delete
[functions] aws/cfg/list
[functions] aws/cfg/copy
Repo: https://github.com/xsh-lib/aws
Install:
curl -s https://raw.githubusercontent.com/alexzhangs/xsh/master/boot | bash && . ~/.xshrc
xsh load xsh-lib/aws
Usage:
xsh aws/cfg/list
xsh aws/cfg/activate <profilename>
You can write a shell script that sets the corresponding environment variables for each account based on user input; see the sketch below. That way you don't need to create any aliases and, furthermore, tools like the ELB and Auto Scaling command-line tools will work across multiple accounts as well.
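A minimal sketch of such a script (hypothetical profile names, the example keys used elsewhere on this page; source it rather than execute it so the exports affect your current shell):
#!/bin/bash
# switch-aws-account.sh - usage: . ./switch-aws-account.sh account1
case "$1" in
  account1)
    export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
    export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    ;;
  account2)
    export AWS_ACCESS_KEY_ID=AKIAI44QH8DHBEXAMPLE
    export AWS_SECRET_ACCESS_KEY=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
    ;;
  *)
    echo "usage: . ./switch-aws-account.sh {account1|account2}"
    ;;
esac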
To use an IAM role, you have to make an API call to STS:AssumeRole, which will return a temporary access key ID, secret key, and security token that can then be used to sign future API calls. Formerly, to achieve secure cross-account, role-based access from the AWS Command Line Interface (CLI), an explicit call to STS:AssumeRole was required, and your long-term credentials were used. The resulting temporary credentials were captured and stored in your profile, and that profile was used for subsequent AWS API calls. This process had to be repeated when the temporary credentials expired (after 1 hour, by default).
More details: How to Use a Single IAM User to Easily Access All Your Accounts by Using the AWS CLI
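Today the CLI can do that assume-role dance for you through a profile; a sketch of ~/.aws/config (the role ARN is a placeholder):
[profile cross-account]
role_arn = arn:aws:iam::123456789012:role/example-role
source_profile = default
Commands run with --profile cross-account (or with AWS_PROFILE=cross-account) then call STS:AssumeRole automatically and cache the temporary credentials for you.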