IAM Role + Boto3 + Docker container - amazon-web-services

As far as I know, boto3 will try to load credentials from the instance metadata service.
If I run this code inside an EC2 instance I expect to have no problem. But when my code is dockerized, how will boto3 find the metadata service?

The Amazon ECS agent populates the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable, which can be used to get credentials. This variable is provided only to the process with PID 1; the script specified in the Dockerfile ENTRYPOINT gets PID 1.
There are several networking modes, and the details may differ between them. More information can be found in: How can I configure IAM task roles in Amazon ECS to avoid "Access Denied" errors?
For the awsvpc networking mode, if you ran printenv as PID 1 you would see something similar to this:
AWS_EXECUTION_ENV=AWS_ECS_FARGATE
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/0f891318-ab05-46fe-8fac-d5113a1c2ecd
HOSTNAME=ip-172-17-0-123.ap-south-1.compute.internal
AWS_DEFAULT_REGION=ap-south-1
AWS_REGION=ap-south-1
ECS_CONTAINER_METADATA_URI_V4=http://169.254.170.2/v4/2c9107c385e04a70b30d3cc4d4de97e7-527074092
ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/2c9107c385e04a70b30d3cc4d4de97e7-527074092
It also gets tricky to debug, because after SSH'ing into the container you are running under a PID other than 1, which means that services that need these credentials may fail to obtain them if you start them manually.
ECS task metadata endpoint documentation
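To make the mechanics above concrete, here is a minimal Python sketch of what boto3's provider chain does when AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set: it queries the credentials endpoint on 169.254.170.2 shown in the output above. boto3 does this automatically; the sketch is only useful for debugging.
import json
import os
import urllib.request

# Note: when SSH'd/exec'd into the container as a PID other than 1, the variable
# may be missing from your shell; it can still be read from /proc/1/environ.
relative_uri = os.environ["AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"]

with urllib.request.urlopen("http://169.254.170.2" + relative_uri) as resp:
    creds = json.load(resp)

# The response typically includes AccessKeyId, SecretAccessKey, Token,
# Expiration and RoleArn; print only the non-secret parts.
print(creds.get("RoleArn"), creds.get("Expiration"))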

Find the .aws folder at ~/.aws on your machine and move it to the Docker container's /root folder.
.aws contains the files that hold your AWS access key and secret access key.
You can easily copy it into a currently running container from your local machine with:
docker cp ~/.aws <container_id>:/root
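Once copied, you can verify from inside the container that the credentials are picked up. A minimal boto3 sketch (diagnostic only):
import boto3

# boto3 reads /root/.aws/credentials for the root user automatically.
print(boto3.client("sts").get_caller_identity()["Arn"])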

Related

File upload from Docker container to Google Cloud Storage

I'm putting together a CI/CD pipeline on GKE based on
this guide by Google.
Everything is working well, except that as it's a Django project I also need to run the collectstatic command and upload the files to Google Storage.
In the Dockerfile I have the following commands:
RUN python manage.py collectstatic --noinput
RUN gsutil rsync -R /home/vmagent/app/myapp/static gs://mystorage/static
Collectstatic works as expected, but the gsutil upload fails with the following error message:
ServiceException: 401 Anonymous caller does not have storage.objects.create access to mystorage/static/...
What's the best way to authenticate gsutil?
If you are running your Docker container on GCP (GKE), you use a service account's application credentials to authenticate the pod, which then has the same permissions as that service account. More information about this can be found here. Both GKE and other Kubernetes clusters allow you to import key files as secrets. On all Kubernetes platforms this is done with the following command; the full guide can be found here.
kubectl create secret generic pubsub-key --from-file=key.json=PATH-TO-KEY-FILE.json
Then set the environment variable in your manifest like this:
env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/secrets/google/key.json
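Assuming the secret is also mounted as a volume at /var/secrets/google (as the linked guide does), the credentials are picked up by the standard client libraries. A rough Python sketch of the static upload using the google-cloud-storage library, with the bucket name and path taken from the question:
from pathlib import Path
from google.cloud import storage  # pip install google-cloud-storage

# The client reads GOOGLE_APPLICATION_CREDENTIALS automatically.
client = storage.Client()
bucket = client.bucket("mystorage")

static_root = Path("/home/vmagent/app/myapp/static")
for path in static_root.rglob("*"):
    if path.is_file():
        bucket.blob(f"static/{path.relative_to(static_root)}").upload_from_filename(str(path))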
Too bad that running pods under a certain role is not yet possible. GCP works with service accounts rather than roles the way AWS does. In AWS you can assign a role to a task, which grants a container permissions under that role.

How to use AWS IAM Role for Packer build command inside Jenkins Pipeline using Kubernetes / Docker Slave

I'm using a Jenkins Pipeline and Packer to create AMIs inside an AWS account.
Jenkins uses a Kubernetes cluster as its slave (via a cloud plugin that lets me parameterize the Docker pod templates).
I have a pipeline that pulls a git project containing the Packer template and runs the packer validate command, which succeeds. Then it runs packer build and I get the following error:
Build 'Amazon Linux 2 Classic' errored: No valid credential sources found for AWS Builder. Please see https://www.packer.io/docs/builders/amazon.html#specifying-amazon-credentials for more information on providing credentials for the AWS Builder.
I also use Kube2iam to provide roles on my slave containers.
In my Packer template I don't define any AWS credentials, since I want to use a role rather than keys. Do you know if there is something I need to do inside the Packer template to indicate which role to use?
Best Regards,
Tony.
From what I understand, you are running Jenkins inside a Kubernetes cluster running on AWS EC2 instances? If so, the Jenkins agents running the build should be able to read available roles from the metadata of the instance they're running on.
In this case, the process would be to assign the desired IAM role to the instances, and Kubernetes should be able to handle that.
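A quick way to check which role the build pod actually receives (a diagnostic sketch, assuming kube2iam or the instance profile answers IMDSv1-style metadata requests):
import urllib.request

# kube2iam intercepts calls to the instance metadata endpoint and returns the
# role assigned to the pod; the same request works on a plain EC2 instance.
url = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
with urllib.request.urlopen(url, timeout=2) as resp:
    print("Role visible to this pod:", resp.read().decode().strip())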

Dockerfile copy files from amazon s3 or another source that needs credentials

I am trying to build a Docker image and I need to copy some files from S3 to the image.
Inside the Dockerfile I am using:
Dockerfile
FROM library/ubuntu:16.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
# Copy files from S3 inside docker
RUN aws s3 cp s3://filepath_on_s3 /tmp/
However, aws requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
I know I can probably pass them using ARG. But, is it a bad idea to pass them to the image at build time?
How can I achieve this without storing the secret keys in the image?
In my opinion, IAM roles are the best way to delegate S3 permissions to Docker containers.
Create a role from IAM -> Roles -> Create Role -> choose the service that will use this role, select EC2 -> Next -> select your S3 policies, and the role should be created.
Attach the role to a running/stopped instance from Actions -> Instance Settings -> Attach/Replace IAM Role.
This then worked successfully in the Dockerfile:
RUN aws s3 cp s3://bucketname/favicons /var/www/html/favicons --recursive
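If you prefer to pull the files at container run time instead of build time, here is a boto3 sketch of the same recursive copy, relying on the attached role rather than keys baked into the image (bucket and prefix taken from the example above):
import os
import boto3

s3 = boto3.client("s3")  # credentials come from the instance/task role
bucket, prefix, dest = "bucketname", "favicons/", "/var/www/html/favicons"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith("/"):  # skip folder placeholder objects
            continue
        target = os.path.join(dest, os.path.relpath(obj["Key"], prefix))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        s3.download_file(bucket, obj["Key"], target)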
I wanted to build upon @Ankita Dhandha's answer.
In the case of Docker you are probably looking to use ECS.
IAM Roles are absolutely the way to go.
When running locally, use a locally tailored Dockerfile and mount your AWS CLI ~/.aws directory to the root user's ~/.aws directory in the container (this allows it to use your own, or a dedicated IAM user's, CLI credentials to mimic the ECS behavior for local testing).
# Dockerfile for the local development image
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl unzip \
 && curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
 && unzip awscliv2.zip \
 && ./aws/install
# on the local system, mount your CLI credentials at run time
docker run --mount type=bind,source="$HOME/.aws",target=/root/.aws <image>
Role Types
EC2 Instance Roles define the global actions any instance can perform. An example would be having access to S3 to download ecs.config to /etc/ecs/ecs.config during your custom user-data.sh setup.
Use the ECS Task Definition to define a Task Role and a Task Execution Role.
Task Roles are used for a running container. An example would be a live web app that is moving files in and out of S3.
Task Execution Roles are for deploying the task. An example would be downloading the ECR image and deploying it to ECS, downloading an environment file from S3 and exporting it to the Docker container.
General Role Propagation
The C# SDK documentation, for example, lists the locations it checks, in order, to obtain credentials. Not every SDK behaves exactly like this, but many do, so research it for your situation; a quick check for boto3 is sketched after the list.
reference: https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/creds-assign.html
1. Plain text credentials fed into either the target system or environment variables.
2. CLI AWS credentials and a profile set in the AWS_PROFILE environment variable.
3. The Task Execution Role used to deploy the Docker task.
4. The Task Role used by the running task.
5. When the running task has no permissions for the current action, it will attempt to elevate into the EC2 instance role.
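boto3 follows a similar chain; a quick way to see which of these sources actually supplied the credentials in a given environment (botocore exposes this on the credentials object):
import boto3

creds = boto3.Session().get_credentials()  # None if nothing in the chain resolved
# method is e.g. "env", "shared-credentials-file", "container-role" or "iam-role",
# i.e. which link in the chain above provided the credentials.
print(creds.method)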
Blocking EC2 instance role access
Because the EC2 instance role commonly needs access for custom system setup, such as configuring ECS, it is often desirable to block your tasks from accessing this role. This is done by blocking the tasks' access to the EC2 metadata endpoints, which are well-known endpoints in any AWS VPC.
reference: https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-ec2-metadata/
AWS VPC Network Mode
# ecs.config
ECS_AWSVPC_BLOCK_IMDS=true
Bridge Network Mode
# ec2-userdata.sh
# install dependencies
yum install -y aws-cli iptables-services
# setup ECS dependencies
aws s3 cp s3://my-bucket/ecs.config /etc/ecs/ecs.config
# setup IPTABLES
iptables --insert FORWARD 1 -i docker+ --destination 169.254.169.254/32 --jump DROP
iptables --append INPUT -i docker+ --destination 127.0.0.1/32 -p tcp --dport 51679 -j ACCEPT
service iptables save
Many people pass the details in through args, which I see as being fine and the way I would personally do it. I think you can over-engineer certain processes, and this is one of them.
Example docker run with args:
docker run -e AWS_ACCESS_KEY_ID=123 -e AWS_SECRET_ACCESS_KEY=1234 <image>
That said, I can see why some companies want to hide this away and fetch it from a private API or something. This is why AWS created IAM roles - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html.
With a role, credentials are retrieved from the instance metadata service, a private IP address that only the instance itself can reach, meaning you never have to store credentials in the image.
Personally I think that's overkill for what you are trying to do: if someone hacks your image they can dump the credentials and still get access to those details. Passing them in as args is safe as long as you protect yourself as you should anyway.
You should configure your credentials in the ~/.aws/credentials file:
~$ cat .aws/credentials
[default]
aws_access_key_id = AAAAAAAAAAAAAAAAAAAAAAAAAAAAa
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBB

When to use AWS CLI and EB CLI

For a month or so I've been studying AWS services, and now I have to accomplish some basic tasks on AWS Elastic Beanstalk via the command line. As far as I understand, both the aws elasticbeanstalk [command] CLI and the eb [command] CLI are installed on the build instance.
When I run eb status inside the application folder, I get a response in the form:
Environment details for: app-name
Application name: app-name
Region: us-east-1
Deployed Version: app-version
Environment ID: env-name
Platform: 64bit Amazon Linux ........
Tier: WebServer-Standard
CNAME: app-name.elasticbeanstalk.com
Updated: 2016-07-14 .......
Status: Ready
Health: Green
That tells me eb init has been run for the application.
On the other hand if I run:
aws elasticbeanstalk describe-application-versions --application-name app-name --region us-east-1
I get the error:
Unable to locate credentials. You can configure credentials by running "aws configure".
In the home folder of the current user there is a .aws directory with a credentials file containing a [profile] line and aws_access_key_id and aws_secret_access_key lines, all set up.
Besides the obvious problem with the credentials, what I really lack is an understanding of the two CLIs. Why does the EB CLI not ask for credentials while the AWS CLI does? When do I use one or the other? Can I use only the AWS CLI? Any clarification on the matter will be highly appreciated.
EDIT:
For anyone ending up here with the same "Unable to locate credentials" problem: adding the --profile profile-name option solved it for me. profile-name can be found in the ~/.aws/config (or credentials) file on a [profile profile-name] line.
In order to verify that the AWS CLI is configured on your system, run aws configure and provide it with all the details it requires. That should fix your credentials problem, and reviewing what changes will help you understand what was wrong with your current configuration.
The eb CLI and the aws CLI have very similar capabilities, and I too am a bit confused as to why they both exist. From my experience, the main difference is that the aws CLI is used to interact with your AWS account using simple requests, while the eb CLI maintains a connection between you and an EB environment and so allows finer control over it.
For instance, I've just developed a CI/CD pipeline for our Beanstalk apps. When I use the eb CLI I can monitor the deployment of our apps and notify the developers when it's finished. The aws CLI does not offer that functionality; the only way to achieve it is to repeatedly query the service until you receive the desired result.
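A rough boto3 sketch of that polling approach, using the region and environment name from the eb status output above (both are just example values):
import time
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Poll until the environment has finished updating.
while True:
    env = eb.describe_environments(EnvironmentNames=["env-name"])["Environments"][0]
    if env["Status"] == "Ready":
        print("Deployment finished, health:", env["Health"])
        break
    time.sleep(15)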
The AWS CLI is a general tool that works on all AWS resources. It's not tied to a specific software project, the type of machine you're on, the directory you're in, or anything like that. It only needs credentials, whether they've been put there manually if it's your own machine, or generated by AWS if it's an EC2 instance.
The EB CLI is a high level tool to wrangle your software project into place. It's tied to the directory you're in, it assumes that the stuff in your directory is your project, and it has short commands that do a lot of background work to magically put everything in the right place.

Dockerrun.aws.json structure for ECR Repo

We are switching from Docker Hub to ECR and I'm curious how to structure the Dockerrun.aws.json file to use an ECR image. I attempted to set the image name to <my_ECR_URL>/<repo_name>:<image_tag>, but this was not successful. I also saw the details of private registries using an authentication file on S3, but this doesn't seem like the correct route when aws ecr get-login is the recommended way to authenticate with ECR.
Can anyone point me to how I can use an ECR image in a Beanstalk Dockerrun.aws.json file?
If I look at the ECS Task Definition, there's a required attribute called com.amazonaws.ecs.capability.ecr-auth, but I'm not setting that anywhere in my Dockerrun.aws.json file and I'm not sure what needs to be there. Perhaps it is an S3 bucket? Something is needed, because every time I try to run the Elastic Beanstalk-created tasks from ECS, I get:
Run tasks failed
Reasons : ATTRIBUTE
Any insights are greatly appreciated.
Update: I see from some other threads that this used to occur with earlier versions of the ECS agent, but I am currently running agent version 1.6.0 and Docker version 1.7.1, which I believe are the recommended versions. Is this possibly an issue with the Docker version?
It turns out the ECS agent can only pull ECR images from version 1.7.0 onward, and that's where mine was falling short. Updating the agent resolved my issue, and hopefully it helps someone else.
This is most likely an issue with IAM roles if you are using a role that was previously created for Elastic Beanstalk. Ensure that the role that Elastic Beanstalk is running with has the AmazonEC2ContainerRegistryReadOnly managed policy attached.
Source: http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_IAM_policies.html
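A small boto3 check to confirm the instance profile can actually reach ECR (diagnostic sketch only; the region is an assumption):
import boto3

# Succeeds with AmazonEC2ContainerRegistryReadOnly attached;
# raises an AccessDenied error otherwise.
ecr = boto3.client("ecr", region_name="us-east-1")
print(ecr.get_authorization_token()["authorizationData"][0]["proxyEndpoint"])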
Support for ECR was added in version 1.7.0 of the ECS Agent.
When using Elastic Beanstalk and ECR you don't need to authenticate. Just make sure the environment's instance profile has the AmazonEC2ContainerRegistryReadOnly managed policy attached.
You can store your custom Docker images in AWS with Amazon EC2 Container Registry (Amazon ECR). When you store your Docker images in Amazon ECR, Elastic Beanstalk automatically authenticates to the Amazon ECR registry with your environment's instance profile, so you don't need to generate an authentication file and upload it to Amazon Simple Storage Service (Amazon S3).
You do, however, need to provide your instances with permission to access the images in your Amazon ECR repository by adding permissions to your environment's instance profile. You can attach the AmazonEC2ContainerRegistryReadOnly managed policy to the instance profile to provide read-only access to all Amazon ECR repositories in your account, or grant access to a single repository by using the following template to create a custom policy:
Source: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html