Dockerrun.aws.json structure for ECR Repo - amazon-web-services

We are switching from Docker Hub to ECR and I'm curious how to structure the Dockerrun.aws.json file to use this image. I attempted to set the image name to <my_ECR_URL>/<repo_name>:<image_tag>, but that was not successful. I also saw the details of private registries using an authentication file on S3, but that doesn't seem like the correct route when aws ecr get-login is the recommended way to authenticate with ECR.
Can anyone point me to how I can use an ECR image in a Beanstalk Dockerrun.aws.json file?
If I look at the ECS task definition, there's a required attribute called com.amazonaws.ecs.capability.ecr-auth, but I'm not setting it anywhere in my Dockerrun.aws.json file and I'm not sure what needs to go there. Perhaps it is an S3 bucket? Something is clearly needed, because every time I try to run the Elastic Beanstalk-created tasks from ECS, I get:
Run tasks failed
Reasons : ATTRIBUTE
Any insights are greatly appreciated.
Update: I see from some other threads that this used to occur with earlier versions of the ECS agent, but I am currently running agent version 1.6.0 and Docker version 1.7.1, which I believe are the recommended versions. Is this possibly an issue with the Docker version?

So it turns out the ECS agent can only pull ECR images from version 1.7.0 onward, and that's where mine was falling short. Updating the agent resolved my issue, and hopefully this helps someone else.
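For anyone hitting the same wall, here is a quick way to confirm which agent version an instance is running and to update it; this is a sketch that assumes the Amazon ECS-optimized Amazon Linux AMI, where the agent ships as the ecs-init package and runs as an upstart service:

# Ask the ECS agent's introspection API which version it is running
curl -s http://localhost:51678/v1/metadata

# Update and restart the agent (Amazon ECS-optimized Amazon Linux AMI)
sudo yum update -y ecs-init
sudo stop ecs && sudo start ecs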

This is most likely an issue with IAM roles if you are using a role that was previously created for Elastic Beanstalk. Ensure that the role that Elastic Beanstalk is running with has the AmazonEC2ContainerRegistryReadOnly managed policy attached.
Source: http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_IAM_policies.html
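If you manage the role from the command line, attaching the managed policy is a one-liner. The role name below is the default Elastic Beanstalk instance profile role (aws-elasticbeanstalk-ec2-role); substitute your own if you use a custom role:

# Attach the ECR read-only managed policy to the Beanstalk instance profile role
aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly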

Support for ECR was added in version 1.7.0 of the ECS Agent.
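Once the agent is at 1.7.0 or later and the instance profile can read from ECR, there is nothing special to add to the file itself; the image is referenced by its full ECR URL. A minimal single-container (version 1) Dockerrun.aws.json sketch, where the account ID, region, repository name and tag are all placeholders:

# Write a minimal Dockerrun.aws.json that points at an ECR image
cat > Dockerrun.aws.json <<'EOF'
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:my-tag",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "80" }
  ]
}
EOF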

When using Elastic Beanstalk and ECR you don't need to authenticate manually. Just make sure the environment's instance profile role has the AmazonEC2ContainerRegistryReadOnly policy attached:
You can store your custom Docker images in AWS with Amazon EC2 Container Registry (Amazon ECR). When you store your Docker images in Amazon ECR, Elastic Beanstalk automatically authenticates to the Amazon ECR registry with your environment's instance profile, so you don't need to generate an authentication file and upload it to Amazon Simple Storage Service (Amazon S3).
You do, however, need to provide your instances with permission to access the images in your Amazon ECR repository by adding permissions to your environment's instance profile. You can attach the AmazonEC2ContainerRegistryReadOnly managed policy to the instance profile to provide read-only access to all Amazon ECR repositories in your account, or grant access to a single repository by using the following template to create a custom policy:
Source: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html
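The template itself isn't reproduced above, but a scoped-down policy along those lines might look like the sketch below; the repository ARN and role name are placeholders, and ecr:GetAuthorizationToken is left on Resource "*" because it is not repository-scoped:

# Inline policy granting pull access to a single ECR repository
cat > ecr-single-repo-readonly.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/my-repo"
    }
  ]
}
EOF
aws iam put-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-name ecr-single-repo-readonly \
  --policy-document file://ecr-single-repo-readonly.json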

Related

How to use AWS IAM Role for Packer build command inside Jenkins Pipeline using Kubernetes / Docker Slave

I'm using Jenkins Pipeline and Packer to create an AMI inside an AWS account.
Jenkins uses a Kubernetes cluster as its slave (via a cloud plugin that lets me parameterize the Docker pod templates).
I have a pipeline that pulls the git project with the Packer template in it and runs the packer validate command, which succeeds. Then it runs packer build and I get the following error:
Build 'Amazon Linux 2 Classic' errored: No valid credential sources found for AWS Builder. Please see https://www.packer.io/docs/builders/amazon.html#specifying-amazon-credentials for more information on providing credentials for the AWS Builder.
I also use Kube2iam to provide roles on my slave containers.
In my Packer template, I don't define any AWS credentials, since I want to use a role rather than static keys. Do you know if I need to do something inside the Packer template to indicate which role to use?
Best Regards,
Tony.
From what I understand, you are running Jenkins inside a Kubernetes cluster running on AWS EC2 instances? If so, the Jenkins agents running the build should be able to read available roles from the metadata of the instance they're running on.
In this case, the process would be to assign the desired IAM role to the instances, and Kubernetes should be able to handle that.
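A quick sanity check, assuming Kube2iam is proxying the instance metadata endpoint for the build pod: confirm the role is visible from inside the Jenkins agent container. If the first request returns your role name, Packer's default credential chain should pick the role up with nothing extra in the template (the <role-name> below is whatever that first request prints):

# From inside the Jenkins agent pod: list the role exposed via the (proxied) metadata endpoint
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Fetch the temporary credentials for that role
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>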

Allow awscli in docker inside EC2 without configuration

I have an EC2 instance with a role that gives it full control over other EC2 instances.
This role allows calling aws ec2 ... without doing the aws configure step.
However, if I install Docker and run a container inside that EC2 instance, the container is not able to run aws ec2 ... without configuring the AWS CLI.
Is there some kind of folder to share or feature to enable in order to run AWS CLI commands inside my container without configuring it with an access key/password?
The aws command is utilizing the IAM instance profile assigned to the EC2 instance, which it is obtaining via the EC2 metadata service. You would need to share that metadata with the Docker container somehow.
Are you using the AWS ECS service? Or are you manually installing and managing docker on an EC2 instance? ECS handles this for you.
Otherwise you might look into something like this Lyft project designed to proxy the EC2 IAM role to the Docker container.
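As a first check, you can verify whether a container can reach the instance metadata service at all; if the request below returns the role name, the CLI inside the container should pick up the instance profile credentials on its own (busybox is just a convenient throwaway image here):

# Run from the EC2 host: can a container reach the instance metadata service?
docker run --rm busybox wget -qO- http://169.254.169.254/latest/meta-data/iam/security-credentials/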

AWS EC2 instance role with docker

We are running Docker containers in an EC2 instance.
When applying an IAM role with S3 access, it seems that the container can't reach S3.
Is there any solution to this kind of problem except using ECR?
You can use IAM-docker for this issue, see: https://github.com/swipely/iam-docker
You can try using the AWS CLI from within Docker to access the bucket.
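If the container cannot use the instance metadata service directly, one workaround is to fetch the role's temporary credentials on the host and hand them to the container as environment variables. This is a sketch that assumes jq is installed on the host and that the image (my-image here is a placeholder) contains the AWS CLI; note the credentials are temporary and will eventually expire:

# Fetch the instance role's temporary credentials and pass them into the container
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
CREDS=$(curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE")
docker run --rm \
  -e AWS_ACCESS_KEY_ID="$(echo "$CREDS" | jq -r .AccessKeyId)" \
  -e AWS_SECRET_ACCESS_KEY="$(echo "$CREDS" | jq -r .SecretAccessKey)" \
  -e AWS_SESSION_TOKEN="$(echo "$CREDS" | jq -r .Token)" \
  my-image aws s3 ls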

How can I inject an artifact from AWS S3 inside a Docker image?

I need to prepare a Docker image with an embedded jar file and push it into ECR. The jar file is stored in an S3 bucket. How can I inject the jar into the image without explicitly storing AWS access keys in the image?
Maybe I can use the AWS CLI, or is there another way?
Also, I would rather not add public access to my S3 bucket or pass access keys via environment variables when executing docker run.
You can define an AWS IAM role and attach it to EC2 instances, so any instance that needs to run this docker build command can do so as long as it has the IAM role attached. You can do this from the AWS Console. This solves the problem of putting AWS credentials on the instance itself.
You will still need to install the AWS CLI in your Dockerfile. Once the IAM role is attached, you don't have to worry about credentials.
Recommended docs:
IAM Roles for Amazon EC2
Here's an official blog post tutorial on how to do this:
Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
Just make sure you specify in the IAM Role which S3 Buckets you want these instances to have access to.
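An alternative that keeps both the keys and the CLI out of the image entirely, assuming the build runs on an instance (or CI agent) that already carries the role: download the jar on the host using the role, then bake it into the image with a plain COPY in the Dockerfile. The bucket, key and image names below are placeholders:

# Download the artifact using the instance role, then build the image with it
aws s3 cp s3://my-bucket/releases/app.jar ./app.jar
docker build -t my-app .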

How do I pull Docker images from the Tutum private registry with Amazon ECS?

I am trying to set up an Amazon ECS deployment which employs an image from the Tutum private Docker registry. Since Tutum is private, it obviously requires authentication.
As per the ECS documentation, I've modified the file '/etc/ecs/ecs.config' on the EC2 instance to contain the correct authentication credentials for Tutum:
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"tutum.co":{"auth":"<auth-string>","email":"<my-email>"}}
The auth string is a Base64 encoding of my Tutum credentials: '<username>:<password>'.
However, when I try to run the corresponding ECS task, it fails with this message: CannotPullContainerError: Authentication is required.
How do I properly configure ECS to authenticate against the Tutum registry, so I can successfully pull images from there?
Seems as if what it took was to reboot the EC2 instance, so that the settings in '/etc/ecs/ecs.config' were applied.
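A full reboot works, but restarting just the ECS agent should be enough for it to re-read /etc/ecs/ecs.config. On the ECS-optimized Amazon Linux AMI of that generation the agent runs as an upstart service; the systemd variant is noted for newer AMIs:

# Restart the ECS agent so it picks up the new registry auth settings
sudo stop ecs && sudo start ecs
# On Amazon Linux 2 based ECS-optimized AMIs: sudo systemctl restart ecs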