Is it possible to pull images from ECR without using docker login - amazon-web-services

I have an ECR repository and an EC2 instance running Docker. What I want to do is pull images without doing docker login first.
Is it possible at all? If yes, what kind of policy should I attach to the EC2 instance and/or the ECR repo? I did a lot of experiments, but did not succeed.
And please, no suggestions on how to use aws ecr get-login; my aim is to get rid of it by using IAM policies/roles.

To use an EC2 instance role without having to run docker login, the Amazon ECR credential helper (https://github.com/awslabs/amazon-ecr-credential-helper) can be used.
Place the docker-credential-ecr-login binary on your PATH and set the contents of your ~/.docker/config.json file to be:
{
"credsStore": "ecr-login"
}
Now commands such as docker pull or docker push will work transparently.
My aim is to get rid of it by using IAM policy/roles.
I don't see how this is possible since some form of authentication is required.
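The credential helper picks up the instance role's credentials, so that role still needs ECR read permissions. A minimal sketch of such a policy for the instance role (attaching the AmazonEC2ContainerRegistryReadOnly managed policy is an alternative); it grants read-only pulls from any repository in the account:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}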

Related

Unrecognized command for AWS Secret creation with Docker - ECS deploy

I'm planning to deploy a stack to ECS making use of the (new?) "Deploying Docker containers on ECS"
feature. However, I use GitLab for version control and CI/CD pipelines, so I want to store my Docker images in the GitLab registry (and they should be private).
I understand that ECS can support such a configuration through the x-aws-pull_credentials extension, so, following the guide above, I create a GitLab access token and try to create a Docker secret as suggested, with the command
docker secret create gitLabAccessToken --username <GITLAB_USER> --password <GITLAB_TOKEN>
However, I get the error:
unknown flag: --username
Why is that? What am I doing wrong?
Thanks in advance.
The problem may be with copy-pasting. Type the command directly.
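For reference, once the pull secret exists in Secrets Manager, the compose file references its ARN via the x-aws-pull_credentials extension mentioned in the question. A rough sketch, where the image path, region, account ID, and secret name are all placeholders:
services:
  app:
    image: registry.gitlab.com/<group>/<project>:<tag>
    x-aws-pull_credentials: "arn:aws:secretsmanager:eu-west-1:123456789012:secret:gitLabAccessToken"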

Issue with Docker Login with AWS ECR

I'm following an AWS tutorial to deploy a simple application using containers on AWS. I'm trying to connect to AWS ECR using Docker and I get a warning message which doesn't let me log in.
I'm brand new to the world of Docker, containers, and AWS. I was going through the AWS tutorials to deploy a simple Node.js application using Docker containers on AWS per the following instructions:
https://aws.amazon.com/getting-started/projects/break-monolith-app-microservices-ecs-docker-ec2/module-one/
Per the instructions, I've installed Docker and the AWS CLI, and created an ECR repository for Docker to access. I've basically got to the following step:
Step 4: Build and push the Docker image - Point 2: get the login
As per point 2, I copy-pasted the login command (docker login -u AWS -p ...) and ran it, and got the following warning message, which isn't letting me log in or push the Docker image to ECR. I've researched this a lot online; there are plenty of articles mentioning the issue but no clear direction on what exactly to do. I'm not exactly sure where in the command I should use --password-stdin. I've also tried what was suggested in Docker: Using --password via the CLI is insecure. Use --password-stdin, but that didn't work either.
Expected result:
Login succeeded
Actual result:
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
The warning is fine. Have you verified whether docker push/pull is working?
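To make the warning go away (and keep the password out of your shell history), pipe the password in via stdin instead of passing it with -p. A sketch, assuming an AWS CLI version that provides aws ecr get-login-password; the account ID and region are placeholders:
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com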

Access ECR repository from external account

I'm using an ECR repository for two accounts, one for non_prod and another for prod. The repo is part of the non_prod account.
The problem I found is that, even after granting access from the non_prod account's repository to the prod account, the prod account cannot pull the Docker image; it complains that the image does not exist.
I can still access the repository, but not pull the image, since the prod account thinks it does not exist.
Obviously the image exists, since it's used in the non_prod environment.
I also compared the ~/.docker/config.json credentials, and they are the same ones used to connect to the ECR repo from the non_prod account.
I even tried temporarily (and dangerously) granting all access on the repo, and still nothing. Any idea what's wrong here?
Regards.
We are using the same setup in our environments and it works fine for us. Make sure you are doing the following things:
1) While accessing the repository, you are passing the --registry-ids parameter in the login command, essentially:
aws ecr get-login --region us-east-1 --registry-ids <value here> | /bin/sh
2) You have granted access to the prod account in the ECR repository; essentially, in the permissions section of the ECR repo there is an entry in Principal for arn:aws:iam::<prod-account-id>:root, and the correct permissions are there for push and pull operations.
Please let me know if you are doing both of these things and still not able to access the containers.
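For point 2, a sketch of what the repository permissions (repository policy) on the non_prod repo could look like; the prod account ID is a placeholder:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPullFromProd",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<prod-account-id>:root"
      },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}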

'aws configure' in docker container will not use environment variables or config files

So I have a Docker container running Jenkins and an ECR registry on AWS. I would like to have Jenkins push images back to the ECR registry.
To do this, I would like to be able to automate the aws configure and get-login steps on container startup. I figured that I would be able to
export AWS_ACCESS_KEY_ID=*
export AWS_SECRET_ACCESS_KEY=*
export AWS_DEFAULT_REGION=us-east-1
export AWS_DEFAULT_OUTPUT=json
I expected this to cause aws configure to complete automatically, but that did not work. I then tried creating the config files as per the AWS docs and repeating the process, which also did not work. I then tried using aws configure set, also with no luck.
I'm going bonkers here, what am I doing wrong?
There is no real need to issue aws configure; as long as you populate the env vars
export AWS_ACCESS_KEY_ID=aaaa
export AWS_SECRET_ACCESS_KEY=bbbb
... also export zone and region
then issue
aws ecr get-login --region ${AWS_REGION}
you will achieve the same desired AWS login status. As far as troubleshooting goes, I suggest you log in to your running container using
docker exec -ti CONTAINER_ID_HERE bash
then manually issue the above AWS-related commands interactively to confirm they run OK, before putting the same into your Dockerfile.
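Concretely, since aws ecr get-login prints a ready-made docker login command, you can evaluate its output directly; a sketch assuming AWS CLI v1 (get-login was later replaced by get-login-password), with --no-include-email for newer Docker versions:
export AWS_ACCESS_KEY_ID=aaaa
export AWS_SECRET_ACCESS_KEY=bbbb
export AWS_REGION=us-east-1

# get-login prints a docker login command; eval runs it
eval "$(aws ecr get-login --region ${AWS_REGION} --no-include-email)"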

Dockerrun.aws.json structure for ECR Repo

We are switching from Docker Hub to ECR and I'm curious how to structure the Dockerrun.aws.json file to use this image. I attempted to set the image name to <my_ECR_URL>/<repo_name>:<image_tag>, but this was not successful. I also saw the details of private registries using an authentication file on S3, but this doesn't seem like the correct route when aws ecr get-login is the recommended way to authenticate with ECR.
Can anyone point me to how I can use an ECR image in a Beanstalk Dockerrun.aws.json file?
If I look at the ECS task definition, there's a required attribute called com.amazonaws.ecs.capability.ecr-auth, but I'm not setting that anywhere in my Dockerrun.aws.json file and I'm not sure what needs to be there. Perhaps it is an S3 bucket? Something is needed, as every time I try to run the Elastic Beanstalk-created tasks from ECS, I get:
Run tasks failed
Reasons : ATTRIBUTE
Any insights are greatly appreciated.
Update: I see from some other threads that this used to occur with earlier versions of the ECS agent, but I am currently running agent version 1.6.0 and Docker version 1.7.1, which I believed were the recommended versions. Is this possibly an issue with the Docker version?
So it turns out the ECS agent is only able to pull ECR images from version 1.7.0 onward, and that's where mine was falling short. Updating the agent resolved my issue, and hopefully this helps someone else.
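For anyone else hitting this: with a new enough agent, the image reference format from the question does work; a minimal Dockerrun.aws.json (version 1) sketch, with the registry URL, repository name, tag, and port as placeholders:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/<repo_name>:<image_tag>",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ]
}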
This is most likely an issue with IAM roles if you are using a role that was previously created for Elastic Beanstalk. Ensure that the role Elastic Beanstalk is running with has the AmazonEC2ContainerRegistryReadOnly managed policy attached.
Source: http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_IAM_policies.html
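For example, assuming the environment uses the default aws-elasticbeanstalk-ec2-role instance profile role, the managed policy can be attached with a single CLI call:
aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly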
Support for ECR was added in version 1.7.0 of the ECS Agent.
When using Elastic Beanstalk and ECR you don't need to authenticate manually. Just make sure the environment's instance profile has the AmazonEC2ContainerRegistryReadOnly policy:
You can store your custom Docker images in AWS with Amazon EC2 Container Registry (Amazon ECR). When you store your Docker images in Amazon ECR, Elastic Beanstalk automatically authenticates to the Amazon ECR registry with your environment's instance profile, so you don't need to generate an authentication file and upload it to Amazon Simple Storage Service (Amazon S3).
You do, however, need to provide your instances with permission to access the images in your Amazon ECR repository by adding permissions to your environment's instance profile. You can attach the AmazonEC2ContainerRegistryReadOnly managed policy to the instance profile to provide read-only access to all Amazon ECR repositories in your account, or grant access to a single repository by using the following template to create a custom policy:
Source: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html
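A sketch of what the single-repository custom policy mentioned above could look like; the region, account ID, and repository name are placeholders (ecr:GetAuthorizationToken cannot be scoped to a single repository, so it stays on "*"):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/<repo_name>"
    }
  ]
}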