Authenticate AWS copilot app with private Docker Hub repo

I am trying to deploy an app to AWS ECS using Copilot (https://aws.github.io/copilot-cli/). I want to specify an image, not a Dockerfile, so that I don't have to build and push locally. However, my image on Docker Hub is private.
I've created an AWS secret with my credentials. I've edited my copilot manifest to try to use that secret:
image:
  location: my_repo/my_image
  repositoryCredentials:
    credentialsParameter: my_credentials_secret_ARN
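For reference, I created the secret along these lines; my understanding is that ECS expects the secret value to be JSON holding the registry username and password (the secret name and values below are placeholders):
aws secretsmanager create-secret \
  --name dockerhub-credentials \
  --secret-string '{"username":"my_dockerhub_user","password":"my_dockerhub_token"}'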
However, I still get not found/not authorized when I deploy. If this is the right approach, what have I got wrong? If not, how do I proceed?

I've been told by someone at AWS that Copilot doesn't yet support deploying from an image hosted in a private repository. Hopefully the functionality will be coming soon.

Follow progress on this request here:
https://github.com/aws/copilot-cli/issues/2101
https://github.com/aws/copilot-cli/issues/2061

Related

Deploy app created with docker-compose to AWS

Final goal: To deploy a ready-made cryptocurrency exchange on AWS.
I have set up a ready-made server from 0xProject by running the following command on my local machine:
npx @0x/launch-kit-wizard && docker-compose up
This command creates a docker-compose.yml file which has multiple container definitions and starts the exchange on http://localhost:3001/
I need to deploy this to AWS, for which I'm following this YouTube tutorial.
I have created a registry user with appropriate permissions
An EC2 instance is created
ECR repository is created
AWS CLI is configured
As per the AWS instructions, I'm retrieving an authentication token and authenticating the Docker client to the registry:
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin <docker-id-given-by-AWS>.dkr.ecr.us-east-2.amazonaws.com
I'm trying to build the docker image:
docker build -t testdockerregistry .
Now, since in this case we have a docker-compose.yml instead of a Dockerfile, when I try to build the image it throws the following error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: CreateFile C:\Users\hp\Desktop\xxx\Dockerfile: The system cannot find the file specified.
I tried building the image from docker-compose itself as per this guide, which fails with the following message:
postgres uses an image, skipping
frontend uses an image, skipping
mesh uses an image, skipping
backend uses an image, skipping
nginx uses an image, skipping
Can anyone please help me with this?
You can use the ecs-cli compose command from the Amazon ECS CLI.
This command translates the docker-compose file you create into an ECS Task Definition.
If you're interested in finding out more about the CLI take a read of the AWS documentation here.
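A minimal sketch of that flow, assuming the ECS CLI is installed, credentials are configured, and a cluster already exists (the cluster, config, and project names are placeholders):
ecs-cli configure --cluster my-cluster --default-launch-type EC2 --region us-east-2 --config-name my-config
ecs-cli compose --project-name my-exchange --file docker-compose.yml service up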
Another approach, instead of using the AWS ECS CLI directly, is to use the new docker/compose-cli.
This CLI tool makes it easy to run Docker containers and Docker Compose applications in the cloud using either Amazon Elastic Container Service (ECS) or Microsoft Azure Container Instances (ACI) using the Docker commands you already know.
See "Docker Announces Open Source Compose for AWS ECS & Microsoft ACI " from Aditya Kulkarni.
It references "Docker Open Sources Compose for Amazon ECS and Microsoft ACI" from Chris Crone, Engineer #docker:
While implementing these integrations, we wanted to make sure that existing CLI commands were not impacted.
We also wanted an architecture that would make it easy to add new backends and provide SDKs in popular languages. We achieved this with the architecture diagrammed in the original post.
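A rough sketch of that workflow with the Docker ECS integration, assuming AWS credentials are already configured locally (the context name is a placeholder):
# Create a Docker context backed by ECS (prompts for an AWS profile)
docker context create ecs myecscontext
# Switch to it and deploy the compose file to ECS via CloudFormation
docker context use myecscontext
docker compose up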

Issue with Docker Login with AWS ECR

I'm following an AWS tutorial to deploy a simple application using containers on AWS. I'm trying to connect to AWS's ECR using Docker and I get a warning message which doesn't let me log in.
I'm brand new to the world of Docker, containers, and AWS. I was going through AWS tutorials to deploy a simple Node.js application using Docker containers on AWS per the following instructions:
https://aws.amazon.com/getting-started/projects/break-monolith-app-microservices-ecs-docker-ec2/module-one/
Per the instructions, I've installed Docker and the AWS CLI, and created an AWS ECR repository for Docker to access. I've basically got to the following step:
Step 4: Build and Push the Docker image - Point 2 - getting login
As per point 2, I copy-pasted the login details (docker login -u AWS -p ) and ran it, and I got the following warning message, which isn't allowing me to log in or push the Docker image to ECR. I tried to research online a lot on what to change; there are lots of articles mentioning the issue but no clear direction as to what exactly to do. I'm not exactly sure where in the command I should use --password-stdin. I've also tried what was provided in the following link: Docker: Using --password via the CLI is insecure. Use --password-stdin, but that didn't work either.
Expected result:
Login succeeded
Actual result:
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
The warning is fine. Have you verified whether the docker push/pull is working?
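If you do want to get rid of the warning, pipe the password in via --password-stdin instead of passing it with -p; a sketch using the newer get-login-password command (the account ID and region are placeholders):
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com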

Is it possible to pull images from ECR without using docker login

I have an ECR and EC2 instance running docker. What I want to do is to pull images without doing docker login first.
Is it possible at all? If yes, what kind of policy should I attach to the EC2 instance and/or the ECR repo? I did a lot of experiments but did not succeed.
And please - no suggestions on how to use aws get-login. My aim is to get rid of it by using IAM policy/roles.
To use an EC2 Role without having to use docker login, https://github.com/awslabs/amazon-ecr-credential-helper can be used.
Place the docker-credential-ecr-login binary on your PATH and set the contents of your ~/.docker/config.json file to be:
{
  "credsStore": "ecr-login"
}
Now commands such as docker pull or docker push will work transparently.
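The EC2 instance's role still needs IAM permission to pull from ECR; a sketch attaching the AWS-managed read-only policy (the role name is a placeholder):
aws iam attach-role-policy \
  --role-name my-ec2-instance-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly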
My aim is to get rid of it by using IAM policy/roles.
I don't see how this is possible since some form of authentication is required.

AWS ECS 403 error when logging in to private registry

I am trying to log in to Quay with ECS.
Quay is a private Docker registry.
I followed this documentation, but I still get a 403 error: "{\"error\": \"Permission Denied\"}".
I put this code in /etc/ecs/ecs.config:
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://quay.io/": { "username": "xxxxxx","password":"xxxxx","email": "."}}
And I've restarted the ECS service, but it's not working.
Have you got an idea?
The documentation points to slightly different content for /etc/ecs/ecs.config:
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"https://quay.io": {"auth": "YOURAUTHTOKENFROMDOCKERCFG", "email": "user@example.com"}}
It uses dockercfg and a token rather than username/password.
The dockercfg is described in the documentation page "I'm authorized but I'm still getting 403s"
docker stores the credentials it uses for push and pull in a file typically placed at $HOME/.dockercfg.
If you are executing docker in another environment (scripted docker build, virtual machine, makefile, virtualenv, etc), docker will not be able to find the .dockercfg file and will fail.
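For reference, the auth value in a dockercfg-style entry is just the base64 of username:password; a sketch of producing it locally (the credentials are placeholders):
# Logging in locally writes the same credentials to ~/.dockercfg / ~/.docker/config.json
docker login quay.io
# The "auth" token is base64 of username:password
echo -n 'quay_username:quay_password' | base64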
As the OP Mathieu Perochon comments below, this is also linked to the environment version of the Amazon Machine Image:
Thanks for the reply @VonC. I've resolved my problem: I upgraded my AMI (Amazon ECS-Optimized Amazon Linux) and it's working.
Link to a good AMI: https://aws.amazon.com/marketplace/pp/B00U6QTYI2/

Dockerrun.aws.json structure for ECR Repo

We are switching from Docker Hub to ECR and I'm curious how to structure the Dockerrun.aws.json file to use this image. I attempted to modify the name as <my_ECR_URL>/<repo_name>:<image_tag> but this is not successful. I also saw the details of private registries using an authentication file on S3 but this doesn't seem like the correct route when aws ecr get-login is the recommended way to authenticate with ECR.
Can anyone point me to how I can use an ECR image in a Beanstalk Dockerrun.aws.json file?
If I look at the ECS Task Definition, there's a required attribute called com.amazonaws.ecs.capability.ecr-auth, but I'm not setting that anywhere in my Dockerrun.aws.json file and I'm not sure what needs to be there. Perhaps it is an S3 bucket? Something is needed, as every time I try to run the Elastic Beanstalk-created tasks from ECS, I get:
Run tasks failed
Reasons : ATTRIBUTE
Any insights are greatly appreciated.
Update: I see from some other threads that this used to occur with earlier versions of the ECS agent, but I am currently running agent version 1.6.0 and Docker version 1.7.1, which I believe are the recommended versions. Is this possibly an issue with the Docker version?
So it turns out the ECS agent is only able to pull images from ECR as of version 1.7, and that's where mine was falling short. Updating the agent resolved my issue, and hopefully it helps someone else.
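For the file structure itself, on the multicontainer Docker platform a Dockerrun.aws.json just references the full ECR URI in the image field; a minimal version-2 sketch (account ID, region, repository, port, and memory are placeholders):
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ]
    }
  ]
}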
This is most likely an issue with IAM roles if you are using a role that was previously created for Elastic Beanstalk. Ensure that the role that Elastic Beanstalk is running with has the AmazonEC2ContainerRegistryReadOnly managed policy attached.
Source: http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_IAM_policies.html
Support for ECR was added in version 1.7.0 of the ECS Agent.
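One way to check which agent version a container instance is running is the agent's introspection endpoint, queried on the instance itself:
curl -s http://localhost:51678/v1/metadata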
When using Elastic Beanstalk and ECR you don't need to authenticate. Just make sure the environment's instance profile has the AmazonEC2ContainerRegistryReadOnly policy attached:
You can store your custom Docker images in AWS with Amazon EC2 Container Registry (Amazon ECR). When you store your Docker images in Amazon ECR, Elastic Beanstalk automatically authenticates to the Amazon ECR registry with your environment's instance profile, so you don't need to generate an authentication file and upload it to Amazon Simple Storage Service (Amazon S3).
You do, however, need to provide your instances with permission to access the images in your Amazon ECR repository by adding permissions to your environment's instance profile. You can attach the AmazonEC2ContainerRegistryReadOnly managed policy to the instance profile to provide read-only access to all Amazon ECR repositories in your account, or grant access to a single repository by using the following template to create a custom policy:
Source: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html
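The linked page has the exact template; as a rough illustration, a repository-scoped read-only policy allows the token call account-wide and the pull actions on the single repository ARN (the account ID, region, and repository name below are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ecr:GetAuthorizationToken"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/my-repo"
    }
  ]
}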