AWS ECS 403 error when logging in to a private registry - amazon-web-services

I am trying to log in to Quay from ECS.
Quay is a private Docker registry.
I followed this documentation, but I still get a 403 error: {"error": "Permission Denied"}.
I put this code in /etc/ecs/ecs.config:
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://quay.io/": { "username": "xxxxxx","password":"xxxxx","email": "."}}
I've restarted the ECS service, but it's not working.
Do you have any idea?

The documentation points to slightly different content for /etc/ecs/ecs.config:
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"https://quay.io": {"auth": "YOURAUTHTOKENFROMDOCKERCFG", "email": "user@example.com"}}
It uses dockercfg and a token rather than username/password.
The dockercfg is described in the documentation page "I'm authorized but I'm still getting 403s"
docker stores the credentials it uses for push and pull in a file typically placed at $HOME/.dockercfg.
If you are executing docker in another environment (scripted docker build, virtual machine, makefile, virtualenv, etc), docker will not be able to find the .dockercfg file and will fail.
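As a side note, the "auth" value is simply the base64 encoding of username:password, the same token docker login writes to .dockercfg. A quick way to generate it (myuser:mypass is a placeholder for your real Quay credentials):

```shell
# Generate the dockercfg-style auth token from username:password.
# Replace myuser:mypass with your actual registry credentials.
printf '%s' 'myuser:mypass' | base64
# → bXl1c2VyOm15cGFzcw==
```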
As the OP Mathieu Perochon comments below, this is also linked to the version of the Amazon Machine Image:
I have upgraded my AMI (Amazon ECS-Optimized Amazon Linux) and it's working

Thanks for the reply @VonC. I've resolved my problem: I upgraded my AMI (Amazon ECS-Optimized Amazon Linux) and it's working.
Link to the right AMI: https://aws.amazon.com/marketplace/pp/B00U6QTYI2/

Related

How to use multi container docker in Elastic beanstalk using Amazon linux 2?

Amazon has deprecated Multi-container Docker running on 64bit Amazon Linux; we need to migrate to Docker running on 64bit Amazon Linux 2. In the first version, we used Dockerrun.aws.json v2 to manage multiple containers. In the latest version (Docker running on 64bit Amazon Linux 2), we need to use Dockerrun.aws.json v3 or docker-compose, but there are no working examples or blog posts available. Can I get working samples?
In regards to Elastic Beanstalk and the Docker running on 64bit Amazon Linux 2 platform.
I was struggling too and finally got to the bottom of it. What confused me is that the documentation makes it seem like you can choose to use either the Dockerrun.aws.json (v3) or a docker-compose.yml in your EB application package.
Then you go looking for the documentation on Dockerrun.aws.json (v3), and you won't find it anywhere.
The reason for this is that you don't get a choice. If you want to run multiple containers, you must include a docker-compose.yml in your application package. The only thing the Dockerrun.aws.json (v3) lets you do is configure the S3 bucket and key pointing to your container registry authentication file, .dockercfg.
This is essentially the entire specification of Dockerrun.aws.json (v3); it doesn't support anything similar to Dockerrun.aws.json (v2):
{
  "AWSEBDockerrunVersion": "3",
  "Authentication": {
    "bucket": "DOC-EXAMPLE-BUCKET",
    "key": "mydockercfg"
  }
}
In short: always include a docker-compose.yml, and add the Dockerrun.aws.json (v3) only if your Docker images are in a private repository.
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/single-container-docker-configuration.html
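For reference, a minimal docker-compose.yml for the Docker running on 64bit Amazon Linux 2 platform might look like the sketch below; the service names, image names, and ports are placeholders, not values from the question:

```yaml
version: "3.8"
services:
  web:
    image: my-account/my-web-app:latest   # placeholder private image
    ports:
      - "80:8080"                         # EB routes incoming traffic to host port 80
  worker:
    image: my-account/my-worker:latest    # placeholder second container
```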
According to the AWS docs, Multi-container Docker running on Amazon Linux can be migrated to ECS on Amazon Linux 2.
This option is easier to apply with the CLI than with the Elastic Beanstalk console, since it requires a single command:
aws elasticbeanstalk update-environment \
--environment-name ${my-env} \
--solution-stack-name "64bit Amazon Linux 2 ${version} running ECS" \
--region ${my-region}
I'd suggest that you first clone the environment you'd like to upgrade, apply the command mentioned above to the copied environment, and test it; if everything works as expected, you can then use a blue/green deployment to avoid downtime.
I hope this helps someone!

Authenticate AWS copilot app with private Docker Hub repo

I am trying to deploy an app to AWS ECS using Copilot (https://aws.github.io/copilot-cli/). I want to specify an image, not a Dockerfile, so that I don't have to build and push locally. However, my image on Docker Hub is private.
I've created an AWS secret with my credentials. I've edited my copilot manifest to try to use that secret:
image:
  location: my_repo/my_image
  repositoryCredentials:
    credentialsParameter: my_credentials_secret_ARN
However, I still get not found/not authorized when I deploy. If this is the right approach, what have I got wrong? If not, how do I proceed?
I've been told by someone at AWS that Copilot doesn't yet support deploying an image hosted in a private repo. Hopefully the functionality will be coming soon.
Follow progress on this request here:
https://github.com/aws/copilot-cli/issues/2101
https://github.com/aws/copilot-cli/issues/2061

When to use AWS CLI and EB CLI

For a month or so I've been studying AWS services, and now I have to accomplish some basic tasks on AWS Elastic Beanstalk via the command line. As far as I understand, there are two CLIs installed on the build instance: aws elasticbeanstalk [command] and eb [command].
When I run eb status inside application folder, I get response in the form:
Environment details for: app-name
Application name: app-name
Region: us-east-1
Deployed Version: app-version
Environment ID: env-name
Platform: 64bit Amazon Linux ........
Tier: WebServer-Standard
CNAME: app-name.elasticbeanstalk.com
Updated: 2016-07-14 .......
Status: Ready
Health: Green
That tells me eb init has been run for the application.
On the other hand if I run:
aws elasticbeanstalk describe-application-versions --application-name app-name --region us-east-1
I get the error:
Unable to locate credentials. You can configure credentials by running "aws configure".
In the home folder of the current user there is a .aws directory with a credentials file containing a [profile] line, plus aws_access_key_id and aws_secret_access_key lines, all set up.
Besides the obvious problem with the credentials, what I really lack is an understanding of the two CLIs. Why does the EB CLI not ask for credentials while the AWS CLI does? When do I use one or the other? Can I use only the AWS CLI? Any clarification on the matter would be highly appreciated.
EDIT:
For anyone ending up here with the same "Unable to locate credentials" problem: adding the --profile profile-name option solved it for me. The profile-name can be found in the ~/.aws/config (or credentials) file on a [profile profile-name] line.
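As an illustration of that fix, if your ~/.aws files look like the sketch below (profile-name and the keys are placeholders), the AWS CLI only picks the profile up when you pass --profile profile-name or set AWS_PROFILE; only a profile literally named default is used automatically:

```ini
; ~/.aws/credentials
[profile-name]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

; ~/.aws/config
[profile profile-name]
region = us-east-1
```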
To verify that the AWS CLI is configured on your system, run aws configure and provide all the details it requires. That should fix your credentials problem, and reviewing the resulting configuration will help you understand what was wrong with your current one.
The eb CLI and the aws CLI have very similar capabilities, and I too am a bit confused as to why they both exist. From my experience, the main difference is that the aws CLI interacts with your AWS account through simple one-off requests, while the eb CLI maintains a connection to your EB environments and so allows finer control over them.
For instance, I've just developed a CI/CD pipeline for our Beanstalk apps. With the eb CLI I can monitor the deployment of our apps and notify the developers when it's finished. The aws CLI does not offer that functionality; the only way to achieve it is to repeatedly query the service until you receive the desired result.
The AWS CLI is a general tool that works on all AWS resources. It's not tied to a specific software project, the type of machine you're on, the directory you're in, or anything like that. It only needs credentials, whether they've been put there manually if it's your own machine, or generated by AWS if it's an EC2 instance.
The EB CLI is a high level tool to wrangle your software project into place. It's tied to the directory you're in, it assumes that the stuff in your directory is your project, and it has short commands that do a lot of background work to magically put everything in the right place.

IAM Role + Boto3 + Docker container

As far as I know, boto3 will try to load credentials from the instance metadata service.
If I am running this code inside an EC2 instance, I expect to have no problem. But when my code is dockerized, how will boto3 find the metadata service?
The Amazon ECS agent populates the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable, which can be used to get credentials. This variable is provided only to the process with PID 1; the script specified in the Dockerfile's ENTRYPOINT gets PID 1.
There are several networking modes, and the details may differ between them. More information can be found in: How can I configure IAM task roles in Amazon ECS to avoid "Access Denied" errors?
For the awsvpc networking mode, if you run printenv as PID 1 you will see something similar to this:
AWS_EXECUTION_ENV=AWS_ECS_FARGATE
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/0f891318-ab05-46fe-8fac-d5113a1c2ecd
HOSTNAME=ip-172-17-0-123.ap-south-1.compute.internal
AWS_DEFAULT_REGION=ap-south-1
AWS_REGION=ap-south-1
ECS_CONTAINER_METADATA_URI_V4=http://169.254.170.2/v4/2c9107c385e04a70b30d3cc4d4de97e7-527074092
ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/2c9107c385e04a70b30d3cc4d4de97e7-527074092
Debugging can also get tricky: after SSH'ing into the container you are running under a PID other than 1, which means that services that need these credentials may fail to obtain them when you run them manually.
ECS task metadata endpoint documentation
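The SDKs (boto3 included) resolve this automatically, but you can reconstruct the credentials URL yourself to inspect it; 169.254.170.2 is the fixed link-local address the ECS agent listens on, and the /v2/credentials/example path below is only a placeholder default for illustration:

```shell
# Build the full credentials URL from the env var the ECS agent injects.
CREDS_URL="http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI:-/v2/credentials/example}"
echo "$CREDS_URL"
# Inside a running task, fetching it returns temporary keys:
#   curl -s "$CREDS_URL"   # AccessKeyId, SecretAccessKey, Token, Expiration
```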
Find the .aws folder at ~/.aws on your machine and copy it into the Docker container's /root folder.
.aws contains the files that hold your AWS access key and secret key.
You can copy it into a currently running container from your local machine with:
docker cp ~/.aws <container_id>:/root

Dockerrun.aws.json structure for ECR Repo

We are switching from Docker Hub to ECR and I'm curious how to structure the Dockerrun.aws.json file to use an ECR image. I attempted to set the name as <my_ECR_URL>/<repo_name>:<image_tag>, but this was not successful. I also saw the details of private registries using an authentication file on S3, but that doesn't seem like the correct route when aws ecr get-login is the recommended way to authenticate with ECR.
Can anyone point me to how I can use an ECR image in a Beanstalk Dockerrun.aws.json file?
If I look at the ECS Task Definition, there's a required attribute called com.amazonaws.ecs.capability.ecr-auth, but I'm not setting that anywhere in my Dockerrun.aws.json file and I'm not sure what needs to be there. Perhaps it is an S3 bucket? Something is needed, as every time I try to run the Elastic Beanstalk-created tasks from ECS, I get:
Run tasks failed
Reasons : ATTRIBUTE
Any insights are greatly appreciated.
Update I see from some other threads that this used to occur with earlier versions of the ECS agent but I am currently running Agent version 1.6.0 and Docker version 1.7.1, which I believe are the recommended versions. Is this possibly an issue with the Docker version?
So it turns out the ECS agent can only pull ECR images from version 1.7.0 onward, and mine was older. Updating the agent resolved my issue; hopefully it helps someone else.
This is most likely an issue with IAM roles if you are using a role that was previously created for Elastic Beanstalk. Ensure that the role that Elastic Beanstalk is running with has the AmazonEC2ContainerRegistryReadOnly managed policy attached
Source: http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_IAM_policies.html
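With the role in place, referencing the ECR image directly in Dockerrun.aws.json is enough. A single-container sketch (the account ID, region, repository name, and port below are placeholders, not values from the question):

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "8080" }
  ]
}
```

No Authentication block is needed here, since Elastic Beanstalk authenticates to ECR via the instance profile.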
Support for ECR was added in version 1.7.0 of the ECS Agent.
When using Elastic Beanstalk with ECR you don't need to authenticate manually. Just make sure the instance role has the AmazonEC2ContainerRegistryReadOnly policy attached.
You can store your custom Docker images in AWS with Amazon EC2 Container Registry (Amazon ECR). When you store your Docker images in Amazon ECR, Elastic Beanstalk automatically authenticates to the Amazon ECR registry with your environment's instance profile, so you don't need to generate an authentication file and upload it to Amazon Simple Storage Service (Amazon S3).
You do, however, need to provide your instances with permission to access the images in your Amazon ECR repository by adding permissions to your environment's instance profile. You can attach the AmazonEC2ContainerRegistryReadOnly managed policy to the instance profile to provide read-only access to all Amazon ECR repositories in your account, or grant access to a single repository by using the following template to create a custom policy:
Source: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html
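The policy template itself is elided from the quote above; as a rough sketch, a single-repository read-only policy would look something like the following, with the region, account ID, and repository name as placeholders (note that ecr:GetAuthorizationToken cannot be scoped to a repository and needs a wildcard resource):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/my-repo"
    }
  ]
}
```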