Docker swarm and AWS ECR authentication using API keys

I'm having trouble pulling docker images from AWS ECR when deploying a stack to my docker swarm cluster that runs in AWS EC2.
If I SSH into any of the nodes, authenticate manually, and pull an image manually, there are no issues.
This works:
root@manager1 ~ # `aws ecr get-login --no-include-email --region us-west-2`
Login Succeeded
root@manager1 ~ # docker pull *****.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
However, if I try deploying a stack or service:
docker stack deploy --compose-file docker-compose.yml myapp
The image can't be found, both on the node I already authenticated on and on all other manager/worker nodes.
Error from docker service ps myapp:
"No such image: *****.dkr.ecr.us-west-2.amazonaws.com/myapp:latest"
OS: RHEL 7.3
Docker version: Docker version 1.13.1-cs5, build 21c42d8
Anyone have a solution for this issue?

The key is the --with-registry-auth flag, which forwards your registry credentials from the node where you run the deploy to the other swarm nodes. Log in first, then deploy:
docker login -u Username -p password *****.dkr.ecr.us-west-2.amazonaws.com && docker stack deploy --compose-file docker-compose.yml myapp --with-registry-auth
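If the cluster is ever moved to AWS CLI v2, where aws ecr get-login was removed, a roughly equivalent sequence run on a manager node would be the following sketch (registry host redacted exactly as in the question):

```shell
# AWS CLI v2: print a fresh ECR token and pipe it straight into docker login.
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin *****.dkr.ecr.us-west-2.amazonaws.com

# --with-registry-auth sends the credentials obtained above along to the
# worker nodes so they can pull the image as well.
docker stack deploy --compose-file docker-compose.yml myapp --with-registry-auth
```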

Related

Pushing a Docker image to AWS ECR gives "no basic auth credentials"

When I try to push a Docker image to AWS ECR, it fails with the following:
sudo docker push xxxxxxx.dkr.ecr.us-east-2.amazonaws.com/my-app:1.0
7d9a9c94af8d: Preparing
f77d412f54b5: Preparing
629960860aca: Preparing
f019278bad8b: Preparing
8ca4f4055a70: Preparing
3e207b409db3: Waiting
no basic auth credentials
although logging in succeeds:
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin xxxx.dkr.ecr.us-east-2.amazonaws.com
Login Succeeded
And the /home/[my user]/.docker/config.json file has the following data
{
  "auths": {
    "xxxx.dkr.ecr.us-east-2.amazonaws.com": {
      "auth": "QVsVkhaRT...."
    }
  }
}
I am using aws cli version 2.3.5
aws --version
aws-cli/2.3.5 Python/3.8.8 Linux/5.8.0-63-generic exe/x86_64.ubuntu.20 prompt/off
I am using docker version 20.10.10
docker --version
Docker version 20.10.10, build b485636
How can I solve this problem?
You're running sudo docker push.
This means that the credentials in your account won't be used. Instead, Docker tries to use the (nonexistent) credentials of the root user, since sudo runs the client with root's home directory and therefore a different config.json.
Changing your command to docker push should suffice.
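The difference can be illustrated without Docker at all: the CLI resolves its credentials file from the environment, and sudo changes that environment. A small sketch (the DOCKER_CONFIG fallback rule is standard Docker CLI behaviour; the sudo workaround at the end is an untested suggestion):

```shell
# Sketch of why `sudo docker push` fails here: the Docker CLI reads
# credentials from $DOCKER_CONFIG, falling back to $HOME/.docker/config.json.
# Under sudo, HOME becomes /root, so the login you performed as your own
# user is never consulted.
cfg="${DOCKER_CONFIG:-$HOME/.docker}/config.json"
echo "credentials file consulted: $cfg"

# If you must keep sudo for some reason, you can point root's Docker CLI
# at your user's config instead (untested suggestion; adjust the path):
#   sudo DOCKER_CONFIG="$HOME/.docker" docker push xxxxxxx.dkr.ecr.us-east-2.amazonaws.com/my-app:1.0
```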

Docker container export and deployment question

I have a Docker image running locally on my Mac, and I'm trying to export that local image and deploy it to an AWS Elastic Beanstalk environment.
Should I use the docker export command, which outputs a tar file, and then upload that to AWS? Or should it be in a different, non-compressed format?
I already tried the above and docker export produced a tar file, but AWS didn't like that, so what approach should I take here?
You can create a repository in AWS ECR (Amazon Elastic Container Registry) and push your local image to that repo. (Note: docker export dumps a container's filesystem and drops the image metadata; docker save is the image-level equivalent, but pushing to a registry is the more common route.)
$(aws ecr get-login --no-include-email --region us-east-2)
docker tag test-pod:latest 24533xxxxx.dkr.ecr.us-east-2.amazonaws.com/test:latest
docker push 24533xxxxx.dkr.ecr.us-east-2.amazonaws.com/test:latest
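Once the image is in ECR, Elastic Beanstalk still has to be told where to pull it from. A minimal single-container Dockerrun.aws.json sketch (version 1 format; the repository URI is taken from the commands above, and the container port is an assumption) could look like this:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "24533xxxxx.dkr.ecr.us-east-2.amazonaws.com/test:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": 80 }
  ]
}
```

The environment's instance profile also needs ECR read permissions (for example the AmazonEC2ContainerRegistryReadOnly managed policy) so the instances can pull the image.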

Dockerfile - install Jenkins on AWS

New to AWS so any help would be appreciated.
I'm attempting to run Jenkins through Docker on AWS. I found this article https://docs.aws.amazon.com/aws-technical-content/latest/jenkins-on-aws/containerized-deployment.html
Can anyone share a better step-by-step tutorial to achieve this? The page above seems incomplete.
It talks about "The Dockerfile should also contain the steps to install the Jenkins Amazon ECS plugin" but does not show how to go about installing the plugin using the Dockerfile.
thanks
Please follow the steps below:
Launch an EC2 cluster according to your needs.
Install Docker on your local machine. For example, on Ubuntu: sudo apt-get install docker.io
systemctl start docker
Create a new folder for our Jenkins Docker image. Inside it, create a new Dockerfile with the following contents:
FROM jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Create plugins.txt in the same folder and add the line below:
amazon-ecs:1.3
Log in to ECR using the AWS CLI (configure it first with your credentials via aws configure):
aws ecr get-login --region <REGION>
Run the output returned from the above command to perform the docker login, then build and tag the image:
sudo docker build -t jenkins_master .
sudo docker tag jenkins_master:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
Create a repository in ECR for this image:
aws ecr create-repository --repository-name jenkins_master
Push the image to AWS ECR:
sudo docker push <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
Our Jenkins Docker image is ready, but data stored by this Jenkins server will not be persistent. To store data permanently, we will create another Docker image that creates a volume with a mount point. For that, create a new directory for this new image, and inside it create another Dockerfile with the content below:
FROM jenkins
VOLUME ["/var/jenkins_home"]
Again, follow the same commands to build this new image, create its repository, and push it to ECR:
sudo docker build -t jenkins_dv .
sudo docker tag jenkins_dv:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
aws ecr create-repository --repository-name jenkins_dv
sudo docker push <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
Now our images are ready. We will use them to run a service on our ECS cluster. For that we need to install ecs-cli; on Linux:
sudo curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
sudo chmod +x /usr/local/bin/ecs-cli
Create a new text file with the contents below, which hold the Jenkins service configuration:
jenkins_master:
  image: jenkins_master
  cpu_shares: 100
  mem_limit: 2000M
  ports:
    - "8080:8080"
    - "50000:50000"
  volumes_from:
    - jenkins_dv
jenkins_dv:
  image: jenkins_dv
  cpu_shares: 100
  mem_limit: 500M
Finally, bring this service up on your newly created cluster using the file above:
ecs-cli compose --file docker_compose.txt service up --cluster <cluster_name>
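One caveat with the compose file above: image: jenkins_master only resolves if the image is already present on the container instances. Since the images live in ECR, it is safer to reference the full repository URIs, as in this sketch (same placeholders as above; the ECS instance role also needs ECR read permissions, e.g. the AmazonEC2ContainerRegistryReadOnly managed policy):

```yaml
jenkins_master:
  image: <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
  cpu_shares: 100
  mem_limit: 2000M
  ports:
    - "8080:8080"
    - "50000:50000"
  volumes_from:
    - jenkins_dv
jenkins_dv:
  image: <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
  cpu_shares: 100
  mem_limit: 500M
```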
Hope this helps!

Docker unable to connect to AWS EC2 cloud

Hi, I am able to deploy my Spring Boot application in my local Docker container (1.11.2) on Windows 7. I followed the steps below to run the Docker image on AWS EC2 (free account: eu-central-1) but am getting an error.
Step 1
Generated Amazon "AccessKeyID" and "SecretKey".
Step 2
Created a new repository; the console shows 5 steps to push my Docker image to AWS ECR.
Step 3
Installed the AWS CLI, ran aws configure, and configured all the details.
While running aws iam list-users --output table, it shows the full user list.
Step 4
Ran the following command: aws ecr get-login --region us-west-2
It returns the docker login command.
While running that docker login, it returns the following error:
XXXX#XXXX MINGW64 ~
$ docker login -u AWS -p <accessKey>/<secretKey>
Uwg
Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized:
incorrect username or password
XXXX#XXXX MINGW64 ~
$ gLBBgkqhkiG9w0BBwagggKyMIICrgIBADCCAqcGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQME8
Zei
bash: gLBBgkqhkiG9w0BBwagggKyMIICrgIBADCCAqcGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQ
ME8Zei: command not found
XXXX#XXXX MINGW64 ~
$ lJnpBND9CwzAgEQgIICeLBms72Gl3TeabEXDx+YkK9ZlbyGxPmsuVI/rq81tDeIC68e0Ma+ghg3Dt
Bus
bash: lJnpBND9CwzAgEQgIICeLBms72Gl3TeabEXDx+YkK9ZlbyGxPmsuVI/rq81tDeIC68e0Ma+ghg
3DtBus: No such file or directory
I didn't find a proper answer on Google. It would be great if someone could guide me to resolve this issue. Thanks in advance.
Your command is not pointing to your ECR endpoint but to Docker Hub. On Linux, normally I would simply run:
$ eval $(aws ecr get-login --region us-west-2)
This is possible because the get-login command is a wrapper that retrieves a new authorization token and formats the docker login command for you. You only need to execute the formatted command (in this case with eval).
But if you really want to run the docker login manually, you'll have to specify the authorization token and the endpoint of your repository:
$ docker login -u AWS -p <password> -e none https://<aws_account_id>.dkr.ecr.<region>.amazonaws.com
Where <password> is actually the authorization token (which can be generated by the aws ecr get-authorization-token command).
Please refer to the documentation for more details: http://docs.aws.amazon.com/cli/latest/reference/ecr/index.html
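To make the eval pattern concrete, here is a tiny stand-in for the get-login output; the echo merely simulates the docker login command the real CLI would print (token and endpoint are made up):

```shell
# Stand-in for the string that `aws ecr get-login` prints: a complete,
# ready-to-run `docker login` command. Token and registry are fabricated.
login_cmd='echo docker login -u AWS -p FAKE_TOKEN https://123456789012.dkr.ecr.us-west-2.amazonaws.com'

# `eval` executes the formatted command; with the real CLI this is the
# usual one-liner: eval $(aws ecr get-login --region us-west-2)
eval "$login_cmd"
```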

How to pull a docker template image on a Jenkins slave (AWS/ECR)?

In our current setup we have the following Jenkins config.
Jenkins Master <-- ssh --> Jenkins Slave
So the Jenkins master is able to connect successfully to the slave. I would like to provide a way for the slave to get a Docker image so we can build using a prebuilt Docker slave. When building the Docker slave locally I can use it, but I hit a wall when I want to pull this Docker build slave from an AWS ECR repository: I seem to be unable to find a way to provide the credentials.
We are using the AWS ECR plugin, but this does not help in providing details for the ECR pull. (See post http://getmesh.io/Blog/Jenkins+2+Pipeline+101)
Any clue where I can configure the AWS ECR credentials so the Docker template can be pulled?
As far as I am aware, your Jenkins Docker slave server should have the awscli installed with a valid AWS secret and key. Once that is done, you can run the command below on the slave to authenticate:
aws ecr get-login --region YOUR_REGION --no-include-email | xargs -n 1 -P 10 -I {} bash -c {}
The command takes the output from the awscli and performs the login to AWS ECR.
As the AWS ECR token expires every 12 hours, I have added a cronjob to renew the token every 6 hours:
0 */6 * * * aws ecr get-login --region YOUR_REGION --no-include-email | xargs -n 1 -P 10 -I {} bash -c {}
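If the slave is running AWS CLI v2, where get-login no longer exists, a roughly equivalent crontab line would be the following sketch (region and account ID are placeholders):

```shell
0 */6 * * * aws ecr get-login-password --region YOUR_REGION | docker login --username AWS --password-stdin ACCOUNT_ID.dkr.ecr.YOUR_REGION.amazonaws.com
```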
Or, as an alternative, you can create an internal AWS ECR anonymous proxy from which everyone in your organisation can pull containers. Check this project for more details.