How to authenticate docker client commands in AWS?

The authentication below can be implemented using TLS certificates (client and server) for any human user whose docker client talks to the docker daemon. But a Jenkins pipeline also runs docker commands that talk to the docker daemon.
How do I authenticate a Jenkins pipeline to run specific docker commands, where the pipeline is launched as a Jenkins slave container on AWS EC2 on every new commit in Git? Does the ECS cluster approach to launching the pipeline task help with authentication?

You can run docker login from your Jenkins script and store the secrets in the Jenkins credentials configuration. You could also pre-install credentials on the machine as part of your build process. If you are talking about permission to talk to the daemon itself, you have to give the jenkins user the appropriate permissions (usually by adding it to the docker group).
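For illustration, a minimal shell sketch of both points, assuming the Jenkins Credentials Binding plugin exposes the registry secrets as REGISTRY_USER and REGISTRY_PASS (hypothetical variable names) and that the registry URL is a placeholder:

# Log the docker client in with credentials injected by Jenkins;
# --password-stdin keeps the secret off the command line and out of logs.
echo "$REGISTRY_PASS" | docker login --username "$REGISTRY_USER" --password-stdin registry.example.com

# Grant the jenkins user access to the Docker daemon socket
# (run once on the build host, then restart the Jenkins agent).
sudo usermod -aG docker jenkins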

Related

Deploy Django Code with Docker to AWS EC2 instance without ECS

Here is my setup:
1. My Django code is hosted on GitHub.
2. Docker is set up on my EC2 instance.
3. My deployment process is manual: I have to run git pull, docker build and docker run on every single code change. I am using a docker service account to perform this step.
How do I automate step 3 with AWS CodeDeploy or something similar? Every example I am seeing on the internet involves ECS or Fargate, which I am not ready to use yet.
Check this out on how to use Docker images from a private registry (e.g. Docker Hub) for your build environment: How to Use Docker Images from a Private Registry for Your Build Environment
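As a rough illustration of what automating step 3 could look like, here is a shell sketch of a deploy script the EC2 instance might run (for example, as a hook invoked by CodeDeploy); the directory, image name, container name and ports are all placeholders:

cd /home/ec2-user/myapp                 # placeholder checkout directory
git pull origin main                    # fetch the latest code
docker build -t myapp:latest .          # rebuild the image
docker stop myapp 2>/dev/null || true   # replace the running container, if any
docker rm myapp 2>/dev/null || true
docker run -d --name myapp -p 80:8000 myapp:latest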

Single Docker image push into AWS Elastic Container Registry (ECR) from a VSTS build/release definition

We have a Python docker image which needs to be built and published (CI/CD) to the AWS container registry.
At the moment AWS does not support running docker tasks from Docker Hub private repositories, therefore we have to use ECR instead of Docker Hub.
Our CI/CD pipeline uses docker build and push tasks. Docker authentication is done via a Service Endpoint in the VSTS project.
There are a few steps to follow to set up a VSTS service endpoint for ECR. This requires executing an AWS CLI command (locally or in the cloud) to get a username and password for the docker client to log in with; it looks like:
aws ecr get-login --no-include-email
The above command outputs a docker login command with a username (AWS) and a password (token).
The issue with this approach is that the access token lasts only 12 hours. Therefore the CI/CD task requires updating the Service Endpoint every 12 hours, otherwise the build fails with an unauthorized-token exception.
The other option we have is to run shell commands that execute the aws ecr get-login command and run the docker build/push commands in the same context (sketched after the question). This option requires installing the AWS CLI on the build agent (we are using the public Linux agent).
In addition, the shell-command approach involves awkward task configuration with environment variables; otherwise we would be exposing the AWS access key ID and secret in the build steps.
Could you please advise if you have solved a VSTS CI/CD pipeline using docker with AWS ECR?
Thanks, Mahi
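For context, a minimal sketch of that shell-command approach, assuming AWS CLI v1 (where aws ecr get-login still exists; CLI v2 replaced it with get-login-password) and placeholder account ID, region and repository names:

# Evaluate the docker login command that get-login prints (AWS CLI v1).
eval "$(aws ecr get-login --no-include-email --region us-east-1)"
# Build and push; 123456789012 is a placeholder account ID.
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest .
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest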
After a lot of research and trial and error, I found an answer to my own question.
AWS provides an extension to VSTS with build tasks and service endpoints. You need to configure an AWS service endpoint using an account number, application ID, and secret. Then, in your build/release definition:
1. Build the docker image using the out-of-the-box Docker build task, or a shell/bash command (for example, docker build -t your:tag .).
2. Add another build step to push the image into the AWS registry; for this you can use the AWS extension task (Amazon Elastic Container Registry Push Image). This build task generates a token and logs the docker client in every time you run the build definition, so you don't have to worry about updating the username/token every 12 hours; the AWS extension build task does that for you.
You are looking for the Amazon ECR Docker Credential Helper. From the AWS documentation:
This is where the Amazon ECR Docker Credential Helper makes it easy for developers to use ECR without the need to use docker login or write logic to refresh tokens, and provides transparent access to ECR repositories.
The Credential Helper helps developers in a continuous development environment automate the authentication process to ECR repositories without having to regenerate tokens every 12 hours. In addition, the Credential Helper provides token caching under the hood, so you don't have to worry about getting throttled or writing additional logic.
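Wiring the helper up is a one-time change to the Docker client configuration. A minimal sketch, assuming the docker-credential-ecr-login binary is installed on the PATH and using a placeholder account ID and region:

# Tell Docker to use the ECR credential helper for this registry
# (merge the key into your existing ~/.docker/config.json if you have one).
cat > ~/.docker/config.json <<'EOF'
{
  "credHelpers": {
    "123456789012.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
  }
}
EOF
# Pulls and pushes against that registry now authenticate transparently.
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest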

Correct approach to deploying a stack to Docker for AWS

I am trying to deploy my docker-compose based stack to Docker for AWS (created via AWS CloudFormation).
My compose YAML file is managed in a Git repository, and my Docker images live in a private registry (GitLab).
What is the correct way of working with the manager node to deploy a service?
I tried (and failed) several approaches:
Working with the local Docker client via the Docker API is not possible, because the Docker for AWS manager node does not open port 2375.
Rsyncing the compose YAML and environment file directly to the manager node is not possible, because rsync is not installed on the Amazon Docker AMI.
Curling the file from GitLab seems like a very inconvenient way of doing it.
Thanks
Found a way to do it more or less properly (according to a comment in the Swarm documentation):
Create an SSH tunnel to the manager:
$ ssh -NL localhost:2374:/var/run/docker.sock docker@<manager ip> &
Then run everything locally with
$ docker -H localhost:2374 info
or define
export DOCKER_HOST=localhost:2374
and use docker as if you were running it on the Swarm manager:
$ docker info
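Building on that tunnel, a minimal sketch of an actual stack deployment; the GitLab registry hostname, compose file path and stack name are placeholders:

export DOCKER_HOST=localhost:2374
# Log in to the private GitLab registry so the swarm can pull the images.
docker login registry.gitlab.com
# --with-registry-auth forwards the registry credentials to the Swarm nodes.
docker stack deploy --with-registry-auth -c docker-compose.yml mystack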
In my opinion, there are two options you can try:
1. Use Jenkins with the Publish Over SSH plugin. You can use this plugin to send your compose file and then run commands like docker stack deploy. More description can be found here.
2. Use Docker Cloud to bring your swarm to your local terminal, similar to what you have already done. Follow this link.
The first approach is much better because you have automated the deployment; now you can schedule deployments, run them at the click of a button, or even on commits.
Hope it helps!

How to push a Docker autobuild image to an AWS instance with a hook?

I have a Docker Hub account linked with GitHub for automated builds. With every action on GitHub, a build is started on my Docker Hub account. Now I have to find a way for AWS to listen for the new build, pull it, and run it.
I am new to Docker; please let me know if anyone knows a solution to this.
You can use the Webhook feature in Docker Hub to link with a CI/CD tool that triggers the docker pull on your EC2 instance. For example, using Jenkins:
1. Set up Jenkins on an EC2 instance or on-premise.
2. Remote SSH to the EC2 instance using the Jenkins Remote SSH plugin.
3. Create a build job for deployment.
4. Inside the job, add shell script code to pull the image from Docker Hub (a sketch follows this list).
5. To trigger the job, register a webhook linked with the Jenkins job in the Docker Hub Webhooks section as a build trigger.
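A minimal sketch of the shell step in point 4, with placeholder image and container names:

# Pull the freshly built image and restart the container from it.
docker pull myuser/myapp:latest
docker stop myapp 2>/dev/null || true
docker rm myapp 2>/dev/null || true
docker run -d --name myapp -p 80:8000 myuser/myapp:latest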

docker build taking too much time in AWS ECS

I have set up a master/slave configuration of Jenkins on AWS ECS and written a job that builds Docker images and pushes them to ECR. Every build of the job takes roughly the same amount of time, approximately 10 minutes. My Jenkins master is running in a container, and I have used the Amazon EC2 Container Service Plugin to configure the slave. I have mounted the Docker socket file so that the slave node has access to the Docker daemon, but it is not reusing the image layers on the ECS instance; each build starts from scratch.
Overview of each build: (screenshot omitted)
Your docker build is probably not using Docker's layer-caching mechanism. Please refer to this guide on build caching.
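One common workaround when every build lands on a fresh agent is to seed the cache from the last pushed image. A minimal sketch, with a placeholder ECR repository URI:

REPO=123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp   # placeholder
# Pull the previous image (ignore failure on the very first build) ...
docker pull "$REPO:latest" || true
# ... and let docker build reuse its layers as a cache source.
docker build --cache-from "$REPO:latest" -t "$REPO:latest" .
docker push "$REPO:latest"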