Single Docker image push into AWS elastic container registry (ECR) from VSTS build/release definition - amazon-web-services

We have a Python Docker image that needs to be built and published (CI/CD) to the AWS container registry.
At the moment AWS does not support running Docker tasks against Docker Hub private repositories, so we have to use ECR instead of Docker Hub.
Our CI/CD pipeline uses docker build and push tasks. Docker authentication is done via a Service Endpoint in the VSTS project.
There are a few steps to follow to set up a VSTS service endpoint for ECR. This requires executing an AWS CLI command (locally or in the cloud) to get a username and password for the Docker client to log in; it looks like:
aws ecr get-login --no-include-email
The above command outputs a docker login command with a username (AWS) and a password (token).
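For reference, the registry hostname in that generated login command follows a fixed pattern; the account ID and region below are placeholders, not values from the question:

```shell
# ECR registry hostnames follow a fixed pattern (placeholder values):
ACCOUNT_ID="123456789012"
REGION="us-east-1"
ECR_REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
# get-login prints roughly: docker login -u AWS -p <token> https://$ECR_REGISTRY
echo "$ECR_REGISTRY"
```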
The issue with this approach is that the access token only lasts 12 hours. Therefore the CI/CD task requires updating the Service Endpoint every 12 hours; otherwise the build fails with an unauthorized token exception.
The other option we have is to run shell commands that execute the aws get-login command and then run the docker build/push commands in the same context. This option requires installing the AWS CLI on the build agent (we are using the public Linux agent).
In addition, the shell-command approach involves awkward task configuration with environment variables; otherwise we would be exposing the AWS application ID and secret in the build steps.
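That second option can be sketched as a single agent-side script, assuming the AWS CLI is installed and the keys arrive as secret pipeline variables (the CLI reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment); region and repository URI are placeholders:

```shell
#!/bin/sh
# Sketch of the shell-command option; region and repository URI below
# are placeholder values:
set -e
REGION="us-east-1"
REPO_URI="123456789012.dkr.ecr.${REGION}.amazonaws.com/myapp"

# get-login prints a ready-made `docker login` command; eval runs it
eval "$(aws ecr get-login --no-include-email --region "$REGION")"

docker build -t "${REPO_URI}:latest" .
docker push "${REPO_URI}:latest"
```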
Could you please advise whether you have solved a VSTS CI/CD pipeline using Docker with AWS ECR?
Thanks, Mahi

After a lot of research, trial and error, I found an answer to my own question.
AWS provides an extension to VSTS with build tasks and Service Endpoints. You need to configure the AWS service endpoint using an account number, application ID, and secret. Then, in your build/release definition:
Build the Docker image using the out-of-the-box Docker build task, or a shell/bash command (for example: docker build -t your:tag .)
Then add another build step to push the image into the AWS registry; for this you can use the AWS extension task (Amazon Elastic Container Registry Push Image). This build task generates a token and logs in the Docker client every time you run the build definition. You don't have to worry about updating the username/token every 12 hours; the AWS extension build task does that for you.

You are looking for this
Amazon ECR Docker Credential Helper
AWS documentation
This is where the Amazon ECR Docker Credential Helper makes it easy for developers to use ECR without needing to run docker login or write logic to refresh tokens, providing transparent access to ECR repositories.
The Credential Helper helps developers in a continuous development environment automate authentication to ECR repositories without having to regenerate tokens every 12 hours. In addition, the Credential Helper provides token caching under the hood, so you don't have to worry about being throttled or writing additional logic.
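In practice, wiring up the Credential Helper is a one-time client configuration; a minimal sketch, assuming the docker-credential-ecr-login binary is already installed somewhere on $PATH:

```shell
# Point the Docker client at the ECR credential helper
# (note: this overwrites any existing ~/.docker/config.json):
mkdir -p ~/.docker
cat > ~/.docker/config.json <<'EOF'
{
  "credsStore": "ecr-login"
}
EOF
# From here on, docker pull/push against an ECR registry obtains fresh
# tokens transparently through the helper.
```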

Related

Deploy Django Code with Docker to AWS EC2 instance without ECS

Here is my setup.
My Django code is hosted on Github
I have Docker set up on my EC2 instance
My deployment process is manual: on every single code change I have to run git pull, docker build, and docker run. I am using a dockerservice account to perform this step.
How do I automate step 3 with AWS CodeDeploy or something similar?
Every example I am seeing on the internet involves ECS or Fargate, which I am not ready to use yet.
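The three manual steps can be collapsed into a single script that an automation hook (CodeDeploy, a webhook listener, etc.) then only needs to invoke; path, image, and container names below are placeholders:

```shell
#!/bin/sh
# Manual deploy steps as one script (placeholder path and names):
set -e
cd /home/dockerservice/myapp
git pull origin master
docker build -t myapp:latest .
docker stop myapp 2>/dev/null || true   # replace the running container
docker rm   myapp 2>/dev/null || true
docker run -d --name myapp -p 8000:8000 myapp:latest
```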
Check this out on how to use Docker images from a private registry (e.g. Docker Hub) for your build environment:
How to Use Docker Images from a Private Registry for Your Build Environment

How to authenticate docker client commands in AWS?

The authentication below can be implemented using certificates (client & server) for any human user whose docker client talks to the docker daemon:
But a Jenkins pipeline also runs docker commands that talk to the docker daemon.
How do I authenticate the Jenkins pipeline to run specific docker commands, when the pipeline is launched as a Jenkins slave container in AWS EC2 on every new commit in Git? Does the ECS cluster approach to launching the pipeline task help with authentication?
You can run docker login from your Jenkins script and store the secrets in the Jenkins config. You could also pre-install credentials on the machine as part of your build process. If you are talking about permissions to talk to the daemon, you have to give the jenkins user the appropriate permissions (usually by adding it to the docker group).
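Both points can be sketched as follows, assuming a Linux agent; REGISTRY_USER / REGISTRY_PASS are assumed names for credentials injected by Jenkins, not real variables from the question:

```shell
# One-time agent setup (run as root): let the jenkins user reach the
# Docker daemon socket by adding it to the docker group:
usermod -aG docker jenkins

# Inside a pipeline step: log in to the registry with injected credentials
# (--password-stdin keeps the secret out of the process list):
echo "$REGISTRY_PASS" | docker login -u "$REGISTRY_USER" --password-stdin
```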

AWS profiles with GitLab CI

We are using GitLab as our repo and decided to go with GitLab CI. We are using the Serverless Framework to deploy our code to AWS. I want to integrate AWS profiles into GitLab so that it can use the specified profile to access the corresponding AWS account. I have tried hard-coding the variables, but if I have to use a different profile for a deployment, I need to change all the gitlab-ci files, as I have more than 100 repos.
Is there any way to configure AWS profiles in GitLab?
Basically my GitLab CI jobs run on Docker, so I created a Docker image with all the prerequisites needed for my deployment. Now my runtime is the same as my local machine with the AWS CLI installed, and I can use my AWS profiles for the deployment in the serverless files.
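One way to sketch the profile setup inside such a job image, assuming the keys arrive as GitLab CI/CD variables set once at group level (variable and profile names below are placeholders):

```shell
# Create a named profile at job runtime from CI/CD variables, so the
# per-account secrets live in GitLab settings rather than in every repo's
# .gitlab-ci.yml (DEPLOY_* names and "deploy" profile are placeholders):
aws configure set aws_access_key_id     "$DEPLOY_AWS_ACCESS_KEY_ID"     --profile deploy
aws configure set aws_secret_access_key "$DEPLOY_AWS_SECRET_ACCESS_KEY" --profile deploy
aws configure set region                "eu-west-1"                     --profile deploy

# the Serverless Framework then picks the profile up via:
export AWS_PROFILE=deploy
```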

AWS Fargate ECS CLI Compose Private Registry

I am trying to create a Fargate cluster using CloudFormation in AWS, which uses a bunch of images stored in a private registry behind username/password authentication.
This command
./ecs-cli.exe compose --project-name AdminUI service up --create-log-groups --cluster-config AdminUIConfig
results in an error
FATA[0302] Deployment has not completed: Running count has not changed for 5.00 minutes
After investigation it appears the problem is the lack of basic auth against the repo which holds the images. How on earth do I pass this? I am currently running on Windows 10 using VS Code, if that matters. It feels like it is not client-side; it is the cluster itself which needs to send the authentication.
Sorry, new to Docker and AWS
Fargate currently only supports pulling images from an unauthenticated registry (like Docker Hub) or from Amazon ECR.
From the documentation:
The Fargate launch type only supports images in Amazon ECR or public repositories in Docker Hub.
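Given that constraint, one workaround is to mirror the privately hosted images into ECR so the Fargate tasks can pull them; a sketch with placeholder registry, account ID, and region:

```shell
# Mirror a private-registry image into ECR (all names are placeholders):
aws ecr create-repository --repository-name adminui --region us-east-1

docker pull myregistry.example.com/adminui:latest
docker tag  myregistry.example.com/adminui:latest \
            123456789012.dkr.ecr.us-east-1.amazonaws.com/adminui:latest

eval "$(aws ecr get-login --no-include-email --region us-east-1)"
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/adminui:latest
```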

How to push docker auto build image to AWS instance with a hook?

I have a Docker Hub account linked with GitHub with auto-build enabled. With every action on GitHub, a build is started on the Docker Hub account. But now I have to find a way for AWS to listen for the new build, pull it, and run it.
I am new to Docker; please let me know if anyone knows the solution to this.
You can use the Webhook feature in Docker Hub to link with a CI/CD tool to trigger the docker pull on your EC2 instance. For example, using Jenkins:
Set up Jenkins on an EC2 instance or on-premises
Remote SSH to the EC2 instance using the Jenkins Remote SSH plugin
Create a build job for deployment
Inside the job, add shell script code to pull the image from Docker Hub.
To trigger the job, register a webhook linked with the Jenkins job in the Docker Hub Webhooks section, so each new build invokes the webhook.
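The shell step inside the deployment job could look roughly like this; image and container names are placeholders:

```shell
#!/bin/sh
# Pull the newly built image and restart the container on the instance
# (placeholder image and container names):
set -e
docker pull mydockerhubuser/myapp:latest
docker stop myapp 2>/dev/null || true
docker rm   myapp 2>/dev/null || true
docker run -d --name myapp -p 80:8080 mydockerhubuser/myapp:latest
```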