How to push a Docker auto-build image to an AWS instance with a hook?

I have a Docker Hub account linked with GitHub for automated builds. With every push to GitHub, a build is started on the Docker Hub account. Now I have to find a way for AWS to listen for the new build, pull it, and run it.
I am new to Docker. Please let me know if anyone knows a solution to this.

You can use the Webhooks feature in Docker Hub to link with a CI/CD tool that triggers the docker pull on your EC2 instance. For example, using Jenkins:
Set up Jenkins on an EC2 instance or on-premises.
Remote SSH to the EC2 instance using the Jenkins Remote SSH plugin.
Create a build job for deployment.
Inside the job, add shell script code to pull the image from Docker Hub and run it (see the sketch after this list).
To trigger the job, register a webhook linked with the Jenkins job in the Docker Hub Webhooks section, so every new build invokes the webhook.
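
A minimal sketch of that job's shell step, assuming placeholder image and container names (myuser/myapp, myapp) and a placeholder port mapping:

# Hypothetical deploy script run over SSH on the EC2 host by the Jenkins job.
IMAGE=myuser/myapp:latest   # placeholder Docker Hub repository
NAME=myapp                  # placeholder container name
docker pull "$IMAGE"                        # fetch the freshly built image
docker rm -f "$NAME" 2>/dev/null || true    # stop/remove the old container, if any
docker run -d --name "$NAME" -p 80:8000 "$IMAGE"   # start the new version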

Related

Deploy Django Code with Docker to AWS EC2 instance without ECS

Here is my setup:
1. My Django code is hosted on GitHub.
2. I have Docker set up on my EC2 instance.
3. My deployment process is manual: I have to run git pull, docker build, and docker run on every single code change. I am using a docker service account to perform this step.
How do I automate step 3 with AWS CodeDeploy or something similar?
Every example I am seeing on the internet involves ECS or Fargate, which I am not ready to use yet.
Check out this guide on how to Use Docker Images from a Private Registry (e.g. Docker Hub) for Your Build Environment.
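
As for automating step 3 itself: a hedged sketch of the script a deployment hook (CodeDeploy, Jenkins, or a simple webhook listener) could run on the instance; the checkout path, image name, and port mapping are assumptions:

# Rebuild and restart the Django container from the latest code.
cd /home/dockerservice/myapp            # placeholder checkout path
git pull origin master
docker build -t myapp:latest .
docker rm -f myapp 2>/dev/null || true  # replace the running container
docker run -d --name myapp -p 80:8000 myapp:latest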

TeamCity: push docker image to AWS ECR

Running TeamCity 2019.1.4 with one server and three separate agents. The agents and the server run in their respective server/agent containers on separate EC2 instances. I want the build artifact (a Docker image) to be pushed to ECR. Permission is configured via an IAM role. I am getting an Unauthorized error when pushing/pulling. Manually pulling the image from the agent's EC2 host works, but manually pulling from within the agent container on that host gives the same error. How do I configure the TeamCity agent container to identify itself as the host machine?
PS: An option I am trying to avoid is running the TeamCity agents in classic mode (manual installation), which would most likely work.
Do the following:
In the TeamCity project configuration, add an ECR connection.
Then, in the build configuration, add the "Docker Support" build feature.
Make sure the option "Log in to the Docker registry before the build" is checked, and choose the ECR connection from the project configuration.

Single Docker image push into AWS elastic container registry (ECR) from VSTS build/release definition

We have a Python Docker image which needs to be built and published (CI/CD) to the AWS container registry.
At the moment AWS does not support running Docker tasks using Docker Hub private repositories, therefore we have to use ECR instead of Docker Hub.
Our CI/CD pipeline uses Docker build and push tasks. Docker authentication is done via a Service Endpoint in the VSTS project.
There are a few steps to follow to set up a VSTS service endpoint for ECR. This requires executing an AWS CLI command (locally or in the cloud) to get a username and password for the Docker client to log in with; it looks like:
aws ecr get-login --no-include-email
The above command outputs a docker login command with a username (AWS) and a password (a temporary token).
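
The generated login command looks roughly like this; the account ID, region, and token are placeholders:

docker login -u AWS -p <long-base64-token> https://123456789012.dkr.ecr.us-east-1.amazonaws.com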
The issue with this approach is that the access token lasts only 12 hours, so the CI/CD task requires updating the Service Endpoint every 12 hours; otherwise the build fails with an unauthorized-token exception.
The other option we have is to run shell commands that execute the aws ecr get-login command and then run the docker build/push commands in the same context. This option requires installing the AWS CLI on the build agent (we are using the public Linux agent).
In addition, the shell-command route involves awkward task configuration with environment variables; otherwise we would be exposing the AWS application ID and secret in the build steps.
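
For completeness, a hedged sketch of that shell-based workaround (account ID, region, and image name are placeholders; it assumes a pre-v2 AWS CLI, since get-login was removed in v2):

$(aws ecr get-login --no-include-email --region us-east-1)   # log in with the short-lived token
docker build -t myapp:latest .
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest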
Could you please advise if you have solved a VSTS CI/CD pipeline using Docker with AWS ECR?
Thanks, Mahi
After a lot of research and trial and error, I found an answer to my own question.
AWS provides an extension to VSTS with build tasks and service endpoints. You need to configure an AWS service endpoint using an account number, application ID, and secret. Then, in your build/release definition:
Build the Docker image using the out-of-the-box Docker build task, or a shell/bash command (for example: docker build -t your:tag .).
Then add another build step to push the image into the AWS registry; for this you can use the AWS extension task (Amazon Elastic Container Registry Push Image). This task generates a token and logs the Docker client in every time you run the build definition, so you don't have to worry about updating the username/token every 12 hours; the AWS extension build task does that for you.
You are looking for this:
Amazon ECR Docker Credential Helper (see the AWS documentation):
This is where the Amazon ECR Docker Credential Helper makes it easy for developers to use ECR without the need to run docker login or write logic to refresh tokens, providing transparent access to ECR repositories.
The Credential Helper helps developers in a continuous development environment automate the authentication process to ECR repositories without having to regenerate tokens every 12 hours. In addition, the Credential Helper also provides token caching under the hood, so you don't have to worry about getting throttled or writing additional logic.
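
A minimal sketch of wiring it up, assuming docker-credential-ecr-login is already installed and on the PATH; the registry hostname is a placeholder:

# Route auth for your ECR registry through the credential helper.
cat > ~/.docker/config.json <<'EOF'
{
  "credHelpers": {
    "123456789012.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
  }
}
EOF
# Plain docker pull/push against that registry now works without docker login.
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest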

build and push docker image to AWS ECR using lambda

Is it possible to automate building a Docker image from code committed to GitHub (no tests involved) and then pushing it to AWS ECR using a Lambda function?
You cannot do it just with Lambda, as Lambda is not really a suitable execution environment for the Docker daemon (which is necessary to build the images). However, you can use Lambda + SNS to trigger an endpoint that points to a service you developed, hosted on EC2, which runs the docker build command after a git clone (you can use something similar to Python's fabfile.org or any framework that lets you execute server commands).
You can certainly extend this idea, perhaps by bringing the EC2 build machine up from an AMI that automates all of this, and so on.
The big point here is that you don't really have control over what's provisioned in Lambda, so you need EC2.
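
A hedged sketch of what that EC2-hosted build service might run when triggered; the repository URL, image name, and registry are placeholders, and the instance's IAM role is assumed to grant ECR push:

# Clone (or update) the repo, build the image, and push it to ECR.
git clone https://github.com/myorg/myrepo.git /tmp/build 2>/dev/null || git -C /tmp/build pull
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myrepo:latest /tmp/build
$(aws ecr get-login --no-include-email --region us-east-1)   # pre-v2 AWS CLI
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myrepo:latest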

AWS: How do I continuously deploy a static website on AWS

I have a GitHub repo with static website contents (i.e. I am trying not to use EC2, but rather AWS static website hosting). Now I want to automatically deploy it on AWS any time I change and push something to the master branch of my GitHub repo.
Any experience or ideas on doing this?
I do this for many projects by using a Jenkins server. I happen to run it on another EC2 instance, but you could also run it on-premises if you prefer.
GitHub notifies the Jenkins server that a check-in has occurred, and a Jenkins job deploys all the files to the proper places and also notifies me by SMS (or email) that a deployment has occurred.
(Jenkins is not the only tool that can do this; there are others.)
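
If the site lives in an S3 bucket configured for static website hosting, the Jenkins job's deploy step can be as small as this sketch; the bucket name is a placeholder:

# Sync the checked-out repo to the S3 website bucket, removing deleted files.
aws s3 sync . s3://my-static-site-bucket --delete --exclude ".git/*"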