I have a Symfony app that runs with docker-compose and I want to implement auto-deployment with GitLab CI/CD so the app runs on an AWS instance. I don't know what the best approach would be; basically, these are my ideas and their steps:
Approach 1: (building in GitLab)
Build the docker images in the GitLab runners
Push the images to some image registry
SSH into the AWS instance
pull the new image
run the new containers with docker-compose
Approach 2: (building in aws)
SSH into AWS
pull the branch to deploy
build the docker images
run the new containers with docker-compose
I like the first approach, but maybe there is a better way to do it. It would be amazing to have some .gitlab-ci.yml reference file.
Thanks!
If you build it in GitLab runners and push it to a registry, you can use it in more places than only on AWS.
Here is a reference file for the Docker-in-Docker build method (from the docs):
.gitlab-ci.yml
build:
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_DRIVER: overlay2
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.example.com
    - docker build -t registry.example.com/group/project/image:latest .
    - docker push registry.example.com/group/project/image:latest
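To cover the remaining steps of your first approach (SSH to the instance, pull, restart with docker-compose), a minimal deploy job could look like the sketch below. DEPLOY_HOST, DEPLOY_SSH_KEY, the ubuntu user and the /srv/app path are assumptions; store the host and key as CI/CD variables and adjust everything to your setup.

deploy:
  image: alpine:latest
  stage: deploy
  before_script:
    # install an SSH client and load the deploy key (DEPLOY_SSH_KEY is an assumed CI/CD variable)
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$DEPLOY_SSH_KEY" | ssh-add -
  script:
    # pull the image pushed by the build job and restart the stack on the instance
    - ssh -o StrictHostKeyChecking=no ubuntu@$DEPLOY_HOST "cd /srv/app && docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.example.com && docker-compose pull && docker-compose up -d"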
I have an Ubuntu EC2 instance where the Docker container runs. I need a simple CD architecture that will pull code from GitHub and run docker build... and docker run ... on my EC2 instance after every code push.
I've tried with GitHub Actions and I'm able to connect to the EC2 instance, but it gets stuck after the docker commands.
name: scp files
on: [push]
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Pull changes and run docker
        uses: fifsky/ssh-action@master
        with:
          command: |
            cd test_ec2_deployment
            git pull
            sudo docker build --network host -f Dockerfile -t test .
            sudo docker run -d --env-file=/home/ubuntu/.env -ti test
          host: ${{ secrets.HOST }}
          user: ubuntu
          key: ${{ secrets.SSH_KEY }}
          args: "-tt"
Output:
Step 12/13 : RUN /usr/bin/crontab /etc/cron.d/cron-job
---> Running in 52a5a0174958
Removing intermediate container 52a5a0174958
---> badf6fdaf774
Step 13/13 : CMD printenv > /etc/environment && cron -f
---> Running in 0e9fd12db4f7
Removing intermediate container 0e9fd12db4f7
---> 888a2a9e5910
Successfully built 888a2a9e5910
Successfully tagged test:latest
Also, I've tried to separate the docker commands into a .sh script, but it didn't help. Here is an issue for that: https://github.com/fifsky/ssh-action/issues/30.
I wonder if it's possible to implement this CD structure using AWS CodePipeline or any other AWS services. Also, I'm not sure whether it would be too complicated to set up Jenkins for this case.
This is definitely possible using AWS CodePipeline, but it will require a Lambda function since you want to deploy your container to your own EC2 instance (which I think is not necessary unless you have a specific use case). This is what your pipeline would look like:
AWS CodePipeline stages:
Source: Connect your GitHub repository. In the background, it will automatically clone the code from your Git repo, zip it, and store it in S3 to be used by the next stage. There are other options as well if you want to do it all yourself. For example:
using GitHub Actions, you zip the files and store them in an S3 bucket. On the AWS side, you add S3 as a source and provide the bucket and object key, so whenever that object version changes, it triggers the pipeline.
You can also use GitHub Actions to actually build your Docker image, push it to AWS ECR (container registry), and skip the build stage entirely. So, either build on the GitHub side or on the AWS side; up to you.
Build: For this stage (if you decide to build using AWS), you can use either Jenkins or AWS CodeBuild. I have used AWS CodeBuild, so IMO this is a fairly easy and quick solution for the build stage. At this stage, it will take the zip file from the S3 bucket, unzip it, build your Docker container image, and push it to AWS ECR (a sketch of a possible buildspec follows after this list).
Deploy: Since you want to run your Docker container on EC2, there is no straightforward way to do this. However, you can utilize a Lambda function to run your image on your own EC2 instance, but you will have to code the function yourself, which could be tricky. I would highly recommend using AWS ECS to run your container in a more manageable way. You can essentially do everything on an ECS container that you want to do on your EC2 instance.
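If you go the CodeBuild route, a minimal buildspec.yml might look roughly like the sketch below; ECR_REPO_URI and AWS_DEFAULT_REGION are assumed to be provided as CodeBuild environment variables, and the latest tag is just an example.

version: 0.2
phases:
  pre_build:
    commands:
      # log in to ECR (ECR_REPO_URI and AWS_DEFAULT_REGION are assumed environment variables)
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REPO_URI
  build:
    commands:
      # build and tag the image
      - docker build -t $ECR_REPO_URI:latest .
  post_build:
    commands:
      # push the image to ECR
      - docker push $ECR_REPO_URI:latest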
As @Myz suggested, this can be done using GitHub Actions with AWS ECR and AWS ECS. Below are some articles which I followed to solve the issue:
https://docs.github.com/en/actions/deployment/deploying-to-your-cloud-provider/deploying-to-amazon-elastic-container-service
https://kubesimplify.com/cicd-pipeline-github-actions-with-aws-ecs
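For a rough idea, the build-and-push part of such a workflow (the steps section of a job) looks roughly like this; the repository name my-app and the region are placeholders, so see the articles above for complete examples:

    steps:
      - uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Log in to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to ECR
        run: |
          # my-app is a placeholder repository name
          docker build -t ${{ steps.login-ecr.outputs.registry }}/my-app:${{ github.sha }} .
          docker push ${{ steps.login-ecr.outputs.registry }}/my-app:${{ github.sha }}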
I've installed the credential helper from GitHub on our EC2 instance and got it working for my account. What I want to do is use it during my GitLab CI/CD pipeline, where my gitlab-runner actually runs inside a Docker container and spawns new containers for the build, test & deploy phases. This is what our test phase looks like now:
image: docker:stable

run_tests:
  stage: test
  tags:
    - test
  before_script:
    - echo "Starting tests for CI_COMMIT_SHA=$CI_COMMIT_SHA"
    - docker run --rm mikesir87/aws-cli aws ecr get-login-password | docker login --username AWS --password-stdin $IMAGE_URL
  script:
    - docker run --rm $IMAGE_URL:$CI_COMMIT_SHA npm test
This works fine, but what I'd like to get working is the following:
image: docker:stable

run_tests:
  image: $IMAGE_URL:$CI_COMMIT_SHA
  stage: test
  tags:
    - test
  script:
    - npm test
When I try the second option, I get "no basic auth credentials". So I'm wondering if there is a way to get the credential helper to apply to the job's container without having to install the credential helper on the image itself.
Configure your runner to use the credential helper via the DOCKER_AUTH_CONFIG environment variable. A convenient way to do this is to bake it all into your runner image.
So, your gitlab-runner image should include the docker-credential-ecr-login binary (or you should mount it in from the host).
FROM gitlab/gitlab-runner:v14.3.2
COPY bin/docker-credential-ecr-login /usr/local/bin/docker-credential-ecr-login
Then, when you call gitlab-runner register, pass in the DOCKER_AUTH_CONFIG environment variable using the --env flag, as follows:
AUTH_ENV="DOCKER_AUTH_CONFIG={ \"credsStore\": \"ecr-login\" }"

gitlab-runner register \
  --non-interactive \
  ...
  --env "${AUTH_ENV}" \
  --env "AWS_SDK_LOAD_CONFIG=true" \
  ...
You can also set this equivalently in the config.toml, in instance CI/CD variables, or anywhere else CI/CD variables can be set (group, project, YAML, trigger, etc.), for example directly in .gitlab-ci.yml as sketched below.
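A minimal .gitlab-ci.yml variant of the same idea (this assumes the docker-credential-ecr-login binary is available wherever the pull actually happens, e.g. baked into the runner image as above):

variables:
  # assumes the ecr-login credential helper is installed on the host/runner that performs the pull
  DOCKER_AUTH_CONFIG: '{ "credsStore": "ecr-login" }'
  AWS_SDK_LOAD_CONFIG: "true"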
As long as your EC2 instance (or ECS task role if running the gitlab-runner as an ECS task) has permission to pull the image, your jobs will be able to pull down images from ECR declared in image: sections.
However, this will NOT necessarily let you automatically pull images using docker-in-docker (e.g. invoking docker pull within the script: section of a job). This can be configured (as it seems you already have working), but may require additional setup, depending on your runner and IAM configuration.
Current Situation:
We have CI and CD set up as below.
We have poll SCM configured in Jenkins. When new commits arrive, Jenkins starts the build through the Jenkinsfile, which looks for the pom and builds the jar file. Once the jar is created, a Docker image is built from it with the help of a Dockerfile, and the image is pushed to Docker Hub (a private Docker Hub). (CD ==> then we use Portainer to deploy the latest image to the AWS Docker Swarm cluster manually.)
We are trying to achieve CD in the fashion below:
Now I want to deploy the latest image from Docker Hub to AWS (the Docker Swarm cluster) automatically through Jenkins, like a one-click deployment.
How can we achieve this deployment (build and deploy) in an automated fashion using Ansible or Portainer?
If so, please suggest references or steps to achieve this.
Is there any better approach than Ansible?
New to AWS so any help would be appreciated.
I'm attempting to run Jenkins through Docker on AWS. I found this article https://docs.aws.amazon.com/aws-technical-content/latest/jenkins-on-aws/containerized-deployment.html
Can anyone share a better step-by-step tutorial to achieve this? The page above seems incomplete.
It talks about "The Dockerfile should also contain the steps to install the Jenkins Amazon ECS plugin" but does not show how to go about installing the plugin using the Dockerfile.
thanks
Please follow the steps below:
Launch an EC2 cluster according to your needs.
Install Docker on your local machine. For example, for Ubuntu (sudo apt-get install docker.io), then start it:
systemctl start docker
Create a new folder for our Jenkins Docker image. Create a new Dockerfile inside it with the following contents.
FROM jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Create plugins.txt in the same folder and add the line below:
amazon-ecs:1.3
Log in to ECR using the AWS CLI. Configure the AWS CLI first with your credentials.
aws ecr get-login --region <REGION>
Run the output returned from the above command to log Docker in to ECR. Then build and tag the image:
sudo docker build -t jenkins_master .
sudo docker tag jenkins_master:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
Create a repository in ECR for this image:
aws ecr create-repository --repository-name jenkins_master
Push the image to AWS ECR.
sudo docker push <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
Our Jenkins Docker image is ready. But data stored by this Jenkins server will not be persistent. To store data permanently, we will create another Docker image which creates a volume with a mount point. For that, create a new directory for this new Docker image and inside it create another Dockerfile with the content below.
FROM jenkins
VOLUME ["/var/jenkins_home"]
Again, follow the same commands to build, tag, and push this new image to ECR:
sudo docker build -t jenkins_dv .
sudo docker tag jenkins_dv:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
aws ecr create-repository --repository-name jenkins_dv
sudo docker push <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
Now our images are ready. We will use these images to run a service on our ECS cluster. For that, we need to install ecs-cli, using the command below for Linux (and make the binary executable afterwards):
sudo curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
sudo chmod +x /usr/local/bin/ecs-cli
Create a new txt file (e.g. docker_compose.txt) with the contents below, which holds the Jenkins service configuration.
jenkins_master:
  image: jenkins_master
  cpu_shares: 100
  mem_limit: 2000M
  ports:
    - "8080:8080"
    - "50000:50000"
  volumes_from:
    - jenkins_dv
jenkins_dv:
  image: jenkins_dv
  cpu_shares: 100
  mem_limit: 500M
Finally, push this service using the above file to your newly created cluster.
ecs-cli compose --file docker_compose.txt service up --cluster <cluster_name>
Hope this helps!
I have been using my Docker Hub account in CircleCI until now, and for some reason I'm now trying to use my ECR repository image in the same place, as the build image in CircleCI (2.0).
But I see ECR doesn't support public images, so I can't reference my ECR image the way I did for the Docker Hub image. Instead of:
version: 2
jobs:
  build:
    working-directory: ~/tmp
    docker:
      - image: <dockerhub-name>/<image>
writing:
version: 2
jobs:
  build:
    working-directory: ~/tmp
    docker:
      - image: aws-id.dkr.ecr.eu-central-1.amazonaws.com/image
throws the error:
no basic auth credentials
In a straightforward operation it needs to be authenticated via the command:
aws ecr get-login --region <region-name>
and then running,
docker login -u AWS -p <password> -e none https://aws-id.dkr.ecr.eu-central-1.amazonaws.com
I tried putting these commands in the Pre-dependency commands section of the CircleCI plan settings, but it didn't work.
Ideas?
What "Pre-dependency commands"? That sounds like you're referring to configuration structure from CircleCI 1.0, which you don't seem to be using.
Because of the way AWS requires you to authenticate with ECR, I wouldn't use an image from there with the docker executor. Either use some other image and then use setup_remote_docker, or use the machine executor.
This doc shows the former, and this one covers the latter.
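As a rough sketch of the machine-executor route (assuming your AWS credentials are available as project environment variables, a reasonably recent AWS CLI is present on the machine image, and the image name and region below are placeholders):

version: 2
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run:
          name: Log in to ECR and pull the private image
          command: |
            # AWS credentials are assumed to be set as project environment variables
            aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin aws-id.dkr.ecr.eu-central-1.amazonaws.com
            docker pull aws-id.dkr.ecr.eu-central-1.amazonaws.com/image:latest
      - run: docker run --rm aws-id.dkr.ecr.eu-central-1.amazonaws.com/image:latest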