Use ECR repository image as build image in CircleCI - amazon-web-services

I have been using my Docker Hub account in CircleCI until now, and I'm now trying to use an image from my ECR repository as the build image in CircleCI (2.0).
But I see ECR doesn't support public images, so I can't reference my image the way I did for the Docker Hub image,
version: 2
jobs:
  build:
    working_directory: ~/tmp
    docker:
      - image: <dockerhub-name>/<image>
by simply swapping in the ECR image:
version: 2
jobs:
  build:
    working_directory: ~/tmp
    docker:
      - image: aws-id.dkr.ecr.eu-central-1.amazonaws.com/image
That throws the error:
no basic auth credentials
In a straightforward setup it needs to be authenticated by running
aws ecr get-login --region <region-name>
and then running
docker login -u AWS -p <password> -e none https://aws-id.dkr.ecr.eu-central-1.amazonaws.com
I tried putting these commands in the Pre-dependency commands section of the CircleCI plan settings, but that didn't work.
Ideas?

What "Pre-dependency commands"? That sounds like you're referring to configuration structure from CircleCI 1.0, which you don't seem to be using.
Because of the way AWS requires you to authenticate with ECR, I wouldn't use an image from there with the docker executor. Either use some random image, and then use setup_remote_docker or use the machine executor.
This doc shows the former, and this one covers the latter.
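For reference, here is a minimal sketch of the machine-executor route, reusing the get-login command from the question. It assumes the AWS CLI is available on the machine image and that AWS credentials are set as CircleCI project environment variables; the region and image URL are the ones from the question:
version: 2
jobs:
  build:
    machine: true
    working_directory: ~/tmp
    steps:
      - checkout
      - run:
          name: Authenticate with ECR and pull the build image
          command: |
            # get-login prints a docker login command; eval runs it
            eval "$(aws ecr get-login --region eu-central-1)"
            docker pull aws-id.dkr.ecr.eu-central-1.amazonaws.com/image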

Related

Simple AWS EC2 CI/CD solution

I have an Ubuntu EC2 instance where the Docker container runs. I need a simple CD architecture that will pull code from GitHub and run docker build ... and docker run ... on my EC2 instance after every code push.
I've tried GitHub Actions and I'm able to connect to the EC2 instance, but it gets stuck after the docker commands.
name: scp files
on: [push]
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Pull changes and run docker
        uses: fifsky/ssh-action@master
        with:
          command: |
            cd test_ec2_deployment
            git pull
            sudo docker build --network host -f Dockerfile -t test .
            sudo docker run -d --env-file=/home/ubuntu/.env -ti test
          host: ${{ secrets.HOST }}
          user: ubuntu
          key: ${{ secrets.SSH_KEY }}
          args: "-tt"
Output:
Step 12/13 : RUN /usr/bin/crontab /etc/cron.d/cron-job
---> Running in 52a5a0174958
Removing intermediate container 52a5a0174958
---> badf6fdaf774
Step 13/13 : CMD printenv > /etc/environment && cron -f
---> Running in 0e9fd12db4f7
Removing intermediate container 0e9fd12db4f7
---> 888a2a9e5910
Successfully built 888a2a9e5910
Successfully tagged test:latest
Also, I've tried moving the docker commands into a separate .sh script, but it didn't help. Here is an issue for that: https://github.com/fifsky/ssh-action/issues/30.
I wonder if it's possible to implement this CD structure using AWS CodePipeline or other AWS services. Also, I'm not sure whether it's too complicated to set up Jenkins for this case.
This is definitely possible using AWS CodePipeline, but it will require a Lambda function since you want to deploy your container to your own EC2 instance (which I think is not necessary unless you have a specific use case). This is how your pipeline would look;
AWS CodePipeline stages:
Source: Connect your GitHub repository. In the background, it will automatically clone code from your Git repo, zip it, and store it in S3 to be used by the next stage. There are other options as well if you want to do it all by yourself. For example;
using your GitHub Actions, you zip the file and store it in an S3 bucket. On the AWS side, you add S3 as a source and provide the bucket and object key, so whenever this object version changes, it will trigger the pipeline.
You can also use GitHub Actions to actually build your Docker image, push it to AWS ECR (container registry) and skip the build stage entirely. So, either build on GitHub or on the AWS side, up to you.
Build: For this stage (if you decide to build on AWS), you can use either Jenkins or AWS CodeBuild. I have used AWS CodeBuild, so IMO this is a fairly easy and quick solution for the build stage. At this stage, it takes the zip file from the S3 bucket, unzips it, builds your Docker container image and pushes it to AWS ECR.
Deploy: Since you want to run your Docker container on EC2, there is no straightforward way to do this. However, you can utilize a Lambda function to run your image on your own EC2 instance, but you will have to code the function yourself, which could be tricky. I would highly recommend using AWS ECS to run your container in a more manageable way. You can essentially do everything in an ECS container that you currently do on your EC2 instance.
As @Myz suggested, this can be done using GitHub Actions with AWS ECR and AWS ECS. Below are some articles which I was following to solve the issue:
https://docs.github.com/en/actions/deployment/deploying-to-your-cloud-provider/deploying-to-amazon-elastic-container-service
https://kubesimplify.com/cicd-pipeline-github-actions-with-aws-ecs
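To make that concrete, here is a condensed, hedged sketch of the workflow those articles describe. The branch, ECR repository, container, cluster and service names, the region, the task definition file and the secrets are placeholders, not values from the question:
name: Deploy to Amazon ECS
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Configure AWS credentials stored as repository secrets
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1
      # Log in to ECR; the step output exposes the registry URL
      - id: ecr-login
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag and push the image to ECR
        run: |
          docker build -t ${{ steps.ecr-login.outputs.registry }}/test:${{ github.sha }} .
          docker push ${{ steps.ecr-login.outputs.registry }}/test:${{ github.sha }}
      # Point the task definition at the freshly pushed image
      - name: Render the task definition with the new image
        id: render-task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json
          container-name: test
          image: ${{ steps.ecr-login.outputs.registry }}/test:${{ github.sha }}
      - name: Deploy to the ECS service
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.render-task-def.outputs.task-definition }}
          service: my-service
          cluster: my-cluster
          wait-for-service-stability: true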

Can I use the amazon-ecr-credential-helper inside a docker container if it's installed on my EC2 VM?

I've installed the credential helper (from its GitHub repo) on our EC2 instance and got it working for my account. What I want to do is use it during my GitLab CI/CD pipeline, where my gitlab-runner actually runs inside a docker container and spawns new containers for the build, test & deploy phases. This is what our test phase looks like now:
image: docker:stable
run_tests:
  stage: test
  tags:
    - test
  before_script:
    - echo "Starting tests for CI_COMMIT_SHA=$CI_COMMIT_SHA"
    - docker run --rm mikesir87/aws-cli aws ecr get-login-password | docker login --username AWS --password-stdin $IMAGE_URL
  script:
    - docker run --rm $IMAGE_URL:$CI_COMMIT_SHA npm test
This works fine, but what I'd like to see if I could get working is the following:
image: docker:stable
run_tests:
  image: $IMAGE_URL:$CI_COMMIT_SHA
  stage: test
  tags:
    - test
  script:
    - npm test
When I try the 2nd option I get the no basic auth credentials error. So I'm wondering if there is a way to make the credential helper available to the docker container without having to install the credential helper on the image itself.
Configure your runner to use the credential helper with the DOCKER_AUTH_CONFIG environment variable. A convenient way to do this is to bake it all into your image.
So, your gitlab-runner image should include the docker-credential-ecr-login binary (or you should mount it in from the host), for example:
FROM gitlab/gitlab-runner:v14.3.2
COPY bin/docker-credential-ecr-login /usr/local/bin/docker-credential-ecr-login
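If you prefer the mount-from-host alternative instead of baking the binary in, a hedged sketch of starting the runner container could look like this (the paths and the config volume are assumptions about a typical setup):
docker run -d --name gitlab-runner --restart always \
  -v /usr/local/bin/docker-credential-ecr-login:/usr/local/bin/docker-credential-ecr-login:ro \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:v14.3.2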
Then when you call gitlab-runner register, pass in the DOCKER_AUTH_CONFIG environment variable using the --env flag as follows:
AUTH_ENV="DOCKER_AUTH_CONFIG={ \"credsStore\": \"ecr-login\" }"
gitlab-runner register \
  --non-interactive \
  ...
  --env "${AUTH_ENV}" \
  --env "AWS_SDK_LOAD_CONFIG=true" \
  ...
You can also set this equivalently in the config.toml, instance CI/CD variables, or anywhere CI/CD variables are set (group, project, yaml, trigger, etc).
As long as your EC2 instance (or ECS task role if running the gitlab-runner as an ECS task) has permission to pull the image, your jobs will be able to pull down images from ECR declared in image: sections.
However this will NOT necessarily let you automatically pull images using docker-in-docker (e.g. invoking docker pull within the script: section of a job). This can be configured (as it seems you already have working), but may require additional setup, depending on your runner and IAM configuration.
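For reference, the config.toml equivalent mentioned above would look roughly like this (a sketch; the runner name, url and token are placeholders):
[[runners]]
  name = "ecr-enabled-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "docker"
  # Same values as passed via --env at registration time
  environment = ["DOCKER_AUTH_CONFIG={\"credsStore\":\"ecr-login\"}", "AWS_SDK_LOAD_CONFIG=true"]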

Deploy Applications on Amazon ECS Using docker compose

I'm trying to deploy a docker container with multiple services to ECS. I've been following this article, which looks great: https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/
I can get my container to run locally, and I can connect to the ECS context using the AWS CLI; however, in the basic example from the article, when I run
docker compose up
in order to deploy the image to ECS, I get the error:
pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Can't seem to make heads or tails of this. My docker is logged in to ECR using
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
The default IAM user on my AWS CLI has AmazonECS_FullAccess as well as "ecs:ListAccountSettings" and "cloudformation:ListStackResources".
I read mikemaccana's answer to "pull access denied repository does not exist or may require docker login", which says that after Nov 2020 authentication may be required in your YAML file to allow AWS to pull from hub.docker.io (e.g. give AWS your Docker Hub username and password), but I can't get the 'auth' syntax to work in my YAML file. This is my YAML file that runs tomcat and mariadb locally:
version: "2"
services:
database:
build:
context: ./tba-database
image: tba-database
# set default mysql root password, change as needed
environment:
MYSQL_ROOT_PASSWORD: password
# Expose port 3306 to host. Not for the application but
# handy to inspect the database from the host machine.
ports:
- "3306:3306"
restart: always
webserver:
build:
context: ./tba-webserver
image: tba-webserver
# mount point for application in tomcat
volumes:
- ./target/testPROJ:/usr/local/tomcat/webapps/ROOT
links:
- database:tba-database
# open ports for tomcat and remote debugging
ports:
- "8080:8080"
- "8000:8000"
restart: always
Author of the blog here (thanks for the kind comment!). I haven't played much with the build side of things, but I suspect what's happening is that when you run docker compose up we ignore the build phase and only use the image field. The containers being deployed on ECS/Fargate then try to pull the image tba-database, which doesn't exist in any registry, which is why the deployment complains. You need extra steps to push your images to either GitHub's registry or ECR before you can bring them to life using docker compose up in the ecs context.
You also probably need to change the compose version ("2" is very old).
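To illustrate those extra steps, a hedged sketch of pushing the locally built images to ECR (the account id, region and repository names are placeholders; the ECR repositories must already exist, and the compose file's image: fields would then reference the full ECR URLs):
# Log in once, then tag and push each locally built image
aws ecr get-login-password --region region \
  | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
docker tag tba-database aws_account_id.dkr.ecr.region.amazonaws.com/tba-database:latest
docker push aws_account_id.dkr.ecr.region.amazonaws.com/tba-database:latest
docker tag tba-webserver aws_account_id.dkr.ecr.region.amazonaws.com/tba-webserver:latest
docker push aws_account_id.dkr.ecr.region.amazonaws.com/tba-webserver:latest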

Unable to update the docker image. Error: repository does not exist or may require 'docker login'

I have deployed watchtower, which automatically updates running Docker containers inside Docker Swarm.
I run this Docker Swarm on two AWS EC2 servers and use AWS ECR as the Docker registry.
To avoid aws ecr get-login I have used the Amazon ECR Docker Credential Helper, which automatically gets credentials for Amazon ECR on docker push/docker pull, so there is no need to log in every 12 hours.
The problem is that watchtower throws an error like:
time="2019-03-12T03:41:10Z" level=info msg="Unable to update container /crmproxy.1.wop3c1u2qktbkab8rukrlrgr6, err='Error response from daemon: pull access denied for 00000000000.dkr..amazonaws.com/crm, repository does not exist or may require 'docker login''. Proceeding to next."
I am sure this is not about logging in to ECR. I have correctly linked the credentials into the watchtower container using the docker-compose.yml file.
Here is the watchtower configuration in the docker-compose.yml file:
watchtower:
  image: v2tec/watchtower
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ~/.docker/config.json:/config.json
  command: --interval 30
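For reference, with the ECR credential helper in use, the mounted ~/.docker/config.json is expected to contain the credsStore entry, roughly like this (a minimal sketch; a real file may also carry an auths section):
{
  "credsStore": "ecr-login"
}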
In my research about this issue, I saw others having the same problem, and one person who fixed it himself, but I don't understand the fix.
This is what I found: a solution that is unclear to me.
I don't know whether that answer is correct or not, but he said:
The problem was that I installed docker as root. Now installed with
the ec2-user of the Amazon Linux AMI and working
Please help me avoid this problem that I'm facing. I have tried so many times; any help would be appreciated.
There's an additional dot in your image url. Might that be the reason for your issue?
00000000000.dkr..amazonaws.com/crm
^
Also, you may just add the ec2-user to the docker group to let it execute docker commands as well: sudo usermod -aG docker ec2-user. No need to reinstall.

How to deploy docker app in aws using GitLab CI/CD

I have a Symfony app that runs with docker-compose, and I want to implement auto-deployment with GitLab CI/CD to run the app on an AWS instance. I don't know what the best approach would be; basically these are my ideas and their steps:
Approach 1: (building in GitLab)
Build the docker images in the GitLab runners
Push the images to some image registry
ssh to the aws instance
pull the new image
run the new containers with docker-compose
Approach 2: (building in aws)
ssh to aws
pull the branch to deploy
build the docker images
run the new containers with docker-compose
I like the first approach, but maybe there is a better way to do it. It would be amazing to have some .gitlab-ci.yml reference file.
Thanks!
If you build it in GitLab runners and push it to a registry, you can use it in more places than only on AWS.
Here is a reference file for the Docker-in-Docker build method (from the docs):
.gitlab-ci.yml
build:
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_DRIVER: overlay2
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.example.com
    - docker build -t registry.example.com/group/project/image:latest .
    - docker push registry.example.com/group/project/image:latest
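For the deploy half of approach 1, a hedged sketch of an additional job could look like this (the host, user, remote path and the SSH_PRIVATE_KEY CI/CD variable are assumptions, not part of the docs snippet above):
deploy:
  stage: deploy
  image: alpine:latest
  before_script:
    # Install an SSH client and load the deploy key from a CI/CD variable
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  script:
    # Pull the freshly built image on the instance and restart the stack
    - >
      ssh -o StrictHostKeyChecking=no user@your-ec2-host
      "cd /path/to/app &&
      docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.example.com &&
      docker pull registry.example.com/group/project/image:latest &&
      docker-compose up -d"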