Can you help me find a useful step-by-step guide or a Gist outlining in detail how to configure CircleCI (using 2.0 syntax) to deploy to AWS EC2?
I understand the basic requirements and the moving pieces, but I'm unsure what to put in the .circleci/config.yml file for the deploy step.
So far I have:
A "Hello World" Node.js app that builds successfully in CircleCI (just without the deploy step)
A running EC2 instance (Ubuntu 16.04)
An IAM user with sufficient permissions added to CircleCI for that particular job
Can you help out with the CircleCI deploy step?
Based on your repository, you could create a deploy script like this one (deploy.sh):
#!/bin/bash
echo "Start deploy"
cd ~/circleci-aws
git pull
npm i
npm run build
pm2 stop build/server
pm2 start build/server
echo "Deploy end"
And in your .circleci/config.yml you do something like this:
deploy:
  docker:
    - image: circleci/node:chakracore-8.11.1
  steps:
    - restore_cache:
        keys:
          - v1-dependencies-{{ checksum "package.json" }}
    - run:
        name: AWS EC2 deploy
        command: |
          # upload all the code to the machine
          scp -r -o StrictHostKeyChecking=no ./ ubuntu@13.236.1.107:/home/circleci-aws/
          # run the deploy script on the machine
          ssh -o StrictHostKeyChecking=no ubuntu@13.236.1.107 "./deploy.sh"
But this is fairly crude; consider something like AWS CodeDeploy, or ECS if you want to use containers.
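Note that for the scp/ssh steps above to work, the job also needs the code checked out and an SSH key that CircleCI can use to reach the instance. A minimal sketch of those extra pieces, assuming you have added the instance's private key under the project's SSH Keys settings (the fingerprint below is a placeholder):

deploy:
  docker:
    - image: circleci/node:chakracore-8.11.1
  steps:
    - checkout
    # make the SSH key registered in the CircleCI project settings available to the job
    - add_ssh_keys:
        fingerprints:
          - "xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"  # placeholder fingerprint
    - restore_cache:
        keys:
          - v1-dependencies-{{ checksum "package.json" }}
    - run:
        name: AWS EC2 deploy
        command: |
          scp -r -o StrictHostKeyChecking=no ./ ubuntu@13.236.1.107:/home/circleci-aws/
          ssh -o StrictHostKeyChecking=no ubuntu@13.236.1.107 "./deploy.sh"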
Related
I have an Ubuntu EC2 instance where the Docker container runs. I need a simple CD setup that will pull code from GitHub and run docker build ... and docker run ... on my EC2 instance after every push.
I've tried GitHub Actions and I'm able to connect to the EC2 instance, but it gets stuck after the docker commands.
name: scp files
on: [push]
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Pull changes and run docker
        uses: fifsky/ssh-action@master
        with:
          command: |
            cd test_ec2_deployment
            git pull
            sudo docker build --network host -f Dockerfile -t test .
            sudo docker run -d --env-file=/home/ubuntu/.env -ti test
          host: ${{ secrets.HOST }}
          user: ubuntu
          key: ${{ secrets.SSH_KEY }}
          args: "-tt"
Output:
Step 12/13 : RUN /usr/bin/crontab /etc/cron.d/cron-job
---> Running in 52a5a0174958
Removing intermediate container 52a5a0174958
---> badf6fdaf774
Step 13/13 : CMD printenv > /etc/environment && cron -f
---> Running in 0e9fd12db4f7
Removing intermediate container 0e9fd12db4f7
---> 888a2a9e5910
Successfully built 888a2a9e5910
Successfully tagged test:latest
Also, I've tried moving the docker commands into a separate .sh script, but it didn't help. Here is an issue about this: https://github.com/fifsky/ssh-action/issues/30.
I wonder whether it's possible to implement this CD setup using AWS CodePipeline or other AWS services. I'm also not sure whether setting up Jenkins for this case would be too complicated.
This is definitely possible using AWS CodePipeline, but it will require a Lambda function, since you want to deploy your container to your own EC2 instance (which I think is unnecessary unless you have a specific use case). This is how your pipeline would look:
AWS CodePipeline stages:
Source: Connect your GitHub repository. In the background, it will automatically clone the code from your Git repo, zip it, and store it in S3 to be used by the next stage. There are other options as well if you want to do it all yourself. For example:
Using GitHub Actions, you zip the code and store it in an S3 bucket. On the AWS side, you add S3 as a source and provide the bucket and object key, so whenever that object version changes, it triggers the pipeline.
You can also use GitHub Actions to build your Docker image and push it to AWS ECR (the container registry) and skip the build stage entirely. So, build either on GitHub or on the AWS side; it's up to you.
Build: For this stage (if you decide to build on AWS), you can use either Jenkins or AWS CodeBuild. I have used AWS CodeBuild; IMO it is a fairly easy and quick solution for the build stage. It takes the zip file from the S3 bucket, unzips it, builds your Docker container image and pushes it to AWS ECR (see the buildspec sketch after this list).
Deploy: Since you want to run your Docker container on EC2, there is no straightforward way to do this. However, you can use a Lambda function to run your image on your own EC2 instance, though you will have to write that function yourself, which can be tricky. I would highly recommend using AWS ECS to run your container in a more manageable way; you can do essentially everything you would do on your EC2 instance in an ECS container.
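For the Build stage with CodeBuild, a buildspec along these lines is a typical starting point. This is only a rough sketch; the region, account ID and repository name (us-east-1, 123456789012, my-app) are placeholders you would replace with your own values:

version: 0.2

phases:
  pre_build:
    commands:
      # log Docker in to ECR (region and account are placeholders)
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
  build:
    commands:
      # build and tag the image for the ECR repository
      - docker build -t my-app:latest .
      - docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
  post_build:
    commands:
      # push the image so the Deploy stage (ECS or your Lambda) can use it
      - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest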
As @Myz suggested, this can be done using GitHub Actions with AWS ECR and AWS ECS. Below are some articles I followed to solve the issue:
https://docs.github.com/en/actions/deployment/deploying-to-your-cloud-provider/deploying-to-amazon-elastic-container-service
https://kubesimplify.com/cicd-pipeline-github-actions-with-aws-ecs
I've installed the credential helper (from GitHub) on our EC2 instance and got it working for my account. What I want to do is use it during my GitLab CI/CD pipeline, where my gitlab-runner actually runs inside a Docker container and spawns new containers for the build, test and deploy stages. This is what our test stage looks like now:
image: docker:stable

run_tests:
  stage: test
  tags:
    - test
  before_script:
    - echo "Starting tests for CI_COMMIT_SHA=$CI_COMMIT_SHA"
    - docker run --rm mikesir87/aws-cli aws ecr get-login-password | docker login --username AWS --password-stdin $IMAGE_URL
  script:
    - docker run --rm $IMAGE_URL:$CI_COMMIT_SHA npm test
This works fine, but what I'd like to get working is the following:
image: docker:stable

run_tests:
  image: $IMAGE_URL:$CI_COMMIT_SHA
  stage: test
  tags:
    - test
  script:
    - npm test
When I try the second option, I get a "no basic auth credentials" error. So I'm wondering whether there is a way to make the credential helper available to the Docker container without having it installed on the image itself.
Configure your runner to use the credential helper via the DOCKER_AUTH_CONFIG environment variable. A convenient way to do this is to bake it all into your runner image.
So, your gitlab-runner image should include the docker-credential-ecr-login binary (or you should mount it in from the host).
FROM gitlab/gitlab-runner:v14.3.2
COPY bin/docker-credential-ecr-login /usr/local/bin/docker-credential-ecr-login
Then, when you call gitlab-runner register, pass in the DOCKER_AUTH_CONFIG environment variable using the --env flag as follows:
AUTH_ENV="DOCKER_AUTH_CONFIG={ \"credsStore\": \"ecr-login\" }"
gitlab-runner register \
--non-interactive \
...
--env "${AUTH_ENV}" \
--env "AWS_SDK_LOAD_CONFIG=true" \
...
You can also set this equivalently in the config.toml, instance CI/CD variables, or anywhere CI/CD variables are set (group, project, yaml, trigger, etc).
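For example, a minimal sketch of setting it directly in .gitlab-ci.yml instead of at registration time (this assumes the docker-credential-ecr-login binary is available to the Docker daemon that pulls your job images, as described above):

variables:
  # tell Docker to resolve registry credentials through the ECR credential helper
  DOCKER_AUTH_CONFIG: '{ "credsStore": "ecr-login" }'
  AWS_SDK_LOAD_CONFIG: "true"

run_tests:
  image: $IMAGE_URL:$CI_COMMIT_SHA
  stage: test
  tags:
    - test
  script:
    - npm test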
As long as your EC2 instance (or ECS task role if running the gitlab-runner as an ECS task) has permission to pull the image, your jobs will be able to pull down images from ECR declared in image: sections.
However, this will NOT necessarily let you automatically pull images using docker-in-docker (e.g. invoking docker pull within the script: section of a job). This can be configured (as it seems you already have it working), but may require additional setup, depending on your runner and IAM configuration.
Following this link, I can create a pod whose service account's role can access AWS resources, so the pod can access them as well.
Then, inspired by this EKS-Jenkins-Workshop, I changed the workshop a little. I want to deploy a Jenkins pipeline that creates a pod whose service account's role can access AWS resources, but the problem is that the CDK code running in this pod cannot access AWS resources. (I wrote the CDK code to access AWS resources, following [Your first AWS CDK app](https://docs.aws.amazon.com/cdk/latest/guide/hello_world.html).)
This is my Jenkinsfile:
pipeline {
  agent {
    kubernetes {
      yaml """
        apiVersion: v1
        kind: Pod
        metadata:
          name: jenkins-agent
          namespace: default
        spec:
          serviceAccountName: jenkins
          containers:
          - name: node-yuvein
            image: node
            command:
            - cat
            tty: true
        """
    }
  }
  stages {
    stage('Build') {
      steps {
        container('node-yuvein') {
          dir('hello-cdk') {
            sh "pwd"
            sh 'npm --version'
            sh 'node -v'
            sh 'npm install -g typescript'
            sh 'npm install -g aws-cdk'
            sh 'npm install @aws-cdk/aws-s3'
            sh 'npm run build'
            sh 'cdk deploy'
          }
        }
      }
    }
  }
}
When I run the pipeline, I get this error:
User: arn:aws:sts::450261875116:assumed-role/eksctl-eksworkshop-eksctl3-nodegr-NodeInstanceRole-1TCVDYSM1QKSO/i-0a4df3778517df0c6 is not authorized to perform: cloudformation:DescribeStacks on resource: arn:aws:cloudformation:us-west-2:450261875116:stack/HelloCdkStack/*
I am a beginner with K8s, Jenkins and CDK. Hope someone can help me. Thanks a lot.
Further Debugging:
In the Jenkins console I can see serviceAccountName: "jenkins", and the name of my service account in EKS is jenkins.
The pod also gets the correct env vars:
+ echo $AWS_ROLE_ARN
arn:aws:iam::450261875116:role/eksctl-eksworkshop-eksctl3-addon-iamservicea-Role1-YYYFXFS0J4M2
+ echo $AWS_WEB_IDENTITY_TOKEN_FILE
/var/run/secrets/eks.amazonaws.com/serviceaccount/token
The Node.js and npm I installed are the latest versions.
+ npm --version
6.14.8
+ node -v
v14.13.0
+ aws sts get-caller-identity
{
"UserId": "AROAWRVNS7GWO5C7QJGRF:botocore-session-1601436882",
"Account": "450261875116",
"Arn": "arn:aws:sts::450261875116:assumed-role/eksctl-eksworkshop-eksctl3-addon-iamservicea-Role1-YYYFXFS0J4M2/botocore-session-1601436882"
}
When I run this command, it shows my service account's role, but I still get the original error.
The Jenkins podTemplate has a serviceAccount option:
https://github.com/jenkinsci/kubernetes-plugin#pod-and-container-template-configuration
Create an IAM role mapped to an EKS cluster
Create a ServiceAccount mapped to an IAM role (see the sketch after this list)
Pass ServiceAccount name to a podTemplate
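For the second step, the ServiceAccount is mapped to the IAM role with an annotation. A minimal sketch, using the jenkins service account from the question; the role ARN below is a placeholder for the IAM role you created:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: default
  annotations:
    # placeholder ARN: the IAM role that trusts the cluster's OIDC provider
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/jenkins-irsa-role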
Further debugging:
Ensure the pod has correct service account name.
Check if pod got AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE env vars (they are added automatically).
Check if AWS SDK you use is above the minimal version: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html
Run aws sts get-caller-identity to see the role; don't waste time running an actual job.
When working with Jenkins slaves, you need to customize the container images to use AWS CLI v2 instead of AWS CLI v1. I was running into authorization errors like the one in the question: my client was using the cluster node role instead of the assumed web identity role of the service account attached to my Jenkins pods for the slave containers.
Apparently AWS CLI v2 includes the web identity token file as part of the default credentials chain, whereas v1 does not.
Here's a sample Dockerfile that pulls the latest AWS CLI version so this pattern works:
FROM jenkins/inbound-agent
# run updates as root
USER root
# Create docker group
RUN addgroup docker
# Update & Upgrade OS
RUN apt-get update
RUN apt-get -y upgrade
# install python3
RUN apt-get -y install python3
# add AWS Cli version 2 for web_identity_token files
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
# Add Maven
RUN apt-get -y install maven --no-install-recommends
# Add docker
RUN curl -sSL https://get.docker.com/ | sh
RUN usermod -aG docker jenkins
# Add docker compose
RUN curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
# Delete cached files we don't need anymore:
RUN apt-get clean
RUN rm -rf /var/lib/apt/lists/*
# close root access
USER jenkins
Further, I had to make sure my ServiceAccount was created and attached to both the Jenkins master image and the Jenkins slaves. This can be done via Manage Jenkins -> Manage Nodes and Clouds -> Configure Clouds -> Pod Template Details.
Be sure to set the Namespace and Service Account fields to the appropriate values.
I am trying to set up a pipeline that builds my react application and deploys it to my AWS S3 bucket. It is building fine, but fails on the deploy.
My .gitlab-ci.yml is:
image: node:latest

variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  S3_BUCKET_NAME: $S3_BUCKET_NAME

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - npm install --progress=false
    - npm run build

deploy:
  stage: deploy
  script:
    - aws s3 cp --recursive ./build s3://MYBUCKETNAME
It is failing with the error:
sh: 1: aws: not found
@jellycsc is spot on.
Otherwise, if you want to just use the node image, you can try something like what Thomas Lackemann details (here): use a shell script to install Python, the AWS CLI and zip, and use those tools to do the deployment. You'll need AWS credentials stored as environment variables in your GitLab project.
I've successfully used both approaches.
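As a rough sketch of that shell-script approach on the node image (assumed to be Debian-based, so unzip has to be installed first; the AWS CLI v2 bundle installer is used here instead of pip):

deploy:
  stage: deploy
  image: node:latest
  before_script:
    # install AWS CLI v2 from the official bundle
    - apt-get update -y && apt-get install -y curl unzip
    - curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
    - unzip -q awscliv2.zip
    - ./aws/install
  script:
    # credentials come from the AWS_* variables defined in the GitLab project settings
    - aws s3 cp --recursive ./build s3://$S3_BUCKET_NAME

Note that the build job also needs to expose ./build as an artifact so the deploy job can see it.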
The error is telling you that the AWS CLI is not installed in the CI environment. You probably need to use GitLab's AWS Docker image. Please read the Cloud deployment documentation.
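As a sketch of that approach, the deploy job can run on GitLab's AWS CLI image instead of node (the image path below is the one given in GitLab's Cloud deployment docs; check the docs in case it has moved):

deploy:
  stage: deploy
  # GitLab-maintained image with the AWS CLI preinstalled
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  script:
    - aws s3 cp --recursive ./build s3://$S3_BUCKET_NAME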
I am trying to get travis-ci to run a custom deploy script that uses awscli to push a deployment up to my staging server.
In my .travis.yml file I have this:
before_deploy:
- 'curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"'
- 'unzip awscli-bundle.zip'
- './awscli-bundle/install -b ~/bin/aws'
- 'export PATH=~/bin:$PATH'
- 'aws configure'
And I have set up the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
with their correct values in the travis-ci web interface.
However, when aws configure runs, it stops and waits for user input. How can I tell it to use the environment variables I have defined?
Darbio's solution works fine, but it doesn't take into consideration that you may end up pushing your AWS credentials to your repository.
That is a bad thing, especially if Docker is trying to pull a private image from one of your ECR repositories. It would mean that you probably had to store your AWS production credentials in the .travis.yml file, which is far from ideal.
Fortunately, Travis gives you the ability to encrypt environment variables, notification settings, and deploy API keys.
gem install travis
First do a travis login; it will ask you for your GitHub credentials. Once you're logged in, go to your project root folder (where your .travis.yml file is) and encrypt your access key ID and secret access key.
travis encrypt AWS_ACCESS_KEY_ID="HERE_PUT_YOUR_ACCESS_KEY_ID" --add
travis encrypt AWS_SECRET_ACCESS_KEY="HERE_PUT_YOUR_SECRET_ACCESS_KEY" --add
Thanks to the --add option you'll end up with two new (encrypted) environment variables in your configuration file. Now just open your .travis.yml file and you should see something like this:
env:
  global:
    - secure: encrypted_stuff
    - secure: encrypted_stuff
Now you can make travis run a shell script that creates the ~/.aws/credentials file for you.
ecr_credentials.sh
#!/usr/bin/env bash
mkdir -p ~/.aws
cat > ~/.aws/credentials << EOL
[default]
aws_access_key_id = ${AWS_ACCESS_KEY_ID}
aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}
EOL
Then you just need to run the ecr_credentials.sh script from your .travis.yml file:
before_install:
- ./ecr_credentials.sh
Done! :-D
Source: Encryption keys on Travis CI
You can set these in a couple of ways.
Firstly, by creating a file at ~/.aws/config (or ~/.aws/credentials).
For example:
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
region=us-west-2
Secondly, you can add environment variables for each of your settings.
For example, create the following environment variables:
AWS_DEFAULT_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Thirdly, you can pass region in as a command line argument. For example:
aws eb deploy --region us-west-2
You won't need to run aws configure in these cases, as the CLI is already configured.
There is further AWS documentation on this page.
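For example, with those environment variables defined in the Travis web interface, the before_deploy from the question can simply drop the aws configure step:

before_deploy:
  - 'curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"'
  - 'unzip awscli-bundle.zip'
  - './awscli-bundle/install -b ~/bin/aws'
  - 'export PATH=~/bin:$PATH'
  # no 'aws configure' needed: the CLI reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION from the environment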
Following the advice from @Darbio, I came up with this solution:
- stage: deploy
  name: "Deploy to AWS EKS"
  language: minimal
  before_install:
    # Install kubectl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    - chmod +x ./kubectl
    - sudo mv ./kubectl /usr/local/bin/kubectl
    # Install AWS CLI
    - if ! [ -x "$(command -v aws)" ]; then curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" ; unzip awscliv2.zip ; sudo ./aws/install ; fi
    # Export environment variables for the AWS CLI (using Travis environment variables)
    - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
    # Set up kubectl config to use the desired AWS EKS cluster
    - aws eks update-kubeconfig --region ${AWS_DEFAULT_REGION} --name ${AWS_EKS_CLUSTER_NAME}
  deploy:
    - provider: script
      # Bash script containing the kubectl commands to set up the cluster
      script: bash k8s-config/deployment.sh
      on:
        branch: master
It is also possible to avoid installing the AWS CLI altogether. Then you need to configure kubectl yourself (the angle-bracket placeholders stand for your cluster-specific values):
kubectl config set-cluster <cluster-name> --server=<server-url> --certificate-authority=<path-to-ca-cert>
kubectl config set-credentials <user-name> --client-certificate=<path-to-client-cert> --client-key=<path-to-client-key>
kubectl config set-context myContext --cluster=<cluster-name> --namespace=<namespace> --user=<user-name>
kubectl config use-context myContext
You can find most of the needed values in ~/.kube/config after you have run the aws eks update-kubeconfig command on your local machine.
The exceptions are the client certificate and key; I couldn't figure out where to get them from, and therefore needed to install the AWS CLI in the pipeline as well.