In my previous question I solved the problem of deploying a Maven project to an AWS EC2 instance with GitLab CI/CD by using SSH with a PEM file, but I have read on the Internet that committing the .pem file to a Git repository is not a best practice. What do I need to change so that I can deploy my application to AWS without using the PEM file?
I'm trying to follow this tutorial, but there the application is written in Node.js while my app is built with Maven, so what do I need to change?
It does not matter what language the application is written in. The tutorial is correct: you should use GitLab CI/CD environment variables to store secrets such as keys.
Variables are exposed as environment variables at build time. You can use them like this:
production:
  stage: deploy
  image: alpine:latest
  variables:
    GIT_STRATEGY: none
  before_script:
    - eval $(ssh-agent -s)
    - echo "$DEPLOY_KEY" | tr -d '\r' | ssh-add - > /dev/null
  script:
    - ./deploy # This script uses SSH to deploy things
    - ssh-agent -k
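The ./deploy script itself is whatever your project needs. For a Maven build it could be as simple as copying the packaged jar to the instance and restarting the service. A minimal sketch, assuming the jar was produced in an earlier build stage and handed over as a job artifact; the host, user, artifact path, and service name below are placeholders, not values from your setup:

#!/bin/sh
# Hypothetical deploy script: copy the artifact and restart the service.
# HOST and all paths are placeholders to replace with your own values.
HOST=ec2-user@your-ec2-host
scp -o StrictHostKeyChecking=no target/myapp.jar "$HOST":/home/ec2-user/myapp.jar
ssh -o StrictHostKeyChecking=no "$HOST" "sudo systemctl restart myapp"

Note that alpine:latest ships without an SSH client, so you would also need something like apk add --no-cache openssh-client in before_script.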
I have an Ubuntu EC2 instance where the Docker container runs. I need a simple CD architecture that will pull code from GitHub and run docker build ... and docker run ... on my EC2 instance after every code push.
I've tried GitHub Actions and I'm able to connect to the EC2 instance, but it gets stuck after the docker commands.
name: scp files
on: [push]
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Pull changes and run docker
        uses: fifsky/ssh-action@master
        with:
          command: |
            cd test_ec2_deployment
            git pull
            sudo docker build --network host -f Dockerfile -t test .
            sudo docker run -d --env-file=/home/ubuntu/.env -ti test
          host: ${{ secrets.HOST }}
          user: ubuntu
          key: ${{ secrets.SSH_KEY }}
          args: "-tt"
output
Step 12/13 : RUN /usr/bin/crontab /etc/cron.d/cron-job
---> Running in 52a5a0174958
Removing intermediate container 52a5a0174958
---> badf6fdaf774
Step 13/13 : CMD printenv > /etc/environment && cron -f
---> Running in 0e9fd12db4f7
Removing intermediate container 0e9fd12db4f7
---> 888a2a9e5910
Successfully built 888a2a9e5910
Successfully tagged test:latest
Also, I've tried to separate the docker commands into a .sh script, but it didn't help. Here is an issue for that: https://github.com/fifsky/ssh-action/issues/30.
I wonder if it's possible to implement this CD structure using AWS CodePipeline or other AWS services. Also, I'm not sure whether it would be too complicated to set up Jenkins for this case.
This is definitely possible using AWS CodePipeline, but it will require you to have a Lambda function, since you want to deploy your container to your own EC2 instance (which I think is not necessary unless you have a specific use case). This is how your pipeline would look:
AWS CodePipeline stages:
Source: Connect your GitHub repository. In the background, it will automatically clone the code from your Git repo, zip it, and store it in S3 to be used by the next stage. There are other options as well if you want to do it all yourself. For example:
Using GitHub Actions, you zip the file and store it in an S3 bucket. On the AWS side, you add S3 as a source and provide the bucket and object key, so whenever this object version changes, it will trigger the pipeline.
You can also use GitHub Actions to actually build your Docker image and push it to AWS ECR (the container registry) and skip the build stage entirely. So, either build on GitHub or on the AWS side; it's up to you.
Build: For this stage (if you decide to build on AWS), you can use either Jenkins or AWS CodeBuild. I have used AWS CodeBuild, so IMO it is a fairly easy and quick solution for the build stage. It will take the zip file from the S3 bucket, unzip it, build your Docker container image, and push it to AWS ECR (a buildspec sketch follows after the Deploy stage below).
Deploy: Since you want to run your Docker container on EC2, there is no straightforward way to do this. However, you can use a Lambda function to run your image on your own EC2 instance, but you will have to write that function yourself, which could be tricky. I would highly recommend using AWS ECS to run your container in a more manageable way. You can essentially do everything in an ECS container that you want to do on your EC2 instance.
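For illustration, a minimal CodeBuild buildspec.yml for that Build stage might look like the sketch below; ECR_REPO_URI is an assumed environment variable you would define on the CodeBuild project yourself, while AWS_DEFAULT_REGION and CODEBUILD_RESOLVED_SOURCE_VERSION are provided by CodeBuild:

version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate Docker against the ECR registry
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REPO_URI
  build:
    commands:
      # Build and tag the image with the resolved commit SHA
      - docker build -t $ECR_REPO_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION .
  post_build:
    commands:
      - docker push $ECR_REPO_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION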
As @Myz suggested, this can be done using GitHub Actions with AWS ECR and AWS ECS. Below are some articles I followed to solve the issue:
https://docs.github.com/en/actions/deployment/deploying-to-your-cloud-provider/deploying-to-amazon-elastic-container-service
https://kubesimplify.com/cicd-pipeline-github-actions-with-aws-ecs
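For reference, the build-and-push part of such a GitHub Actions workflow could look roughly like the sketch below (not taken from the articles above; the repository name, region, branch, and action versions are placeholders/assumptions):

name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1
      - id: ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push image
        run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/my-app:${{ github.sha }} .
          docker push ${{ steps.ecr.outputs.registry }}/my-app:${{ github.sha }}

The ECS side is then typically handled with aws-actions/amazon-ecs-render-task-definition and aws-actions/amazon-ecs-deploy-task-definition, as described in the first link.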
I've installed the ECR credential helper (from GitHub) on our EC2 instance and got it working for my account. What I want to do is use it during my GitLab CI/CD pipeline, where my gitlab-runner actually runs inside a Docker container and spawns new containers for the build, test & deploy phases. This is what our test phase looks like now:
image: docker:stable

run_tests:
  stage: test
  tags:
    - test
  before_script:
    - echo "Starting tests for CI_COMMIT_SHA=$CI_COMMIT_SHA"
    - docker run --rm mikesir87/aws-cli aws ecr get-login-password | docker login --username AWS --password-stdin $IMAGE_URL
  script:
    - docker run --rm $IMAGE_URL:$CI_COMMIT_SHA npm test
This works fine, but what I'd like to get working is the following:
image: docker:stable

run_tests:
  image: $IMAGE_URL:$CI_COMMIT_SHA
  stage: test
  tags:
    - test
  script:
    - npm test
When I try the second option, I get the "no basic auth credentials" error. So I'm wondering if there is a way to get the credential helper to apply to the job's Docker container without having to install the credential helper in the image itself.
Configure your runner to use the credential helper with the DOCKER_AUTH_CONFIG environment variable. A convenient way to do this is to bake it all into your image.
So, your gitlab-runner image should include the docker-credential-ecr-login binary (or you should mount it in from the host).
FROM gitlab/gitlab-runner:v14.3.2
COPY bin/docker-credential-ecr-login /usr/local/bin/docker-credential-ecr-login
Then, when you call gitlab-runner register, pass in the DOCKER_AUTH_CONFIG environment variable using the --env flag, as follows:
AUTH_ENV="DOCKER_AUTH_CONFIG={ \"credsStore\": \"ecr-login\" }"
gitlab-runner register \
--non-interactive \
...
--env "${AUTH_ENV}" \
--env "AWS_SDK_LOAD_CONFIG=true" \
...
You can also set this equivalently in the runner's config.toml, in instance CI/CD variables, or anywhere else CI/CD variables can be set (group, project, YAML, trigger, etc.).
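For example, the config.toml equivalent is an environment entry on the runner (a sketch; the rest of the runner configuration is omitted):

[[runners]]
  # ... name, url, token, executor, etc. ...
  environment = [
    "DOCKER_AUTH_CONFIG={ \"credsStore\": \"ecr-login\" }",
    "AWS_SDK_LOAD_CONFIG=true"
  ]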
As long as your EC2 instance (or the ECS task role, if you run the gitlab-runner as an ECS task) has permission to pull the image, your jobs will be able to pull images declared in image: sections from ECR.
However this will NOT necessarily let you automatically pull images using docker-in-docker (e.g. invoking docker pull within the script: section of a job). This can be configured (as it seems you already have working), but may require additional setup, depending on your runner and IAM configuration.
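For the docker-in-docker case, one option is simply to keep the explicit login your first job already does in before_script. A sketch, assuming (as in your config) that the mikesir87/aws-cli helper image is available and that the runner's credentials or instance profile grant ecr:GetAuthorizationToken:

run_tests:
  stage: test
  before_script:
    # Log the job's Docker daemon into ECR before any docker pull/run
    - docker run --rm mikesir87/aws-cli aws ecr get-login-password | docker login --username AWS --password-stdin $IMAGE_URL
  script:
    - docker pull $IMAGE_URL:$CI_COMMIT_SHA
    - docker run --rm $IMAGE_URL:$CI_COMMIT_SHA npm test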
Can you help me find a useful step-by-step guide or a Gist outlining in detail how to configure CircleCI (using 2.0 syntax) to deploy to AWS EC2?
I understand the basic requirements and the moving pieces, but I'm unsure what to put in the .circleci/config.yml file for the deploy step.
So far I got:
A "Hello World" Node.js app which is building successfully in CircleCI (just without the deploy step)
A running EC2 instance (Ubuntu 16.04)
An IAM user with sufficient permissions added to CircleCI for that particular job
Can you help out with the CircleCI deploy step?
Based on your repository, you could create a script like this one: deploy.sh
#!/bin/bash
echo "Start deploy"
cd ~/circleci-aws
git pull
npm i
npm run build
pm2 stop build/server
pm2 start build/server
echo "Deploy end"
And in your .circleci/config.yml you add a deploy job like this:
deploy:
  docker:
    - image: circleci/node:chakracore-8.11.1
  steps:
    - restore_cache:
        keys:
          - v1-dependencies-{{ checksum "package.json" }}
    - run:
        name: AWS EC2 deploy
        command: |
          # upload all the code to the machine
          scp -r -o StrictHostKeyChecking=no ./ ubuntu@13.236.1.107:/home/circleci-aws/
          # run the script inside of the machine
          ssh -o StrictHostKeyChecking=no ubuntu@13.236.1.107 "./deploy.sh"
But this is quite ugly; try something like AWS CodeDeploy, or ECS if you are using containers.
I am trying to get travis-ci to run a custom deploy script that uses awscli to push a deployment up to my staging server.
In my .travis.yml file I have this:
before_deploy:
- 'curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"'
- 'unzip awscli-bundle.zip'
- './awscli-bundle/install -b ~/bin/aws'
- 'export PATH=~/bin:$PATH'
- 'aws configure'
And I have set up the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
with their correct values in the travis-ci web interface.
However, when aws configure runs, it stops and waits for user input. How can I tell it to use the environment variables I have defined?
Darbio's solution works fine, but it doesn't take into account that you may end up pushing your AWS credentials to your repository.
That is a bad thing, especially if Docker is trying to pull a private image from one of your ECR repositories. It would mean that you probably had to store your AWS production credentials in the .travis.yml file, and that is far from ideal.
Fortunately, Travis gives you the ability to encrypt environment variables, notification settings, and deploy API keys.
gem install travis
Do a travis login first of all; it will ask you for your GitHub credentials. Once you're logged in, go to your project root folder (where your .travis.yml file is) and encrypt your access key ID and secret access key:
travis encrypt AWS_ACCESS_KEY_ID="HERE_PUT_YOUR_ACCESS_KEY_ID" --add
travis encrypt AWS_SECRET_ACCESS_KEY="HERE_PUT_YOUR_SECRET_ACCESS_KEY" --add
Thanks to the --add option you'll end up with two new (encrypted) environment variables in your configuration file. Now just open your .travis.yml file and you should see something like this:
env:
  global:
    - secure: encrypted_stuff
    - secure: encrypted_stuff
Now you can make travis run a shell script that creates the ~/.aws/credentials file for you.
ecr_credentials.sh
#!/usr/bin/env bash
mkdir -p ~/.aws
cat > ~/.aws/credentials << EOL
[default]
aws_access_key_id = ${AWS_ACCESS_KEY_ID}
aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}
EOL
Then you just need to run the ecr_credentials.sh script from your .travis.yml file:
before_install:
- ./ecr_credentials.sh
Done! :-D
Source: Encryption keys on Travis CI
You can set these in a couple of ways.
Firstly, by creating a file at ~/.aws/config (or ~/.aws/credentials).
For example:
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
region=us-west-2
Secondly, you can add environment variables for each of your settings.
For example, create the following environment variables:
AWS_DEFAULT_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
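For example, you could export them in the shell before invoking the CLI (the values here are placeholders):

export AWS_ACCESS_KEY_ID=foo
export AWS_SECRET_ACCESS_KEY=bar
export AWS_DEFAULT_REGION=us-west-2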
Thirdly, you can pass region in as a command line argument. For example:
aws eb deploy --region us-west-2
You won't need to run aws configure in these cases, as the CLI is already configured.
There is further AWS documentation on this page.
Following the advice from @Darbio, I came up with this solution:
- stage: deploy
  name: "Deploy to AWS EKS"
  language: minimal
  before_install:
    # Install kubectl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    - chmod +x ./kubectl
    - sudo mv ./kubectl /usr/local/bin/kubectl
    # Install AWS CLI
    - if ! [ -x "$(command -v aws)" ]; then curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" ; unzip awscliv2.zip ; sudo ./aws/install ; fi
    # export environment variables for AWS CLI (using Travis environment variables)
    - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
    # Setup kubectl config to use the desired AWS EKS cluster
    - aws eks update-kubeconfig --region ${AWS_DEFAULT_REGION} --name ${AWS_EKS_CLUSTER_NAME}
  deploy:
    - provider: script
      # bash script containing the kubectl commands to setup the cluster
      script: bash k8s-config/deployment.sh
      on:
        branch: master
It is also possible to avoid installing the AWS CLI altogether. In that case you need to configure kubectl yourself:
kubectl config set-cluster --server= --certificate-authority=
kubectl config set-credentials --client-certificate= --client-key=
kubectl config set-context myContext --cluster= --namespace= --user=
kubectl config use-context myContext
You can find most of the needed values in ~/.kube/config in your user's home directory, after you have run the aws eks update-kubeconfig command on your local machine.
The exceptions are the client certificate and key; I couldn't figure out where to get them from, and therefore needed to install the AWS CLI in the pipeline as well.
I want to integrate Atlassian Bamboo with AWS Elastic Beanstalk. Is there anyway to do this?
It depends a bit on your Bamboo and Beanstalk configuration, as well as the type of application you are planning to deploy on AWS Elastic Beanstalk.
We did some things for Java Web Apps:
Since Bamboo understands Maven, you can have a look at the following Maven plugin:
http://beanstalker.ingenieux.com.br/beanstalk-maven-plugin/configurations-and-templates.html
We are using it in some environments to create WARs and upload them to Elastic Beanstalk. You can then create a Maven task in Bamboo to call the plugin.
If you downloaded and installed Bamboo on a machine you own yourself, you could use the Elastic Beanstalk command line interface (CLI).
This is probably the most powerful approach, but you need to install the CLI on the Bamboo instance. Then you can do almost anything. This approach should also work for environments other than Java/Tomcat.
Another idea:
If you deploy to Beanstalk using git (i.e. you deploy by making a code change and pushing to Beanstalk), then you can also use the new "Deployment Project" feature in Bamboo to push the code once it passes all tests.
David's answer provides good options for cross-product usage of AWS Elastic Beanstalk (+1). Nowadays I'd recommend the excellent unified AWS Command Line Interface over the now legacy AWS Elastic Beanstalk API Command Line Interface; see the respective AWS CLI commands for elasticbeanstalk.
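For illustration, a deployment step with the unified CLI could look roughly like this; the application, environment, bucket, and key names are placeholders, and it assumes the WAR has already been uploaded to S3:

# Register a new application version from the uploaded bundle
aws elasticbeanstalk create-application-version \
  --application-name my-app \
  --version-label build-123 \
  --source-bundle S3Bucket=my-deploy-bucket,S3Key=builds/my-app-123.war
# Point the environment at that version to trigger the deployment
aws elasticbeanstalk update-environment \
  --environment-name my-app-env \
  --version-label build-123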
If you are looking for a Bamboo specific solution, you might be interested in Utoolity's Tasks for AWS (Bamboo) add-on (commercial, see disclaimer), which provides three dedicated tasks, specifically:
AWS Elastic Beanstalk Application - create, update or delete AWS Elastic Beanstalk applications.
AWS Elastic Beanstalk Application Version - create, update or delete AWS Elastic Beanstalk application versions.
AWS Elastic Beanstalk Environment - create, update, rebuild, restart, swap or terminate AWS Elastic Beanstalk environments and specify configuration settings and advanced options.
Disclaimer: I'm the co-founder of this add-on's vendor, Utoolity.
In case you're interested in C# deployments:
What we do is simply start the awsdeploy tool (it should already be installed on the build server) with a link to the configuration script. I create the environment in Visual Studio, and when I redeploy the application once, I save the script. Once the script is on the build server, I reference it in the deployment configuration with awsdeploy /r c:\location\of\myscript.txt.
The package itself that is referenced in the AWS deployment configuration script is created at build time with the MSBuild /target:package command and defined as an artifact (the default location of the ZIP package is c:\build-dir\...\project\obj\debug\package, but it can be overridden).
Everything works pretty well so far, although I am having problems starting an Elastic Beanstalk instance when none is available (e.g. for nightly builds).
Take a look at our repo: https://github.com/matzegebbe/docker-aws-login
With that snippet you are able to log in to AWS and push images.
A simple Bamboo task script (of course you need Docker installed on the agents):
#!/bin/bash
# Exit early if the awscli helper image already exists on this agent
docker images hellmann/awscli | grep -q awscli
[ "$?" -eq "0" ] && exit 0
# Build a one-off image containing the AWS CLI plus credentials/config
cat <<'EOF' >> Dockerfile
FROM python
MAINTAINER Mathias Gebbe <mathias.gebbe@hellmann.net>
RUN pip install awscli --ignore-installed six
ENV aws_access_key_id AWS_ACCESS_KEY
ENV aws_secret_access_key AWS_SECRET_ACCESS_KEY
RUN mkdir /root/.aws/
RUN printf "[default]\nregion = eu-west-1\n" > /root/.aws/config
RUN printf "[default]\naws_access_key_id = ${aws_access_key_id}\naws_secret_access_key = ${aws_secret_access_key}\n" > /root/.aws/credentials
ENTRYPOINT ["/bin/bash","-c"]
CMD ["aws ecr get-login"]
EOF
docker build -t hellmann/awscli .
# Run the image to print the `docker login` command for ECR and execute it
$(docker run --rm hellmann/awscli)