AWS EC2 Bitbucket Pipeline is not executing the latest code deployed - amazon-web-services

I've followed all the steps for implementing a Bitbucket pipeline in order to have continuous deployment to AWS EC2. I've set up a CodeDeploy application along with all the configuration that needs to be done in AWS. I'm using an EC2 instance running Ubuntu, and I'm trying to deploy a MEAN app.
In Bitbucket, I've added the following "Repository variables":
S3_BUCKET
DEPLOYMENT_GROUP_NAME
DEPLOYMENT_CONFIG
AWS_DEFAULT_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
I've also added the three required files:
1. **codedeploy_deploy.py**, which I got from this link: https://bitbucket.org/awslabs/aws-codedeploy-bitbucket-pipelines-python/src/73b7c31b0a72a038ea0a9b46e457392c45ce76da/codedeploy_deploy.py?at=master&fileviewer=file-view-default
2. **appspec.yml**:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/aok
permissions:
  - object: /home/ubuntu/aok
    owner: ubuntu
    group: ubuntu
hooks:
  AfterInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root
3. **bitbucket-pipelines.yml**:
image: node:10.15.1
pipelines:
  default:
    - step:
        script:
          - apt-get update && apt-get install -y python-dev
          - curl -O https://bootstrap.pypa.io/get-pip.py
          - python get-pip.py
          - pip install awscli
          - python codedeploy_deploy.py
          - aws deploy push --application-name $APPLICATION_NAME --s3-location s3://$S3_BUCKET/aok.zip --ignore-hidden-files
          - aws deploy create-deployment --application-name $APPLICATION_NAME --s3-location bucket=$S3_BUCKET,key=aok.zip,bundleType=zip --deployment-group-name $DEPLOYMENT_GROUP_NAME
On the Pipelines tab in Bitbucket, each push shows a Successful message, and when I download the latest version from S3, the changes I pushed are there. The problem is that the website does not show the new changes; it still serves the initial version I cloned before setting up the pipeline.

This codedeploy_deploy.py script is not supported anymore. The recommended way is to migrate from the CodeDeploy add-on to the aws-code-deploy Bitbucket Pipe. There is a deployment guide from Atlassian that will help you get started with the pipe: https://confluence.atlassian.com/bitbucket/deploy-to-aws-with-codedeploy-976773337.html
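For reference, here is a minimal sketch of what the migrated bitbucket-pipelines.yml could look like with that pipe, reusing the repository variables listed above (the pipe version tag is illustrative and the variable names should be double-checked against the pipe's README; the zip step assumes the whole repository is the bundle):
image: node:10.15.1
pipelines:
  default:
    - step:
        script:
          # package the revision (zip the repo root; adjust excludes as needed)
          - apt-get update && apt-get install -y zip
          - zip -r aok.zip . -x "node_modules/*"
          # upload the bundle to S3 as a CodeDeploy revision
          - pipe: atlassian/aws-code-deploy:1.1.1
            variables:
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              COMMAND: 'upload'
              APPLICATION_NAME: $APPLICATION_NAME
              ZIP_FILE: 'aok.zip'
              S3_BUCKET: $S3_BUCKET
              VERSION_LABEL: 'aok-${BITBUCKET_BUILD_NUMBER}'
          # create the CodeDeploy deployment from that revision
          - pipe: atlassian/aws-code-deploy:1.1.1
            variables:
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              COMMAND: 'deploy'
              APPLICATION_NAME: $APPLICATION_NAME
              DEPLOYMENT_GROUP: $DEPLOYMENT_GROUP_NAME
              S3_BUCKET: $S3_BUCKET
              VERSION_LABEL: 'aok-${BITBUCKET_BUILD_NUMBER}'
              WAIT: 'true'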

Related

Deploy code directly to AWS EC2 instance using Github Actions

As the title says, I am trying to deploy my Laravel-Angular application directly from GitHub to an AWS EC2 instance using GitHub Actions.
My application contains three Angular 8+ projects that need to be built before deployment, whereas the Laravel part does not need a build step.
The available solutions suggest using AWS Elastic Beanstalk to deploy the code, but it is not clear how to attach Elastic Beanstalk to an existing instance.
Is there a way to deploy code to AWS EC2 without using Elastic Beanstalk?
Here is my GitHub Actions build.yml:
name: Build Develop Branch
on:
  push:
    branches: [ develop ]
  pull_request:
    branches: [ develop ]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - name: Code Checkout
        uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: App 1 npm install
        run: npm install
        working-directory: angular-app-1
      - name: App 1 Build
        run: npm run build:staging
        working-directory: angular-app-1
      - name: App 2 npm install
        run: npm install
        working-directory: angular-app-2
      - name: App 2 Build
        run: node node_modules/@angular/cli/bin/ng build --configuration=staging
        working-directory: angular-app-2
      - name: App 3 npm install
        run: npm install
        working-directory: angular-app-3
      - name: App 3 Build
        run: node node_modules/@angular/cli/bin/ng build --configuration=staging
        working-directory: angular-app-3
Is there a way to deploy code to AWS EC2 without using Elastic Beanstalk?
I found a simple way to deploy to EC2 instance (or to any server that accepts rsync commands over ssh) using GitHub Actions.
I have a simple file in the repo's .github/workflows folder, which GitHub Actions runs to deploy to my EC2 instance whenever a push is made to my GitHub repo.
No muss, no fuss, no special incantations or Byzantine AWS configuration details.
File .github/workflows/pushtoec2.yml:
name: Push-to-EC2
on: push
jobs:
  deploy:
    name: Push to EC2 Instance
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the code
        uses: actions/checkout@v1
      - name: Deploy to my EC2 instance
        uses: easingthemes/ssh-deploy@v2.1.5
        env:
          SSH_PRIVATE_KEY: ${{ secrets.EC2_SSH_KEY }}
          SOURCE: "./"
          REMOTE_HOST: "ec2-34-213-48-149.us-west-2.compute.amazonaws.com"
          REMOTE_USER: "ec2-user"
          TARGET: "/home/ec2-user/SampleExpressApp"
Details of the ssh deploy GitHub Action, used above.
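For the EC2_SSH_KEY secret used above, the usual setup (a sketch; file names are illustrative) is to generate a dedicated key pair, append its public key to the instance's authorized_keys, and store the private key as the GitHub secret:
# generate a dedicated deploy key pair locally
ssh-keygen -t rsa -b 4096 -f deploy_key -N ""
# append the public key to the instance's authorized_keys,
# using the original EC2 key pair for this one-time connection
cat deploy_key.pub | ssh -i original-keypair.pem ec2-user@ec2-34-213-48-149.us-west-2.compute.amazonaws.com 'cat >> ~/.ssh/authorized_keys'
# then paste the contents of deploy_key (the private key) into the
# EC2_SSH_KEY secret under the repository's GitHub settings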
Real final edit
A year later, I finally got around to making the tutorial: https://github.com/Andrew-Chen-Wang/cookiecutter-django-ec2-github.
I found a Medium tutorial that also deserves some attention if anyone wants to use CodePipeline (there are a couple of differences: I store my files on GitHub while the Medium tutorial uses S3, and I create a custom VPC that the other author doesn't).
Earlier final edit
AWS has finally made a neat tutorial for CodeDeploy with a GitHub repository: https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-github-prerequisites.html. Take a look there and enjoy :)
Like the ECS tutorial, we're using Parameter Store to store our secrets. The way AWS previously wanted us to grab secrets was via a bash script: https://aws.amazon.com/blogs/mt/use-parameter-store-to-securely-access-secrets-and-config-data-in-aws-codedeploy/
For example:
password=$(aws ssm get-parameters --region us-east-1 --names MySecureSQLPassword --with-decryption --query Parameters[0].Value)
password=`echo $password | sed -e 's/^"//' -e 's/"$//'`
mysqladmin -u root password $password
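A slightly shorter variant of the same lookup, assuming the same parameter name, uses --output text so the sed step that strips the surrounding quotes isn't needed:
# fetch the decrypted value as plain text (no quotes to strip)
password=$(aws ssm get-parameters --region us-east-1 --names MySecureSQLPassword --with-decryption --query Parameters[0].Value --output text)
mysqladmin -u root password "$password"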
New edit (24 December 2020): I think I've nailed it. Below I pointed to Donate Anything for AWS ECS. I've moved to a self-deploying setup. If you take a look at bin/scripts, I'm taking advantage of supervisord and gunicorn (for Python web development). But in the context of EC2, you can simply point your AppSpec.yml to those scripts! Hope that helps everyone!
Before I start:
This is not a full answer. Not a complete walkthrough, but a lot of hints and some code that will help you with setting up certain AWS stuff like ALB and your files in your repo for this to work. This answer is more like several clues jumbled together from my sprint run trying to make ECS work last night.
I also don't have enough points to comment or chat, so here's the best thing I can offer.
Quick links (you should probably just skip these two points, though):
Check this out: https://docs.aws.amazon.com/codedeploy/latest/userguide/instances-ec2-configure.html
As I said, I don't have enough points to comment or chat... and this won't be a full answer either, as I'm trying to finish an ECS deploy from GitHub first before moving on to EC2. Anyhow...
One last edit: this will sound like a marketing ploy, but a correct implementation with GitHub Actions and workflow_dispatch is located at Donate Anything's GitHub repository. You'll find the same ECS work described below in there. Do note that I changed my GitHub action to use Docker Hub since it was free (and, to me, cheaper if you're going to use ECS, since AWS ECR is expensive).
Edit: The ECS deployment works now. Will start working on the EC2 deployment soon.
Edit 2: I added Donate Anything repo. Additionally, I'm not sure if direct EC2 deployment, at least for me, is viable since install scripts would kinda be weird. However, I still haven't found the time to get to EC2. Again, if anyone is willing to share their time, please do so and contribute!
I do want to warn everyone that SECURITY GROUPS are very important. That clogged me for a long time, so make sure you get them right. In the ECS tutorial, I teach you how I do it.
Full non-full answer:
I'm working on this issue right now in this repo and another for ECS here using GitHub actions. I haven't started too far on the EC2 one, but the basic rundown for testing is this:
CRUCIAL
You need to try and deploy from the AWS CLI first. This is because AWS Actions does not have a dedicated action for deploying to EC2 yet.
Write down each of these commands; we're going to need them later for the GitHub action.
Some hints when testing this AWS setup:
Before using CodeDeploy, you need an EC2 instance, an Application Load Balancer (you'll find it under Elastic Load Balancer), and a target group (which you create DURING the ALB setup). Go to target groups, right click on the group, and register your instance.
To deploy from CodeDeploy, create a new application. Create a new deployment group. I think, for your setup, you should do the in-place deployment type rather than the Blue/Green deployment type.
Finally, testing on the CLI, you should run the code you see here: https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-wordpress-deploy-application.html#tutorials-wordpress-deploy-application-create-deployment-cli
Do note, you may want to start from here (using S3 as a location to store your latest code; you can delete it afterwards anyway, as I believe DELETE requests don't incur charges): https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-wordpress-upload-application.html I personally don't know if that GitHub OAuth integration works. I tried once before (very amateur though, i.e. no clue what I was doing) and nothing happened, so... I'd just stick with that tutorial. A sketch of the two CLI calls follows below.
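As promised above, here is a sketch of the two CLI calls from those tutorials (application, bucket, and deployment-group names are placeholders):
# bundle the current directory and register it as a revision in S3
aws deploy push \
  --application-name MyApp \
  --s3-location s3://my-codedeploy-bucket/myapp.zip \
  --ignore-hidden-files
# create a deployment from that revision
aws deploy create-deployment \
  --application-name MyApp \
  --deployment-group-name MyDeploymentGroup \
  --deployment-config-name CodeDeployDefault.OneAtATime \
  --s3-location bucket=my-codedeploy-bucket,key=myapp.zip,bundleType=zip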
Here's how your test rundown will look:
For me, with my ECS repo, I went a full 10 hours straight trying to configure everything properly, step by step, like the GitHub action. For you, you should do the same. Imagine you're the code: figure out where you need to start from.
Aha! I should probably figure out CodeDeploy first. Let's write an appspec.yaml file first! The appspec file is how CodeDeploy handles the hooks for everything. Unfortunately, I'm currently working through that problem here, but that's because the EC2 and ECS syntaxes for AppSpec files are different. Luckily, EC2 doesn't have any special areas. Just get your files and hooks right. An example from my test:
version: 0.0
os: linux
files:
  - source: /
    destination: /code
hooks:
  BeforeInstall:
    - location: aws_scripts/install_dependencies
      timeout: 300
      runas: root
  ApplicationStop:
    - location: aws_scripts/start_server
      runas: root
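The two hook scripts referenced above are just executable shell files shipped inside the bundle. A hypothetical pair for a Node app (contents are illustrative, not from my actual repo; pm2 is only one option for keeping the process alive):
#!/bin/bash
# aws_scripts/install_dependencies
set -e
cd /code
npm install --production

#!/bin/bash
# aws_scripts/start_server
set -e
cd /code
# restart the process manager so the freshly copied files are actually served
pm2 restart app || pm2 start server.js --name app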
The GitHub action:
What you'll need at minimum:
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          # TODO Change your AWS region here!
          aws-region: us-east-2
The checking out of code is necessary to... well... get the code.
For the configuration of AWS credentials, you'll want to add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to your GitHub secrets with a proper IAM credential. For this, I believe the only IAM permissions needed are full CodeDeploy access.
Deploying the code:
This is where the test commands you should have run before reaching this step come in. Now that your workflow is set up, let's paste the code from the CLI into your action.
- name: Deploying with CodeDeploy
  id: a-task
  env:
    an-environment-variable: anything you want
  run: |
    echo "Your CLI code should be placed here"
Sorry if this was confusing, not what you're looking for, or you wanted a complete tutorial. I, too, haven't actually gotten this to work, but it's been a while since I last tried, and the last time I tried, I didn't even know what an EC2 instance was... I just used a standalone EC2 instance and rsync to transfer my files. Hopefully what I've written gives you several clues that can guide you to a solution.
If you got it to work, please share it on here: https://github.com/Andrew-Chen-Wang/cookiecutter-django-ec2-gh-action so that no one else has to suffer the pain of AWS deployment...
First, you need to go through this tutorial on AWS to set up your EC2 server, as well as configure the Application and Deployment Group in CodeDeploy: Tutorial: Use CodeDeploy to deploy an application from GitHub
Then, you can use the following workflow in GitHub Actions to deploy your code on push. You essentially use the AWS CLI to create a new deployment. Store the AWS credentials for the CLI in GitHub Secrets.
Here is an example for deploying a Node app:
name: Deploy to AWS
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    name: Deploy AWS
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [12.x]
        app-name: ['your-codedeploy-application']
        deployment-group: ['your-codedeploy-deploy-group']
        repo: ['username/repository-name']
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install dependencies
        run: npm install
      - name: Build app
        run: npm run build
      - name: Install AWS CLI
        run: |
          curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
          unzip awscliv2.zip
          sudo ./aws/install --update
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }}
          aws-region: us-east-1
      - name: Deploy to AWS
        run: |
          aws deploy create-deployment \
            --application-name ${{ matrix.app-name }} \
            --deployment-config-name CodeDeployDefault.OneAtATime \
            --deployment-group-name ${{ matrix.deployment-group }} \
            --description "GitHub Deployment for the ${{ matrix.app-name }}-${{ github.sha }}" \
            --github-location repository=${{ matrix.repo }},commitId=${{ github.sha }}

Gitlab CI failing cannot find aws command

I am trying to set up a pipeline that builds my react application and deploys it to my AWS S3 bucket. It is building fine, but fails on the deploy.
My .gitlab-ci.yml is:
image: node:latest
variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  S3_BUCKET_NAME: $S3_BUCKET_NAME
stages:
  - build
  - deploy
build:
  stage: build
  script:
    - npm install --progress=false
    - npm run build
deploy:
  stage: deploy
  script:
    - aws s3 cp --recursive ./build s3://MYBUCKETNAME
It is failing with the error:
sh: 1: aws: not found
@jellycsc is spot on.
Otherwise, if you want to just use the node image, you can try something like Thomas Lackemann details (here), which is to use a shell script to install Python, the AWS CLI, and zip, and then use those tools to do the deployment. You'll need AWS credentials stored as environment variables in your GitLab project.
I've successfully used both approaches.
The error is telling you the AWS CLI is not installed in the CI environment. You probably need to use GitLab's AWS Docker image. Please read the Cloud deployment documentation.
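For example, here is a sketch of the deploy job using an image that ships with the AWS CLI (the image path is GitLab's cloud-deploy base image; verify it against the docs for your GitLab version). Note that the build job also needs to declare build/ as an artifact (artifacts: paths: [build/]) so the deploy job can see the compiled output:
deploy:
  stage: deploy
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  script:
    - aws s3 cp --recursive ./build s3://$S3_BUCKET_NAME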

how to deploy helm chart from gitlab to eks?

I created a Kubernetes cluster and linked it with EKS.
I also created a Helm chart and a .gitlab-ci.yml.
I want to add a new step that deploys my app to the cluster using Helm, but I can't find a recent tutorial; all the tutorials use GitLab Auto DevOps.
The image is hosted on GitLab.
How can I achieve this?
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: test
  USER_GITLAB: kosted
  APP_NAME: mebooks
  REPO: gara-mebooks
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
stages:
  - deploy
k8s-deploy:
  stage: deploy
  image: dtzar/helm-kubectl:3.1.2
  only:
    - develop
  script:
    # Read certificate stored in $KUBE_CA_PEM variable and save it in a new file
    - echo $KUBE_URL
    - kubectl config set-cluster gara-eks-cluster --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM"
    - kubectl get pods
In the GitLab console I got:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Running after_script 00:01
Uploading artifacts for failed job 00:02
ERROR: Job failed: exit code 1
1. Create an IAM role or user from your AWS console.
2. Connect to your bastion and add the role/user ARN to the aws-auth ConfigMap (see the ConfigMap sketch after the job examples below).
You can follow this to understand how it works (see the "you are not the creator of the cluster" paragraph): https://aws.amazon.com/fr/premiumsupport/knowledge-center/eks-api-server-unauthorized-error/
3. In your GitLab CI, if it is a user you created, you just have to add this:
k8s-deploy:
  stage: deploy
  image: you need an image with aws + kubectl + helm
  only:
    - develop
  script:
    - aws --version
    - aws --profile default configure set aws_access_key_id "your access id"
    - aws --profile default configure set aws_secret_access_key "your secret"
    - helm version
    - aws eks update-kubeconfig --name NAME-OF-YOUR-CLUSTER --region eu-west-3
    - helm upgrade init
    - helm upgrade --install my-chart ./my-chart-folder
If you created a role, not a user, you just have to do:
k8s-deploy:
  stage: deploy
  image: you need an image with aws + kubectl + helm
  only:
    - develop
  script:
    - aws --version
    - helm version
    - aws eks update-kubeconfig --name NAME-OF-YOUR-CLUSTER --region eu-west-3 --role-arn <your-role-arn>
    - helm upgrade init
    - helm upgrade --install my-chart ./my-chart-folder
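For reference, the aws-auth ConfigMap entry from step 2 looks roughly like this (the account ID, names, and the system:masters group are placeholders; edit it with kubectl edit -n kube-system configmap/aws-auth and keep only the mapRoles or mapUsers block you actually need):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/gitlab-deploy-role
      username: gitlab-deploy
      groups:
        - system:masters
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/gitlab-deploy-user
      username: gitlab-deploy
      groups:
        - system:masters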
Here I am adding my method, which is generic and can be used in any K8S environment without AWS CLI.
First, you need to convert your Kube Config to a base64 string:
cat ~/.kube/config | base64
Add the result string as a variable to your CI/CD pipeline settings of the project/group. In my example I used kube_config. Read more on how to add variables here.
Here is my CI YAML file:
stages:
  # - build
  # - test
  - deploy
variables:
  KUBEFOLDER: /root/.kube
  KUBECONFIG: $KUBEFOLDER/config
k8s-deploy-job:
  stage: deploy
  image: dtzar/helm-kubectl:3.5.0
  before_script:
    - mkdir ${KUBEFOLDER}
    - echo ${kube_config} | base64 -d > ${KUBECONFIG}
    - helm version
    - helm repo update
  script:
    - echo "Deploying application..."
    - kubectl get pods
    #- helm upgrade --install my-chart ./my-chart-folder
    - echo "Application successfully deployed."
Inspired by:
https://about.gitlab.com/blog/2017/09/21/how-to-create-ci-cd-pipeline-with-autodeploy-to-kubernetes-using-gitlab-and-helm/

How to run a script within a pipe?

I am using the pipe atlassian/aws-s3-deploy:0.4.0 in my Bitbucket pipeline to deploy to AWS S3. This works well, but I need to set Cache-Control only for index.html.
How do I run code within the pipe so that the AWS CLI tool is still available? It should not be another step, as the deployment should be a single process.
My current script looks like this:
image: node:10.15.3
pipelines:
  default:
    - step:
        name: Build
        caches:
          - node
        script:
          - npm install
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy
        trigger: manual
        script:
          - pipe: atlassian/aws-s3-deploy:0.4.0
            variables:
              AWS_DEFAULT_REGION: 'eu-central-1'
              S3_BUCKET: '***'
              LOCAL_PATH: 'dist'
          - aws s3 cp dist/index.html s3://***/index.html --cache-control no-cache,no-store
Credentials are provided via project secret variables.
Thank you!!
You could just install the aws cli in the same step:
- step:
    name: Deploy
    trigger: manual
    # use python docker image so pip is available
    image: python:3.7
    script:
      - pipe: atlassian/aws-s3-deploy:0.4.0
        variables:
          AWS_DEFAULT_REGION: 'eu-central-1'
          S3_BUCKET: '***'
          LOCAL_PATH: 'dist'
      # install the aws cli
      - pip3 install awscli --upgrade --user
      - aws s3 cp dist/index.html s3://***/index.html --cache-control no-cache,no-store

gitlab CI create docker image and push to AWS

I set up a Docker registry (ECR) on AWS. From my GitLab repository I'd like to set up CI to automatically create images and push them to the registry.
I was following a tutorial to set everything up, but when running the example, I receive the error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
My yml file looks like this:
image: docker:latest
variables:
  REPOSITORY_URL: <aws-url>/<registry>/outsite-slackbot
services:
  - docker:dind
before_script:
  - apk add --no-cache curl jq python py-pip
  - pip install awscli
stages:
  - build
build:
  stage: build
  script:
    - $(aws ecr get-login --no-include-email --region eu-west-1)
There is no problem with the Dockerfile; the job simply can't connect to the Docker daemon. So check these steps:
Are you logged in as root? (sudo su or sudo -i)
Start the Docker service (service docker start).
Then follow the tutorial :)
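Once the daemon is reachable, the build job typically finishes with a build/tag/push against the ECR repository, roughly like this (the tag scheme is illustrative; CI_COMMIT_SHORT_SHA is a predefined GitLab variable):
build:
  stage: build
  script:
    - $(aws ecr get-login --no-include-email --region eu-west-1)
    - docker build -t $REPOSITORY_URL:$CI_COMMIT_SHORT_SHA .
    - docker tag $REPOSITORY_URL:$CI_COMMIT_SHORT_SHA $REPOSITORY_URL:latest
    - docker push $REPOSITORY_URL:$CI_COMMIT_SHORT_SHA
    - docker push $REPOSITORY_URL:latest
Note that newer AWS CLI versions replace aws ecr get-login with aws ecr get-login-password piped into docker login.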