In AWS Codestar, how to configure branch deployments to specific environments? - amazon-web-services

I just created a brand new AWS Codestar project.
As far as I can tell, CodeStar is just a dashboard that integrates multiple AWS products.
There is one thing that I don't know how to configure yet, and it is branch deployments.
In my git repository, I have 3 branches: master, develop and staging
In an ideal world, master deploys to production, develop to the development environment and staging to the QA environment.
I don't know how to configure this pipeline using AWS, and I haven't been able to locate the relevant documentation in their developer portal.
This is my buildspec.yml file just in case it can be configured there:
version: 0.2
phases:
  install:
    commands:
      - echo Installing NPM Packages...
      - npm install
  build:
    commands:
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
This is a project that uses AWS API Gateway to route requests to AWS Lambda functions if that matters.

Sadly, AWS CodePipeline doesn't support passing in the git branch. Last year they only added support for passing the git commit SHA-1 (more can be found here).
I'd suggest you follow the CodePipeline docs here to create 3 pipelines, one for each branch (you can even create a special buildspec_dev.yaml or buildspec_prod.yaml; check out more examples here).
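To make the per-branch idea concrete, here is a minimal sketch of what a branch-specific buildspec could look like, assuming each pipeline's CodeBuild project is configured to use its own buildspec file name and that a per-environment bucket variable (here the hypothetical $DEV_S3_BUCKET) is set on the project:

```yaml
# buildspec_dev.yaml - used only by the pipeline watching the develop branch
version: 0.2
phases:
  install:
    commands:
      - npm install
  build:
    commands:
      # Package against the development artifact bucket ($DEV_S3_BUCKET is a placeholder).
      - aws cloudformation package --template template.yml --s3-bucket $DEV_S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
```

The master and staging pipelines would carry analogous buildspec files pointing at their own environments.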

Related

Upload jar to Lambda when I do CodeCommit in AWS

When I push changes to my AWS CodeCommit repo, I want to build a JAR file for that Java code with the mvn install command and upload it to an AWS Lambda function. The JAR file will be located inside src/main/target. Can anyone suggest a buildspec.yaml file?
Assuming that you're using AWS SAM (Serverless Application Model), this is as simple as calling a single command in the post_build section of your buildspec.yaml. Example:
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto8
  pre_build:
    commands:
      - mvn clean
  build:
    commands:
      - mvn install
  post_build:
    commands:
      - sam deploy --stack-name lambda-java --no-confirm-changeset
artifacts:
  files:
    - target/lambda-java.jar
  discard-paths: no
Please note though that you'll also have to set up a mechanism that kicks off the build process when you push any changes to your repository. The easiest way of doing this is using AWS CodePipeline, as that nicely integrates with CodeCommit. Simply create a new pipeline, choose your existing CodeCommit repository where the Java-based Lambda is stored, and select CodeBuild as the build provider (skip the deploy stage).
Also note that your CodeBuild service role will have to have the appropriate permissions to deploy the Lambda function. As SAM is leveraged, this includes permissions to upload to S3 and update the corresponding CloudFormation stack (see stack-name parameter above).
From here on, whenever you push any changes to your repo, CodePipeline will trigger a build using CodeBuild, which will then deploy a new version of your Lambda via the sam deploy command in your buildspec.yaml.
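As a rough illustration of the permissions involved, a hedged sketch of a policy fragment for the CodeBuild service role follows (this is not an exact minimal policy; SAM deployments typically also need IAM and Lambda permissions, and the bucket name here is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::your-sam-artifact-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:CreateChangeSet",
        "cloudformation:ExecuteChangeSet",
        "cloudformation:DescribeChangeSet",
        "cloudformation:DescribeStacks"
      ],
      "Resource": "arn:aws:cloudformation:*:*:stack/lambda-java/*"
    }
  ]
}
```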

Deploy code directly to AWS EC2 instance using Github Actions

As the title says I am trying to deploy my Laravel-Angular application directly from Github to AWS EC2 instance using Github Actions.
In my application there are 3 Angular 8+ projects which need to be built before deployment, whereas Laravel does not need to be built.
The available solutions suggest using AWS Elastic Beanstalk to deploy code. But if that is to be done, it is not clear how to attach an Elastic Beanstalk environment to an existing instance.
Is there a way to deploy code to AWS EC2 without using Elastic Beanstalk?
Here is my Github Actions build.yml :
name: Build Develop Branch
on:
  push:
    branches: [ develop ]
  pull_request:
    branches: [ develop ]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - name: Code Checkout
        uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: App 1 npm install
        run: npm install
        working-directory: angular-app-1
      - name: App 1 Build
        run: npm run build:staging
        working-directory: angular-app-1
      - name: App 2 npm install
        run: npm install
        working-directory: angular-app-2
      - name: App 2 Build
        run: node node_modules/@angular/cli/bin/ng build --configuration=staging
        working-directory: angular-app-2
      - name: App 3 npm install
        run: npm install
        working-directory: angular-app-3
      - name: App 3 Build
        run: node node_modules/@angular/cli/bin/ng build --configuration=staging
        working-directory: angular-app-3
Is there a way to deploy code to AWS EC2 without using Elastic Beanstalk?
I found a simple way to deploy to EC2 instance (or to any server that accepts rsync commands over ssh) using GitHub Actions.
I have a simple file in the repo's .github/workflows folder, which GitHub Actions runs to deploy to my EC2 instance whenever a push is made to my GitHub repo.
No muss, no fuss, no special incantations or Byzantine AWS configuration details.
File .github/workflows/pushtoec2.yml:
name: Push-to-EC2
on: push
jobs:
  deploy:
    name: Push to EC2 Instance
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the code
        uses: actions/checkout@v1
      - name: Deploy to my EC2 instance
        uses: easingthemes/ssh-deploy@v2.1.5
        env:
          SSH_PRIVATE_KEY: ${{ secrets.EC2_SSH_KEY }}
          SOURCE: "./"
          REMOTE_HOST: "ec2-34-213-48-149.us-west-2.compute.amazonaws.com"
          REMOTE_USER: "ec2-user"
          TARGET: "/home/ec2-user/SampleExpressApp"
Details of the ssh deploy GitHub Action, used above.
Real final edit
A year later, I finally got around to making the tutorial: https://github.com/Andrew-Chen-Wang/cookiecutter-django-ec2-github.
I found a Medium tutorial that also deserves some light if anyone wants to use CodePipeline. There are a couple of differences: I store my files on GitHub while the Medium tutorial uses S3, and I create a custom VPC that the other author doesn't.
Earlier final edit
AWS has finally made a neat tutorial for CodeDeploy w/ GitHub repository: https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-github-prerequisites.html take a look there and enjoy :)
Like the ECS tutorial, we're using Parameter Store to store our secrets. The way AWS previously wanted us to grab secrets was via a bash script: https://aws.amazon.com/blogs/mt/use-parameter-store-to-securely-access-secrets-and-config-data-in-aws-codedeploy/
For example:
password=$(aws ssm get-parameters --region us-east-1 --names MySecureSQLPassword --with-decryption --query Parameters[0].Value)
password=`echo $password | sed -e 's/^"//' -e 's/"$//'`
mysqladmin -u root password $password
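The sed line in that script exists only to strip the surrounding double quotes that the CLI's default JSON output includes. Plain shell parameter expansion can do the same (a small illustration with a dummy value), and passing --output text to the AWS CLI avoids the quoting entirely:

```shell
# Dummy value standing in for the CLI's quoted JSON output.
raw='"s3cr3tpass"'
# Strip one leading and one trailing double quote without sed.
password=${raw#\"}
password=${password%\"}
echo "$password"   # prints s3cr3tpass
```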
New edit (24 December 2020): I think I've nailed it. Below I pointed to Donate Anything for AWS ECS. I've moved to a self-deploying setup. If you take a look at bin/scripts, I'm taking advantage of supervisord and gunicorn (for Python web development). But in the context of EC2, you can simply point your AppSpec.yml to those scripts! Hope that helps everyone!
Before I start:
This is not a full answer. Not a complete walkthrough, but a lot of hints and some code that will help you with setting up certain AWS stuff like ALB and your files in your repo for this to work. This answer is more like several clues jumbled together from my sprint run trying to make ECS work last night.
I also don't have enough points to either comment or chat, so... here's the best thing I can offer.
Quick links (you should probably just skip these two points, though):
Check this out: https://docs.aws.amazon.com/codedeploy/latest/userguide/instances-ec2-configure.html
This won't be a full answer either, as I'm trying to first finish an ECS deploy from GH before moving on to EC2 from GH. Anyhow...
One last edit: this will sound like a marketing ploy but a correct implementation with GitHub actions and workflow_dispatch is located at Donate Anything's GitHub repository. You'll find the same ECS work located below in there. Do note that I changed my GitHub action to use Docker Hub since it was free (and to me cheaper if you're going to use ECS since AWS ECR is expensive).
Edit: The ECS deployment works now. Will start working on the EC2 deployment soon.
Edit 2: I added Donate Anything repo. Additionally, I'm not sure if direct EC2 deployment, at least for me, is viable since install scripts would kinda be weird. However, I still haven't found the time to get to EC2. Again, if anyone is willing to share their time, please do so and contribute!
I do want to warn everyone that SECURITY GROUPS are very important. That clogged me for a long time, so make sure you get them right. In the ECS tutorial, I teach you how I do it.
Full non-full answer:
I'm working on this issue right now in this repo and another for ECS here using GitHub actions. I haven't started too far on the EC2 one, but the basic rundown for testing is this:
CRUCIAL
You need to try and deploy from the AWS CLI first. This is because AWS Actions does not have a dedicated action for deploying to EC2 yet.
Write down each of these statements. We're going to need them later for the GitHub action.
Some hints when testing this AWS setup:
Before using CodeDeploy, you need an EC2 instance, an Application Load Balancer (you'll find it under Elastic Load Balancer), and a target group (which you create DURING the ALB setup). Go to target groups, right click on the group, and register your instance.
To deploy from CodeDeploy, create a new application. Create a new deployment group. I think, for your setup, you should do the in-place deployment type rather than the Blue/Green deployment type.
Finally, testing on the CLI, you should run the code you see here: https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-wordpress-deploy-application.html#tutorials-wordpress-deploy-application-create-deployment-cli
Do note, you may want to start from here (using S3 as a location to store your latest code; you can delete it afterwards anyway, as I believe DELETE requests don't incur charges): https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-wordpress-upload-application.html I personally don't know if that GitHub OAuth integration works. I tried once before (very amateur though, i.e. no clue what I was doing) and nothing happened, so... I'd just stick with that tutorial.
What your test rundown will look like:
For me, for my ECS repo, I just went a full 10 hours straight trying to configure everything properly step by step like the GitHub action. For you, you should do the same. Imagine you're the code: figure out where you need to start from.
Aha! I should probably figure out CodeDeploy first. Let's write an appspec.yaml file first! The appspec file is how CodeDeploy will work the hooks for everything. Unfortunately, I'm currently going through that problem here, but that's because the EC2 and ECS syntaxes for AppSpec files are different. Luckily, EC2 doesn't have any special areas. Just get your files and hooks right. An example from my test:
version: 0.0
os: linux
files:
  - source: /
    destination: /code
hooks:
  BeforeInstall:
    - location: aws_scripts/install_dependencies
      timeout: 300
      runas: root
  ApplicationStop:
    - location: aws_scripts/start_server
      runas: root
The GitHub action:
What you'll need at minimum:
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          # TODO Change your AWS region here!
          aws-region: us-east-2
The checking out of code is necessary to... well... get the code.
For the configuration of AWS credentials, you'll want to add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to your GitHub secrets with a proper IAM credential. For this, I believe the only IAM role needed is for full CodeDeploy stuff.
Deploying the code:
This is what that test code you should've tried before reaching this step is for. Now that your workflow is set up, let's paste the code from the CLI into your action.
- name: Deploying with CodeDeploy
  id: a-task
  env:
    an-environment-variable: anything you want
  run: |
    echo "Your CLI code should be placed here"
Sorry if this was confusing, not what you're looking for, or you wanted a complete tutorial. I, too, haven't actually gotten this to work, but it's also been a while since I last tried, and the last time I tried, I didn't even know what an EC2 instance was... I just did a standalone EC2 instance and used rsync to transfer my files. Hopefully what I've written was several clues that can guide you very easily to a solution.
If you got it to work, please share it on here: https://github.com/Andrew-Chen-Wang/cookiecutter-django-ec2-gh-action so that no one else has to suffer the pain of AWS deployment...
First, you need to go through this tutorial on AWS to set up your EC2 server, as well as configure the Application and Deployment Group in CodeDeploy: Tutorial: Use CodeDeploy to deploy an application from GitHub
Then, you can use the following workflow in GitHub Actions to deploy your code on push. You essentially use the AWS CLI to create a new deployment. Store the AWS credentials for the CLI in GitHub Secrets.
Here is an example for deploying a Node app:
name: Deploy to AWS
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    name: Deploy AWS
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [12.x]
        app-name: ['your-codedeploy-application']
        deployment-group: ['your-codedeploy-deploy-group']
        repo: ['username/repository-name']
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install dependencies
        run: npm install
      - name: Build app
        run: npm run build
      - name: Install AWS CLI
        run: |
          curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
          unzip awscliv2.zip
          sudo ./aws/install --update
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }}
          aws-region: us-east-1
      - name: Deploy to AWS
        run: |
          aws deploy create-deployment \
            --application-name ${{ matrix.app-name }} \
            --deployment-config-name CodeDeployDefault.OneAtATime \
            --deployment-group-name ${{ matrix.deployment-group }} \
            --description "GitHub Deployment for the ${{ matrix.app-name }}-${{ github.sha }}" \
            --github-location repository=${{ matrix.repo }},commitId=${{ github.sha }}

Gitlab CI failing cannot find aws command

I am trying to set up a pipeline that builds my react application and deploys it to my AWS S3 bucket. It is building fine, but fails on the deploy.
My .gitlab-ci.yml is :
image: node:latest
variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  S3_BUCKET_NAME: $S3_BUCKET_NAME
stages:
  - build
  - deploy
build:
  stage: build
  script:
    - npm install --progress=false
    - npm run build
deploy:
  stage: deploy
  script:
    - aws s3 cp --recursive ./build s3://MYBUCKETNAME
It is failing with the error:
sh: 1: aws: not found
@jellycsc is spot on.
Otherwise, if you want to just use the node image, you can try something like what Thomas Lackemann details (here): use a shell script to install Python, the AWS CLI, and zip, and use those tools to do the deployment. You'll need AWS credentials stored as environment variables in your GitLab project.
I've successfully used both approaches.
The error is telling you AWS CLI is not installed in the CI environment. You probably need to use GitLab’s AWS Docker image. Please read the Cloud deployment documentation.
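One way to resolve it is to give the deploy job its own image while the build job keeps node. A hedged sketch follows; the image path is the GitLab-maintained AWS deployment image as documented in GitLab's cloud deployment docs (verify the current name/tag), and the artifacts block is needed so the build output survives into the deploy job:

```yaml
build:
  image: node:latest
  stage: build
  script:
    - npm install --progress=false
    - npm run build
  artifacts:
    paths:
      - build

deploy:
  stage: deploy
  # GitLab-maintained image with the AWS CLI preinstalled.
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  script:
    - aws s3 cp --recursive ./build s3://$S3_BUCKET_NAME
```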

Deploying an AWS CodeStar project on a different account

AWS CodeStar lets you spin up CodePipelines and CodeCommit repos to support your project. If I want to build a project in CodeStar and then take the resultant package, how can I deploy that package into another account?
For example, the basic "Python Web Service Lambda" CodeStar template generated these files,
$ ls ./ -R
./:
buildspec.yml index.py README.md template.yml tests
./tests:
test_handler.py
This notably lacks the templates for setting up the CodePipeline that deploys the code. Thus I am left to figure out how to deploy it myself.
How can I deploy the CodeStar templates onto a different AWS account?
You cannot. Currently (2019-01-10) the CodeStar functionality is limited. CodeStar projects can only be created via the web console and cannot be imported / exported / codified. Sadly, CodeStar is only a web console feature at this time.

CodePipeline buildspec and multiple build actions

A simple buildspec like:
version: 0.2
phases:
  install:
    commands:
      - (cd lambda/src; npm install)
      - aws cloudformation package --template-file lambda/sam.yml --s3-bucket skynet-lambda --output-template-file SkynetLambdaPackaged.yml
artifacts:
  type: zip
  files:
    - SkynetLambdaPackaged.yml
Works fine when I have one action in my build stage. But what if I want to have more build actions for example: I want to build my api server and frontend files in parallel. How do I model this?
UPDATE
In CodePipeline I can create actions that run in parallel like below; how is this modeled in buildspec? Or is it impossible?
You can use two different CodeBuild projects from the same source as two separate parallel actions in your CodePipeline.
For this to happen, you can use two buildspec files in your source.
e.g.
buildspec-frontend.yml
phases:
  install:
    commands:
      - (cd frontend/src; npm run build)
      - aws s3 sync frontend/dist s3://<insert s3 bucket url here>/ --delete
buildspec-backend.yml
phases:
  install:
    commands:
      - (cd lambda/src; npm install)
      - aws cloudformation package --template-file lambda/sam.yml --s3-bucket skynet-lambda --output-template-file SkynetLambdaPackaged.yml
Then, create a frontend CodeBuild project that uses the frontend buildspec. Repeat for the backend.
Then, when you go to your Build stage in your CodePipeline, use the two CodeBuild projects as parallel actions.
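If you define the pipeline in CloudFormation, parallel actions are expressed by giving both actions the same RunOrder within the stage. A hedged sketch of such a Build stage follows (project and artifact names are placeholders):

```yaml
# Fragment of an AWS::CodePipeline::Pipeline Stages list.
# Both actions share RunOrder: 1, so CodePipeline runs them in parallel.
- Name: Build
  Actions:
    - Name: BuildFrontend
      RunOrder: 1
      ActionTypeId: {Category: Build, Owner: AWS, Provider: CodeBuild, Version: '1'}
      Configuration: {ProjectName: frontend-build}   # project using buildspec-frontend.yml
      InputArtifacts: [{Name: SourceOutput}]
    - Name: BuildBackend
      RunOrder: 1
      ActionTypeId: {Category: Build, Owner: AWS, Provider: CodeBuild, Version: '1'}
      Configuration: {ProjectName: backend-build}    # project using buildspec-backend.yml
      InputArtifacts: [{Name: SourceOutput}]
```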
Update: The information below is now irrelevant since I misunderstood the question.
If your frontend can be deployed to s3, just add its deployment commands where you put your api deployment commands.
e.g.
phases:
  install:
    commands:
      - (cd lambda/src; npm install)
      - aws cloudformation package --template-file lambda/sam.yml --s3-bucket skynet-lambda --output-template-file SkynetLambdaPackaged.yml
      - (cd frontend/src; npm run build)
      - aws s3 sync frontend/dist s3://<insert s3 bucket url here>/ --delete
If your frontend is not on s3, just replace those lines with your own frontend deployment commands.
CodeBuild executes those commands in sequence. If you really need to run them in parallel, there are many ways to do it.
My preference is to put the commands in a Makefile and call them from your buildspec.yml (e.g. make --jobs 2 backend frontend).
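The Makefile approach aside, plain shell backgrounding also works inside a single buildspec command. A tiny sketch (the function bodies are placeholders standing in for real build commands):

```shell
# Placeholder build steps; replace the bodies with real commands.
build_backend()  { echo "backend built"; }
build_frontend() { echo "frontend built"; }

# Run both in the background and fail the build if either fails.
build_backend &  pid_b=$!
build_frontend & pid_f=$!
wait "$pid_b" && wait "$pid_f"
```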
From my understanding, if you just have one source, a CodeBuild project looks for a file named buildspec.yml by default; however, each project can be configured to point at a differently named buildspec file, which is what makes the two-buildspec approach above work. Alternatively, you can try the "Insert build commands" option.