Deploy to AWS EC2 from AWS S3 via Bitbucket Pipelines - amazon-web-services

I have a requirement to do CI/CD using Bitbucket Pipelines.
We use Maven to build our code on Bitbucket Pipelines and push the artifacts (jars) to AWS S3. The missing link is figuring out a way to get the artifacts from S3 and deploy them to our EC2 instance.
It should all work from Bitbucket Pipelines yml - hopefully using Maven plugins.
For pushing the artifacts to S3 we use:
<groupId>com.gkatzioura.maven.cloud</groupId>
<artifactId>s3-storage-wagon</artifactId>
Is there a way/plugin that will download the artifact from the S3 bucket, deploy it to a specific folder on the EC2 instance, and perhaps call a shell script to run the jars?
Thank you!

Use AWS CodeDeploy (https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html) to deploy it to the EC2 instance. The trigger for CodeDeploy would be the S3 bucket that you deploy your jars to. You will need to turn on S3 versioning to make it work. CodeDeploy has its own set of lifecycle hooks that you can use to run any shell command or script on the EC2 instance.
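In practice the CodeDeploy side is driven by an appspec.yml at the root of the revision bundle, and its hooks are where you'd call your own script to run the jars. A minimal sketch - the destination path, script name, and user are all hypothetical:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /opt/myapp          # where the bundle (your jars) is copied on the instance
hooks:
  ApplicationStart:
    - location: scripts/run-jars.sh  # your own script that starts the jars
      timeout: 300
      runas: ec2-user
```

The hook script lives inside the bundle itself, so Bitbucket Pipelines only needs to push the versioned bundle to S3; CodeDeploy handles the rest.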

Related

Install and initialise google-cloud-cli in AWS CodeBuild buildspec.yml

I am trying to push a Docker container image built in an AWS CodeBuild project to GCP Artifact Registry. To push the image from the AWS-managed Ubuntu CodeBuild environment, I need to install and initialise the google-cloud-cli. However, authenticating/activating the CLI with a service account requires a service-account-key.json file containing the service account credentials, as described here: https://cloud.google.com/container-registry/docs/advanced-authentication.
I would like to avoid having to set up an EFS just to pass a JSON file to the build server. What is the best way to authenticate google-cloud-cli with a service account without having to use a JSON file?

AWS reference solution for a serverless SaaS system - Error: Template file not found - template.yml

I'm trying to deploy an AWS reference solution using AWS Cloud9, that demonstrates a sample SaaS system. The repo for the solution is here:
https://github.com/aws-samples/aws-saas-factory-ref-solution-serverless-saas/blob/main/DOCUMENTATION.md
There's a deployment file that can be run in order to bootstrap the example system in AWS - it's /aws-saas-factory-ref-solution-serverless-saas/deployment.sh within the repo.
Part of the deployment script builds bootstrap resources using the file /aws-saas-factory-ref-solution-serverless-saas/server/bootstrap-template.yaml, which is a CloudFormation template. The script then deploys bootstrap resources based on the configuration file /aws-saas-factory-ref-solution-serverless-saas/server/samconfig-bootstrap.toml
The relevant parts of ./deployment.sh that perform the build and bootstrap are as follows. The build and deployment utilise the AWS 'Serverless Application Model' (sam):
sam build -t bootstrap-template.yaml --use-container --region=$REGION
sam deploy --config-file samconfig-bootstrap.toml --region=$REGION --parameter-overrides AdminEmailParameter=$1
When I run ./deployment.sh on a Cloud9 instance on EC2 (in the Singapore region), I get the following error during the common resource bootstrapping stage:
[11034] Failed to execute script main
sam deploy --config-file samconfig-bootstrap.toml --region=ap-southeast-1 --parameter-overrides AdminEmailParameter=chris.halcrow@elenium.com
Error: Template file not found at /home/ec2-user/environment/aws-saas-factory-ref-solution-serverless-saas/server/template.yml
I'm not seeing anything when I search the code for template.yml, so I'm not sure what's attempting to reference this. The file isn't present in the solution.
The guide here describes how to quickly set up an AWS Cloud9 environment running on an EC2 instance, into which the solution can be cloned from its Github repo in order to run the ./deployment.sh script - https://catalog.us-east-1.prod.workshops.aws/workshops/b0c6ad36-0a4b-45d8-856b-8a64f0ac76bb/en-US/pre-requisites/12-own-aws-account
After looking at the build-spec file (buildspec.yaml files provide metadata for CI/CD in AWS CodeBuild), it seems that the AWS team is testing the deployment using CI/CD pipelines and forgot to update the shell script.
You, on the other hand, don't need to worry about the build-spec file.
Solution
Just change template.yml to tenant-template.yaml.
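If you'd rather patch the reference mechanically than edit it by hand, a sed one-liner does it. The sketch below demonstrates the substitution on a throwaway copy rather than the repo's real deployment.sh:

```shell
# Demonstrate the substitution on a throwaway copy of the script
printf 'sam build -t template.yml --use-container\n' > /tmp/deployment-copy.sh

# Rewrite the missing template.yml reference to the template that exists in the repo
sed -i 's/template\.yml/tenant-template.yaml/g' /tmp/deployment-copy.sh

cat /tmp/deployment-copy.sh   # → sam build -t tenant-template.yaml --use-container
```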
I found out that template.yml is an artifact that's normally produced during the build process - see the SAM deploy command reference:
sam deploy
By default when you use this command, the AWS SAM CLI assumes that your current working directory is your project's root directory. The AWS SAM CLI first tries to locate a template file built using the sam build command, located in the .aws-sam subfolder, and named template.yaml.
It seems that template.yml wasn't being created due to insufficient memory on the EC2 instance that Cloud9 was running on.
The solution for this was provided by @ujwalbukka in the following GitHub issue, within the repo for the reference solution:
https://github.com/aws-samples/aws-saas-factory-ref-solution-serverless-saas/issues/19
To get the SaaS reference solution to run in Cloud9, I cloned an additional repo inside Cloud9 (the repo for the AWS workshop for the SaaS reference solution). I cloned this using git clone https://github.com/aws-samples/aws-serverless-saas-workshop.git
After cloning that repo, I ran through the steps here to run scripts that exist in the workshop repo, that increase the Cloud9 EC2 instance size and install the prerequisites for running the reference solution.

How to upload and deploy zip file to AWS elastic beanstalk via CLI?

I do not want to use the console. No manual processes. I need a command line version, something that I can code in my continuous deployment scripts.
As part of the build process, I can output a ZIP file (be it on my local machine or in CI process, e.g: via bitbucket pipelines or AWS codedeploy VM instance).
I want a command like:
aws eb deploy my-app ./server.zip
That is, first upload my chosen zip file and then deploy it (doesn't have to be one command).
The official eb deploy does not seem to support this, nor have I been able to find any other method to do this.
Any ideas would be much appreciated :)
I don't think the eb CLI supports uploading a ZIP and updating an environment, but you can use a combination of AWS CLI commands:
1. Upload the ZIP to S3
2. Create an application version
3. Update the environment
For example:
aws s3 cp deploy.zip s3://mybucket/deploy.zip
aws elasticbeanstalk create-application-version --application-name my-app --version-label 12345 --source-bundle S3Bucket="mybucket",S3Key="deploy.zip"
aws elasticbeanstalk update-environment --application-name my-app --environment-name MyApp-env --version-label 12345
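Strung together as one CI step, the three commands might look like the sketch below (application, environment, and bucket names are the same placeholder values as above; the trailing wait is optional). The aws shell function at the top shadows the real CLI so the snippet can be dry-run without credentials; delete it to execute for real.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Shadow the real CLI so this sketch can be dry-run without AWS credentials;
# delete this function to execute the commands for real.
aws() { echo "dry-run: aws $*"; }

APP=my-app        # hypothetical application name
ENV=MyApp-env     # hypothetical environment name
BUCKET=mybucket   # hypothetical S3 bucket
VERSION=12345     # version label, e.g. a build number or git SHA

# 1. Upload the ZIP to S3
aws s3 cp deploy.zip "s3://$BUCKET/deploy-$VERSION.zip"

# 2. Register it as an application version
aws elasticbeanstalk create-application-version \
    --application-name "$APP" --version-label "$VERSION" \
    --source-bundle S3Bucket="$BUCKET",S3Key="deploy-$VERSION.zip"

# 3. Point the environment at the new version
aws elasticbeanstalk update-environment \
    --application-name "$APP" --environment-name "$ENV" \
    --version-label "$VERSION"

# 4. (Optional) block until the update finishes
aws elasticbeanstalk wait environment-updated --environment-names "$ENV"
```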
I was looking for this answer as well. I found some AWS documentation showing how to configure the Elastic Beanstalk CLI to upload the zip.
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-configuration.html#eb-cli3-artifact
Deploying an artifact instead of the project folder
You can tell the EB CLI to deploy a ZIP file or WAR file that you generate as part of a separate build process by adding the following lines to .elasticbeanstalk/config.yml in your project folder.
deploy:
  artifact: path/to/buildartifact.zip
If you configure the EB CLI in your Git repository, and you don't commit the artifact to source, use the --staged option to deploy the latest build.
~/eb$ eb deploy --staged
I tested it and it did work for me!

Elastic Beanstalk CLI upload version without deploying

Is there a way with EB CLI to upload a zip package to the environment but not actually deploy it? This can be done on the web console but I would like to have the output of each build in the CI server to be uploaded to Elastic Beanstalk via EB CLI but only selected builds actually be deployed (this step will be manual in the web console)
The EB CLI does not have this functionality. I would suggest using the AWS CLI to push the local zip to S3, then registering it with the create-application-version API directly, like this:
aws s3 cp app.zip s3://staging-bucket
aws elasticbeanstalk create-application-version --application-name app-name --version-label deploy-app --source-bundle S3Bucket=staging-bucket,S3Key=app.zip
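If you later want to see what has already been uploaded before picking a build to deploy in the console, describe-application-versions lists the registered versions. A sketch with the same hypothetical application name; the aws shell function shadows the real CLI so the snippet can be dry-run without credentials (delete that line to run it for real):

```shell
# Shadow the real CLI for a safe dry run; remove this line to call AWS for real
aws() { echo "dry-run: aws $*"; }

# List the versions registered for the application
aws elasticbeanstalk describe-application-versions --application-name app-name
```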
Does this help your problem?

Is there a way to deploy to Elastic Beanstalk if you are not using git?

I am developing an Elastic Beanstalk app. It is a Scala web application, built with sbt. I want to deploy the resulting WAR from the command line to an existing environment.
All I can find is the eb CLI which appears to require you to use git: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-getting-started.html
Is there not a way to simply specify a WAR and environment name to perform the deployment?
What is the best workaround otherwise? I can upload to S3 from the command line and then use the web app to choose that file, but it's a bit more painful than I wanted.
You can use the Elastic Beanstalk CLI (eb) instead of the AWS CLI. Just run eb create to create a new environment and eb deploy to update your environment.
You can set a specific artifact (your *.war file) by configuring the EB CLI (read: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-configuration.html#eb-cli3-artifact):
You can tell the EB CLI to deploy a ZIP or WAR file that you generate as part of a separate build process by adding the following lines to .elasticbeanstalk/config.yml in your project folder.
deploy:
  artifact: path/to/buildartifact.zip
I found a way - use the aws CLI instead. First upload to S3 (I actually use s3cmd), then create an application version:
$ aws elasticbeanstalk create-application-version --application-name untaggeddb --version-label myLabel --source-bundle S3Bucket="bucketName",S3Key="key.war"
I believe the application version can then be deployed with update-environment, also using the aws CLI.
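For completeness, that update-environment call might look like the sketch below (the environment name is hypothetical; the version label matches the create-application-version example above). The aws shell function shadows the real CLI so the snippet can be dry-run without credentials:

```shell
# Shadow the real CLI for a safe dry run; remove this line to call AWS for real
aws() { echo "dry-run: aws $*"; }

# Deploy the previously created application version to an existing environment
aws elasticbeanstalk update-environment \
    --environment-name my-env \
    --version-label myLabel
```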