AWS CLI create deployment without appspec file

Is there a way to run AWS CodeDeploy without the use of an appspec.yml file?
I am looking for a 100% command-line way of running create-deployment, without any YAML files in an S3 bucket.

I found examples online with YAML input but not with JSON input. While YAML has its advantages, in my opinion JSON is sometimes easier to work with (in bash/GitLab CI scripts, for example).
Here is how to call aws deploy using JSON, without using S3, constructing the appspec content in a variable:
APPSPEC=$(echo '{"version":1,"Resources":[{"TargetService":{"Type":"AWS::ECS::Service","Properties":{"TaskDefinition":"'${AWS_TASK_DEFINITION_ARN}'","LoadBalancerInfo":{"ContainerName":"react-web","ContainerPort":3000}}}}]}' | jq -Rs .)
Note the jq -Rs . at the end: the appspec content must be passed as a JSON-encoded string, not as part of the surrounding JSON, so we use jq to escape it. Replace the variables as needed (AWS_TASK_DEFINITION_ARN, ContainerName, ContainerPort, etc.).
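To see what the escaping does, here is a minimal sketch of jq -Rs . on a trivial input (the output is a single JSON string, trailing newline included):
echo '{"version":1}' | jq -Rs .
# prints "{\"version\":1}\n"
The escaped appspec can then be embedded in the revision JSON: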
REVISION='{"revisionType":"AppSpecContent","appSpecContent":{"content":'${APPSPEC}'}}'
And finally we can create the deployment with the new revision:
aws deploy create-deployment --application-name "${AWS_APPLICATION_NAME}" --deployment-group-name "${AWS_DEPLOYMENT_GROUP_NAME}" --revision "$REVISION"
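If the script should also block until the rollout finishes, the CLI ships a waiter that polls the deployment; a sketch, capturing the deployment ID from the same call:
DEPLOYMENT_ID=$(aws deploy create-deployment --application-name "${AWS_APPLICATION_NAME}" --deployment-group-name "${AWS_DEPLOYMENT_GROUP_NAME}" --revision "$REVISION" --query deploymentId --output text)
aws deploy wait deployment-successful --deployment-id "$DEPLOYMENT_ID"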
Tested on aws-cli/2.4.15

Unfortunately there is no way to run CodeDeploy without the use of an appspec file.
You can use CodePipeline to deploy your assets to an S3 bucket (which does not require an appspec), but if they then need to go down to an EC2 instance, you would have to find your own way to pull them down.

It's possible to create a deployment without an appspec.yaml file in S3 for AWS Lambda/ECS deployments.
With AWS Cli V2: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/deploy/create-deployment.html
aws deploy create-deployment --cli-input-yaml file://code-deploy.yaml
Where code-deploy.yaml would have the following structure (example for an ECS service):
applicationName: 'code-deploy-app'
deploymentGroupName: 'code-deploy-deployment-group'
revision:
  revisionType: AppSpecContent
  appSpecContent:
    content: |
      version: 0.0
      Resources:
        - TargetService:
            Type: AWS::ECS::Service
            Properties:
              TaskDefinition: "[YOUR_TASK_DEFINITION_ARN]"
              LoadBalancerInfo:
                ContainerName: "ecs-service-container"
                ContainerPort: 8080
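If you'd rather not write the input file by hand, CLI v2 can also generate a YAML skeleton for the command that you can then fill in:
aws deploy create-deployment --generate-cli-skeleton yaml-input > code-deploy.yaml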

Related

AWS Cloudformation | How to Configure Lambda to Use Latest Code in S3 Bucket

Tests3bucketLambda:
  Type: "AWS::Lambda::Function"
  Properties:
    Code:
      S3Bucket: TestS3Bucket
      S3Key: Tests3.zip
    FunctionName: "test-lambda-function"
    Handler: lambda-function-s3.lambda_handler
    Role: !GetAtt LambdaExecutionRole.Arn
    Runtime: python3.6
Issue: I update the code, which is zipped and uploaded to the S3 bucket during the code build, but the change is not deployed to the existing Lambda function.
If you deploy new code to an object with the same key, CloudFormation will not treat it as a change, since the template itself hasn't been modified. There are a few ways to mitigate this.
Use bucket versioning and provide the object version along with the object key: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html
Code:
  S3Bucket: TestS3Bucket
  S3Key: Tests3.zip
  S3ObjectVersion: blablabla....
Modify your object key on each deployment, for example with a timestamp (a small upload sketch follows this list)
Code:
  S3Bucket: TestS3Bucket
  S3Key: Tests3_2021-05-06T17:15:55+00:00.zip
Use automated tools like Terraform or AWS CDK to take care of these things
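For the timestamp-key option, a deploy script can generate the key and upload the bundle in one step; a minimal sketch using the bucket and file names from the example (an epoch timestamp keeps the key shell-friendly, and the template must then reference the same key):
KEY="Tests3_$(date +%s).zip"
aws s3 cp Tests3.zip "s3://TestS3Bucket/${KEY}"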
If you want the lambda to automatically pick up the latest code, that is not possible using CloudFormation alone.
To do that you could sync the code file to an S3 bucket and then try the approach mentioned here: How can AWS Lambda pick the latest versions of the script from S3. I was able to achieve it and have described the solution there.
Expanding on Oleksii's answer, I'll just add that I use a Makefile and an S3 bucket with versioning to handle this issue. A version-enabled S3 bucket creates a new object and a new version number every time a modified file is uploaded (keeping all the old versions and their version numbers). If you don't want a dependency on make in your build/deploy process, this won't be of interest to you.
Make can examine the filesystem and trigger a target action based on an updated file (as a dependency).
Here's a Makefile for a simple stack with one lambda function. The relevant parts of the Cloudformation (CFN) file will be shown below.
.DEFAULT_GOAL := deploy
# Bucket must exist and be versioning-enabled
lambda_bucket = lambda_uploads

# (Recipe lines below must start with a tab)
deploy: lambda.zip
	@set -e ;\
	lambda_version=$$(aws s3api list-object-versions \
	  --bucket $(lambda_bucket) --prefix lambda.zip \
	  --query 'Versions[?IsLatest == `true`].VersionId | [0]' \
	  --output text) ;\
	echo "Running aws cloudformation deploy with ZIP version $$lambda_version..." ;\
	aws cloudformation deploy --stack-name zip-lambda \
	  --template-file test.yml \
	  --parameter-overrides ZipVersionId=$$lambda_version \
	  --capabilities CAPABILITY_NAMED_IAM

lambda.zip: lambda/lambda_func.js
	@zip -j lambda.zip lambda/lambda_func.js
	@aws s3 cp lambda.zip s3://$(lambda_bucket)
The deploy target has a dependency on the lambda.zip target, which itself has a dependency on lambda_func.js. This means the rule for lambda.zip must be brought up to date before the rule for deploy can run.
So, if lambda_func.js has a timestamp newer than the lambda.zip file, an updated zip file is created and uploaded. If not, the rule is not executed, because the lambda function has not been updated.
Now the deploy rule can be run. It:
Uses the AWS CLI to get the version number of the latest (or newest) version of the zip file.
Passes that version number as a parameter to CloudFormation as it deploys the stack, again using the AWS CLI.
Some quirks in the Makefile:
The backslashes and semicolons are required to run the deploy rule as one shell invocation. This is needed to capture the lambda_version variable for use when deploying the stack.
The --query bit is an AWS CLI capability used to extract information from the JSON data returned by the command. jq could also be used here.
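For comparison, here is a sketch of the same lookup done with jq instead of --query:
aws s3api list-object-versions --bucket lambda_uploads --prefix lambda.zip --output json \
  | jq -r '.Versions[] | select(.IsLatest) | .VersionId'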
The relevant parts of the CloudFormation (YAML) file look like this:
AWSTemplateFormatVersion: 2010-09-09
Description: Test new lambda ZIP upload
Parameters:
  ZipVersionId:
    Type: String
Resources:
  ZipLambdaRole: ...
  ZipLambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: zip-lambda
      Role: !GetAtt ZipLambdaRole.Arn
      Runtime: nodejs16.x
      Handler: index.handler
      Code:
        S3Bucket: lambda_uploads
        S3Key: lambda.zip
        S3ObjectVersion: !Ref ZipVersionId
      MemorySize: 128
      Timeout: 3
The zip file is uniquely identified by S3Bucket, S3Key, and S3ObjectVersion. Note that, and this is important, if the zip file was not updated (the version number remains the same as in previous deploys), CloudFormation will not generate a change set; it requires a new version number to do that. This is the desired behavior: there is no new deploy unless the lambda has been updated.
Finally, you'll probably want to put a lifecycle policy on the S3 bucket so that old versions of the zip file are periodically deleted.
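Such a rule can also be set from the CLI; a sketch, with the 30-day expiry window as an assumed value:
aws s3api put-bucket-lifecycle-configuration --bucket lambda_uploads \
  --lifecycle-configuration '{"Rules": [{"ID": "expire-old-zips", "Status": "Enabled",
    "Filter": {"Prefix": ""}, "NoncurrentVersionExpiration": {"NoncurrentDays": 30}}]}'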
This is a bit old, but it needs a concrete answer for those who are starting off.
Oleksii's answer is the correct guideline. However, the way to implement option 2 would be as follows.
I used Java, but the same logic applies to Python too.
Imagine the CloudFormation template for the lambda that you pasted is named cloud_formation_lambda.yml.
Now, in the CodeBuild stage where you prepare the artifact (Tests3 in your case), append a unique identifier such as the epoch timestamp.
Then all you need to do in the build or post-build phase is run a couple of simple Linux commands to accommodate the name change:
First rename the built artifact to append the unique value, such as the epoch
Then use a sed command to replace the occurrences of Tests3 in your CloudFormation template
Thus the buildspec.yml that implements this would look as follows:
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto17
  build:
    commands:
      - echo $(date +%s) > epoch.txt
      - mvn package
  post_build:
    commands:
      - mv target/Tests3.jar target/Tests3-$(cat epoch.txt).jar
      - sed -i "s/Tests3.jar/Tests3-$(cat epoch.txt).jar/g" cloud_formation_lambda.yml
artifacts:
  files:
    # wildcard, since shell substitution is not evaluated in artifact paths
    - target/Tests3-*.jar
    - cloud_formation_lambda.yml
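A later build or pipeline step can then deploy the rewritten template; a minimal sketch, with the stack name as an assumption:
aws cloudformation deploy \
  --stack-name test-lambda-stack \
  --template-file cloud_formation_lambda.yml \
  --capabilities CAPABILITY_NAMED_IAM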

CI/CD with AWS CodePipeline for a Django App

Currently I have an AWS CodeCommit repository and an AWS Elastic Beanstalk environment to which I upload updates with the EB CLI, using eb deploy.
I have some config files that are ignored in .gitignore. I want to set up an AWS CodePipeline so that when I push changes to the repository, it automatically runs the test functions and uploads the changes directly to Elastic Beanstalk.
I tried implementing a simple pipeline where I push code to CodeCommit and it deploys to Elastic Beanstalk, but I get the following error:
2019-09-09 11:51:45 UTC-0500 ERROR "option_settings" in one of the configuration files failed validation. More details to follow.
2019-09-09 11:51:45 UTC-0500 ERROR You cannot remove an environment from a VPC. Launch a new environment outside the VPC.
2019-09-09 11:51:45 UTC-0500 ERROR Failed to deploy application.
This is the *.config file that isn't in CodeCommit:
option_settings:
  aws:ec2:vpc:
    VPCId: vpc-xxx
    Subnets: 'subnet-xxx'
  aws:elasticbeanstalk:environment:
    EnvironmentType: SingleInstance
    ServiceRole: aws-xxxx
  aws:elasticbeanstalk:container:python:
    WSGIPath: xxx/wsgi.py
  aws:elasticbeanstalk:healthreporting:system:
    SystemType: enhanced
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: xxxxsettings
    SECRET_KEY: xxxx
    DB_NAME: xxxx
    DB_USER: xxxx
    DB_PASSWORD: xxxx
    DB_HOST: xxx
    DB_PORT: xxxx
  aws:autoscaling:launchconfiguration:
    SecurityGroups: sg-xxx
I noticed some syntax that is a little different from the rest: the Subnets: value has quotes ('') around it. Could this be causing the issue, and are the other values supposed to be quoted as well?
From the config file it looks like you are using a single instance. For a single-instance environment you don't need to specify an autoscaling launch configuration; just remove the last two lines and it will work fine.
From what I have been reading, I think I should not commit my config files, but add them in CodeBuild so it generates the .zip file that gets deployed to Elastic Beanstalk.
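One way to do that is to have CodeBuild write the ignored config file before zipping the bundle; a sketch, where EB_VPC_CONFIG is an assumed CodeBuild environment variable (or an SSM parameter) holding the file contents:
mkdir -p .ebextensions
echo "$EB_VPC_CONFIG" > .ebextensions/vpc.config   # hypothetical variable and file name
zip -r deploy.zip . -x '*.git*'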

AWS CodePipeline Action execution failed

I'm trying to hook my GitHub repo to S3 so that every time there's a commit, AWS CodePipeline deploys the ./<path>/public folder to a specified S3 bucket.
So far in my pipeline, the Source works (hooked to GitHub and picks up new commits), but the Deploy fails with: Action execution failed
BundleType must be either YAML or JSON.
This is how I set them up:
CodePipeline
Action name: Source
Action provider: GitHub
Repository: account/repo
Branch: master
GitHub webhooks
CodeDeploy
Compute type: AWS Lambda
Service role: myRole
Deployment settings: CodeDeployDefault.LambdaAllAtOnce
IAM Role: myRole
AWS Service
Choose the service that will use this role: Lambda / CodeDeploy
Select your use case: CodeDeploy
Policies: AWSCodeDeployRole
I understand that there must be a buildspec.yml file in the root folder. I've tried using a few files I could find, but they don't seem to work. What did I do wrong, or how should I edit the buildspec file to do what I want?
Update
Thanks to Milan Cermak, I understand I need to do:
CodePipeline:
Stage 1: Source: hook with GitHub repo. This one is working.
Stage 2: Build: use CodeBuild to grab only the wanted folder using a buildspec.yml file in the root folder of the repo.
Stage 3: Deploy: use
Action Provider: S3
Input Artifacts: OutputArtifacts (result of stage 2).
Bucket: the bucket that hosts the static website.
CodePipeline works. However, the output contains only files (.html), not the folders nested inside the public folder.
I've checked this and figured out how to remove the path of a nested folder with discard-paths: yes, but I'm unable to get all the sub-folders inside the ./<path>/public folder. Any suggestions?
CodeBuild uses a buildspec, but CodeDeploy uses an appspec.
Is there an appspec file?
You shouldn't use CodeDeploy, as that's a service for automating deployments of applications; use CodeBuild instead, which executes commands and prepares the deployment artifact for further use in the pipeline.
These commands go in the buildspec.yml file (typically in the root directory of the repo, but this is configurable). For your use case it won't be too complicated, since you're not compiling anything or running tests.
Try this as a starting point:
version: 0.2
phases:
  build:
    commands:
      - ls
artifacts:
  files:
    - public/*
The phases section is required, which is why it's included (at least, thanks to the ls command, you'll see what files are present in the CodeBuild environment), but it's not interesting for your case. What is interesting is the artifacts section. That's where you define the output of the CodeBuild phase, i.e. what gets passed further to the next step in the pipeline.
Depending on how you want the files structured (for example, do you want the public directory itself in the artifact, or only the files without the parent dir), you might want to use other configuration options available in the artifacts section. See the buildspec reference for details.
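For example, to keep the whole directory tree under public/ while dropping the public/ prefix itself, an artifacts section along these lines should work (a sketch based on the buildspec reference):
artifacts:
  base-directory: public
  files:
    - '**/*'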
Remember to use the output artifact of the CodeBuild step as the input artifact of the Deploy to S3 step.
Buildspec is for CodeBuild, as t_yamo pointed out.
You are using CodeDeploy, which uses an appspec.yml file; it looks something like this for my config:
version: 0.0
os: linux
files:
  - source: /
    destination: /path/to/destination
hooks:
  BeforeInstall:
    - location: /UnzipResourceBundle.sh
  ApplicationStart:
    - location: /RestartServer.sh
      timeout: 3600
UnzipResourceBundle.sh is just a bash script, which can be used to do any number of things.
#!/bin/bash
# Do something
You can find a sample of the AppSpec.yml file in the Amazon documentation here: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-example.html#appspec-file-example-lambda
CodePipeline recently announced a deploy to S3 action: https://aws.amazon.com/about-aws/whats-new/2019/01/aws-codepipeline-now-supports-deploying-to-amazon-s3/

How to create and zip a docker container for AWS Lambda

I'm trying to create and then zip a Docker container to upload to S3, to be run by an AWS Lambda function. I was trying to work off an article, but the instructions are sparse (https://github.com/abhisuri97/auto-alt-text-lambda-api).
I've installed Docker and the Amazon Linux image, but I don't know how to create a Docker container that contains the GitHub repo, and then zip it so that it can be accessed by Lambda.
This is what I've tried to piece together from other tutorials:
git clone https://github.com/abhisuri97/auto-alt-text-lambda-api.git
cd auto-alt-text-lambda-api
docker run -v -it amazonlinux:2017.12
zip -r -9 -q ~/main.zip
Any help would be greatly appreciated.
The instructions aren't clear, but I suspect the reference to Docker is just for testing. You don't need Docker to run an AWS Lambda function. You will, however, need an AWS API Gateway API to execute the Lambda function over HTTPS.
I'd recommend starting with a CloudFormation stack using the AWS Serverless Application Model (https://docs.aws.amazon.com/lambda/latest/dg/serverless_app.html).
Create an S3 bucket for the zip file and create a CloudFormation template similar to:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  LambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: application.predict
      Runtime: python2.7
      CodeUri: .  # local source dir; 'aws cloudformation package' uploads it and rewrites this path
      Events:
        HttpGet:
          Type: Api
          Properties:
            Path: '/auto-alt-text-api'
            Method: get
Package the Lambda function with:
aws cloudformation package --template-file template.yaml --output-template-file template-out.yaml --s3-bucket <your-bucket> --s3-prefix <your-prefix>
Then deploy it with:
aws cloudformation deploy --template-file template-out.yaml --stack-name auto-alt-text-lambda-api-stack --capabilities CAPABILITY_IAM
You will probably have to add IAM roles and Lambda permissions to the template for the application to work properly.
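For example, SAM can generate the execution role for you if you list the managed policies the function needs on the function resource; a sketch (AmazonS3ReadOnlyAccess is just an assumed example):
  LambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      ...
      Policies:
        - AmazonS3ReadOnlyAccess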

What is the syntax in serverless yml file to deploy lambda to multiple regions?

I have a requirement to deploy my lambda artifact to 3 different regions. I am using the Serverless Framework.
My .yml file looks like this:
provider:
  name: aws
  runtime: nodejs4.3
  stage: dev
  region: us-east-1
AFAIK it's impossible to configure deployment to multiple regions via serverless.yml. However, you can do it via the CLI, one region at a time:
serverless deploy --stage production --region eu-central-1
serverless deploy --stage production --region eu-west-1
...
You may want to automate it using your own script, implement it as a plugin, or submit a feature proposal.
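A small wrapper script is the usual approach; a minimal sketch, with the region list as an assumption:
#!/bin/bash
set -e
# Deploy the same stage to each target region in turn
for region in us-east-1 eu-central-1 eu-west-1; do
  serverless deploy --stage production --region "$region"
done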