Output AWS CLI YAML output to console - amazon-web-services

I am using the AWS CLI and CloudFormation to create a new S3 bucket.
Here is my yaml file:
AWSTemplateFormatVersion: '2010-09-09'
Description: Creates an S3 bucket
Parameters:
  BucketName:
    Description: Name of the Bucket
    Type: String
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
Outputs:
  ArtifactBucket:
    Value: !Sub ${BucketName}
  BucketArn:
    Value: !GetAtt ArtifactBucket.Arn
    Description: Arn of the new bucket
I run it with the following cli command in a terminal window:
aws cloudformation deploy --stack-name brendan-s3 \
--template-file ComposeEveryApp/create-s3-bucket.yaml \
--profile compose-staging \
--parameter-overrides BucketName=brendan
Everything works fine, and the new bucket shows up in the AWS console.
I'd like to display the ARN of the new bucket in the terminal window. How do I do that?

The command aws cloudformation deploy reports only whether the deployment succeeded; it never prints the stack's Outputs section, so there is no link between the Outputs section and the return value of the command you're executing on the CLI.
If you want the Outputs of a CloudFormation stack, you'll have to use the describe-stacks command, and you'll need to combine it with a client-side filter using --query if you want to output only that specific value.
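For example, a sketch against the stack from the question (the stack name and profile come from your deploy command; BucketArn is the output key in your template):

aws cloudformation describe-stacks --stack-name brendan-s3 \
  --profile compose-staging \
  --query "Stacks[0].Outputs[?OutputKey=='BucketArn'].OutputValue" \
  --output text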
You can find more info on this SO question.

You can use the describe-stacks command. One of its return values is the Outputs section of your stack.
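For example, to dump the whole stack, including its Outputs section, as YAML (the --output yaml option requires AWS CLI v2):

aws cloudformation describe-stacks --stack-name brendan-s3 --profile compose-staging --output yaml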

Related

Lambda code does not get zipped during `package` command when using substacks

I am using CloudFormation to create Lambda functions. The Lambda code is stored in a separate file and packaged with the aws cloudformation package command. This works fine and the stack gets deployed successfully:
# Filename: auth/auth.yml
# Lambda JS file: auth/lambda-pre-signup.js
Resources:
  ## Other resources here
  MyPreSignupLambda:
    Type: AWS::Lambda::Function
    Properties:
      Architectures:
        - arm64
      Code: 'lambda-pre-signup.js'
      Handler: 'lambda-pre-signup.handler'
      Runtime: nodejs16.x
      PackageType: Zip
      Role: !GetAtt MyRole.Arn
Command:
aws cloudformation package --template-file auth.yml --s3-bucket my-bucket --output-template-file generated-auth.yml
aws cloudformation deploy --template-file generated-auth.yml --stack-name test-stack --capabilities CAPABILITY_IAM
However, when I create a root stack template and reference lambda, I get an error:
Resource handler returned message: "Could not unzip uploaded file. Please check your file, then try to upload again. (Service: Lambda, Status Code: 400, Request ID: xxxxx)"
When I check the S3 bucket for the uploaded file, the source code is there but it is not zipped (I can download and directly view the code without needing to unzip it).
Here is my current CF template for root stack:
# Filename: root.yml
Resources:
  MyAuth:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: ./auth/auth.yml
Command:
aws cloudformation package --template-file root.yml --s3-bucket my-bucket --output-template-file generated-root.yml
aws cloudformation deploy --template-file generated-root.yml --stack-name test-root-stack --capabilities CAPABILITY_IAM
Is there some option in the package command to make sure that the uploaded lambda code is zipped?
EDIT: Wrote a wrong argument
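One detail worth checking: the aws cloudformation package documentation says that when a property points to a single file, the file is uploaded as-is, and the command only creates a zip when the property points to a folder. So a possible workaround (a sketch, not verified against nested stacks; the src/ layout is hypothetical) is to move the handler into its own directory and reference the directory:

# Filename: auth/auth.yml
# Lambda JS file moved to: auth/src/lambda-pre-signup.js
Resources:
  MyPreSignupLambda:
    Type: AWS::Lambda::Function
    Properties:
      Architectures:
        - arm64
      Code: 'src/'  # a folder, so package zips it before uploading
      Handler: 'lambda-pre-signup.handler'
      Runtime: nodejs16.x
      PackageType: Zip
      Role: !GetAtt MyRole.Arn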

AWS Cloudformation | How to Configure Lambda to Use Latest Code in S3 Bucket

Tests3bucketLambda:
  Type: "AWS::Lambda::Function"
  Properties:
    Code:
      S3Bucket: TestS3Bucket
      S3Key: Tests3.zip
    FunctionName: "test-lambda-function"
    Handler: lambda-function-s3.lambda_handler
    Role: !GetAtt LambdaExecutionRole.Arn
    Runtime: python3.6
Issue: new code is zipped and uploaded to the S3 bucket during CodeBuild, but the change is not deployed to the existing Lambda function.
If you deploy new code to the object with the same key, CloudFormation will not treat it as a change, since the template itself hasn't been modified. There are a few ways to mitigate this.

1. Use bucket versioning and provide the object version along with the object key (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html):

Code:
  S3Bucket: TestS3Bucket
  S3Key: Tests3.zip
  S3ObjectVersion: blablabla....

2. Modify your object key on each deployment, with a timestamp for example:

Code:
  S3Bucket: TestS3Bucket
  S3Key: Tests3_2021-05-06T17:15:55+00:00.zip

3. Use automated tools like Terraform or AWS CDK to take care of these things.
If you want the Lambda to automatically pick up the latest code, that is not possible using CloudFormation alone.
To do that you could sync the code file to an S3 bucket and then try the approach mentioned here: How can AWS Lambda pick the latest versions of the script from S3. I was able to achieve it and have mentioned the solution there.
Expanding on Oleksii's answer, I'll just add that I use a Makefile and an S3 bucket with versioning to handle this issue. A version-enabled S3 bucket creates a new object and a new version number every time a modified file is uploaded (keeping all the old versions and their version numbers). If you don't want a dependency on make in your build/deploy process this won't be of interest to you.
Make can examine the filesystem and trigger a target action when a file that the target depends on has been updated.
Here's a Makefile for a simple stack with one lambda function. The relevant parts of the Cloudformation (CFN) file will be shown below.
.DEFAULT_GOAL := deploy

# Bucket must exist and be versioning-enabled
# (S3 bucket names may not contain underscores, hence the hyphen)
lambda_bucket = lambda-uploads

# Recipe lines below must be indented with tabs
deploy: lambda.zip
	@set -e ;\
	lambda_version=$$(aws s3api list-object-versions \
	  --bucket $(lambda_bucket) --prefix lambda.zip \
	  --query 'Versions[?IsLatest == `true`].VersionId | [0]' \
	  --output text) ;\
	echo "Running aws cloudformation deploy with ZIP version $$lambda_version..." ;\
	aws cloudformation deploy --stack-name zip-lambda \
	  --template-file test.yml \
	  --parameter-overrides ZipVersionId=$$lambda_version \
	  --capabilities CAPABILITY_NAMED_IAM

lambda.zip: lambda/lambda_func.js
	@zip -j lambda.zip lambda/lambda_func.js
	@aws s3 cp lambda.zip s3://$(lambda_bucket)
The deploy target depends on the lambda.zip target, which itself depends on lambda/lambda_func.js. This means the rule for lambda.zip must be satisfied before the rule for deploy can run.
So, if lambda_func.js has a timestamp newer than the lambda.zip file, an updated zip file is created and uploaded. If not, that rule is skipped because the Lambda function has not been updated.
Now the deploy rule can be run. It:
Uses the AWS CLI to get the version number of the latest (or newest) version of the zip file.
Passes that version number as a parameter to CloudFormation as it deploys the stack, again using the AWS CLI.
Some quirks in the Makefile:
The backslashes and semicolons are required to run the deploy rule as one shell invocation. This is needed to capture the lambda_version variable for use when deploying the stack.
The --query bit is an AWS CLI capability used to extract information from the JSON data returned by the command. jq could also be used here.
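For instance, a jq equivalent of that query (a sketch, assuming the same bucket and key as in the Makefile):

aws s3api list-object-versions --bucket lambda-uploads --prefix lambda.zip \
  | jq -r '.Versions[] | select(.IsLatest) | .VersionId'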
The relevant parts of the Cloudformation (YAML) file look like this:
AWSTemplateFormatVersion: 2010-09-09
Description: Test new lambda ZIP upload
Parameters:
  ZipVersionId:
    Type: String
Resources:
  ZipLambdaRole: ...
  ZipLambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: zip-lambda
      Role: !GetAtt ZipLambdaRole.Arn
      Runtime: nodejs16.x
      Handler: index.handler
      Code:
        S3Bucket: lambda-uploads
        S3Key: lambda.zip
        S3ObjectVersion: !Ref ZipVersionId
      MemorySize: 128
      Timeout: 3
The zip file is uniquely identified by S3Bucket, S3Key, and S3ObjectVersion. Note that, and this is important, if the zip file was not updated (the version number remains the same as in previous deploys), CloudFormation will not generate a change set; it requires a new version number to do that. This is the desired behavior: there is no new deploy unless the Lambda code has been updated.
Finally, you'll probably want to put a lifecycle policy on the S3 bucket so that old versions of the zip file are periodically deleted.
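For example, a hedged sketch of such a bucket definition (the 30-day window is an arbitrary choice, not from the original answer):

LambdaBucket:
  Type: AWS::S3::Bucket
  Properties:
    VersioningConfiguration:
      Status: Enabled
    LifecycleConfiguration:
      Rules:
        - Id: ExpireOldZipVersions
          Status: Enabled
          # Removes noncurrent (superseded) versions 30 days after a newer one lands
          NoncurrentVersionExpiration:
            NoncurrentDays: 30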
These answers to other questions informed this answer.
This is a bit old, but it needs a concrete answer for those who are starting off.
Oleksii's answer is a correct guideline; the way to implement option 2 is as follows.
I used Java, but the same logic applies to Python too.
Imagine the CloudFormation template for the Lambda that you pasted is named cloud_formation_lambda.yml.
Now, in the CodeBuild stage where you prepare the artifact (Tests3 in your case), give it a unique identifier appended to its name, such as the epoch.
Then all you need to do in the build or post-build phase is run a couple of simple Linux commands to accommodate the name change:
1. Rename the built artifact to append the unique value, such as the epoch.
2. Use a sed command to replace the occurrence of Tests3 in your CloudFormation template.
Thus the buildspec.yml that implements this will be as follows
version: 0.2

phases:
  install:
    runtime-versions:
      java: corretto17
  build:
    commands:
      - echo $(date +%s) > epoch.txt
      - mvn package
  post_build:
    commands:
      - mv target/Tests3.jar target/Tests3-$(cat epoch.txt).jar
      - sed -i "s/Tests3.jar/Tests3-$(cat epoch.txt).jar/g" cloud_formation_lambda.yml
artifacts:
  files:
    # Shell expressions are not evaluated here, so match the renamed jar with a glob
    - target/Tests3-*.jar
    - cloud_formation_lambda.yml

Cloudformation CLI Parameters Using Deploy Command

I'm having an issue getting the hang of using cli parameters with cloudformation deploy. I'm trying to pass in the name for the S3 bucket that I want to create, and the cli is complaining when I use --parameters to do this:
aws cloudformation deploy --template-file ../infrastructure.yml --stack-name stripe-python --parameters ParameterKey=S3BucketNameParameter,ParameterValue=lambda-artifacts-948d01bc80800b36
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument subcommand: Invalid choice, valid choices are:
push | register
deregister | install
Obviously, omitting the parameter doesn't work either:
aws cloudformation deploy --template-file ../infrastructure.yml --stack-name stripe-python
An error occurred (ValidationError) when calling the CreateChangeSet operation: Parameters: [S3BucketNameParameter] must have values
When I look at the documentation for cloudformation deploy, it seems to not support --parameters but instead --parameter-overrides, which I've also tried with no success:
aws cloudformation deploy --template-file ../infrastructure.yml --stack-name stripe-python --parameter-overrides S3BucketNameParameter=lambda-artifacts-948d01bc80800b36
An error occurred (ValidationError) when calling the CreateChangeSet operation: Parameters: [S3BucketNameParameter] must have values
So, I'm kind of stumped here. Here's the template file's contents:
cat ../infrastructure.yml
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Lambda application that calls the Stripe API to tokenize and charge credit cards
Parameters:
  S3BucketNameParameter:
    Type: String
    Description: Bucket name for deployment artifacts
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: !Ref S3BucketNameParameter
Any suggestions on the correct approach here?
This works for me:
aws cloudformation deploy --template-file infrastructure.yml --stack-name stripe-python --parameter-overrides S3BucketNameParameter=lambda-artifacts-948d01bc80800b36
It may come down to the awscli version (i.e. check the version you are running and the docs for that version):
aws --version
aws-cli/2.0.44 Python/3.8.5 Darwin/18.7.0 source/x86_64
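For contrast, the ParameterKey=...,ParameterValue=... syntax from the question belongs to create-stack, not deploy; a quick sketch of the two side by side:

# create-stack takes --parameters with ParameterKey/ParameterValue pairs
# (note: create-stack rejects templates with a Transform; shown only for the flag syntax)
aws cloudformation create-stack --stack-name stripe-python \
  --template-body file://infrastructure.yml \
  --parameters ParameterKey=S3BucketNameParameter,ParameterValue=lambda-artifacts-948d01bc80800b36

# deploy takes --parameter-overrides with plain Key=Value pairs
aws cloudformation deploy --template-file infrastructure.yml \
  --stack-name stripe-python \
  --parameter-overrides S3BucketNameParameter=lambda-artifacts-948d01bc80800b36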

How to create and zip a docker container for AWS Lambda

I'm trying to create and then zip a Docker container to upload to S3 to be run by an AWS Lambda function. I was trying to work off an article but the instructions are sparse (https://github.com/abhisuri97/auto-alt-text-lambda-api).
I've installed Docker and the Amazon Linux image but I don't know how to create a Docker container that contains the github repo, and then zip it so that it can be accessed by Lambda.
This is what I've tried to piece together from other tutorials:
git clone https://github.com/abhisuri97/auto-alt-text-lambda-api.git
cd auto-alt-text-lambda-api
docker run -v -it amazonlinux:2017.12
zip -r -9 -q ~/main.zip
Any help would be greatly appreciated.
The instructions aren't clear but I suspect the reference to Docker is just for testing. You don't need Docker to run an AWS Lambda function. You will need an AWS API Gateway API though to execute the Lambda function over HTTPS.
I'd recommend starting with a CloudFormation stack using the AWS Serverless Application Model (https://docs.aws.amazon.com/lambda/latest/dg/serverless_app.html).
Create an S3 bucket for the zip file and create a CloudFormation template similar to:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  LambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: application.predict
      Runtime: python2.7
      Events:
        HttpGet:
          Type: Api
          Properties:
            Path: '/auto-alt-text-api'
            Method: get
Package the Lambda function with:
aws cloudformation package --template-file template.yaml --output-template-file template-out.yaml --s3-bucket <your-bucket> --s3-prefix <your-prefix>
Then deploy it with:
aws cloudformation deploy --template-file template-out.yaml --stack-name auto-alt-text-lambda-api-stack --capabilities CAPABILITY_IAM
You will probably have to add IAM roles and Lambda permissions to the template for the application to work properly.
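As a hedged sketch (the managed policy choice is illustrative, not from the original answer), SAM lets you attach permissions directly on the function through the Policies property:

LambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: application.predict
    Runtime: python2.7
    # Grants basic CloudWatch Logs permissions; add more policies as the app requires
    Policies:
      - AWSLambdaBasicExecutionRole
    Events:
      ...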

Importing AWS information from CloudFormation to CodeBuild

I have a pipeline in AWS with Codestar, CodeBuild, CloudFormation, etc.
I am trying to figure out how to get information from the CloudFormation step returned to the CodeBuild step. Let me break it down:
I have a buildspec.yml for CodeBuild
# buildspec.yml
...
phases:
  ...
  build:
    commands:
      - aws cloudformation package --region $REGION --template template.yml --s3-bucket $S3_BUCKET --output-template $OUTPUT_TEMPLATE
The above kicks off a CloudFormation build using our template.yml
# template.yml
...
S3BucketAssets:
  Type: AWS::S3::Bucket
...
At this point, it creates a unique name for an S3 bucket. Awesome. Now, for step 2 in my buildspec.yml for CodeBuild, I want to push items to the S3 bucket just created in the CloudFormation template. BUT, I don't know how to get the dynamically created name of the S3 bucket from the CloudFormation template. I want something similar to:
# buildspec.yml
...
phases:
  ...
  build:
    commands:
      # this is the same step as shown above
      - aws cloudformation package --region $REGION --template template.yml --s3-bucket $S3_BUCKET --output-template $OUTPUT_TEMPLATE
      # this is the new step
      - aws s3 sync dist_files/ s3://{NAME_OF_THE_NEW_S3_BUCKET}
How can I accomplish getting the dynamically named S3 bucket so that I can push to it?
I am aware that within a CloudFormation template you can reference the bucket with something like !GetAtt [ClientWebAssets, WebsiteURL]. But I do not know how to get that information out of the CloudFormation template and back into the CodeBuild buildspec.
You could move to using CodePipeline. Stage one would deploy the application via CloudFormation, with an output artifact being the stack creation output:
http://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements
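Alternatively, staying inside CodeBuild, the describe-stacks pattern from the first answer above can fetch the bucket name once the stack exposes it. A sketch, assuming the stack is named my-stack and the template is given an Output with the key AssetsBucketName (neither name exists in the template shown):

# buildspec.yml (sketch)
...
  build:
    commands:
      - export ASSETS_BUCKET=$(aws cloudformation describe-stacks --stack-name my-stack --query "Stacks[0].Outputs[?OutputKey=='AssetsBucketName'].OutputValue" --output text)
      - aws s3 sync dist_files/ s3://$ASSETS_BUCKET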