CloudFormation CLI package macro: invalid template file path error

I am attempting to use the CloudFormation Explode macro to add multiple routes to a route table destination. Use of the macro is demonstrated here: https://github.com/awslabs/aws-cloudformation-templates/tree/master/aws/services/CloudFormation/MacrosExamples/Explode
My macro.yaml:
Transform: AWS::Serverless-2016-10-31
Resources:
  MacroFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.7
      PackageType: Zip
      CodeUri: s3://<my bucket name>/macros/lambda/
      Handler: explode.handler
  Macro:
    Type: AWS::CloudFormation::Macro
    Properties:
      Name: Explode
      FunctionName:
        Fn::GetAtt: MacroFunction.Arn
$ aws cloudformation package --template-file s3://<my bucket name>/macros/macro.yaml --s3-bucket <my bucket arn> --output-template-file packaged.yaml
Invalid template path s3://********************/macros/macro.yaml
$ aws cloudformation package --template-file https://<mybucketname.s3.amazonaws.com/macros/macro.yaml --s3-bucket <my bucket arn> --output-template-file packaged.yaml
Invalid template path https://*************.s3.amazonaws.com/macros/macro.yaml
I have tried both the URI and URL forms for the template file path parameter, but have been unable to get the command to accept the --template-file value I pass to it. I have the artifacts located in an S3 bucket: there is a folder in my CloudFormation templates bucket called macros/ which contains the template macro.yaml, in addition to a lambda/ folder which contains explode.py. My macro.yaml references the S3 URI for the lambda code in the CodeUri property.
Not sure what else to try. I am using CloudShell for this operation, as I am limited to working from the management console rather than my local machine, so I do not have the template or the lambda code stored on a local path. Is this the problem? Thank you in advance.
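(For reference: aws cloudformation package expects --template-file to be a local filesystem path, not an S3 URI or URL, and --s3-bucket takes a bucket name rather than an ARN. A minimal sketch of working around this in CloudShell by copying the template down first, with placeholder names:)
# Copy the template from S3 into CloudShell's local filesystem
aws s3 cp s3://<my bucket name>/macros/macro.yaml macro.yaml

# Package against the local copy; note --s3-bucket is the bucket name, not its ARN
aws cloudformation package \
    --template-file macro.yaml \
    --s3-bucket <my bucket name> \
    --output-template-file packaged.yaml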

Related

Proper way to deploy a node/express app to a lambda function

I am looking to deploy a standard node/express app to a lambda function and use it as the back end of my app. I have the API Gateway set up, and everything seems to be working fine, as I get a "hello world" response when I hit the API endpoint for the back end.
The issue is that I need to upload new iterations of the back end, and I don't know how to push the code from my local repo, or GitHub, or anywhere else, onto the lambda server/function.
This page says that you should zip it and push it, but that isn't very descriptive. It almost looks like it would create a new lambda each time that you use it.
zip function.zip index.js
aws lambda create-function --function-name my-function \
--zip-file fileb://function.zip --handler index.handler --runtime nodejs12.x \
--role arn:aws:iam::123456789012:role/lambda-ex
Then there's using SAM to build and deploy the node app. This seems like a lot of work and setup for a simple app like this. I can set up the front end to deploy to an S3 bucket each time I push to master. Can I do something like that with lambda?
I personally would use the SAM CLI. This allows you to build, package, and deploy your lambda locally with the use of a CloudFormation template - IMHO a better way to go for your application stack.
You define all of your resources - API Gateway, resources, routes, methods, integrations, and permissions - in the template. Then use sam deploy to not only create your infrastructure, but also deploy your application code when it changes.
Here is an example SAM CloudFormation template that you place in the root of your repo (this assumes a lambda function that needs to access a DynamoDB table).
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Description: Your Stack Name
Resources:
  DynamoTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: my-table
      AttributeDefinitions:        # required to back the key schema
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      BillingMode: PAY_PER_REQUEST
  Lambda:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: your-lambda-name
      CodeUri: app/
      Handler: app.handler
      Runtime: nodejs14.x
      Environment:
        Variables:
          DYNAMO_TABLE: !Ref DynamoTable
      Policies:
        - AWSLambdaExecute
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - dynamodb:GetItem
                - dynamodb:Query
                - dynamodb:Scan
                - dynamodb:PutItem
                - dynamodb:UpdateItem
              Resource:
                - !Sub 'arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${DynamoTable}'
      Events:
        ApiEvent:
          Type: Api
          Properties:
            Path: /path
            Method: get
Now deploying your code (or updated stack as you add / change resources) is as simple as:
sam deploy --template-file template.yml --stack-name your-stack-name --resolve-s3 --region your-region
The sam deploy command does a lot in one line:
I include the --template-file parameter, but it is really not necessary in this case because the file is called "template.yml", which the deploy command looks for by default.
The --resolve-s3 option will automatically create an S3 bucket to upload your lambda function code to, versus you having to define (and create) a bucket outside of the template. The alternative would be to specify a bucket with --s3-bucket; however, you would have to create that bucket BEFORE attempting to create the stack. This is fine, but you should take it into account when you go to delete your stack: the bucket will not be included in that process, and you need to ensure the bucket is deleted in addition to the stack.
I typically add in a Makefile and do my build, prune, test, etc. as part of my build and deploy,
i.e.:
publish:
	npm run build && \
	npm test && \
	npm prune --production && \
	sam deploy --stack-name my-stack --resolve-s3 --region us-east-1
then
make publish
More than what you asked for, I realize - but perhaps someone else will find utility in this.
Instead of using the aws lambda create-function command, you can use aws lambda update-function-code to update an existing lambda:
aws lambda update-function-code \
--function-name <function-name> \
--zip-file fileb://function.zip

aws cloudformation package transforms some local paths to S3 URIs but not all

Summary:
I use local paths to reference code for lambda functions and a state machine in the template.yml file describing my CloudFormation setup. Transforming these to S3 URIs with aws cloudformation package works for the lambda functions, but not for the state machine that I'm trying to add to the setup.
Details:
I have a SAM/CloudFormation template, template.yml, that relies on paths within my local repo to access both the lambda functions and the state machine definition files.
template.yml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31' # specifies that this is a SAM template, will be transformed to CFN on build
Resources:
  WriteDataToDb:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src                                     # <- here
      Handler: writer.write_to_db
      ...
  ProcessDataStateMachine:
    Type: AWS::Serverless::StateMachine
    Properties:
      DefinitionSubstitutions:
        WriteDataToDbFunctionArn: !GetAtt WriteDataToDb.Arn
      DefinitionUri: statemachine/dataprocessor.json   # <- here
      ...
I use the aws cloudformation package command to translate my template.yml file into a pure CloudFormation file, packaged.yml, and to store the relevant code in an S3 bucket, deploymentpackages (dummy name). The references to the lambda functions in packaged.yml have correctly been translated into S3 URIs. The URI for the state machine file, however, is not converted to an S3 URI but remains the same local path.
packaged.yml:
...
  WriteDataToDb:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: s3://deploymentpackages/33fc42027aae846c97ca8e13fec1bba7   # <- translated
      Handler: writer.write_to_db
      ...
  ProcessDataStateMachine:
    Type: AWS::Serverless::StateMachine
    Properties:
      DefinitionSubstitutions:
        WriteDataToDbFunctionArn:
          Fn::GetAtt:
            - WriteDataToDb
            - Arn
      DefinitionUri: statemachine/dataprocessor.json   # <- not translated
Then, when I try to create a change set from my packaged.yml file, I get the error 'DefinitionUri' is not a valid S3 Uri of the form 's3://bucket/key', which of course makes sense, since it isn't.
My repo is organized in the following way:
├── src
│   ├── __init__.py
│   ├── ...
│   └── writer.py
├── statemachine
│   └── dataprocessor.json
└── template.yml
and I have verified that both the src and statemachine folders make it to the deploymentpackages bucket.
Why would the aws cloudformation package command work for the lambda uri but not the state machine one?
Sadly, DefinitionUri for AWS::Serverless::StateMachine is not supported for such substitutions. In contrast, CodeUri is supported, thus it's correctly changed. (A possible workaround is sketched after the list below.)
The supported properties and resources are:
BodyS3Location property for the AWS::ApiGateway::RestApi resource
Code property for the AWS::Lambda::Function resource
CodeUri property for the AWS::Serverless::Function resource
DefinitionS3Location property for the AWS::AppSync::GraphQLSchema resource
RequestMappingTemplateS3Location property for the AWS::AppSync::Resolver resource
ResponseMappingTemplateS3Location property for the AWS::AppSync::Resolver resource
DefinitionUri property for the AWS::Serverless::Api resource
Location parameter for the AWS::Include transform
SourceBundle property for the AWS::ElasticBeanstalk::ApplicationVersion resource
TemplateURL property for the AWS::CloudFormation::Stack resource
Command.ScriptLocation property for the AWS::Glue::Job resource
DefinitionS3Location property for the AWS::StepFunctions::StateMachine resource
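(Note the last entry in that list: not part of the original answer, but it suggests a workaround - if the state machine is declared as a plain AWS::StepFunctions::StateMachine instead of the SAM resource, aws cloudformation package will upload its DefinitionS3Location. A rough sketch, assuming an execution role StateMachineRole defined elsewhere in the template:)
  ProcessDataStateMachine:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      RoleArn: !GetAtt StateMachineRole.Arn                   # hypothetical IAM role resource
      DefinitionS3Location: statemachine/dataprocessor.json   # <- this property IS packaged
      DefinitionSubstitutions:
        WriteDataToDbFunctionArn: !GetAtt WriteDataToDb.Arn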
I was under the impression when I asked this question that aws cloudformation package and sam package were equivalent, since aws cloudformation package was able to transform local lambda function paths into S3 URIs without problems.
I am using CodeBuild to handle the transformation; here is the buildspec.yml file from when I was having the state machine DefinitionUri error:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - aws cloudformation package --template-file template.yml --s3-bucket deploymentpackages --output-template-file packaged.yml --region eu-central-1
artifacts:
  files:
    - packaged.yml
As it turns out, I really did need to use sam package or sam deploy, so here is a buildspec.yml file that successfully transforms the DefinitionUri from a local path to an S3 URI, in case anyone else hits the same wall:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - pip install --user aws-sam-cli
      - USER_BASE_PATH=$(python -m site --user-base)
      - export PATH=$PATH:$USER_BASE_PATH/bin
      - sam build -t template.yml
      - sam package --template-file .aws-sam/build/template.yaml --s3-bucket deploymentpackages --output-template-file packaged.yml --region eu-central-1
artifacts:
  files:
    - packaged.yml
Inspired by this answer.

Dynamically create resource names in AWS SAM using parameters

I want to dynamically create names for my resources in my CloudFormation stack when using AWS SAM, if this is possible.
E.g. when I package or deploy, I want to be able to add something to the command line like this:
sam package --s3-bucket..... --parameters stage=prod
Then in my template.yml file I want to somehow do something like this:
Resources:
  OrdersApi:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: orders-api-${stage}
      CodeUri: ./src/api/services/orders/
      ...
Note that for the OrdersApi property FunctionName, I want to dynamically set it to orders-api-prod, which is the value I attempted to pass in on the CLI.
I can do this quite easily using the Serverless Framework but I can't quite work out how to do it with SAM.
You can use intrinsic functions like Sub to construct resource names in CloudFormation. Something along these lines:
Parameters:
  stage:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - prod
Resources:
  OrdersApi:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Sub 'orders-api-${stage}'
The answer posted by lexicore is correct, and you can form values in certain parts of the template.yaml file using the !Sub function, e.g.
FunctionName: !Sub 'orders-api-${stage}'
The missing part of why this wouldn't work is that you need to pass the parameters through to the sam deploy command in a specific format. From reading the AWS docs, sam deploy is shorthand for aws cloudformation deploy.... That command allows you to pass parameters using the following syntax:
aws cloudformation deploy .... --parameter-overrides stage=dev
This syntax can also be used with the sam deploy command e.g.
sam deploy --template-file packaged.yml ..... --parameter-overrides stage=dev
Note that in this example stage=dev applies to the Parameters section of the template.yaml file, e.g.:
Parameters:
  stage:
    Type: String
    AllowedValues:
      - dev
      - stage
      - prod
This approach allowed me to pass in parameters and dynamically change values as the CloudFormation stack was deployed.

Using AWS::CodeBuild::Project Environment variable in cloudformation template of repository

I want to create a continuous delivery pipeline for a Lambda function.
As shown in the docs, the custom environment variables of AWS::CodeBuild::Project can be used in buildspec.yaml like:
aws cloudformation package --template-file template.yaml --s3-bucket $MYEVVARKEY --output-template-file outputtemplate.yaml
I wanted to use those CodeBuild project environment variables in the SAM template of the repository as well. As shown below, I tried with dollar signs, but the values are taken as literal text rather than being resolved:
# template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  TimeFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: $MY_FN_NAME_ENV_VAR
      Role: $MY_ROLE_ARN_ENV_VAR
      Handler: index.handler
      Runtime: nodejs8.10
      CodeUri: ./
So, is it possible to utilize CodeBuild project environment variables in a SAM template, and if so, what notation is required to achieve that?
CloudFormation can't refer to environment variables, whether SAM or plain. What you can do is pass environment variables in as parameters via the shell in the CodeBuild buildspec.yaml file (--parameters ParameterKey=name,ParameterValue=${MY_ENV_VAR}).
Remember to add the corresponding parameter to your Parameters section.
If you use aws cloudformation deploy then you should use --parameter-overrides, which is a slightly simpler form:
--parameter-overrides \
YourParam=${YOUR_ENV_VAR} \
Foo=Bar \
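(A sketch of how the two pieces fit together, with hypothetical parameter names FnName and RoleArn:)
# template.yaml - declare the parameters and reference them with !Ref
Parameters:
  FnName:
    Type: String
  RoleArn:
    Type: String
Resources:
  TimeFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Ref FnName
      Role: !Ref RoleArn
      Handler: index.handler
      Runtime: nodejs8.10
      CodeUri: ./

# buildspec.yaml command - the shell expands the env vars before CloudFormation sees them
aws cloudformation deploy --template-file outputtemplate.yaml --stack-name my-stack \
    --capabilities CAPABILITY_IAM \
    --parameter-overrides FnName=${MY_FN_NAME_ENV_VAR} RoleArn=${MY_ROLE_ARN_ENV_VAR}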

set static name of lambda deployed with sam local

I'm following the helloworld example from the sam local repo:
aws-sam-local\samples\hello-world\python
But here is my template.yaml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function
Resources:
  MyLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: lambda_function.lambda_handler
      Runtime: python3.6
      CodeUri: .
      Description: ''
      MemorySize: 128
      Timeout: 3
      Role: 'arn:aws:iam::123345:role/myrole'
I package it up:
sam package --template-file template.yaml --s3-bucket BUCKET --output-template-file packaged-template.yaml
And deploy!
sam deploy --template-file packaged-template.yaml --stack-name test-sam-local --capabilities CAPABILITY_IAM --region REGION
And it works, so that's great, but here is the name of the lambda it created:
test-sam-local-MyLambda-SOME_GUID
Do I have control over that name? I want the name of the function to be statically defined and clobbered whenever this is redeployed (to a function with that same name).
Use FunctionName: MyLambda under Properties. Reference (here).
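(A minimal sketch of that change applied to the template above, with a hypothetical static name; redeploying the stack then updates this same function in place instead of generating a suffixed name:)
  MyLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: my-static-lambda-name   # hypothetical; the deployed function keeps this exact name
      Handler: lambda_function.lambda_handler
      Runtime: python3.6
      CodeUri: .
      Role: 'arn:aws:iam::123345:role/myrole'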