TL;DR: Is there a way to deploy a Lambda function with code in S3 or ECR using a single CloudFormation template?
Say I have two Lambda functions.
Function A has code in an S3 bucket and relies on a Lambda Layer.
Function B is a containerized function with its image in an ECR repository.
Here's my deployment setup for the two functions:
function-a/
    s3-bucket.template        // CFN stack template for the S3 bucket
    lambda-function.template  // CFN stack template for the Lambda function
    deploy.sh                 // Script that creates the S3 bucket stack,
                              //   builds/uploads the code,
                              //   and creates the Lambda function stack
function-b/
    ecr.template              // CFN stack template for the ECR repository
    lambda-function.template  // CFN stack template for the Lambda function
    deploy.sh                 // Script that creates the ECR repository,
                              //   builds/uploads the Docker image,
                              //   and creates the Lambda function stack
Results: 4 CloudFormation stacks, 1 S3 bucket, 1 ECR repository, 2 Lambda functions.
I find this amount of setup for two functions needlessly complex.
I understand that buckets and registries need to exist, but I don't want to explicitly define, deploy, and manage them with extra build steps.
What else I looked at: I checked AWS SAM, but SAM also doesn't absolve me from managing the code deployment myself. I used AWS CDK, which actually does abstract this away, but for certain reasons I don't want to use CDK here at the moment. I do not want to use the Serverless Framework.
I'm disappointed that most of the examples in the CloudFormation and SAM documentation just end up creating buckets and registries manually. That doesn't seem like a scalable way to handle many environments, and it isn't Infrastructure as Code.
Is there a simpler way?
The S3 bucket and ECR repository would be reused for future functionality. So I think of it as two shared resources (the S3 code bucket and the ECR repository) and two new resources (the new Lambda functions).
Most likely you'll have one stack of shared items: things that are used by everything but don't structurally change very much. Then you'll have another stack of application functions, which will likely change much more often. Separating these two kinds of resources is a good idea.
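A minimal sketch of that split in plain CloudFormation might look like the following. Every name below (the logical IDs, the export name shared-code-bucket-name, the key function-a/code.zip) is a placeholder, and your build step still uploads the zip before the application stack is deployed.

# shared-resources.template -- deployed once, changes rarely
Resources:
  CodeBucket:
    Type: AWS::S3::Bucket
  ImageRepository:
    Type: AWS::ECR::Repository
Outputs:
  CodeBucketName:
    Value: !Ref CodeBucket
    Export:
      Name: shared-code-bucket-name

# application.template -- imports the shared bucket, so only this stack changes per function
Resources:
  FunctionARole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
  FunctionA:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Role: !GetAtt FunctionARole.Arn
      Code:
        S3Bucket: !ImportValue shared-code-bucket-name
        S3Key: function-a/code.zip   # uploaded by the build script before deploy

The shared stack only needs to be deployed once per account/region; after that, adding a function only touches the application stack.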
Related
In CDK, while creating a pipeline, it creates new buckets for artifacts. Is there any way to use a pre-existing bucket for every pipeline?
I recently came across the same issue and split my CDK application into multiple stacks. There is an example AWS provides, but what they do with the interface etc. is probably a bit of overkill.
A good solution I found was to split my application into two stacks: one for the S3 buckets and one for everything else. That way I have two scripts in my repo: one that runs cdk deploy for the S3 stack and one that runs cdk deploy for the other stack (all other resources except S3 buckets).
The other good thing is that if you want to use that S3 construct in your code, you can now just pass it the bucket from your S3 stack (i.e. not much code to change, just the reference to it), so it all still stays in the same application; it just has a separate deployment.
I have a Node CDK project with some Python Lambdas. I've put some code into the handler of the Lambda that I've specified in my stack (this being the execute function). I added some gibberish to the start of that function so it would fail or not be valid. When I run cdk synth, it still generates a template. Shouldn't synth do some validation on the Lambdas? If not, how do we validate these Lambdas before deploying?
Thanks
From the AWS documentation:
The Toolkit provides the ability to convert one or more AWS CDK stacks to AWS CloudFormation templates and related assets (a process called synthesis) and to deploy your stacks to an AWS account.
cdk synth does not do any additional validation of the underlying CloudFormation resources -- it simply converts the CDK code into CloudFormation templates.
You have to add this functionality yourself before deployment. One way to achieve this could be running a local SAM test suite.
Let's say I have a CloudFormation stack running which creates and deploys a Lambda function. In the AWS Console, if I connect my Lambda function to an API in API Gateway, will my CloudFormation template be updated immediately once the Lambda function successfully integrates with the API?
It's one-way traffic from CloudFormation to the resources.
Meaning if you modify your CloudFormation template and update the stack, the resources that were created by CloudFormation get modified/updated. The other way around is not true: if you modify your resources, the CloudFormation template does not get updated.
Moreover, as a good practice you should avoid modifying resources directly, because you may end up breaking CloudFormation's update-stack functionality for that stack.
I've been working with CloudFormation YAML for a while and have found it to be comprehensive -- until now. I'm struggling to use SAM/CloudFormation to create a Lambda function that is triggered whenever an object is added to an existing S3 bucket.
All of the examples I've seen so far seem to require that you create the bucket in the same CloudFormation script in which you create the Lambda function. This doesn't work for me, because we have a design goal to be able to use CloudFormation to redeploy our entire stack to different regions or AWS accounts and quickly stand up our application. S3 bucket names must be globally unique, so if I create the bucket in CloudFormation, the script will break when I try to deploy it to a different region/account. I could probably get around this by including the account name/region in the bucket name, but that's just not desirable from a bucket-sprawl perspective.
So, does anyone have a solution for creating a Lambda function in CloudFormation that is triggered by objects being written to an existing S3 bucket?
Thanks!
According to the SAM team, this is impossible; it's something the underlying CloudFormation service can't do.
There is a possible workaround: implement a custom resource that triggers a separate Lambda function to modify the existing bucket and link it to the Lambda function you want to deploy.
As "implement a custom resource" isn't very specific: here is an AWS GitHub repo with scaffold code to help write it, and then you declare something like the following in your template (where LambdaToBucket is the custom-resource function you wrote). I've found that you need to configure two things in that function: one is a bucket notification configuration on the bucket (telling S3 to notify Lambda about changes), the other is a Lambda permission on the function (allowing invocations from S3).
Resources:
  JoinLambdaToBucket:
    Type: Custom::JoinLambdaToExistingBucket
    Properties:
      ServiceToken: !GetAtt LambdaToBucket.Arn
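The bucket notification itself has to be applied by the custom-resource function, since CloudFormation can't declare it on a bucket it doesn't own, but the Lambda permission is an ordinary resource and can sit next to the custom resource above. A rough sketch, where TargetFunction and the bucket name are placeholders:

  # add under the same Resources section as the custom resource above
  AllowS3Invoke:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt TargetFunction.Arn    # the function the bucket should trigger
      Action: lambda:InvokeFunction
      Principal: s3.amazonaws.com
      SourceAccount: !Ref AWS::AccountId          # guards against a same-named bucket in another account
      SourceArn: arn:aws:s3:::my-existing-bucket  # ARN of the pre-existing bucket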
We're building an API using AWS SAM, built on the Lambda Node template in CodeStar. Things were going well until our template.yml file became too big. Whenever the code is pushed and CloudFormation starts to execute the change set and create a stack for the SAM endpoints, it fails and rolls back to the last successful build.
It seems that we have too many resources, exceeding the CloudFormation limit per stack.
I tried splitting the template file, editing the buildspec to handle two template files with two aws cloudformation package commands, and adding another artifact. But that didn't work either: only the first template is recognized and only one stack is created.
I can't find a way to make an automated deployment that creates multiple stacks.
I'd appreciate some input on this and suggestions for handling such a scenario.
Thanks in advance.
You should try the nested stacks pattern. Instead of splitting your current stack into multiple parallel stacks, you create a parent stack that in turn creates multiple child stacks.
More information here.
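Roughly, the parent template just declares AWS::CloudFormation::Stack resources pointing at the child templates. The paths and names below are placeholders; aws cloudformation package can upload local child templates and rewrite TemplateURL for you.

Resources:
  ApiStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: ./api-stack.yaml         # rewritten to an S3 URL by the package command
  WorkersStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: ./workers-stack.yaml
      Parameters:
        ApiId: !GetAtt ApiStack.Outputs.ApiId   # a child's outputs can feed another child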
AWS SAM (as of SAM v1.9.0) supports nested applications, which map to nested CloudFormation stacks and get around the 200-resource limit (AWS::Serverless::Application transforms into an AWS::CloudFormation::Stack).
https://github.com/awslabs/serverless-application-model/releases/tag/v1.9.0
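A nested application is declared in the parent SAM template roughly like this (the local path and the Stage parameter are illustrative); the SAM/CloudFormation package step uploads the nested template for you:

Transform: AWS::Serverless-2016-10-31
Resources:
  CrudApi:
    Type: AWS::Serverless::Application
    Properties:
      Location: ./crud-api/template.yaml   # nested SAM template in the repo
      Parameters:
        Stage: prod                        # passed through to the child template's parameters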
The main thing to look at is which components you have in your SAM template. Are there any dependencies? Do all functions share the same API Gateway? Do all functions access the same DynamoDB table?
In my case, I split the SAM application by API (API Gateway + CRUD functions) in a monorepo, where each folder contains its own SAM template.
If you have a shared service like Redis, SNS, or SQS, you can put it in a separate stack and use the export/import feature to import the ARN of the service.