Still waiting for actual AWS support for this:
https://github.com/aws-samples/aws-serverless-samfarm/issues/5
How is this supposed to work?
My use case: I have an API Gateway-fronted Lambda that writes events to an SNS topic, and another Lambda that is subscribed to that topic.
Could these lambdas be in separate repos? Yes. Is part of the purpose of using a pub/sub pattern to separate these two applications in the first place? Yes.
But this is a simple app. The topic won't be shared with other functions and the whole thing is self-contained. It should all be deployed together, ideally from a single template.
I can easily add all the functions I want to my SAM template, but how do I deploy them? Should they each have a different CodeUri? That means I need to script copying and installing each Lambda's dependencies into a different folder, then point the CodeUri for each Lambda in the template at that folder.
Is there any better support for this?
You can have as many AWS::Serverless::Function resources in a single template as you want, as long as each has a unique logical ID.
If you prefer to keep several Lambda functions in a single repository, you will have to provide a different CodeUri for each Lambda, for instance CodeUri: ./handler-my-lambda-one.zip and CodeUri: ./handler-my-lambda-two.zip.
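For example, a single template along these lines would deploy both functions together (the logical IDs, runtime, and handler names here are illustrative, not from the original post):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ApiFunction:                      # unique logical ID
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      CodeUri: ./handler-my-lambda-one.zip
  SubscriberFunction:               # unique logical ID
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      CodeUri: ./handler-my-lambda-two.zip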
Usually, it's good practice to have a Makefile in your repository with a build target responsible for preparing the handler-my-lambda-*.zip artifacts, something like:
build:
	rm -rf node_modules
	npm install
	zip -r handler.zip index.js lib node_modules
and a deploy target that packages your code (uploads it to S3) and deploys the CloudFormation stack. The package command uploads the zip artifact specified in CodeUri and replaces it with the S3 URL in sam.out.yaml:
deploy:
	aws cloudformation package \
		--template-file sam.yaml \
		--output-template-file sam.out.yaml \
		--s3-bucket my-artifact-bucket-name
	# deploy requires a stack name; CAPABILITY_IAM because SAM creates IAM roles
	aws cloudformation deploy \
		--template-file sam.out.yaml \
		--stack-name my-stack-name \
		--capabilities CAPABILITY_IAM
Since you decided to have multiple Lambdas in a single repository, you would probably have a separate build step per Lambda function and some cd ... logic to change the working directory per function, as in the sketch below.
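A minimal sketch of that per-function build (the directory names here are hypothetical):

build:
	# each recipe line runs in its own shell, so the cd applies per line
	cd lambda-one && rm -rf node_modules && npm install && zip -r ../handler-my-lambda-one.zip index.js lib node_modules
	cd lambda-two && rm -rf node_modules && npm install && zip -r ../handler-my-lambda-two.zip index.js lib node_modules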
Related
I am trying to convert a SAM project to a CloudFormation template in order to call
cloudformation.createStack()
to create multiple stacks when a lambda is invoked. So far I can upload the SAM project with
sam build
sam package
But the size of the S3 artifact is too big and I am getting errors. What are the steps to correctly upload the CloudFormation template?
These prerequisites need to be met before continuing:
1. Install the SAM CLI.
2. Create an Amazon S3 bucket to store the serverless code artifacts that the SAM template generates. At a minimum, you will need permission to put objects into the bucket.
3. The permissions applied to your IAM identity must include iam:ListPolicies.
4. You must have AWS credentials configured either via the AWS CLI or in your shell's environment via the AWS_* environment variables.
5. Git installed.
6. Python 3.x installed.
7. (Optional) Install Python's virtualenvwrapper.
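Assuming those prerequisites, a minimal setup might look like this (installation methods vary; this is one common route):

# Install the SAM CLI into a Python 3 environment
python3 -m pip install aws-sam-cli
# Configure AWS credentials if they are not already set
aws configure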
Source: https://www.packetmischief.ca/2020/12/30/converting-from-aws-sam-to-cloudformation/
I am hoping for some clarification around some terms I have been seeing on the web as they pertain to AWS, specifically Lambdas. For starters, I would like to know how the sam build/sam deploy commands work versus setting up a CodeBuild job. Do I need a CodeBuild job to run those commands? What files specifically does the sam deploy command look for? Does it look for serverless.yml or template.yml, or both? What is a sam.yml file, or is that antiquated?
I have an app with a CodeBuild pipeline for one Lambda, but I am expanding my repo to contain multiple Lambdas and thinking about putting a serverless.yml file in each Lambda directory; I don't want to create a CodeBuild job and buildspec for each one. I assume sam deploy searches for all template.yml and serverless.yml files and constructs your stack as a whole (and updates only what needs to be updated)?
The app is in Node and uses API Gateway, if that matters. Any insight would be appreciated.
I will try to give brief answers:
What does sam deploy do: it zips the code, generates a packaged CloudFormation template (under the .aws-sam folder), and runs a CloudFormation deployment.
Do we need CodeBuild to run sam deploy: we still need some environment with Node installed to run sam build/sam deploy; that could be a local machine, a remote server, or a CodeBuild environment (see the buildspec sketch after this list).
Do we need multiple templates? All Lambdas can be created in a single template, but CloudFormation limits the number of resources per stack (historically 200, now 500), and each function or API can expand into multiple CloudFormation resources, e.g. one Lambda function can turn into IAM roles, CloudWatch log groups, API routes, methods, integrations, event sources, etc. With too many functions and APIs in a single template, you will hit that limit quickly.
Does sam deploy always look for template.yaml? By default, yes, but that is easily overridden by passing --template-file: sam deploy --template-file template-x.yml
Are only changed resources updated? CloudFormation's stack update only touches the resources whose definitions have changed.
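As a sketch of the CodeBuild option, a buildspec along these lines could run the SAM commands in a pipeline (the stack name, bucket name, and Node version are placeholders, not from the original post):

version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 12
  build:
    commands:
      - sam build
      - sam deploy --stack-name my-app --s3-bucket my-artifact-bucket --capabilities CAPABILITY_IAM --no-confirm-changeset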
As part of packaging the SAM application, the application is published to an S3 bucket as shown below:
sam package --template-file sam.yaml --s3-bucket mybucket --output-template-file output.yaml
Why does sam package provide the --s3-bucket option? Is this option mandatory?
What is the purpose of publishing artifacts to an S3 bucket?
The --s3-bucket option in the sam package command is mandatory. What the command does is take your local code, upload it to S3, and return a transformed template in which the source location of your local code has been replaced with the S3 URI (the URI of the zipped code object in the bucket).
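For instance, a function's CodeUri changes roughly like this between the input and output templates (the object key shown is illustrative):

# sam.yaml (input)
CodeUri: ./my-function/
# output.yaml (after sam package)
CodeUri: s3://mybucket/3cdf4ea8b8f1a2b9c0d1e2f3a4b5c6d7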
The main advantage of uploading the artifact to S3 is that it is faster to deploy code that already sits within the AWS network than to send it over the Internet during deployment.
Another thing is that plain CloudFormation lets you inline Lambda function code without pushing it to S3, but there are limitations to this approach: inline code is capped at 4,096 characters, and if your function needs external libraries that are not part of the AWS-provided Lambda environment for a particular runtime, or your function's code is large, you still need to zip the code together with its dependencies and upload it to S3 before continuing. SAM just makes this easier so that you don't have to do this step manually.
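For reference, a minimal sketch of the inline approach in plain CloudFormation (the role ARN is a placeholder; a real template would declare the role):

Resources:
  InlineFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      Role: arn:aws:iam::123456789012:role/lambda-exec-role  # placeholder
      Code:
        ZipFile: |
          // no node_modules available here; only built-in modules work
          exports.handler = async () => 'hello';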
I'm reading this doc: https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/lambda-build-test-severless-app.html
I created a serverless app using the "Blog API using DynamoDB" template.
When I publish from VS, it deploys to AWS as a serverless app, but what commands is it running? How can I publish it from the command line (without VS)?
When I look at the serverless.template file the project comes with, I just see parameter and resource definitions for the AWS::Serverless::Function resources and the DynamoDB table. Where is the pointer/config that registers this as an "Application" in the Lambda console, and not just a bunch of functions?
It's using the Serverless Application Model (or SAM for short). It's an abstraction on top of standard CloudFormation templates; it allows you to declare serverless application resources in a more succinct way. It also comes with a CLI, and my guess would be that's what's running behind the scenes.
You can try it by yourself. After installing the SAM CLI, run sam build, sam package and sam deploy. That should get you off the ground.
sam build --template serverless.template # --use-container if necessary, needs Docker
sam package --output-template-file packaged.yml --s3-bucket ARTIFACTS_BUCKET
sam deploy --template-file packaged.yml --stack-name my-serverless-app --capabilities CAPABILITY_IAM
I followed this tutorial https://aws.amazon.com/blogs/devops/aws-building-a-secure-cross-account-continuous-delivery-pipeline/ to set up cross account deployments of our lambdas using Cloudformation as my automation tool.
I'm using the pipeline in this repo: https://github.com/awslabs/aws-refarch-cross-account-pipeline/blob/master/ToolsAcct/code-pipeline.yaml (the pipeline starts at line 207 and lives in the ToolsAcct/ directory).
I am able to successfully deploy the first lambda; however, any subsequent deployment replaces the old lambda, and I want both lambda_1 and lambda_2 present in the console, not just the latest one.
To deploy the second lambda, out of the tutorial's 6 steps I rerun steps 4 and 5, like below:
4. In the Tools account, which hosts AWS CodePipeline, execute this CloudFormation template. This creates a pipeline, but does not add permissions for the cross accounts (Dev, Test, and Prod):
aws cloudformation deploy --stack-name sample-lambda-pipeline \
  --template-file ToolsAcct/code-pipeline.yaml \
  --parameter-overrides DevAccount=ENTER_DEV_ACCT TestAccount=ENTER_TEST_ACCT \
    ProductionAccount=ENTER_PROD_ACCT CMKARN=FROM_1st_STEP \
    S3Bucket=FROM_1st_STEP \
  --capabilities CAPABILITY_NAMED_IAM
5. In the Tools account, execute this CloudFormation template, which gives access to the role created in step 4. This role will be assumed by AWS CodeBuild to decrypt artifacts in the S3 bucket. This is the same template that was used in step 1, but with different parameters:
aws cloudformation deploy --stack-name pre-reqs \
--template-file ToolsAcct/pre-reqs.yaml \
--parameter-overrides CodeBuildCondition=true
After running both of these steps to deploy the second lambda, it deploys successfully but replaces the lambda that was deployed earlier.
How can I keep the existing lambda while deploying new ones, and have all lambdas present in the console, not just the latest one that was deployed?
My guess would be that by rerunning steps 4 and 5, I'm creating a changeset against the previously deployed lambda's stack, and that is why it keeps replacing the old lambda in the console.
If my guess is correct, how can I reuse the same pipeline to deploy different lambdas without replacing the previously deployed ones? Is there an attribute of the CloudFormation pipeline resource that I'm missing?
It sounds like you're trying to use a single pipeline to deploy multiple independent services/projects. This causes problems whenever you "switch" projects: each template won't contain the other project's resources, so CloudFormation will conclude those resources need to be removed.
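To make the failure mode concrete, deploying two different templates to the same stack name means each deploy removes whatever the other created (the stack and template names here are illustrative):

aws cloudformation deploy --template-file lambda-one.yaml --stack-name shared-stack --capabilities CAPABILITY_IAM
# later, from the other project:
aws cloudformation deploy --template-file lambda-two.yaml --stack-name shared-stack --capabilities CAPABILITY_IAM
# the second deploy deletes lambda-one's function because it is absent from lambda-two.yaml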
You can either:
Add all the Lambda functions together in a single template
Set up a separate pipeline per set of functions