As part of packaging the SAM application, the application is published to an S3 bucket as shown below:
sam package --template-file sam.yaml --s3-bucket mybucket --output-template-file output.yaml
Why does sam package provide the --s3-bucket option? Is this option mandatory?
What is the purpose of publishing artifacts to an S3 bucket?
The --s3-bucket option in the sam package command is mandatory. The command takes your local code, uploads it to S3, and returns a transformed template in which the source location of your local code has been replaced with the URI of the uploaded object (the zipped code) in the S3 bucket.
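For illustration, here is roughly what that rewrite looks like; the function definition, runtime, bucket name, and object key below are made up:

Before packaging:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: nodejs18.x
      CodeUri: ./src                              # local folder containing the handler code

After sam package (in output.yaml):
      CodeUri: s3://mybucket/<generated-object-key>   # zipped code uploaded by sam package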
The main advantage of uploading the artifact to S3 is that deploying code that already sits within the AWS network is faster than sending it over the Internet during deployment.
Another thing is that plain CloudFormation lets you inline Lambda function code without pushing it to S3, but there are limitations to this approach. If your Lambda function needs external libraries that are not part of the AWS-provided Lambda environment for a particular runtime, or if your function's code is large, you still need to zip the function's code together with its dependencies and upload it to S3 before continuing. SAM simply makes this easier so that you don't have to do that step manually.
Related
I am trying to convert a SAM project to a CloudFormation template in order to call
cloudformation.createStack()
to create multiple stacks when a lambda is invoked. So far I can upload the SAM project with
sam build
sam package
But the size of the artifact in S3 is too big and I am getting errors. What are the steps to correctly upload the CloudFormation template?
These prerequisites need to be met before continuing:
1. Install the SAM CLI.
2. Create an Amazon S3 bucket to store the serverless code artifacts that the SAM template generates. At a minimum, you will need permission to put objects into the bucket.
3. The permissions applied to your IAM identity must include iam:ListPolicies.
4. You must have AWS credentials configured, either via the AWS CLI or in your shell's environment via the AWS_* environment variables.
5. Git installed.
6. Python 3.x installed.
7. (Optional) Install Python's virtualenvwrapper.
Source: https://www.packetmischief.ca/2020/12/30/converting-from-aws-sam-to-cloudformation/
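With those prerequisites in place, the packaging itself is the same sequence shown earlier in this thread; a minimal sketch (the bucket name is a placeholder, and after sam build the CLI picks up the built template by default) would be:

sam build
sam package --s3-bucket mybucket --output-template-file packaged.yaml

The resulting packaged.yaml has the code locations rewritten to S3 and can then be passed to cloudformation.createStack(); note that because it still uses the SAM transform, the stack needs to be created with the CAPABILITY_AUTO_EXPAND capability.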
I am very new to AWS.
I have a task where I need to write a CodePipeline to copy files from CodeCommit to an S3 bucket, but the condition is that only updated/deleted files should get copied. I have already written the pipeline, but it copies all of the files to the S3 bucket.
I tried writing AWS Lambda + Python code, but it works the same way. I used the code from the link below:
https://medium.com/#michael.niedermayr/using-aws-codecommit-and-lambda-for-automatic-code-deployment-to-s3-bucket-b35aa83d029b
Any help or suggestions would be appreciated.
Instead of using the S3 deploy action to copy your files, use a CodeBuild action in your CodePipeline.
In the CodeBuild buildspec, use the 'aws s3 sync' command with the '--delete' switch to sync files to the S3 bucket. Make sure the CodeBuild service role has permission to write to the S3 bucket.
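A minimal buildspec sketch of that approach, assuming the repository contents should mirror the bucket root and using a placeholder bucket name:

version: 0.2
phases:
  build:
    commands:
      # mirror the repo to the bucket; --delete removes objects that no longer exist in the source
      - aws s3 sync . s3://my-target-bucket --delete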
I am hoping for some clarification around some terms I have been seeing on the web as they pertain to AWS and specifically Lambdas. For starters, I would like to know how the sam build/deploy commands work versus setting up a CodeBuild job. Do I need a CodeBuild job to run those commands? What files specifically does the sam deploy command look for? Does it look for serverless.yml, template.yml, or both? What is a sam.yml file, or is that antiquated?
I have an app with a CodeBuild pipeline for a Lambda, but I am expanding my repo to contain multiple Lambdas and am thinking about putting a serverless.yml file in each Lambda directory, though I don't want to create a CodeBuild job and buildspec for each one. I assume sam deploy searches for all template.yml and serverless.yml files and constructs your stack as a whole (and updates only what needs to be updated?).
The app is in Node and uses API Gateway, if that's relevant. Any insight would be appreciated.
I will try to give brief answers:
What does sam deploy do: It will zip the code, create the CloudFormation YAML file in the .aws-sam folder, and run a CloudFormation deploy.
Do we need CodeBuild to run sam deploy: We still need some machine with Node installed to run sam deploy or sam build; that could be a local machine, a remote server, or a CodeBuild environment.
Do we need multiple templates? All Lambdas can be created in a single template (see the sketch after this list), but there is a limit on the number of resources in a single CloudFormation stack (currently 500; previously 200). If we have too many functions and APIs in one template, we can hit that limit quickly, because each function may get converted into multiple CloudFormation resources: for example, one Lambda function can expand into IAM roles, CloudWatch log groups, API routes, methods, integrations, event source mappings, and so on.
Does sam deploy always look for template.yaml? By default, yes, but this can easily be overridden by passing --template-file: sam deploy --template-file template-x.yml
Are only changed resources updated? CloudFormation's update-stack updates only the resources that have changed.
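To illustrate the single-template point above, a minimal sketch of two functions in one SAM template (names, runtime, and paths are invented) could look like:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  FirstFunction:                      # unique logical ID
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: ./first-function
  SecondFunction:                     # another unique logical ID
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: ./second-function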
Still waiting for actual AWS support for this:
https://github.com/aws-samples/aws-serverless-samfarm/issues/5
How is this supposed to work?
My use case is that I have an API Gateway-fronted Lambda that writes events to an SNS topic, and another Lambda that is subscribed to that topic.
Could these lambdas be in separate repos? Yes. Is part of the purpose of using a pub/sub pattern to separate these two applications in the first place? Yes.
But this is a simple app. The topic won't be shared with other functions and the whole thing is self-contained. Ideally it should all be deployed together in the same template.
I can easily add all the functions I want to my SAM template, but how do I deploy them? Should they each have a different CodeUri? That means I need to script copying and installing each Lambda's dependencies into a different folder, then point each Lambda's CodeUri in the template to its folder.
Is there any better support for this?
You can have as many AWS::Serverless::Function resources in a single template as you want, as long as each has a unique logical ID.
If you prefer to keep several Lambda functions in a single repository, you will have to provide a different CodeUri for each Lambda, for instance CodeUri: ./handler-my-lambda-one.zip and CodeUri: ./handler-my-lambda-two.zip.
Usually it's good practice to have a Makefile in your repository with a build target responsible for preparing the handler-my-lambda-*.zip artifacts, something like:
build:
	rm -rf node_modules
	npm install
	zip -r handler.zip index.js lib node_modules
and a deploy target that would package your code (upload the code to S3) and deploy the CloudFormation stack.
The package command is responsible for uploading the zip artifact specified in CodeUri and replacing it with the S3 URL in sam.out.yaml:
deploy:
	aws cloudformation package \
		--template-file sam.yaml \
		--output-template-file sam.out.yaml \
		--s3-bucket my-artifact-bucket-name
	# --stack-name is required by the deploy command (the name here is just an example);
	# CAPABILITY_IAM is needed because SAM creates IAM roles for the functions
	aws cloudformation deploy \
		--template-file sam.out.yaml \
		--stack-name my-stack \
		--capabilities CAPABILITY_IAM
Since you decided to have multiple Lambdas in a single repository, you would probably have a separate build step for each Lambda function, plus some cd ... logic to change the working directory per function, as sketched below.
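As a rough sketch (the directory and artifact names here are invented), that build target could look something like:

build:
	# each line runs in its own shell, so the cd does not leak between functions
	cd lambda-one && npm install && zip -r ../handler-my-lambda-one.zip index.js lib node_modules
	cd lambda-two && npm install && zip -r ../handler-my-lambda-two.zip index.js lib node_modules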
Newish to AWS, very new to CI/CD, and I have a question about deploying a React website to an S3 bucket.
I've got me a Git repo that contains a React web app.
I've set up an AWS CodePipeline project, which polls the master branch of the repo looking for commits.
It then triggers the AWS CodeBuild project, which builds the React app as defined in buildspec.yml.
In the example/tutorial I followed, the buildspec.yml had the following...
  post_build:
    commands:
      - aws s3 cp dist s3://${S3_BUCKET} --recursive
...which copies the build output to the destination S3 Bucket.
It all works great, however this assumes that the S3 Bucket is already there.
Question: Which step should be responsible for creating the destination S3 Bucket and what should I be using to do so?
I'm thinking that perhaps it should be a CodeDeploy step with another CloudFormation template.
Just after a little guidance before going down the wrong route :)
Many thanks.
Ok, so I think I found the tutorial you were referring to: https://www.karelbemelmans.com/2017/01/deploying-a-hugo-website-to-amazon-s3-using-aws-codebuild/
Can you specify two post_build commands? You could just create the bucket first, which might fail if the bucket already exists, but who cares, right? Or you could check whether the bucket exists and create it only if it doesn't.
Here's the s3 command you need to create a bucket:
https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html
There's an API for list buckets but I can't post it because this new user doesn't have 10 reputation yet unfortunately.
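For example, a sketch of what the extra post_build command could look like, reusing the S3_BUCKET variable from the snippet above (head-bucket exits non-zero when the bucket doesn't exist):

  post_build:
    commands:
      # create the bucket only if it doesn't already exist
      - aws s3api head-bucket --bucket ${S3_BUCKET} || aws s3api create-bucket --bucket ${S3_BUCKET}
      - aws s3 cp dist s3://${S3_BUCKET} --recursive

Note that outside us-east-1, create-bucket also needs --create-bucket-configuration LocationConstraint=<region>.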
Good luck,
-Asaf