Configure serverless framework not to upload to S3? - amazon-web-services

I need to deploy a serverless function to AWS Lambda using the Serverless Framework. Serverless uses AWS CloudFormation to build the whole stack and uploads the packaged module to S3 by default, but my artifact is less than 10 MB, which could be attached to the Lambda directly. How can serverless.yml be configured to achieve this?

This is not possible.
You've asked serverless to create a CloudFormation template that creates some Lambdas. When AWS executes the template, it executes it in the cloud, away from your computer's local files. That's why your code is packaged, uploaded to S3, and made available for CloudFormation to use.
CloudFormation does allow code to be inlined in the template, but serverless does not support this. And there is no way to ask CloudFormation to create a Lambda without code attached, for manual upload at a later date.
Frankly, the cost of the additional bucket and a few small files is minimal (if any). If the concern is the extra deployment bucket, you can specify a deployment bucket name to be shared across multiple serverless deployments.
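For reference, a minimal serverless.yml sketch that pins deployments to one shared bucket (the service and bucket names here are placeholders):

service: my-service

provider:
  name: aws
  runtime: nodejs18.x
  # Reuse one pre-existing bucket for every deployment
  # instead of letting serverless create a new one per stack.
  deploymentBucket:
    name: my-shared-deployment-bucket

functions:
  hello:
    handler: handler.hello

The code is still uploaded to S3 on every deploy; this only controls which bucket is used.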

Related

Can we set up an AWS Lambda function locally without using AWS SAM?

I have set up a Lambda function using the AWS SAM CLI that I am using for local development. After development, I deployed this function to the AWS console for production from my IDE (Visual Studio Code). After deployment, I see it has created several other resources, such as a CloudFormation stack, an API Gateway, and a few others.
What's the problem?
I am seeking a way to deploy only my Lambda code without creating other resources like API Gateway, etc. Is there any way that allows me to create the Lambda function in my local environment and then push only the code to the AWS console?
Moreover, when I use AWS SAM the size of my Lambda code also increases enormously. When I created the same Lambda manually in the AWS Console it consumed only a few KBs, but when I created it using AWS SAM its size ramped up to 25 MB.
If someone knows a better way to do this, please elaborate.
My concerns are the following:
Create the Lambda function on a local machine for development
I don't want to move my Lambda function manually from the local environment to the AWS Console.
Also assign the Lambda function specific permissions
What are the best practices for this? If something is still unclear, please ask in the comments.
Use local devtools which simplify local development of cloud infrastructure and automate the deployment process.
Check, for example, the Altostra local devtools:
VS Code extension to build AWS infrastructure in a visual way, including configuration of each AWS resource (the extension is available on the official VS Code marketplace)
CLI (available on the official npm registry) - automatically packages all your code and cloud infrastructure and pushes it to your AWS account. After a push you can also deploy your project directly from your local dev environment to AWS - Altostra automatically generates CloudFormation from your visual design and deploys the stack to your AWS account. All permissions are generated automatically as well.
Note that you need to open an account on Altostra to be able to do everything described in #2 (the account is free).
You can upload your application to AWS Lambda in three ways:
1- Create a zip file and upload it via the console (the unzipped deployment package cannot exceed 250 MB)
2- Upload your files to S3 and reference them (doable)
3- Create a Docker image and upload it (the easiest way)
The best way is to upload container images, because you can include files/dependencies of up to 10 GB inside the Docker image.
After you create your Docker image, you can test it locally too. Please check:
https://docs.aws.amazon.com/lambda/latest/dg/images-test.html
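As a rough illustration of option 3, a Dockerfile for a Python-based function might look like the sketch below (the handler file and names are placeholders, following the pattern in the AWS docs):

# Start from the AWS-provided Lambda base image for Python.
FROM public.ecr.aws/lambda/python:3.12
# Copy the function code to where the runtime expects it.
COPY app.py ${LAMBDA_TASK_ROOT}
# Install dependencies alongside the handler.
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
# module.function that Lambda should invoke.
CMD ["app.handler"]

You would then build the image, push it to ECR, and point the function at the image URI.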

AWS CodeDeploy vs AWS Lambda

I have a use case in the Amazon cloud: I'm using a Fargate cluster and CloudFormation.
I want to do continuous deployment, i.e. on a new-image-upload trigger I want to update the CloudFormation stack with the new image, and also run this automated deployment on a manual trigger when the client wants it.
What should I use for continuous deployment, AWS CodeDeploy or AWS Lambda?
AWS CodeDeploy has a CloudFormation provider, but with limited options and less control, I believe.
AWS Lambda has great control over the CloudFormation client through its boto API.
I have also read that when you hit limitations in CodeDeploy or CodePipeline, you can integrate Lambda to get around them. So why not use Lambda in the first place for continuous deployment?
After doing some research I'm fairly convinced Lambda is the better choice over CodeDeploy; however, I'm open to comments and suggestions.
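For context, the kind of Lambda handler I have in mind would look roughly like this (stack, parameter, and event-field names are placeholders):

# Hypothetical Lambda that updates an existing CloudFormation stack
# so the Fargate service picks up a newly pushed image tag.
import boto3

cfn = boto3.client("cloudformation")

def handler(event, context):
    # Assume the trigger event (e.g. an ECR push) carries the new tag.
    new_tag = event["imageTag"]
    cfn.update_stack(
        StackName="my-fargate-stack",
        UsePreviousTemplate=True,
        Parameters=[
            {"ParameterKey": "ImageTag", "ParameterValue": new_tag},
        ],
        Capabilities=["CAPABILITY_IAM"],
    )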
You can use both of them together to achieve a solid CI/CD implementation.
When an image is uploaded, the Lambda is triggered, and the Lambda holds your configuration and parameters.
Using those, it calls CodeDeploy, which deploys the new ECR image to your Fargate cluster.
You can also achieve your second requirement, a manual trigger when the client wants it, with this implementation: a Lambda can be invoked manually with parameters passed at runtime.
I hope this helps.

Azure DevOps AWS Serverless application deployment - S3 bucket per environment

I have followed this guide on how to separately package and deploy a .NET Core serverless application on AWS.
I'm trying to deploy to different environments, which I can achieve by parameterising the CloudFormation template and creating different stages in the Azure DevOps release pipeline, with each stage passing different parameters to the template. However, there is one aspect that's confusing me a little. Following the guide, an AWS Lambda .NET Core Deployment task is added to create a deployment package in the CI part. In that task, an S3 bucket name is specified, which is where the compiled code will be uploaded. However, this means that one single S3 bucket will contain the uploaded code for all environments. What if I wanted to upload the code to a different S3 bucket depending on the environment being deployed to? Or is it normal practice to have just one S3 bucket in this scenario?
I thought of creating the bucket at the release pipeline stages, but the task to package the serverless application (during CI) requires a bucket name to be supplied.
Am I going the wrong way about this?
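One possible direction (a sketch only, assuming the Amazon.Lambda.Tools CLI instead of the packaged Azure DevOps task; the variables and stack name are made up): move packaging into each release stage so every environment can supply its own bucket.

# Hypothetical per-stage step; $(DeploymentBucket) and
# $(EnvironmentName) are stage-scoped Azure DevOps variables.
dotnet lambda package-ci \
    --template serverless.template \
    --output-template packaged.template \
    --s3-bucket $(DeploymentBucket)

# Deploy the packaged template for this environment.
aws cloudformation deploy \
    --template-file packaged.template \
    --stack-name my-app-$(EnvironmentName) \
    --capabilities CAPABILITY_IAM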

AWS Cloud9: deploy only one Lambda function at a time

I am trying to deploy Lambda functions using AWS Cloud9. When I press deploy, all of my functions are deployed/synced at the same time rather than just the one I selected when deploying. The same thing happens when right-clicking on the function and pressing deploy. I find this quite annoying and am wondering if there is any workaround.
When you click deploy, Cloud9 runs aws cloudformation package and aws cloudformation deploy on your template.yaml file in the background (source: I developed the Lambda integration for AWS Cloud9).
Because all your functions are bundled into one serverless application and there is only one CloudFormation stack, they can only be deployed all at once with CloudFormation.
If you're only making a code change to one function and are not modifying any configuration settings, you can update that one function from the command line with:

zip -r /tmp/function.zip . && aws lambda update-function-code --function-name <function-name> --zip-file fileb:///tmp/function.zip

Run this in the same folder as your template.yaml file, replacing <function-name> with its full generated name, e.g. cloud9-myapp-myfunction-ABCD1234 (you can see the full name under the remote functions list in the AWS Resources panel).
In AWS Cloud9, Lambda functions are created within serverless applications and are therefore deployed via CloudFormation. With CloudFormation, the whole stack is deployed at once, so all functions are deployed together (see this discussion for more info).

CodeDeploy to S3

I have a site in an S3 bucket, configured for web access, for which I run an aws s3 sync command every time I push to a specific git repository (I'm using GitLab at the moment).
So if I push to the stable branch, a GitLab runner performs the npm run build command to build the site, and then aws s3 sync to synchronize the result to a specific bucket.
I want to migrate to CodeCommit and use pure AWS tools to do the same.
So far I have been able to successfully set up the repository and create a CodeBuild project for building the artifact, and the artifact is stored (not deployed) in an S3 bucket. The difference is that I can't get it to deploy to the root folder of the bucket instead of a subfolder; it seems like the process is not made for that. I need it in the root folder because of how the web access is configured.
For the deployment process, I took a look at CodeDeploy, but it doesn't actually let me deploy to an S3 bucket; it only uses the bucket as an intermediary for deployment to an EC2 instance. So far I get the feeling CodeDeploy is useful only for deployments involving EC2.
This tutorial, which has a similar requirement to mine, uses CodePipeline and CodeBuild, but the deployment step is actually an aws s3 sync command (the same as I was doing on GitLab), and the actual deployment step on CodePipeline is disabled.
I was looking into a solution which involves using AWS features made for this specific purpose, but I can't find any.
I'm also aware of LambCI, but to me it looks like what CodePipeline / CodeBuild is doing: storing artifacts (not deploying to the root folder of the bucket). Plus, I'm looking for an option which doesn't require me to learn or deploy new configuration files (outside AWS config files).
Is this possible with the current state of AWS features?
Today AWS announced, as a new feature, the ability to target S3 in the deployment stage of CodePipeline. The announcement is here, and the documentation contains a tutorial available here.
Using your CodeBuild/CodePipeline approach, you should now be able to choose S3 as the deployment provider in the deployment stage rather than performing the sync in your build script. To configure the action, you provide an S3 bucket name, specify whether to extract the contents of the artifact zip, and, if so, provide an optional path for the extraction. This should allow you to deploy your content directly to the root of a bucket by omitting the path.
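A rough CloudFormation sketch of such a deploy stage (the bucket and artifact names are placeholders):

# Hypothetical deploy stage in a CodePipeline definition.
- Name: Deploy
  Actions:
    - Name: DeployToS3
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: S3
        Version: "1"
      InputArtifacts:
        - Name: BuildOutput
      Configuration:
        BucketName: my-website-bucket
        Extract: "true"   # unzip the artifact into the bucket

Omitting an ObjectKey path places the extracted files at the root of the bucket.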
I was dealing with a similar issue, and as far as I was able to find out, there is no service which is suitable for deploying an app to S3.
AWS CodeDeploy is indeed for deploying code that runs as a server.
My solution was to use CodePipeline with three stages:
Source, which takes source code from AWS CodeCommit
Build, with AWS CodeBuild
A custom Lambda function which, after a successful build, takes the artifact from the S3 artifact storage, unzips it, and copies the files to my S3 website host.
I used this AWS Lambda function from SeamusJ: https://github.com/SeamusJ/deploy-build-to-s3
Several changes had to be made; I used node-unzip-2 instead of unzip-stream for unzipping the artifact from S3.
I also had to change the ACLs in the website.ts file.
Uploading from CodeBuild is currently the best solution available.
There are some suggestions on how to orchestrate this deployment via CodePipeline in this answer.