I want to migrate a huge Serverless project, created with the Serverless Framework, from v0.5 to v1, and my biggest concern is that resources (DynamoDB tables) deployed under sls 0.5 will be deleted or modified if I try to deploy with sls v1.
It is a known fact that v1 is not compatible with 0.5... So is it possible to migrate 0.5 resources to 1.0 without breaking the CloudFormation structure of the DynamoDB tables in AWS? In other words: how do I migrate 0.5 resources to 1.0 safely?
Edit: I have a full AWS API Gateway in front.
Important: Please try this on a non-production environment first.
Don't do an sls remove on the v0.5 project.
Rewrite your API Gateway and Lambda functions in Serverless v1.x, but don't include the DynamoDB resources. That way, v1.x will only deploy API Gateway endpoints and AWS Lambda functions.
In your Lambda handlers, use the same DynamoDB tables as before.
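As a sketch of the last two steps, a v1.x serverless.yml could look like the following. All names (service, table, account, region) are made up for illustration; the key point is that the existing table appears only as an environment variable and an IAM resource ARN, never under a `resources:` section, so the v1 stack will not try to create or delete it:

```yaml
# serverless.yml (v1.x) -- hypothetical names; adjust to your project
service: my-service

provider:
  name: aws
  runtime: nodejs12.x
  environment:
    # Point handlers at the table the old v0.5 stack created.
    # No `resources:` section exists, so v1 never touches the table.
    USERS_TABLE: users-table-created-by-v05
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:Query
      Resource: arn:aws:dynamodb:us-east-1:123456789012:table/users-table-created-by-v05

functions:
  getUser:
    handler: handler.getUser
    events:
      - http:
          path: users/{id}
          method: get
```

The handler then reads `process.env.USERS_TABLE` instead of a hard-coded table name, so the same code works against the pre-existing table.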
I'd consider looking into blue-green deployments. For DynamoDB you can use streams to keep the data in sync. You mentioned serverless, but it's hard to recommend a solution without knowing whether you're just using Lambda or have an API Gateway in front. In the latter case you might want to look into stage variables.
We are a Terraform shop for standing up our infrastructure on AWS and I am using AWS SAM Local to:
Test AWS Lambdas locally without having to deploy on the cloud.
I can also run integration tests against the locally running Lambda function, as it calls downstream services that are running in the cloud.
I am curious about serverless-offline. I don't have much experience with the npm serverless library and am wondering if others have experience with how it compares to SAM Local. Does it have the same capabilities that I get with AWS SAM Local?
The sam local cli command and the Serverless Offline plugin work in similar ways. Both run a Docker instance and emulate API Gateway and Lambda. Additionally, Serverless Framework supports other platforms, unlike SAM Local.
The biggest advantage of one or the other is the ability to test your Serverless functions locally with the tool that you are already using. So if you are using AWS SAM, sam local will be the best option; similarly, if you are using the Serverless Framework, the best option will be the Serverless Offline plugin.
The Serverless Framework included offline testing long before SAM Local arrived, so you may find options that are not yet available in SAM Local. sam local has some advantages of its own, such as template validation.
Both tools use Node.js and support API Gateway and Lambda, but neither currently emulates DynamoDB, so you need to find another way to make DynamoDB available locally.
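For the Serverless Framework side, a minimal local-emulation setup looks like the sketch below. The plugin is the real serverless-offline npm package; the service, function, and port values are just examples:

```yaml
# serverless.yml -- minimal serverless-offline setup (illustrative names)
service: my-service

provider:
  name: aws
  runtime: nodejs12.x

plugins:
  - serverless-offline

custom:
  serverless-offline:
    httpPort: 3000   # local port for the emulated API Gateway

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get
```

After `npm install --save-dev serverless-offline`, running `sls offline start` serves the endpoints locally. For the DynamoDB gap, a common approach is to run Amazon's DynamoDB Local (e.g. in Docker) and point the AWS SDK at its local endpoint.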
If you want to decide whether AWS SAM or the Serverless Framework is the better option for you, take a look at comparisons like this one: Comparing AWS SAM with the Serverless framework.
Our company is exploring the use of AWS CDK. Our app is composed of an Angular Front End and an ASP.NET Core 3.1 Back End. Our Angular Front End is deployed as a static site in an S3 bucket. Back End is deployed as a Lambda with an API Gateway to allow for public API calls. The database is an Aurora RDS instance.
The app is a core product that will have some client-specific config when deployed into a client-specific AWS environment. Code-wise we are looking to keep the core product the same.
Based on the reading I did, placing CDK code alongside app code would allow us to define Constructs that correspond to app components (e.g. Front End S3 Buckets, Back End Lambda + API Gateway, RDS setup, etc.), all located in the same repo. I was thinking of producing artifacts (e.g. Nugets, binaries, zips) from the core app and cdk builds.
Now a client-specific setup would consume artifacts created from the main repository build to create a client-specific build, composed of core app + base AWS CDK constructs. I can imagine building AWS CDK stacks that use the ones created in the core repo adding client-specific configurations. Still unsure how to do the core app + client-specific config but am wondering if anyone either solved this problem or has suggestions.
I think you can start with AWS CDK Best Practices, with special attention to the Coding best practices section. Second, you can refer to an AWS Solutions Architect article that describes a Recommended AWS CDK project structure for Python applications. Obviously Python is not .NET, but you can still find the general principles in it. Third, you can structure AWS CDK projects around Nested Stacks, converting the Stacks into libraries and reusing the code.
We built an API with a Cloud Function integrated as the backend. Until now we were deploying the Cloud Function first and API Gateway later. Is there a good way to combine these two services and deploy them as a whole?
They are two different products, and no, you can't tie them together and deploy them at the same time.
Cloud Functions builds a container based on your code, and that can take more or less time depending on the number of dependencies and the language (compiled, such as Java or Go, or not).
API Gateway requires you to create a new config and deploy it to a gateway, and that takes a while too.
So: there is no link between the products, and their deployment durations differ. The right pattern here is versioning. You can deploy one service before the other (Cloud Functions before API Gateway) for minor changes (ones that don't break existing behavior).
For breaking changes, I recommend not updating the existing function but creating a new one. The advantage is that you can keep the two versions running in parallel, with a rapid rollback in case of issues. The same goes for API Gateway: create a new gateway config for a new version.
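This versioned-routing pattern can be sketched in the API Gateway OpenAPI config. The `x-google-backend` extension is the real Google API Gateway mechanism for routing to a backend; the paths, function names, and project are hypothetical:

```yaml
# openapi.yaml -- versioned routing sketch (function/project names made up)
swagger: "2.0"
info:
  title: orders-api
  version: "2.0.0"
paths:
  /v2/orders:
    get:
      operationId: listOrdersV2
      x-google-backend:
        # New function deployed alongside the old one, so /v1 keeps
        # working and rollback is just redeploying the previous config.
        address: https://us-central1-my-project.cloudfunctions.net/list-orders-v2
      responses:
        "200":
          description: OK
```

Deploying this as a new API config (while the old config still serves `/v1`) gives you both versions in parallel, which is exactly the rollback safety described above.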
There is the following setup:
2 lambda functions, deployed using serverless.yml
custom domain (e.g. api.mydomain.com) attached to API Gateway
2 stages (dev and prod)
CNAME configuration in my domain to point to abcdefg.cloudfront.net
There's a high-level task: update the two Lambda functions without downtime for the API they serve. How do I do that using the Serverless Framework?
Note: there are two ways to manage lambda deployments: stages and aliases (versions). Currently aliases do not work in serverless (there's a fork that hotfixes the issue, but it does not matter atm).
There is no downtime when updating a Lambda function with the Serverless Framework; simply run sls deploy.
The function code is zipped and uploaded to Lambda, and once that completes, CloudFormation updates the Lambda configuration to point to the new code. There is no downtime in this process.
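For the setup in the question, a minimal serverless.yml sketch could look like the following. The function names and handlers are placeholders, and serverless-domain-manager is one common (but assumed) plugin for attaching api.mydomain.com to the API Gateway stages:

```yaml
# serverless.yml -- sketch for two functions behind a custom domain
service: my-api

plugins:
  - serverless-domain-manager

provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}   # selected per deploy: dev or prod

custom:
  customDomain:
    domainName: api.mydomain.com
    basePath: ${self:provider.stage}

functions:
  first:
    handler: handler.first
    events:
      - http:
          path: first
          method: get
  second:
    handler: handler.second
    events:
      - http:
          path: second
          method: get
```

Running `sls deploy --stage prod` then updates both functions in a single CloudFormation stack update; in-flight invocations finish on the old code while new invocations pick up the new version.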
Recently I have been looking into AWS Lambdas and how to build Serverless API using .Net Core. From what I understand, you can do it in 2 different ways.
1) Write multiple separate Lambdas in C# and deploy them to AWS. Requests come in via API gateway and each lambda acts as an endpoint.
2) Build a Serverless Web API using .Net core. When you create the serverless Web API project a Lambda is automatically created which becomes the entry point to the Web API.
Are there any limitations of 1 vs 2, or use cases where one approach is beneficial over the other? Or are they just two different ways of achieving the same thing?
I don't think your options are correct. The two options for building a Lambda backed API are:
1- Build Lambdas and deploy them independently to AWS in one or more projects, then manually create API Gateway endpoints that point to your Lambdas.
2- Use a Serverless project to combine your Lambdas in one project. Define your endpoints in that project and have CloudFormation create the API Gateway endpoints and hook them up to your Lambdas on deployment.
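In option 2, the project's serverless.template is what drives CloudFormation. A trimmed sketch is below; the Visual Studio template actually emits JSON (YAML is shown here for brevity), and the resource and handler names are illustrative only:

```yaml
# serverless.template (YAML form; the VS template generates JSON) --
# resource and handler names are made up for illustration
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Resources:
  GetOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: Orders::Orders.Functions::GetOrder
      Runtime: dotnetcore3.1
      MemorySize: 256
      Events:
        GetOrderApi:
          Type: Api
          Properties:
            Path: /orders/{id}
            Method: GET
```

Each `Events` entry of `Type: Api` is what makes CloudFormation create and wire up the API Gateway endpoint on deployment, which is the hookup described in option 2.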
As far as pros and cons,
Option 1:
Pros: you have the flexibility of deploying Lambdas independently, and you can configure your API Gateway endpoints however you want without having to understand CloudFormation definition syntax, which took some ramp-up time in my experience.
Cons: if you have a lot of Lambdas, this becomes a management nightmare. Also, your endpoint definitions are not in the source code, and changes to the endpoint configuration will not be tracked.
Option 2:
Pros: once you figure out CloudFormation, or if you go with the default configuration, deploying a Lambda and hooking it to an API Gateway endpoint is super easy. AWS will create the endpoint for you, along with dev and prod stages, policies, IAM roles, etc. Because this is deployed by CloudFormation directly from Visual Studio, the whole deployment and all related objects fall under the same "Stack" in AWS CloudFormation, which can be changed, replicated, or deleted very easily. Your infrastructure is now code, and changes to it are auditable in your git repo.
Cons: the biggest con, in my opinion, is that the stack doesn't span the VS solution but just the project, so all your Lambdas have to live in the same project. This means that if you have a lot of them, they all end up in one monolithic Lambda binary; the resulting large binary costs you runtime memory on AWS and causes efficiency problems. The other con is that if you want a specific or out-of-the-ordinary API Gateway configuration, you will need to understand CloudFormation syntax to change your serverless.template file.
Conclusion: my preferred solution was to divide the application into smaller chunks of related Lambdas based on the API object, and place these Lambdas in a few Serverless application projects. For example, I have an Order project that contains all Lambdas related to the order API, a Product project that contains the Lambdas related to the product API, etc. They live in the same solution but get deployed separately. I'm currently working on a way to deploy the entire solution at once.