How to develop serverless as a team with AWS AppSync?

I have a general question about developing serverless applications and AWS AppSync in particular. We're thinking about going serverless for a small project and I'm wondering how people generally set up their development environment when creating a "serverless" application.
I've seen that the Serverless Framework provides some capabilities to run Lambdas locally, but as far as I can see, the available appsync-plugin does not provide full "offline" functionality for AppSync.
I'm curious to know how other teams do serverless development. Does everybody have their own AWS-side setup? Just a general development instance of everything? I'm grateful for any opinions and input!

In our setup, everyone can get their own personal serverless stage for developing their API (see the sketch at the end of this answer). I'm interested in trying to run development offline but haven't gotten to that yet.
When we push to master, our CodePipeline builds and deploys to our integration test stage. By default, our services (our app is split across many subdomains) are configured to use the integration test API. That API should be relatively stable for development. We can switch to a personal API when developing the API itself.
We use common DynamoDB tables, streams and Elasticsearch instances for all development stages. DynamoDB tables and indexes are deployed with Serverless on the development side; on the production side they are maintained manually.
Our production and beta stages are in a separate AWS account.
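For reference, per-developer stages like this are usually driven by the stage option in serverless.yml. A minimal sketch (the service name and stage values here are hypothetical):

```yaml
# serverless.yml -- minimal sketch of per-developer stages
service: my-api          # hypothetical service name

provider:
  name: aws
  # fall back to a shared dev stage when no --stage flag is given
  stage: ${opt:stage, 'dev'}
```

Each developer then deploys their own isolated stack with something like `sls deploy --stage john`, while CI deploys `--stage integration` on pushes to master.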

This may have been updated since this was asked, but serverless-appsync-plugin now states:
You can use serverless-appsync-offline to autostart an AppSync Emulator which depends on Serverless-AppSync-Plugin with DynamoDB and Lambda resolver support
Which I believe is what you are looking for.
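For context, a sketch of how that might look in serverless.yml. The plugin names come from this thread; any emulator options (ports, local DynamoDB settings) live under a custom section that is omitted here, so check the plugin README before relying on this:

```yaml
# serverless.yml -- sketch only; plugin names are from this thread,
# emulator options (custom section) omitted, see the plugin README
plugins:
  - serverless-appsync-plugin
  - serverless-appsync-offline
```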

Joining late but here is what you are looking for:
https://github.com/bboure/serverless-appsync-simulator
It offers full support for DynamoDB, HTTP, Elasticsearch and Lambda resolvers.
serverless-appsync-offline should be considered deprecated, as it is not maintained anymore and relies on an archived repo as well.
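As a rough sketch of how the simulator wires in (the plugin list and ordering below are assumptions based on the simulator's documentation; verify them against its README):

```yaml
# serverless.yml -- hedged sketch; companion plugins and their
# ordering are assumptions, check the simulator's README
plugins:
  - serverless-appsync-plugin
  - serverless-dynamodb-local
  - serverless-appsync-simulator
  - serverless-offline
```

Starting it locally would then be a matter of `sls offline start`.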

The Serverless Framework is one way to do it. SAM Local, with the SAM CLI, is another option. I've used it with some success, although it wasn't quite as straightforward as I would have liked. It seems like development environments are still a bit new for serverless.
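For anyone trying SAM Local, the flow is roughly: define the function in a SAM template, then invoke it locally. A minimal sketch (the handler, runtime and paths are assumptions):

```yaml
# template.yaml -- minimal SAM sketch; handler/runtime/paths are hypothetical
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler   # hypothetical module.function
      Runtime: python3.9
      CodeUri: src/
```

With that in place, `sam local invoke HelloFunction` runs the function in a Docker container, and `sam local start-api` serves any API events locally.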

Related

Building Serverless applications on Google Cloud

I've been building serverless applications on AWS for the past few years, utilizing services such as Lambda, DynamoDB, SNS, SQS, Kinesis, etc., relying on the Serverless Framework for local development and deployments. Due to professional reasons, I now have to switch to Google Cloud and have been exploring the serverless space on that platform. Unfortunately, at first glance it doesn't seem to be as mature as AWS, though I don't know whether that's actually true or just a result of my lack of expertise. The reasons that make me claim that are basically the following:
There is no logical grouping of functions and resources: on AWS, Lambda functions are grouped in Applications, and can be deployed as a whole via SAM or the Serverless framework, which also allow creating any associated resource (databases, queues, event buses, etc.). It seems that on GCP functions are treated as individual entities, which makes managing them and orchestrating them harder.
Lack of tooling: both the SAM CLI and the Serverless Framework provide tools for local development and deployments. I haven't found anything like the former on GCP (the Functions Framework seems to cover it partially, but it doesn't handle deployments), and even though the latter supports GCP, it's missing basic features, such as creating resources other than functions. What is more, GCP support is not in the core framework and the plugin is looking for maintainers.
Fewer event sources: Lambda is directly integrated with a long list of services. On the other hand, Cloud Functions is integrated with just a few services, making HTTP triggers the only option in most cases. It seems they're trying to address this issue with Eventarc, but I don't think it's generally available yet.
Does anybody have any tips on how to setup a local environment for this kind of applications and how to manage them effectively?
Here is some documentation that might be helpful for your case, though it requires taking a deeper look:
Configure Serverless VPC Access (which I think applies to 'setting up your local environment').
Cloud Run Quickstart (which covers how to build and deploy serverless services with GCP Cloud Run using Node.js, Python, Java, etc.).
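If you go the Cloud Run route, services can also be described declaratively. A hedged sketch of a Knative-style service manifest (the project and image names are hypothetical):

```yaml
# service.yaml -- hypothetical Cloud Run service manifest
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/my-service:latest
```

This can be applied with `gcloud run services replace service.yaml`, giving you a version-controllable artifact similar in spirit to serverless.yml.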

Advice on local Lambda testing for data pipeline that involves SQS and DynamoDB

Other devs and I are currently testing/building Lambda functions for cleaning data that flows from S3 -> SQS -> a Data Router Lambda (Python), a DynamoDB rules engine, and then a text processor in Lambda. We're currently working on the AWS platform, but I'm trying to test this part of the data pipeline locally.
Ideally we'd simulate S3 and SQS, dump the zip files in, and run them through the Lambda function. We're currently toying with the SAM CLI and Visual Studio, but nothing's stuck yet. Any tips?
There are several ways you can approach (local) testing of your AWS application:
Use unit tests for the different parts of your "pipeline", mocking the other parts like DynamoDB, SQS, etc.
Use something like LocalStack (see the compose sketch after this list).
Every developer has their own "developer environment" in AWS. You could for example prefix every resource with the name of the developer (john_processing_lambda). You deploy to AWS and run integration tests from your local machine. You can achieve something like this with tools like Terraform, which allow you to "dynamically" name resources and for example add prefixes with the developer's name (a serverless.yml version of this idea is sketched at the end of this answer).
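To illustrate option 2: LocalStack is commonly started via Docker Compose. A minimal sketch covering the services from this pipeline (the image tag and port follow LocalStack's documented defaults; verify against its docs):

```yaml
# docker-compose.yml -- minimal LocalStack sketch for this pipeline
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"   # LocalStack's single edge port
    environment:
      - SERVICES=s3,sqs,dynamodb,lambda
```

Your code then points its AWS SDK clients at http://localhost:4566 instead of the real endpoints.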
Personally, I don't find running "AWS on your local machine" via Docker containers or tools like LocalStack really satisfying. We had the best results with a combination of options 1 and 3. Both have the upside that you can use the same tests in your CI/CD pipeline.
Furthermore, not running in the actual cloud (AWS) always bears the risk of "forgetting" something, most notably IAM permissions. Everything runs fine on your local machine, but then it does not work on AWS.
Deploying a separate environment for every developer, so that they can play around with the actual resources and run tests directly in AWS, would be my recommendation. This paired with solid unit tests should yield the best results.
The downside of developer environments in AWS is that a developer has to deploy their code to AWS every time they want to test something. So making deployments fast is important. I found that with sufficient experience, you don't need to deploy that often anymore and this becomes less of an issue. Nevertheless, developer satisfaction in your team is important, so make sure to make this as smooth as possible.
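The per-developer prefixing from option 3 was described with Terraform, but since this thread already revolves around serverless.yml, here is the same idea sketched in that format (all names are hypothetical):

```yaml
# serverless.yml -- hypothetical sketch of per-developer resource prefixes
custom:
  # each developer deploys with --stage <name>, e.g. --stage john
  prefix: ${opt:stage, 'dev'}

functions:
  processing:
    handler: handler.process
    environment:
      TABLE_NAME: ${self:custom.prefix}_processing_table

resources:
  Resources:
    ProcessingTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.prefix}_processing_table
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
```

Each developer gets an isolated copy of the table (john_processing_table, etc.), and integration tests run against AWS rather than a simulation.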

Serverless framework CLI vs GUI. Eg. AWS console

Why would anyone use the Serverless Framework CLI to write Lambda functions or deploy them when we have the AWS console GUI? Are there any specific benefits to it?
Normally you don't develop and deploy a Lambda function in isolation; instead, it is one part of your cloud infrastructure. That can include other Lambdas, S3 buckets, databases, API Gateways, IAM roles, environment variables and much more.
The Serverless Framework allows you to write your infrastructure as code. For AWS services, it translates serverless.yaml config files into AWS CloudFormation templates, and from there deploys any new or updated services you define. Your Lambda function is just one part of that.
A major benefit of writing and deploying this way is that you can use your favourite editor locally and can check your code into version control (e.g. git). This applies not just to your Lambda code, but also to your infrastructure config, i.e. serverless.yaml and associated files.
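To make that concrete, a minimal hypothetical serverless.yml that deploys a function together with an associated resource:

```yaml
# serverless.yml -- minimal hypothetical example
service: my-service

provider:
  name: aws
  runtime: python3.9

functions:
  hello:
    handler: handler.hello    # hypothetical module.function
    events:
      - http:
          path: hello
          method: get

resources:
  Resources:
    UploadsBucket:            # extra infrastructure, deployed alongside the code
      Type: AWS::S3::Bucket
```

`sls deploy` turns all of this into a single CloudFormation stack, and the whole file lives in version control next to the code.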
The Serverless Framework is more than just a replacement for the AWS Console (GUI). You can definitely set everything up via the AWS console for a serverless application, but how do you share that with your team? What if you wish to deploy it repeatedly into multiple applications? The Serverless Framework gives you a configuration file, usually called serverless.yml, where you define all the services within AWS (and other vendors; there is support for more than just AWS), and then you use the CLI to perform operations on this configuration file, such as deploy, invoke and a lot more.
Then there are the Serverless Framework plugins designed by the community around the project to make other tasks even easier such as unit testing, configuration of S3 buckets, CloudFront and domains to make frontend deployment easier and a lot, lot more.
Lastly, but most importantly, there is a professional product provided in addition to the open source framework that you can use to add on monitoring, deployment management, troubleshooting, optimisation, CI/CD and too many other benefits to list here.
Definitely, if you are doing a big project the Serverless Framework has a lot of benefits. Imagine developing an MVC C# project with Notepad; how would you feel about that?
Frameworks like this exist to make life much easier for us developers.

Mixing Terraform and Serverless Framework

It's more of an open question and I'm just hoping for any opinions and suggestions. I have AWS in mind but it probably can relate also to other cloud providers.
I'd like to provision an IaC solution that will be easily maintainable and cover all the requirements of modern serverless architecture. Terraform is a great tool for defining the infrastructure; it has many official resources and stable support from the community. I really like its syntax and the whole concept of modules. However, it's quite bad for working with Lambdas. It also raises another question: should a code change be deployed using the same flow as an infrastructure change? Where do you draw the line between code and infrastructure?
On the other hand, the Serverless Framework allows for super easy development and deployment of Lambdas. It's strongly opinionated when it comes to the usage of resources, but it comes with so many out-of-the-box features that it's worth it. It shouldn't really be used for defining the whole infrastructure, though.
My current approach is to define any shared resources using Terraform and any domain-related resources using Serverless. Here I have another issue that is related to my previous questions: deployment dependency. The simple scenario: Lambda.1 adds users to Cognito (shared resource) which has Lambda.2 as a trigger. I have to create a custom solution for managing the deployment order (Lambda.2 has to be deployed first, etc.). It's possible to hook up the Serverless Framework deployment into Terraform but then again: should the code deployment be mixed with infrastructure deployment?
It is totally possible to mix the two and I have had to do so a few times. How this looks actually ends up being simpler than it seems.
First off, if you think about whatever you do with the Serverless Framework as developing microservices (without the associated infrastructure management burden), that takes it one step in the right direction. Then, you can decide that everything required to make that microservice work internally is defined within that microservice as part of the service's configuration in serverless.yml, whether that be DynamoDB tables, Auth0 integrations, Kinesis streams, SQS, SNS, IAM permissions allocated to functions, etc. Keep all of that defined as a part of that microservice. Terraform not required.
Now think about what that and other microservices might need to interact with more broadly. These things aren't critical for a service's internal operation but are critical for integration into the rest of the organisation's infrastructure. This includes things like deployment IAM roles used by the Serverless Framework services to deploy into CloudFormation, relational databases that have to be shared amongst multiple services and resources, networking elements (VPCs, security groups, etc.), monolithic clusters like Elasticsearch and Redis ... all of these elements are great candidates for definition outside of the Serverless Framework and work really well with Terraform.
Any service would then be able to connect to these Terraform-defined resources as needed, unlike hard associations such as Lambda functions triggered off an API Gateway endpoint.
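One common way to wire the two together is sketched below, under the assumption that Terraform publishes identifiers of the shared resources to SSM Parameter Store (the /shared/* parameter names are made up):

```yaml
# serverless.yml -- hypothetical sketch of a service consuming
# shared, Terraform-managed resources via SSM parameters
provider:
  name: aws
  environment:
    # parameter names are assumptions; Terraform would write these values
    USER_POOL_ID: ${ssm:/shared/cognito-user-pool-id}
  vpc:
    securityGroupIds:
      - ${ssm:/shared/lambda-security-group-id}
    subnetIds:
      - ${ssm:/shared/private-subnet-a-id}
      - ${ssm:/shared/private-subnet-b-id}
```

This keeps the deployment coupling loose: Terraform owns and publishes the shared pieces, and each Serverless service resolves them at deploy time.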
Hope that helps

Creating a local dev environment for appsync

We have a React mobile frontend and an AWS AppSync backend (DynamoDB, Step Functions, Lambdas, GraphQL, auth).
Is there an easy way to do this? We have an application and backend in production, and now want to make some changes to our GraphQL components (e.g. schema). Ideally, I would like to have an offline environment which mimics that deployed on AWS.
I found this: Is there a way to test AppSync code locally and/or in CI/CD?, which didn't really have any clear answers. It seems setting up a duplicate environment on AWS (pretty much a staging environment) is the most common suggestion.
We don't use CloudFormation today (maybe we should?). For Lambdas, we have played around with Serverless a little, but had issues testing locally with authentication and, I think, DynamoDB. Ultimately, we just ended up using the AWS console to create components, and then something like Cloud9's IDE to build and debug before deploying to production. I don't like the fragmented dev experience. Lambdas weren't too bad because of Cloud9, but GraphQL doesn't seem to have an equivalent.
Eager to learn what the best practices are, and the best (and easiest) way to set up a good dev environment.
Thanks
If you already have a working Production schema and are looking to simulate a Dev environment, you would have to replicate it manually for now.
We recently launched the Amplify Console specifically to address best practices around CI/CD and managing your API across stages. A recommended practice would be to use the Amplify CLI, which internally uses CloudFormation nested stacks to simplify the process of creating and maintaining your AWS AppSync APIs. In addition, the Amplify CLI gives you out-of-the-box scaffolding of your request/response mapping templates in CloudFormation from just a simple annotated schema.
You could use some of these tools as a recommended practice for maintaining cloud resources. We are also actively working towards enhancing the Developer Experience for some of these workflows.