How should I test my "Serverless" (API Gateway/Lambda/ECS) applications?

I am using AWS API Gateway with Lambda/ECS for compute and Cognito for users, but I find it really hard to test such applications. With AWS SAM Local I may be able to test simple Lambda and API Gateway functionality, but if I use things like API Gateway authorizers, I find it hard to test these end to end.
It looks like I need an entirely new setup just for testing such applications: a separate API Gateway, Lambda/ECS cluster, and Cognito user pool. This seems very slow, and I suspect I would no longer be able to get things like a code coverage report.

Disclaimer: I'm fairly new to AWS Lambda/ECS/Cognito, so take this with a grain of salt.
Unit tests: SAM Local or some other local Docker hosting, combined with a unit testing library (e.g., Mocha), is a good fit here because of:
Speed. All your tests should execute quickly against a Lambda function.
Example: wildrydes with Mocha.
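To make that concrete, here is a minimal sketch of such a Mocha test in TypeScript; the handler module, event shape, and file paths are hypothetical, not taken from the wildrydes sample:

```ts
// test/handler.test.ts: unit test that invokes a (hypothetical) Lambda
// handler directly, with no API Gateway or AWS account involved.
import { strict as assert } from 'assert';
import { handler } from '../src/handler'; // hypothetical module under test

describe('handler', () => {
  it('returns a 200 response for a valid request', async () => {
    // A minimal API Gateway-style proxy event
    const event = { httpMethod: 'GET', path: '/rides', body: null };

    const result = await handler(event as any);

    assert.equal(result.statusCode, 200);
    assert.ok(JSON.parse(result.body)); // body should be valid JSON
  });
});
```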
Integration tests: Once you stage your changes, there are several options for calling the API. I'd start with Postman to run the API tests; you can chain requests together or run them from the command line if needed (e.g., with Newman, sketched below).
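If you export the collection, Newman (Postman's CLI runner) can execute it in CI; the file names below are placeholders:

```sh
# Run an exported Postman collection against a staging environment;
# the junit reporter emits results most CI servers can ingest.
newman run my-api.postman_collection.json \
  --environment staging.postman_environment.json \
  --reporters cli,junit
```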
End-to-end (E2E) tests: If the API is your front end, then there might not be any difference between E2E and API tests. UI, voice, and chat front ends differ significantly, as do the tooling options, so I'll suggest a few:
UI: Selenium (has the most support and options available to you, including Docker images: Selenium Hub or standalone)
Voice: Suggestions?
Text: Suggestions?
Step Functions:
allows you to visualize each step
retries when there are errors
allows you to diagnose and debug problems
X-Ray: collects data about the requests your app serves and provides tools you can use to view, filter, and gain insights into that data.
As for code coverage, I'm not sure how you currently measure it. Something like npm run coverage, maybe?
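If the tests run under Mocha locally, one plausible wiring for that script (an assumption about your setup, not something from the question) is nyc, the Istanbul CLI:

```sh
# Hypothetical "npm run coverage" target: run the Mocha suite under nyc
# (the Istanbul CLI) to get console plus lcov coverage reports.
npx nyc --reporter=text --reporter=lcov \
  mocha --require ts-node/register 'test/**/*.test.ts'
```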

I am assuming you are using CloudFormation to deploy such an extensive stack; the following answer is based on that assumption.
In addition to lloyd's answer, I would like to add that you can include custom resources within your CloudFormation template for testing each individual Lambda or even API endpoints.
Also, for Lambda, you can use Deployment Preference Hooks to test your serverless functions before and after traffic moves to the new version.
https://github.com/awslabs/serverless-application-model/blob/release/v1.8.0/docs/safe_lambda_deployments.rst
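For illustration, the relevant SAM template section looks roughly like this; the hook function names are placeholders, and the linked doc is authoritative:

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs18.x
    AutoPublishAlias: live            # required for gradual deployments
    DeploymentPreference:
      Type: Canary10Percent5Minutes   # shift 10% of traffic, wait 5 minutes
      Hooks:
        PreTraffic: !Ref PreTrafficCheckFunction    # runs before traffic shifts
        PostTraffic: !Ref PostTrafficCheckFunction  # runs after traffic shifts
```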

Related

Can I write API code that runs on serverless (AWS Lambda) and have the same code run on EC2?

I am looking for a language/framework or a method by which I can build API/web application code such that it can run on serverless compute like AWS Lambda, and the same code can run on a dedicated compute system like Lightsail or EC2.
First I thought of using Docker to do this, but the AWS Lambda entry point is a specific function signature, which is very different from Spring controllers. Is there a solution available currently?
So basically, when I run it on Lambda it will have cold start issues; later, when the app is ready or gets popular, I would like to move it to an EC2 instance for better performance and higher traffic load.
I want to start out the right way in this situation so that porting later is easy and the performance issues can be resolved.
I'd say no, this is not easily possible.
When you are building an API that you want to run on Lambdas, you will most likely be using API Gateway, which takes care of routing to the different Lambda functions (best practice). So the moment you are working on an API like this, migrating to EC2 would be a nightmare, as you would need to rebuild the whole application into more of a monolith that could run on EC2.
I would honestly commit to either running it on EC2/containers or running it on Lambda. If cold start is your main issue with Lambdas, you might want to look into Lambda SnapStart for Java, or use another language like TypeScript/Python.
After trying the right keywords in Google, I finally found what I was looking for: check out this blog post and code library shared by AWS, which help you convert the request and response to and from the HTTP request format your framework expects.
Running APIs Written in Java on AWS Lambda: https://aws.amazon.com/blogs/opensource/java-apis-aws-lambda/
Repo Code: https://github.com/awslabs/aws-serverless-java-container
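The linked library is Java-specific, but the same adapter pattern exists on the Node side; here is a hedged TypeScript sketch using the analogous @vendia/serverless-express package (a different library from the one above, and the route is invented):

```ts
// One Express app, two entry points: Lambda and a plain server (EC2/Lightsail).
import express from 'express';
import serverlessExpress from '@vendia/serverless-express';

const app = express();
app.get('/hello', (_req, res) => {
  res.json({ message: 'hello' });
});

// Lambda entry point: the adapter translates API Gateway events
// into HTTP requests the Express app understands, and back.
export const handler = serverlessExpress({ app });

// EC2/Lightsail entry point: run the very same app as a long-lived server.
if (process.env.RUN_AS_SERVER === 'true') {
  app.listen(3000, () => console.log('listening on :3000'));
}
```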
Thanks, Ricardo, for your response; I will definitely check out Lambda SnapStart and try it as well. I have not tested this completely yet, but it looks promising.

How to perform integration testing on an AWS Step Function

I have a REST API in API Gateway with lambda proxy integration. The Lambda will invoke a Step Function workflow asynchronously and will return an ID in the payload. These AWS resources are deployed and managed by AWS CDK.
My question is: is there a proper way to perform integration tests? There are two approaches I have in mind:
Call the REST API endpoint, and make assertions on side effects. But since the workflow is executed asynchronously, the test needs to continuously poll until side effects become visible.
According to this blog post, https://www.10printiamcool.com/step-function-integration-testing-with-cdk, it seems we can use CDK to deploy a test stack that mocks the dependent resources (e.g., Lambda). But this sounds more like a unit test.
I am not sure if there are any better options. Any thoughts?
I understand you want integration tests on your Step Function in the context of a serverless CDK app.
Your pass criteria for the Step Function include certain async backend side-effects in addition to a 200 API response.
Given that context, here are some ideas on two related topics:
How to engineer the Step Function tests
How about testing your Step Function's integration ... with another Step Function? A TestSfn would map through test cases, in turn calling the API with various inputs in one Task and checking for expected side-effects in another Task.
After all, Step Functions are really good at orchestrating step-wise, async workflows in parallel, which is what your use case demands. The tests pass if TestSfn succeeds. The execution history console and logs give great visibility to diagnose test failures.
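As a sketch of what the side-effect check could look like, whether run from a TestSfn Task or a plain test runner, here is a polling helper using the AWS SDK v3; the function name and timeouts are assumptions:

```ts
import { SFNClient, DescribeExecutionCommand } from '@aws-sdk/client-sfn';

const sfn = new SFNClient({});

// Poll the execution started via the API until it leaves RUNNING, then
// let the caller assert on the terminal status and any side effects.
export async function waitForExecution(
  executionArn: string,
  timeoutMs = 60_000,
): Promise<string> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const { status } = await sfn.send(
      new DescribeExecutionCommand({ executionArn }),
    );
    if (status && status !== 'RUNNING') {
      return status; // SUCCEEDED, FAILED, TIMED_OUT, or ABORTED
    }
    await new Promise((resolve) => setTimeout(resolve, 2_000)); // back off
  }
  throw new Error(`${executionArn} still RUNNING after ${timeoutMs} ms`);
}
```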
Test environments
The serverless + CDK setup makes it easy, fast, and cheap to adopt the best-practice multi-account strategy and to spin up and spin down full, non-prod deployments of your entire app to test against.
You can perform ad hoc testing in a day-to-day dev environment and cdk destroy at the end of the day. And/or build a CDK CI/CD pipeline that deploys to your prod environment on push to main if tests pass: [pull from github] -> [deploy stacks to TEST account] -> [seed test data] -> [run tests] -> [destroy TEST stacks] -> [deploy stacks to PROD account].
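A rough CDK Pipelines sketch of that shape, where AppStage, the repository, and the npm scripts are all placeholders (the destroy step is omitted for brevity):

```ts
import { pipelines } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { AppStage } from './app-stage'; // hypothetical Stage deploying your stacks

export function buildPipeline(scope: Construct): void {
  const pipeline = new pipelines.CodePipeline(scope, 'Pipeline', {
    synth: new pipelines.ShellStep('Synth', {
      input: pipelines.CodePipelineSource.gitHub('my-org/my-repo', 'main'),
      commands: ['npm ci', 'npx cdk synth'],
    }),
  });

  // TEST: deploy the app, then seed data and run tests as post steps.
  pipeline.addStage(new AppStage(scope, 'Test'), {
    post: [
      new pipelines.ShellStep('E2E', {
        commands: ['npm run seed', 'npm run test:e2e'],
      }),
    ],
  });

  // PROD: only reached if the TEST stage and its post steps succeed.
  pipeline.addStage(new AppStage(scope, 'Prod'));
}
```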

How to use AWS CLI to create a stack from scratch?

The problem
I'm approaching AWS, and my first test project will be a website, but I'm struggling with how to approach the resources and the tools to accomplish this.
AWS documentation is not really beginner-friendly; to me it is like being punched in the face at your first boxing session.
First attempt
I've installed both the AWS and SAM CLI tools. What I expected was to be able to create an empty stack at first and add resources one by one as the specifications are given/outlined. Instead, I need to give the tool a template to create the new stack, which means I need to know how to write it beforehand, and therefore the template specification for each resource type.
Second attempt
This led me to create the stack and the related resources from the online console to obtain the final stack template. But then, to test every new or updated resource locally, I have to copy the template from the online console to my machine and run the CLI tools against it, which is obviously not the desired development flow.
What I expected
Coming from standard/classical web development, I would expect to be able to create the project locally, test the related resources locally, version it, and delegate deployment to a pipeline.
So what?
All this made me understand that "probably" I'm missing something about how to use the AWS CLI tools and how development for an AWS-hosted application is meant to be done.
I'm not seeking a guide on specific resource types, like every tutorial I've found online, but something at a higher level about how to handle project development on AWS, best practices, and things like that. I can then dig deeper into any resource later when needed.
AWS's Cloud Development Kit ticks the boxes on your specific criteria.
Caveat: the CDK has a learning curve in line with its power and flexibility. There are much easier ways to deploy a web app on AWS, like the higher-level AWS Amplify framework, with abstractions tailored to front-end devs who want to minimise the mental energy spent on the underlying infrastructure.
Each of the squillion AWS and third-party deploy tools is great for somebody. Nevertheless, looking at your explicit requirements in "What I expected", we can get close to the CDK as an objective answer:
Coming from a standard/classical web development
So you know JS/Python. With the CDK, you code infrastructure as functions and classes, rather than 500 lines of YAML as with SAM. The CDK's reference implementation is in TypeScript; JS/Python are also supported. There are step-by-step AWS online workshops for these and the other supported languages.
create the project locally
Most of your work will be done locally in your language of choice, with a cdk deploy CLI command to bundle the deployment artefacts and send them up to the cloud.
test the related resources locally
The CDK has built-in testing and assertion support (sketch below).
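For example, a fine-grained assertion test with aws-cdk-lib/assertions might look like this; the stack class and expected properties are placeholders:

```ts
import { App } from 'aws-cdk-lib';
import { Template } from 'aws-cdk-lib/assertions';
import { MyWebsiteStack } from '../lib/my-website-stack'; // hypothetical stack

// Synthesize the stack to CloudFormation and assert on the result,
// entirely locally, with no AWS account involved.
const template = Template.fromStack(new MyWebsiteStack(new App(), 'TestStack'));

template.resourceCountIs('AWS::S3::Bucket', 1);
template.hasResourceProperties('AWS::Lambda::Function', {
  Runtime: 'nodejs18.x',
});
```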
version it
"Deterministic deploy" is a CDK design goal. Commit your code and the generated deployment artefacts so you have change control over your infrastructure.
delegate the deployment to the pipeline
The CDK has good pipeline support: i.e. a push to the remote main branch can kick off a deploy.
AWS SAM is actually a good option if you are just trying to get your feet wet with AWS. SAM is an open-source wrapper around the aws-cli, which allows you to create AWS resources like Lambda in, say, ~10 lines of code versus ~100 lines if you were to use the aws-cli directly. Yes, you'll need to learn SAM-specific things like the SAM template and the SAM CLI, but it is pretty straightforward using this doc.
Once you get the hang of it, it becomes easier to look under the hood at what SAM is doing and how, and to get into the weeds with the aws-cli if you want, which will then let you build custom solutions (using the aws-cli) for complex use cases that SAM may not support. Caveat: SAM is still pretty new and has open issues that could be blockers for advanced features/complex use cases.

How can I automate the end-to-end testing of my serverless web app?

So my app stack looks like this in prod:
Backend: AWS API Gateway + Lambda + DynamoDB + ElastiCache(redis)
Backend - algo: Long running process - dockerized Java app running on ECS (Fargate)
Frontend: Angular app, served from S3
I'd like to use https://www.cypress.io/ for end-to-end testing and I'd like to use https://circleci.com/ for my build server.
How do I go about creating an environment to allow the end-to-end tests to run?
Options:
1) Use Terraform to script the infrastructure and create/tear down a whole environment every time we run the end-to-end tests. This sounds like a huge overhead in terms of spin-up time. Also, the environment creation and setup being fully scripted sounds like a lot of work!
2) Create a dedicated, long lived environment that we deploy to incrementally. This sounds like it'll get messy - not ideal for a place to run tests.
3) Make it so we can run the environment locally. So perhaps use AWS's SAM or something like this project: https://github.com/gertjvr/serverless-plugin-simulate
That last option may also answer the question of local dev environment setup. However, everything that mocks serverless tech locally seems to be in beta, and I'm concerned that if I go down that road I might hit some issues after investing a lot of time....
"Also the environment creation and setup being fully scripted sounds like a lot of work" - it is. its also the correct thing to do. it allows you to not only version your code but the environments that the code runs in. automating your deployment is more than just your code. i'd recommend this.
You can use the Serverless Framework to encode your app as infrastructure as code and create tests:
https://serverless.com
https://serverless.com/framework/docs/providers/aws/guide/testing
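For flavor, a minimal serverless.yml for one HTTP-triggered function looks something like this (service and handler names are placeholders):

```yaml
# serverless.yml: minimal Serverless Framework service definition
service: my-app

provider:
  name: aws
  runtime: nodejs18.x

functions:
  hello:
    handler: handler.hello     # exported function in handler.js/ts
    events:
      - httpApi: 'GET /hello'  # creates an HTTP API route in API Gateway
```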
For my part, I split my testing strategy as follows:
Api:
- Unit test: (use your language's favorite framework)
- Integration test: It depends on your infra-as-code choice. If you use SAM or the Serverless Framework, you will be able to inject events directly into your function locally. If you want to add integration pieces like DynamoDB or S3 interaction, you should consider using LocalStack (https://github.com/localstack/localstack) to emulate those services (see the sketch after this list).
Front:
- For that part, I always mock API requests using stubs and only test the front-end part (I have already tested the API part previously). Then you will be able to use Cypress or another framework.
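As referenced in the integration-test item above, pointing the AWS SDK at LocalStack is mostly a matter of overriding the endpoint. A sketch with the v3 DynamoDB client, assuming LocalStack's default edge port and an invented table name:

```ts
import { DynamoDBClient, PutItemCommand } from '@aws-sdk/client-dynamodb';

// In integration tests, target LocalStack's default edge endpoint
// instead of real AWS; dummy credentials are accepted.
const dynamo = new DynamoDBClient({
  region: 'us-east-1',
  endpoint: 'http://localhost:4566',
  credentials: { accessKeyId: 'test', secretAccessKey: 'test' },
});

async function main(): Promise<void> {
  await dynamo.send(
    new PutItemCommand({
      TableName: 'rides', // hypothetical table created in test setup
      Item: { id: { S: 'ride-123' } },
    }),
  );
}

main();
```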
How about using Endly, an e2e and automation runner?
It allows you to build testing workflows that automate build, deployment, and data population, and to validate datastores (NoSQL: DynamoDB, Firebase; SQL: MySQL, BigQuery, PostgreSQL, etc.), logs (CloudWatch), and message buses (SNS, SQS, Cloud Pub/Sub), as well as trigger background processes or send HTTP requests.
You can find some Lambda and Cloud Function examples here:
Or some more production-grade projects with e2e tests:
storage mirror
data ingestion
data sync

How to write automated tests when using cloud APIs?

I'm adding to an open source project that uses in this case some Azure cloud functionality, but the same general problem is applicable for any cloud API. I want to write tests for my code, but the results of the test are reliant on something having happened in the cloud service I'm using, and in order for this to happen, I need to supply credentials to the cloud service. In a private project, I could certainly just add my cloud credentials to the testing environment, but for public/open source projects, I can't do this. I can test locally easily enough, but this project uses CI (as do many OSS projects), so this can't really be done.
One approach seems to be using mocks or something similar, but that doesn't actually test that things are happening as they should, and strikes me as a mostly pointless way to achieve 100% coverage.
Are there any 'virtual test cloud' environments that can be spun up to create an identical interface to the cloud service in question, but only for testing? How do these deal with side effects (the code in question creates a DNS entry, and ideally would test for the actual existence of a DNS entry using the system's resolver rather than another cloud call)?
How do people do this kind of testing?
I start with a spike solution to learn how to pass the required credentials. With this knowledge, I can TDD an acceptance test to call a simple API and get a "success" result.
I exclude the credentials from my repository. Instead, I include a template file with instructions.
From there, I drop down to unit tests to TDD sending requests and receiving responses. I don't test actual communication with any service. Instead:
Test the contents of requests.
Create responses and test how they're handled. This makes it really easy to test all sorts of error conditions.
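As a sketch of the "test the contents of requests" idea, assuming a hypothetical client module that builds requests without sending them (every name here is invented for illustration):

```ts
import { strict as assert } from 'assert';
// Hypothetical: a pure function that builds the HTTP request our cloud
// client would send, with no network I/O involved.
import { buildCreateDnsRecordRequest } from '../src/dnsClient';

const req = buildCreateDnsRecordRequest({
  zone: 'example.com',
  name: 'api.example.com',
  type: 'A',
  value: '203.0.113.10',
});

// Assert on the request itself: method, path, and payload.
assert.equal(req.method, 'PUT');
assert.equal(req.path, '/zones/example.com/records');
assert.deepEqual(JSON.parse(req.body), {
  name: 'api.example.com',
  type: 'A',
  value: '203.0.113.10',
});
```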
Once I've TDD'd credentials, requests, and responses, I use what I call a spike test to confirm that everything is in fact working. Basically, this is non-automated confirmation using anything I can quickly hack together.