I have a REST API in API Gateway with Lambda proxy integration. The Lambda invokes a Step Function workflow asynchronously and returns an ID in the payload. These AWS resources are deployed and managed by AWS CDK.
My question is: is there a proper way to perform integration tests? There are two approaches I have in mind:
Call the REST API endpoint, and make assertions on side effects. But since the workflow is executed asynchronously, the test needs to continuously poll until side effects become visible.
According to this blog https://www.10printiamcool.com/step-function-integration-testing-with-cdk, it seems we can use CDK to deploy a test stack with the dependent resources (e.g. Lambda) mocked. But this sounds more like a unit test.
I am not sure if there are any better options. Any thoughts?
I understand you want integration tests on your Step Function in the context of a serverless CDK app.
Your pass criteria for the Step Function include certain async backend side-effects in addition to a 200 API response.
Given that context, here are some ideas on two related topics:
How to engineer the Step Function tests
How about testing your Step Function's integration ... with another Step Function? A TestSfn would map through test cases, in turn calling the API with various inputs in one Task and checking for expected side-effects in another Task.
After all, Step Functions are really good at orchestrating step-wise, async workflows in parallel, which is what your use case demands. The tests pass if TestSfn succeeds. The execution history console and logs give great visibility to diagnose test failures.
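A minimal CDK (TypeScript) sketch of what TestSfn could look like. The callApiFn and checkSideEffectsFn Lambdas are assumptions - one would hit the API under test, the other polls and asserts on the side-effects, throwing on mismatch so its branch (and the execution) fails:

import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';

// One Task calls the API under test; the next polls/asserts on side-effects.
const callApi = new tasks.LambdaInvoke(this, 'CallApiUnderTest', {
  lambdaFunction: callApiFn,          // hypothetical Lambda that calls the API
  outputPath: '$.Payload',
});
const checkSideEffects = new tasks.LambdaInvoke(this, 'CheckSideEffects', {
  lambdaFunction: checkSideEffectsFn, // hypothetical Lambda that asserts on side-effects
  outputPath: '$.Payload',
});

// Map over the test cases in parallel; the whole execution fails if any case fails.
const map = new sfn.Map(this, 'MapTestCases', {
  itemsPath: '$.testCases',
  maxConcurrency: 5,
});
map.iterator(callApi.next(checkSideEffects));

new sfn.StateMachine(this, 'TestSfn', { definition: map });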
Test environments
The serverless + CDK setup makes it easy, fast and cheap to adopt the best-practice multi-account strategy and spin up and spin down full, non-prod deployments of your entire app to test on.
You can perform ad hoc testing in a day-to-day dev environment and cdk destroy at the end of the day. And/or build a CDK CI/CD pipeline that deploys to your prod environment on push to main if the tests pass: [pull from github] -> [deploy stacks to TEST account] -> [seed test data] -> [run tests] -> [destroy TEST stacks] -> [deploy stacks to PROD account].
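For instance, with CDK Pipelines that flow could be sketched roughly as below. AppStage, the repo name and the npm scripts are assumptions, and destroying the TEST stacks isn't built in - you'd script that step yourself:

import * as pipelines from 'aws-cdk-lib/pipelines';

const pipeline = new pipelines.CodePipeline(this, 'Pipeline', {
  synth: new pipelines.ShellStep('Synth', {
    input: pipelines.CodePipelineSource.gitHub('my-org/my-repo', 'main'), // assumed repo
    commands: ['npm ci', 'npm run build', 'npx cdk synth'],
  }),
});

// Deploy to the TEST account, then seed data and run the tests as post-steps.
pipeline.addStage(new AppStage(this, 'Test', { env: testAccountEnv }), {
  post: [new pipelines.ShellStep('SeedAndTest', {
    commands: ['npm run seed-test-data', 'npm run integ-test'],
  })],
});

// Only reached when the test stage and its post-steps succeed.
pipeline.addStage(new AppStage(this, 'Prod', { env: prodAccountEnv }));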
edit: Turns out the solution is in the docs. I had the bog-standard 'sam' installed, but I needed what they call the 'public preview version', AKA 'sam-beta-cdk'. With this installed the API can be started locally with sam-beta-cdk start-api and works well. While I appreciate the answers which suggest that development should be done using purely TDD, I feel there is also value in this more interactive, manual mode, as it permits quicker exploration of the problem space.
I'm trying to build my first app with CDK + Typescript using API Gateway, Lambdas and DynamoDB. I have built a couple of Lambdas and deployed them, and they work fine live on the web. However, I don't want a minute-long deploy cycle and various associated AWS costs as part of my workflow. What I want is to be able to test my API locally.
I have struggled to find docs on how to do this. Amazon seem to recommend using the SAM CLI here so that is what I've been trying.
The docs claim running sam local xyz runs cdk synth to make a "cloud assembly" in ./aws-sam/build, but I see no evidence of this. Instead what I get is a complaint that sam could not find a 'template.yml'. So I manually run cdk synth > template.yml, which creates one in the root folder. Then I run sam local start-api and it seems happy to start up.
Then I try to hit my test lambda using curl: curl 'http://127.0.0.1:3000/test'. I get {"message":"Internal server error"} and a huge ugly stack trace in the console that is running sam local start-api.
The lambda is this...
exports.handler = async function() {
console.log("WooHoo! Test handler ran")
return {statusCode: 200, headers: {"Content-Type": "application/json"}, body: "Test handler ran!"}
}
Start of the huge ugly stack trace...
Mounting /home/user/code/image-cache/asset.beeaa749e012b5921018077f0a5e4fc3ab271ef1c191bd12a82aa9a92148782e as /var/task:ro,delegated inside runtime container
START RequestId: 99f53642-b294-4ce5-a1b4-8c967db80ce1 Version: $LATEST
2021-09-15T12:33:37.086Z undefined ERROR Uncaught Exception {"errorType":"Runtime.ImportModuleError","errorMessage":"Error: Cannot find module 'test'\nRequire stack:\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js","stack":["Runtime.ImportModuleError: Error: Cannot find module 'test'","Require stack:","- /var/runtime/UserFunction.js","- /var/runtime/index.js"," at _loadUserApp (/var/runtime/UserFunction.js:100:13)"," at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)",
The end of the huge ugly stack trace...
Invalid lambda response received: Lambda response must be valid json
So it would seem sam local start-api can't find test and throws an error, which means the API gateway doesn't get a valid 'lambda response'. So far this has not helped me chase down the problem :/ It certainly seems aware that test is a route, as trying to hit other endpoints gives the classic {"message":"Missing Authentication Token"}, but it chokes hard trying to fulfill it despite me having both functions/test.ts and the compiled functions/test.js present.
I have the test route and handler defined in my CDK stack definition like so...
const testLambda = new lambda.Function(this, "testLambdaHandler", {
runtime: lambda.Runtime.NODEJS_14_X,
code: lambda.Code.fromAsset("functions"),
handler: "test.handler"
})
api.root
.resourceForPath("test")
.addMethod("GET", new apigateway.LambdaIntegration(testLambda))
I considered posting my template.yml but that is even longer than the big ugly error message so I haven't.
So I have three questions (well actually a million but I don't want to be too cheeky!)
Is this actually the canonical way of locally testing apps made with CDK?
If so, where am I going wrong?
If not, what is the better/proper way?
Lambda handlers are just functions. They do not need any special environment to function - they are called at a specific point in the Lambda invocation process and provided an event (a JSON object) and a context (another JSON object).
You can (and should!) unit test them just like any other individual function in your language/testing framework.
As @Lucasz mentioned, you should rely on the fact that, if set up properly, API Gateway and Lambda will interact the same way every time. Once you have run one end-to-end test and you know that the basics work, any further implementation can be done through unit testing.
There are numerous libraries for mocking AWS service calls in unit testing, and there are plenty of good-practice workarounds for the services that are more difficult to mock (i.e. it's difficult to mock a Lambda call from inside another Lambda - but if you wrap that Lambda call in its own function, you can mock the function itself to return whatever you want it to - and this is good practice for testing as well!).
Using Jest, in a coded unit test, you can call the Lambda handler, give it a stubbed or mocked event JSON as well as a context JSON (probably just blank as you're not using it), and the Lambda handler will act just like any other function with two parameters you've ever written, including returning what you want it to return.
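A minimal Jest sketch of both points - calling the handler directly with a stubbed event, and mocking a wrapped Lambda-invoke helper. The module paths, helper name and payload shapes are assumptions, not from the question:

// handler.test.ts - sketch only; adjust paths and types to your project
import { handler } from '../functions/my-handler';           // hypothetical handler module
import * as invoker from '../functions/invoke-other-lambda'; // hypothetical wrapper module

// Mock the wrapper so no real Lambda is invoked during the test.
jest.spyOn(invoker, 'invokeOtherLambda').mockResolvedValue({ ok: true });

test('returns 200 for a valid request', async () => {
  const event = { pathParameters: { id: '123' } } as any; // stubbed API Gateway event
  const context = {} as any;                              // blank - the handler ignores it
  const result = await handler(event, context);
  expect(result.statusCode).toBe(200);
});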
You must be doing something wrong with your file directory. Where is your index.js located? When you generate the template, is the directory correct?
Also, in what directory do you execute the sam local command?
The thing with testing your serverless application is that you don't have to test your full application. You can count on AWS that API Gateway, DynamoDB and Lambda are working as intended.
The only thing you need to test is the logic you implemented.
Here you make sure your function prints out something and returns a 200. That's all you have to do.
Look into Jest for testing JS.
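For the handler in the question, that test is tiny - a sketch, assuming the compiled functions/test.js is importable from your test directory:

// sketch; the import path is an assumption
import { handler } from '../functions/test';

test('test handler returns a 200', async () => {
  const result = await handler();
  expect(result.statusCode).toBe(200);
  expect(result.body).toBe('Test handler ran!');
});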
If you want to test CDK, you should look into https://docs.aws.amazon.com/cdk/latest/guide/testing.html
Also "running Aws locally" is not good practice. it's never the same as how it's running in real life aka the cloud. You use plugins for this, tools for that... Local is not the same as in the cloud.
If you have any more questions, feel free to ask.
I'm starting to teach myself serverless development, using AWS Lambda and the Serverless CLI. So far, all is going great. However, I've got a snag with acceptance testing.
What I'm doing so far is:
Deploy stack to AWS with a generated stage name - I'm using the CI job ID for this
Run all the tests against this deployment
Remove the deployment
Deploy the stack to AWS with the "Dev" stage name
This is fine, until I need some data.
Testing endpoints without data is easy - that's the default state. So I can test that GET /users/badid returns a 404.
What's the typical way of setting up test data for the tests?
In my normal development I do this by running a full stack - UI, services, databases - in a local docker-compose stack, and the tests can talk to them directly. Is that the process to follow here - have the tests talk directly to the varied AWS data stores? If so, how do you handle multiple (DynamoDB) tables across different CF stacks, e.g. for testing the UI?
If that's not the normal way to do it, what is?
Also, is there a standard way to clear out data between tests? I can't safely test a search endpoint if the data isn't constant for that test, for example. (If data isn't cleared out then the data in the system will be dependent on the order the tests run in, which is bad)
Cheers
Since this is about acceptance tests, those should be designed to care less about the architecture (the tech side) and more about the business value. After all, such tests are supposed to be black box. Speaking from experience with both SLS and mSOA, the test setup and challenges are quite similar.
What's the typical way of setting up test data for the tests?
There are many ways/patterns to do the job here, depending on your context. The ones that most worked for me are:
Database Sandbox to provide a separate test database for each test run.
Table Truncation Teardown which truncates the tables modified during the test to tear down the fixture.
Fixture Setup Patterns will help you build your prerequisites depending on test-run needs
You can look at Fixture Teardown Patterns for a
standard way to clear out data between tests
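As a sketch of Table Truncation Teardown against DynamoDB (the table name and key attribute are parameters you'd supply; fine for small test tables, too slow for big ones - prefer a per-run sandbox table in that case):

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, ScanCommand, BatchWriteCommand } from '@aws-sdk/lib-dynamodb';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Delete every item in the table between tests so test data stays order-independent.
export async function truncateTable(tableName: string, keyAttr: string): Promise<void> {
  const { Items = [] } = await ddb.send(new ScanCommand({ TableName: tableName }));
  for (let i = 0; i < Items.length; i += 25) { // BatchWrite accepts at most 25 requests per call
    await ddb.send(new BatchWriteCommand({
      RequestItems: {
        [tableName]: Items.slice(i, i + 25).map((item) => ({
          DeleteRequest: { Key: { [keyAttr]: item[keyAttr] } },
        })),
      },
    }));
  }
}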
Maybe you don't need to
Have the tests talk directly to the varied AWS data stores
as you might create an unrealistic state, if you can instead just hit the APIs/endpoints to do the job for you. For example, instead of managing PutItem calls against multiple DynamoDB tables, simply hit the register-new-user API. More info on the Back door manipulation layer here.
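In other words, seed through the front door - something like this in your test setup, where the endpoint and payload are assumptions:

// Seed a user via the API so the data is created exactly as production would create it.
async function seedUser(apiBaseUrl: string): Promise<void> {
  const res = await fetch(`${apiBaseUrl}/users/register`, { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'test-user', email: 'test@example.com' }),
  });
  if (!res.ok) throw new Error(`Seeding failed: ${res.status}`);
}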
I have many AWS Lambdas using Java 8. We use Blue/Green deployment for all Lambdas, each of which has Smoke/Live aliases. We use Jenkins to deploy the Lambdas with the steps below:
Check out: check out the Lambda source from git.
Build & unit test with JUnit.
Code coverage with JaCoCo.
Deploy it using the Smoke alias.
Now we want to perform a smoke test for the Lambda against the Smoke alias.
If the smoke test cases pass, we will promote the Smoke alias to the Live alias.
For step 5 (the smoke test), could you please advise whether there are approaches to perform a smoke test for a Lambda?
I would think we need to actually execute the Lambda itself (not JUnit), but if so, the actual business rules run, and they can write output to targets such as DynamoDB and S3...
So please share best practices you have from your real projects. Thanks.
I'm wondering whether I should add a special param that gets passed through by smoke tests, with the Lambda itself containing logic to deal with that param.
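That flag idea could look like the following handler sketch (in TypeScript for brevity, though the Lambdas in question are Java; the flag name and short-circuit behaviour are assumptions):

export const handler = async (event: { smokeTest?: boolean }) => {
  if (event.smokeTest) {
    // Exercise the code path but skip real side-effects
    // (or redirect writes to test-only targets).
    return { statusCode: 200, body: 'smoke ok' };
  }
  // ...normal business logic with real side-effects...
  return { statusCode: 200, body: 'done' };
};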
I have struggled with this concept as well.
Assuming you are externalizing your configurations (e.g. DynamoDB tables, S3 locations, etc.) via something like environment variables or SSM Parameters: ideally you would have your "smoke" or staging versions of the Lambda point to smoke-test (i.e. non-production) resources.
One problem with using aliases is you cannot have different environment variables for different aliases.
With that in mind, the typical approach for smoke/integration testing Lambdas is to abandon aliases and deploy the staging resources as different/separate functions from your production resources.
This can be done more easily if you have a SAM/CloudFormation template that can deploy your Lambdas and their dependencies, so you can easily set up development, smoke-test and production stacks. You will want to create a parameter for a prefix/suffix that you can give the resources to differentiate the different deployments.
When you are satisfied with your smoke testing results, you simply deploy the tested version of the lambda code to your production lambdas.
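Since other parts of this page use CDK, here is the same prefix/suffix idea sketched in CDK TypeScript rather than raw SAM/CloudFormation; the prop shape, resource names and table are assumptions:

import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';

interface ApiStackProps extends cdk.StackProps {
  stage: string; // e.g. 'dev', 'smoketest', 'prod'
}

class ApiStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props: ApiStackProps) {
    super(scope, id, props);
    new lambda.Function(this, 'Handler', {
      functionName: `my-service-handler-${props.stage}`, // suffix differentiates deployments
      runtime: lambda.Runtime.NODEJS_14_X,
      code: lambda.Code.fromAsset('functions'),
      handler: 'index.handler',
      environment: { TABLE_NAME: `my-table-${props.stage}` }, // stage-specific dependencies
    });
  }
}

// Separate smoke-test and production deployments of the same code:
const app = new cdk.App();
new ApiStack(app, 'ApiSmokeTest', { stage: 'smoketest' });
new ApiStack(app, 'ApiProd', { stage: 'prod' });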
I am using AWS API Gateway with Lambda/ECS for compute and Cognito for users. But I find it really hard to test such applications. With AWS SAM Local I may be able to test simple Lambda and API Gateway functionality, but if I use things like API Gateway authorizers, I find it hard to test end to end.
Looks like to test such applications, I need an entirely new setup just for testing? I mean like a separate API Gateway with Lambda/ECS cluster/Cognito user pool just to enable testing? This seems very slow, and I think I will not be able to get things like a code coverage report anymore?
Disclaimer: I'm fairly new to AWS Lambda/ECS/Cognito so take this with a grain of salt.
Unit Tests: SAM Local or some other local Docker hosting with a unit-testing library (Mocha) would be good for this because:
Speed. All your tests should execute quickly against a Lambda function.
Example: wildrydes with Mocha
Integration Tests: Once you stage your changes, there are a bunch of options for calling the API. I'd start off with Postman to run the API tests, and you can chain them together or run them from the command line if needed.
End-to-End (E2E) tests: If the API is your front end, then there might not be any difference between E2E and API tests. UI, voice and chat front ends differ significantly, as do the options, so I'll suggest some:
UI: Selenium (has the most support and options available to you, including Docker images: Selenium Hub or standalone)
Voice: Suggestions?
Text: Suggestions?
Step Functions:
allows you to visualize each step
retries when there are errors
allows you to diagnose and debug problems
X-Ray: collects data about requests that your app serves, and provides tools you can use to view, filter, and gain insights into that data.
As for code coverage, I'm not sure how you currently generate it. Something like this npm run coverage, maybe?
I am assuming you are using CloudFormation for deploying such an extensive stack, and the following answer is based on that assumption.
Thus, in addition to @lloyd's answer, I would like to add that you can add custom resources within your CloudFormation template for testing each individual Lambda or even API endpoints.
Also, for Lambda, you can use Deployment Preferences Hooks to test your serverless Lambdas before and after moving your Lambda to the new version.
https://github.com/awslabs/serverless-application-model/blob/release/v1.8.0/docs/safe_lambda_deployments.rst
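In CDK, those deployment-preference hooks map onto a CodeDeploy Lambda deployment group - a sketch, where the alias name, deployment config and preTrafficTestFn Lambda are assumptions:

import * as codedeploy from 'aws-cdk-lib/aws-codedeploy';
import * as lambda from 'aws-cdk-lib/aws-lambda';

// 'fn' is the Lambda being deployed; 'preTrafficTestFn' runs your checks
// against the new version before any traffic shifts to it.
const alias = new lambda.Alias(this, 'LiveAlias', {
  aliasName: 'live',
  version: fn.currentVersion,
});

new codedeploy.LambdaDeploymentGroup(this, 'CanaryDeploy', {
  alias,
  deploymentConfig: codedeploy.LambdaDeploymentConfig.CANARY_10PERCENT_5MINUTES,
  preHook: preTrafficTestFn, // signals success/failure to CodeDeploy; failure rolls back
});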
So my app stack looks like this in prod:
Backend: AWS API Gateway + Lambda + DynamoDB + ElastiCache (Redis)
Backend - algo: long-running process - dockerized Java app running on ECS (Fargate)
Frontend: Angular app, served from S3
I'd like to use https://www.cypress.io/ for end-to-end testing and I'd like to use https://circleci.com/ for my build server.
How do I go about creating an environment to allow the end-to-end tests to run?
Options:
1) Use Terraform to script the infrastructure and create/tear down a whole environment every time we run the end-to-end tests. This sounds like a huge overhead in terms of spin-up time. Also the environment creation and setup being fully scripted sounds like a lot of work!
2) Create a dedicated, long lived environment that we deploy to incrementally. This sounds like it'll get messy - not ideal for a place to run tests.
3) Make it so we can run the environment locally. So perhaps use AWS's SAM or something like this project https://github.com/gertjvr/serverless-plugin-simulate
That last option may also answer the question of the local dev environment setup; however, everything that mocks serverless tech locally seems to be in beta, and I'm concerned that if I go down that road I might hit some issues after investing a lot of time....
"Also the environment creation and setup being fully scripted sounds like a lot of work" - it is. its also the correct thing to do. it allows you to not only version your code but the environments that the code runs in. automating your deployment is more than just your code. i'd recommend this.
You can use the Serverless Framework to encode your app as Infrastructure as Code and create tests:
https://serverless.com
https://serverless.com/framework/docs/providers/aws/guide/testing
On my side, I split my testing strategy as below:
API:
- Unit test: (use your language's favorite framework)
- Integration test: It depends on your infra-as-code choice; if you use SAM or the Serverless Framework, you will be able to inject events directly into your function locally. If you want to add integration parts like DynamoDB or S3 interaction, you should consider using LocalStack (https://github.com/localstack/localstack) to emulate those services - see the sketch after this list.
Front:
- For that part, I always mock API requests using stubs and only test the front-end part (I have already tested the API part previously). And then you will be able to use Cypress or another framework.
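Sketch of the LocalStack part mentioned above: an integration test pointing the AWS SDK at LocalStack's edge endpoint instead of real AWS. The endpoint, table name and dummy credentials are assumptions (LocalStack must be running and the table created beforehand):

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand, GetCommand } from '@aws-sdk/lib-dynamodb';

const client = DynamoDBDocumentClient.from(new DynamoDBClient({
  endpoint: 'http://localhost:4566', // LocalStack's default edge endpoint
  region: 'us-east-1',
  credentials: { accessKeyId: 'test', secretAccessKey: 'test' }, // dummy creds
}));

test('round-trips an item through the emulated table', async () => {
  await client.send(new PutCommand({ TableName: 'Users', Item: { id: 'u1' } }));
  const { Item } = await client.send(new GetCommand({ TableName: 'Users', Key: { id: 'u1' } }));
  expect(Item).toEqual({ id: 'u1' });
});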
How about using the endly e2e and automation runner?
It allows you to build a testing workflow to automate build, deployment, data population and validation (NoSQL: DynamoDB, Firebase; or SQL: MySQL, BigQuery, PostgreSQL, etc.), logs (CloudWatch), message buses (SNS, SQS, Cloud Pub/Sub), triggering background processes or sending HTTP requests.
You can find some lambda and cloud function examples here.
Or some more production projects with e2e:
storage mirror
data ingestion
data sync