I have many AWS Lambda functions written in Java 8. We use Blue/Green deployment for every Lambda, each of which has Smoke/Live aliases. We use Jenkins to deploy the Lambdas with the steps below:
1. Check out: check out the Lambda source from Git.
2. Build and unit test with JUnit.
3. Code coverage with JaCoCo.
4. Deploy using the Smoke alias.
5. Run a smoke test for the Lambda against the Smoke alias.
6. If the smoke tests pass, promote the Smoke alias to the Live alias.
For step 5, could you please advise whether there are good approaches to perform a "smoke test" for a Lambda?
I would think we need to actually execute the Lambda itself (not JUnit), but then the actual business rules run and can write output to targets such as DynamoDB and S3 ...
Please share best practices from your real projects. Thanks.
I'm wondering whether I should add a special parameter that the smoke tests pass in, with logic inside the Lambda to handle that parameter.
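For context, the promotion step I have in mind would look roughly like the sketch below (written with the AWS SDK for JavaScript for brevity; our Jenkins job would do the equivalent with the Java SDK or the CLI). The function name, payload shape, and the smokeTest marker field are placeholders, not real values:

```typescript
import {
  LambdaClient,
  InvokeCommand,
  GetAliasCommand,
  UpdateAliasCommand,
} from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({ region: "us-east-1" });
const FUNCTION_NAME = "order-processor"; // hypothetical function name

async function smokeTestAndPromote(): Promise<void> {
  // 1. Invoke the function through the Smoke alias with a marker flag that
  //    the handler could use to avoid writing to real downstream targets.
  const invoke = await lambda.send(
    new InvokeCommand({
      FunctionName: FUNCTION_NAME,
      Qualifier: "Smoke",
      Payload: Buffer.from(JSON.stringify({ smokeTest: true, orderId: "TEST-1" })),
    })
  );
  if (invoke.FunctionError) {
    throw new Error(`Smoke test failed: ${invoke.FunctionError}`);
  }

  // 2. Find which published version the Smoke alias currently points to.
  const smokeAlias = await lambda.send(
    new GetAliasCommand({ FunctionName: FUNCTION_NAME, Name: "Smoke" })
  );

  // 3. Promote: repoint the Live alias to that version.
  await lambda.send(
    new UpdateAliasCommand({
      FunctionName: FUNCTION_NAME,
      Name: "Live",
      FunctionVersion: smokeAlias.FunctionVersion,
    })
  );
}

smokeTestAndPromote().catch((err) => {
  console.error(err);
  process.exit(1);
});
```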
I have struggled with this concept as well.
Assuming you are externalizing your configuration (e.g. DynamoDB tables, S3 locations, etc.) via something like environment variables or SSM Parameters, ideally you would have your "smoke" or staging versions of the Lambda point to smoke-test (i.e. non-production) resources.
One problem with using aliases is that you cannot have different environment variables for different aliases.
With that in mind, the typical approach for smoke/integration testing Lambdas is to abandon aliases and deploy the staging resources as separate functions from your production resources.
This is easier if you have a SAM/CloudFormation template that deploys your Lambdas and their dependencies, so you can easily set up development, smoke-test, and production stacks. You will want to create a parameter for a prefix/suffix that you apply to resource names to differentiate the deployments.
When you are satisfied with your smoke-testing results, you simply deploy the tested version of the Lambda code to your production Lambdas.
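For illustration, here is the same prefix idea expressed with CDK in TypeScript rather than a raw SAM template (the resource names, the stagePrefix prop, and the handler path are assumptions, not taken from a real project):

```typescript
import { App, Stack, StackProps } from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";
import { Construct } from "constructs";

interface OrderStackProps extends StackProps {
  stagePrefix: string; // e.g. "dev", "smoketest", "prod" (assumption)
}

class OrderStack extends Stack {
  constructor(scope: Construct, id: string, props: OrderStackProps) {
    super(scope, id, props);

    // Each deployment gets its own table, so smoke tests never touch prod data.
    const table = new dynamodb.Table(this, "Orders", {
      tableName: `${props.stagePrefix}-orders`,
      partitionKey: { name: "orderId", type: dynamodb.AttributeType.STRING },
    });

    // The function reads its table name from an environment variable,
    // so the same code runs unchanged in every stage.
    const fn = new lambda.Function(this, "OrderHandler", {
      functionName: `${props.stagePrefix}-order-handler`,
      runtime: lambda.Runtime.JAVA_8_CORRETTO,
      handler: "com.example.OrderHandler::handleRequest", // hypothetical handler
      code: lambda.Code.fromAsset("build/distributions/order-handler.zip"),
      environment: { ORDERS_TABLE: table.tableName },
    });
    table.grantReadWriteData(fn);
  }
}

const app = new App();
new OrderStack(app, "OrdersSmokeTest", { stagePrefix: "smoketest" });
new OrderStack(app, "OrdersProd", { stagePrefix: "prod" });
```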
Related
I am looking for some guidance on how to run tests for a SAM deployment using CodePipeline.
The scenario is a fairly large deployment of Lambda functions that need to run in a VPC to connect to a database and some other resources.
The deployment works, and the functions seem OK when I test them manually using test events in the Lambda console.
But I want to add a testing stage in CodeBuild where I can assert that the functions are working by running some tests (it's a Node.js project, so Mocha/Chai would be good) and continue the build/deploy if they pass.
I am stuck on how to do this.
Am I best off running sam local in CodeBuild and then running npm test against the sam local Lambdas? I just can't seem to find a good doc or example of the best approach.
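For reference, this is roughly the shape I have in mind if the sam local route works: start `sam local start-lambda` in the background in CodeBuild and point the SDK at its local endpoint from a Mocha/Chai test. The function name, payload, and test layout are assumptions; port 3001 is the sam local default:

```typescript
// test/integration/hello.test.ts (hypothetical layout)
import { expect } from "chai";
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

// `sam local start-lambda` listens on port 3001 by default; the dummy
// credentials are never validated by the local emulator.
const lambda = new LambdaClient({
  endpoint: "http://127.0.0.1:3001",
  region: "us-east-1",
  credentials: { accessKeyId: "local", secretAccessKey: "local" },
});

describe("HelloWorldFunction (via sam local start-lambda)", function () {
  this.timeout(30000); // the first invoke pays a container start-up cost

  it("returns a 200 response", async () => {
    const result = await lambda.send(
      new InvokeCommand({
        FunctionName: "HelloWorldFunction", // logical ID from template.yaml (assumption)
        Payload: Buffer.from(JSON.stringify({ name: "smoke" })),
      })
    );
    const body = JSON.parse(new TextDecoder().decode(result.Payload));
    expect(result.FunctionError).to.be.undefined;
    expect(body.statusCode).to.equal(200);
  });
});
```

Functions that depend on the VPC-only database would presumably still need either a reachable test database or a stubbed data layer for this to work.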
We have a CodePipeline that runs on every GitHub commit/merge to the main branch, building the application and releasing it to a staging environment where we can manually test the application. Every now and then (ad hoc, weekly, etc., depending on the project) we release to production manually. To implement this I added a ManualApprovalStep to my CodePipeline between staging and production, but that means my pipeline is never green. It's always stuck in blue.
This makes me think that I'm using the wrong tool here.
My mental model is coming from Heroku (ignore the review apps, I'm not tackling that challenge yet).
In Heroku there's a Tests tab that's green if the tests pass, and the pipeline is green if it gets deployed to staging. Lack of promotion to production is not a non-green state in Heroku, but it would be with the ManualApprovalStep.
Is there another tool that AWS gives me to model this way of working that I'm missing?
Update: another big difference. The ManualApprovalStep seems to queue up each change and release them one by one, rather than releasing whatever was last deployed to staging, so it's clearly not analogous to Heroku's release to production.
You are right that the ManualApprovalStep is not a natural "promotion" mechanism. It is for yes/no approvals and results in an execution failure if rejected or after 7 days. Disabled Stage Transitions also sit awkwardly with your use case.
pipelines.CodePipeline executions are (a) triggered on a change to a source and (b) meant to run all stages from start to finish. Executions are hard to interrupt. A consequence of the requirement to deploy environments independently is that environments are best modelled as independent pipelines, not stages within a single pipeline.
Simple Option: 2 GitHub branches, 2 Pipelines
Clone your pipeline setup. A staging pipeline is tied to a staging branch source. A prod pipeline is triggered on changes to the main branch. This setup is easy to reason about and has the advantage that deploys always match your source. But it does not replicate the Heroku "promotion" concept.
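A minimal sketch of the two-pipeline idea with CDK Pipelines (the repo name, branch names, and the placeholder application stage are assumptions):

```typescript
import { App, Stack, Stage, StageProps } from "aws-cdk-lib";
import * as pipelines from "aws-cdk-lib/pipelines";
import { Construct } from "constructs";

// Placeholder stage: in a real app this would instantiate your application stacks.
class AppStage extends Stage {
  constructor(scope: Construct, id: string, props?: StageProps) {
    super(scope, id, props);
    new Stack(this, "AppStack");
  }
}

// One pipeline per environment, each watching its own branch.
function addPipeline(app: App, id: string, branch: string): void {
  const stack = new Stack(app, id);
  const pipeline = new pipelines.CodePipeline(stack, "Pipeline", {
    synth: new pipelines.ShellStep("Synth", {
      input: pipelines.CodePipelineSource.gitHub("my-org/my-repo", branch), // assumption
      commands: ["npm ci", "npm run build", "npx cdk synth"],
    }),
  });
  pipeline.addStage(new AppStage(stack, `${id}App`));
}

const app = new App();
addPipeline(app, "Staging", "staging"); // deploys on every push to staging
addPipeline(app, "Prod", "main");       // deploys on every push to main
```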
Complex Option: 1 GitHub branch, 2 Pipelines?
You could probably get something closer to the "promotion" pattern by having a pipelines.CodePipeline deployment for staging (tied to GitHub) and a separate codepipeline.Pipeline for prod. The latter can be triggered by EventBridge events. Asset handling would be complex in this scenario.
[Edit:] Amplify CI/CD for the Front-end, CodePipeline for Back-end
AWS Amplify CI/CD gives you automatic feature branch deploys, PR review approvals etc. for front-end apps. Manual deploys require a workaround, but are possible. See this related SO question. The CDK supports Amplify build configurations. The catch is that these CI/CD goodies work for front-end apps, but not for arbitrary infrastructure stacks. To get the best of both worlds, split the app in two. Use Amplify for the high-velocity front-end and stick with CodePipeline for the back-end deploys.
I have a REST API in API Gateway with lambda proxy integration. The Lambda will invoke a Step Function workflow asynchronously and will return an ID in the payload. These AWS resources are deployed and managed by AWS CDK.
My question is, is there a proper way to perform integration test? There are two approaches I have in mind:
Call the REST API endpoint, and make assertions on side effects. But since the workflow is executed asynchronously, the test needs to continuously poll until side effects become visible.
According to this blog, https://www.10printiamcool.com/step-function-integration-testing-with-cdk, it seems we can use CDK to deploy a test stack that mocks the dependent resources (e.g. Lambda). But this sounds more like a unit test.
I am not sure if there are any better options. Any thoughts?
I understand you want integration tests on your Step Function in the context of a serverless CDK app.
Your pass criteria for the Step Function include certain async backend side-effects in addition to a 200 API response.
Given that context, here are some ideas on two related topics:
How to engineer the Step Function tests
How about testing your Step Function's integration ... with another Step Function? A TestSfn would map through test cases, in turn calling the API with various inputs in one Task and checking for expected side effects in another Task.
After all, Step Functions are really good at orchestrating step-wise, async workflows in parallel, which is what your use case demands. The tests pass if TestSfn succeeds. The execution history console and logs give great visibility to diagnose test failures.
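A rough shape of that TestSfn in CDK, for illustration only: the two Lambda functions (callApiFn calls the API under test, checkSideEffectsFn queries DynamoDB/S3 and throws if the expected side effects are missing), the wait time, the error name, and the input format are all assumptions:

```typescript
import { Duration } from "aws-cdk-lib";
import * as sfn from "aws-cdk-lib/aws-stepfunctions";
import * as tasks from "aws-cdk-lib/aws-stepfunctions-tasks";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

export function buildTestSfn(
  scope: Construct,
  callApiFn: lambda.IFunction,
  checkSideEffectsFn: lambda.IFunction
): sfn.StateMachine {
  // Call the API with one test case and pass its response onward.
  const callApi = new tasks.LambdaInvoke(scope, "CallApi", {
    lambdaFunction: callApiFn,
    outputPath: "$.Payload",
  });

  // Give the async workflow some time to produce its side effects.
  const wait = new sfn.Wait(scope, "WaitForAsyncWork", {
    time: sfn.WaitTime.duration(Duration.seconds(30)),
  });

  const checkSideEffects = new tasks.LambdaInvoke(scope, "CheckSideEffects", {
    lambdaFunction: checkSideEffectsFn,
    outputPath: "$.Payload",
  });
  // Poll for eventual consistency: retry the check a few times before failing.
  checkSideEffects.addRetry({
    errors: ["SideEffectsNotVisibleError"], // hypothetical error thrown by the checker
    interval: Duration.seconds(10),
    maxAttempts: 6,
  });

  // Fan out over the test cases provided in the execution input.
  const forEachCase = new sfn.Map(scope, "ForEachTestCase", {
    itemsPath: "$.testCases",
    maxConcurrency: 5,
  }).iterator(callApi.next(wait).next(checkSideEffects));

  return new sfn.StateMachine(scope, "TestSfn", {
    definition: forEachCase,
    timeout: Duration.minutes(15),
  });
}
```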
Test environments
The serverless + CDK setup makes it easy, fast and cheap to adopt the best-practice multi-account strategy and spin up and spin down full, non-prod deployments of your entire app to test on.
You can perform ad hoc testing in a day-to-day dev environment and cdk destroy it at the end of the day. And/or build a CDK CI/CD pipeline that deploys to your prod environment on push to main if tests pass: [pull from GitHub] -> [deploy stacks to TEST account] -> [seed test data] -> [run tests] -> [destroy TEST stacks] -> [deploy stacks to PROD account].
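That flow maps fairly directly onto CDK Pipelines stages with post-deployment steps; a sketch under stated assumptions (the repo, account IDs, npm scripts, and placeholder stages are all made up):

```typescript
import { App, Stack, Stage, StageProps } from "aws-cdk-lib";
import * as pipelines from "aws-cdk-lib/pipelines";
import { Construct } from "constructs";

// Placeholder stage: in a real app this would instantiate your application stacks.
class AppStage extends Stage {
  constructor(scope: Construct, id: string, props?: StageProps) {
    super(scope, id, props);
    new Stack(this, "AppStack");
  }
}

const app = new App();
const pipelineStack = new Stack(app, "PipelineStack");

const pipeline = new pipelines.CodePipeline(pipelineStack, "Pipeline", {
  synth: new pipelines.ShellStep("Synth", {
    input: pipelines.CodePipelineSource.gitHub("my-org/my-app", "main"), // assumption
    commands: ["npm ci", "npm run build", "npx cdk synth"],
  }),
});

// Deploy to the TEST account, then seed data and run the test suite against it.
pipeline.addStage(
  new AppStage(app, "Test", { env: { account: "111111111111", region: "eu-west-1" } }),
  {
    post: [
      new pipelines.ShellStep("SeedAndTest", {
        commands: ["npm run seed-test-data", "npm run integration-tests"], // placeholder scripts
      }),
    ],
  }
);

// PROD only deploys if every TEST step succeeded. Tearing the TEST stacks back
// down could be a final shell step running `cdk destroy` or a scheduled job.
pipeline.addStage(
  new AppStage(app, "Prod", { env: { account: "222222222222", region: "eu-west-1" } })
);
```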
I'm using CodePipeline to deploy my CloudFormation templates that contain Lambda functions as AWS::SAM::Functions.
The CodePipeline is triggered by a commit in my main branch on GitHub.
The Source Stage in the CodePipeline retrieves the source files from GitHub. Zero or more Lambda functions could change in a commit. There are several Lambda Functions in this repository.
I intend to run taskcat against the CloudFormation templates and unit tests against the Lambda Python code during a test stage, and then deploy the CloudFormation templates and Lambda functions to production. The problem is, I can't figure out how to differentiate between changed and unchanged Lambda functions, or how to automate the deployment of those Lambda functions.
I would like to only test and deploy new or updated Lambda functions along with my CloudFormation templates - what is the best practice for this (ideally without Terraform or hacks)?
Regarding testing: best practice is actually to simply test all Lambda code in the repo on push, before deploying. You might skip some work, for example with GitHub Actions, by only testing the files that have changed, but that definitely takes some scripting and hardly ever adds much value. Each testing tool has its own way of dealing with this (sometimes you can simply pass the files you want to test as an argument and then it's easy, but some test tools take more of an all-or-nothing approach and it gets quite complicated real fast).
Also, personally I'm not a big fan of taskcat since it doesn't really add a lot of value and it's not a very intuitive tool (also relatively outdated IMO). Is there a reason you need to do these types of testing?
Regarding deployment: There are a few considerations when trying to only update lambdas that have changed.
Firstly, CloudFormation already does this automatically: as long as the CloudFormation resource for the Lambda doesn't change, the Lambda will not be updated.
However, SAM has a small problem there: it re-packages the Lambda code on every pipeline run and updates the CodeUri property of the Lambda, so the Lambda gets updated even though the code might stay the same.
To work around this, you have several options:
Simply accept that SAM updates your function even though the code might not have changed.
Build SAM locally, and use the --cached and --cache-dir option when deploying in your pipeline. Make sure to push the folder that you set as cache-dir.
Use a different packaging tool than SAM: either a custom script or some other tooling that only pushes your code to S3 when the files have changed.
If you're into programming, I'd suggest you take a look at CDK. It's a major upgrade from CloudFormation/SAM, and it handles code bundling better (it only updates when files have changed). The testing options are also much wider with CDK.
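For what it's worth, the reason CDK behaves better here is that Code.fromAsset derives the asset hash from the file contents, so the generated CloudFormation only changes when the code does. A minimal sketch (paths, names, and runtime are assumptions):

```typescript
import { Stack, StackProps } from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

export class FunctionsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The asset hash is computed from the contents of ./src/orders, so the
    // CloudFormation template (and therefore the function) is only updated
    // when files in that directory actually change.
    new lambda.Function(this, "OrdersFunction", {
      runtime: lambda.Runtime.PYTHON_3_9,
      handler: "app.handler", // hypothetical module/handler
      code: lambda.Code.fromAsset("src/orders"),
    });
  }
}
```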
I am using AWS API Gateway with Lambda/ECS for compute and Cognito for users, but I find it really hard to test such applications. With AWS SAM Local I may be able to test simple Lambda and API Gateway functionality, but if I use things like API Gateway authorizers I find it hard to test these end to end.
It looks like, to test such applications, I need an entirely new setup just for testing? I mean a separate API Gateway with a Lambda/ECS cluster and a Cognito user pool just to enable testing? This seems very slow, and I think I will no longer be able to get things like a code coverage report.
Disclaimer: I'm fairly new to AWS Lambda/ECS/Cognito so take this with a grain of salt.
Unit tests: SAM Local or some other local Docker hosting with a unit-testing library (Mocha) would be good for this because:
Speed: all your tests should execute quickly against a Lambda function.
Example: wildrydes with Mocha (a minimal sketch of this style of test follows this list).
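For example, a plain Mocha/Chai unit test can call the handler function directly, with no Docker or deployment needed. This is only a sketch: the handler path, event shape, and expected fields are assumptions loosely modelled on the wildrydes sample:

```typescript
// test/unit/handler.test.ts (hypothetical layout)
import { expect } from "chai";
import { handler } from "../../src/handlers/ride"; // hypothetical handler module

describe("ride handler", () => {
  it("returns a 201 with a ride id", async () => {
    // Minimal API Gateway proxy-style event; only the fields the handler reads.
    const event = {
      path: "/ride",
      httpMethod: "POST",
      body: JSON.stringify({ pickupLocation: { latitude: 47.6, longitude: -122.3 } }),
      requestContext: { authorizer: { claims: { "cognito:username": "test-user" } } },
    };

    const result = await handler(event as any);

    expect(result.statusCode).to.equal(201);
    expect(JSON.parse(result.body)).to.have.property("RideId");
  });
});
```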
Integration tests: once you stage your changes, there are a bunch of options for calling the API. I'd start off with Postman to run the API tests, and you can chain them together or run them from the command line if needed.
End-to-end (E2E) tests: if the API is your front end, then there might not be any difference between E2E and API tests. UI, voice, and chat front ends differ significantly, as do the tools, so some suggestions:
UI: Selenium (has the most support and options available to you, including Docker images: Selenium Hub or standalone)
Voice: Suggestions?
Text: Suggestions?
Step Functions:
allows you to visualize each step
retries when there are errors
allows you to diagnose and debug problems
X-Ray: collects data about requests that your app serves, and provides tools you can use to view, filter, and gain insights into that data.
As for code coverage, I'm not sure how you currently measure it. Something like npm run coverage, maybe?
I am assuming you are using CloudFormation to deploy such an extensive stack, and the following answer is based on that assumption.
So, in addition to lloyd's answer, I would like to add that you can include custom resources within your CloudFormation template for testing individual Lambdas or even API endpoints.
Also, for Lambda, you can use Deployment Preferences Hooks to test your serverless Lambdas before and after moving your Lambda to the new version.
https://github.com/awslabs/serverless-application-model/blob/release/v1.8.0/docs/safe_lambda_deployments.rst
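A PreTraffic hook is itself just a Lambda that exercises the new version and reports the result back to CodeDeploy; the linked SAM docs show the template wiring. A rough sketch in TypeScript, where the function names, the test payload, and the NEW_VERSION_TO_TEST environment variable are assumptions:

```typescript
import {
  CodeDeployClient,
  PutLifecycleEventHookExecutionStatusCommand,
} from "@aws-sdk/client-codedeploy";
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

const codedeploy = new CodeDeployClient({});
const lambda = new LambdaClient({});

// CodeDeploy invokes this hook before shifting traffic; the event carries the
// deployment id and the hook execution id we must report a status for.
export const handler = async (event: {
  DeploymentId: string;
  LifecycleEventHookExecutionId: string;
}) => {
  let status: "Succeeded" | "Failed" = "Succeeded";
  try {
    // NEW_VERSION_TO_TEST would point at the function version being deployed,
    // passed in via an environment variable (assumption).
    const result = await lambda.send(
      new InvokeCommand({
        FunctionName: process.env.NEW_VERSION_TO_TEST!,
        Payload: Buffer.from(JSON.stringify({ smokeTest: true })),
      })
    );
    if (result.FunctionError) {
      status = "Failed";
    }
  } catch {
    status = "Failed";
  }

  // Tell CodeDeploy whether to proceed with the traffic shift or roll back.
  await codedeploy.send(
    new PutLifecycleEventHookExecutionStatusCommand({
      deploymentId: event.DeploymentId,
      lifecycleEventHookExecutionId: event.LifecycleEventHookExecutionId,
      status,
    })
  );
};
```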