Automated testing for Lambda deployment using CodePipeline

I am looking for some guidance on how to run tests for a SAM deployment using CodePipeline.
The scenario is a fairly large deployment of Lambda functions that need to run in a VPC so they can connect to a database and some other resources.
The deployment works, and the functions seem fine when I test them manually using test events in the Lambda console.
But I want to add a testing stage in CodeBuild where I can assert that the functions are working by running some tests (it's a Node.js project, so Mocha/Chai would be good) and continue the build/deploy if they pass.
I am stuck on how to do this, though.
Am I best off running sam local in CodeBuild and then running npm test against the sam local Lambdas? I just can't find a good doc or example of the best approach.
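To make what I'm imagining concrete, here is roughly the shape of it (a sketch only; it assumes a CodeBuild image with Docker available, since sam local runs each function in a container, and note the functions would run locally rather than inside the real VPC, so database access from the build container is its own problem):

```yaml
# buildspec.yml (sketch)
version: 0.2
phases:
  install:
    commands:
      - npm ci
  pre_build:
    commands:
      # start the local Lambda invoke endpoint in the background
      - nohup sam local start-lambda --template template.yaml &
      - sleep 15   # crude wait for the endpoint to come up
  build:
    commands:
      - npm test   # Mocha/Chai suite pointed at sam local, as below
```

On the Mocha side, the AWS SDK can be pointed at sam local's invoke endpoint (it listens on 127.0.0.1:3001 by default; MyFunction is a hypothetical logical ID from template.yaml):

```javascript
// test/smoke.test.js (sketch)
const AWS = require('aws-sdk');
const { expect } = require('chai');

const lambda = new AWS.Lambda({
  endpoint: 'http://127.0.0.1:3001', // sam local start-lambda default
  region: 'us-east-1',
  accessKeyId: 'dummy',              // sam local does not check credentials
  secretAccessKey: 'dummy',
});

describe('MyFunction', function () {
  this.timeout(30000); // the first invoke pulls the runtime container

  it('returns a 200 response', async () => {
    const res = await lambda
      .invoke({
        FunctionName: 'MyFunction', // logical ID in template.yaml
        Payload: JSON.stringify({ ping: true }),
      })
      .promise();
    expect(res.StatusCode).to.equal(200);
  });
});
```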

Related

Skaffold/K8s Unit testing

I'm trying to unit test my microservices before deployment; however, in the Skaffold pipeline, images are tested before deployment. The issue is that the code in my images depends on configs and credentials mounted from k8s ConfigMaps and Secrets, so the tests will always fail if the unit tests run before deployment.
How do I run unit tests for microservices with Skaffold? How are unit tests normally run for microservices? Looking around the net, no one seems to have a straight answer.
Any guidance will be much appreciated.
There are several strategies I've used or seen people use for end-to-end testing of a deployment:
In-cluster testing, where you deploy one or more Jobs that perform tests and then use the Job status to communicate test success/failure. We use this approach to test the container-debug-support images used by skaffold debug: we run a little script that builds and deploys a set of images in an integration profile that includes a set of test Jobs (example), and the script then waits for those Jobs to complete with kubectl wait. A sketch of this pattern follows below.
External to the cluster, where you access exposed services: this can be accomplished using after-deploy hooks. You can use these for smoke testing or load testing. This approach works if you're planning to keep the app running; I'm not sure that tearing down the app within the hooks is a great idea.
skaffold run and skaffold delete are helpful primitives for staging and tearing down an application.
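A trimmed-down sketch of the in-cluster pattern (the manifest and image name here are hypothetical, not the actual container-debug-support setup):

```yaml
# integration-test-job.yaml (sketch)
apiVersion: batch/v1
kind: Job
metadata:
  name: integration-test
spec:
  backoffLimit: 0            # fail fast: one attempt per pipeline run
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: tests
          image: my-test-image   # built by skaffold in the test profile
          command: ["npm", "test"]
```

```shell
# deploy with the test profile, then block on the Job's outcome
skaffold run -p integration
kubectl wait --for=condition=complete --timeout=120s job/integration-test
```

kubectl wait exits non-zero if the Job does not complete in time, which is what lets the surrounding script fail the pipeline.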

Best way to test and deploy aws lambda functions in a step function

Long-time Stack Overflow lurker and first-time poster.
I've started a new project using AWS Lambda and have found the learning curve particularly steep, coming from a background of developing desktop applications.
When developing desktop applications, it's easy to create a local test environment. I know it's possible to test Lambda functions locally, and I've been able to do this for simple cases.
The Lambda functions I'm using interact a lot with other AWS services (S3, Aurora, etc.). Also, the final solution will include around 15 Lambda functions linked via a Step Function.
I want to know if it's possible to create a test environment, separate from the live production environment, for the entire Step Function. This would allow me to perform system tests before deploying to production.
I've looked into AWS CodePipeline as a possible solution, but I'm not sure whether it would allow me to create a separate test environment before deploying to production.
Any help would be greatly appreciated.
Thanks!

How to get SonarQube results back to CodeBuild

I've seen many discussions online about SonarQube webhooks that send scan results to Jenkins, but as a CodePipeline acolyte, I could use some basic help with the steps to supply Sonar scan results (e.g., quality gate pass/fail status) to the pipeline.
Is the Sonar webhook the right way to go, or is it possible to use Sonar's API to fetch the status of a scan for a given code project?
Our code is in BitBucket. I'm working with the AWS admin who will create the CodePipeline that fires when code is pushed to the repo. sonar-scanner will run, and then we'd like the pipeline to stop if the quality does not pass the Quality Gate.
If I were to use a Sonar webhook, I imagine the value for the host would be, what, the AWS instance running the CodeBuild?
Any pointers, references, examples welcome.
I created a PowerShell script to use with Azure DevOps that could probably be migrated to a shell script run in the CodeBuild activity:
https://github.com/michaelcostabr/SonarQubeBuildBreaker
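For CodeBuild, a rough equivalent could poll SonarQube's web API for the quality gate status after the scan and fail the build on a red gate. A sketch in Node.js (host, token, and project key are placeholders; in practice you would also wait for SonarQube's background analysis task to finish before checking):

```javascript
// check-quality-gate.js (sketch)
const https = require('https');

const url = `${process.env.SONAR_HOST}/api/qualitygates/project_status?projectKey=my-project`;

https.get(url, {
  headers: {
    // SonarQube takes the token as the basic-auth username, empty password
    Authorization: 'Basic ' + Buffer.from(`${process.env.SONAR_TOKEN}:`).toString('base64'),
  },
}, (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => {
    const status = JSON.parse(body).projectStatus.status; // 'OK' or 'ERROR'
    console.log('Quality gate:', status);
    // a non-zero exit code fails the CodeBuild phase and stops the pipeline
    process.exit(status === 'OK' ? 0 : 1);
  });
});
```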

Smoke test approach for AWS Lambda

I have many AWS Lambdas written in Java 8. We use blue/green deployment for all Lambdas, with Smoke/Live aliases, and we deploy with Jenkins using the steps below:
1) Check out: check out the Lambda source from Git.
2) Build and unit test with JUnit.
3) Measure code coverage with JaCoCo.
4) Deploy using the Smoke alias.
5) Perform a smoke test for the Lambda against the Smoke alias.
6) If the smoke test cases pass, promote the Smoke alias to the Live alias.
For step 5, could you please advise whether there are good approaches for smoke testing a Lambda?
I would think we need to actually execute the Lambda itself (not JUnit), but then the actual business rules run, and the Lambda can generate output to targets such as DynamoDB and S3...
So please share the best practices you use in your real projects. Thanks.
I'm thinking I should add a special parameter that smoke tests pass through, and the Lambda itself would have logic to deal with that parameter.
I have struggled with this concept as well.
Assuming you are externalizing your configuration (e.g., DynamoDB tables, S3 locations, etc.) via something like environment variables or SSM parameters: ideally, you would have your "smoke" or staging versions of the Lambda point to smoke-test (i.e., non-production) resources.
One problem with using aliases is that you cannot have different environment variables for different aliases.
With that in mind, the typical approach for smoke/integration testing Lambdas is to abandon aliases and deploy the staging resources as functions separate from your production resources.
This is easier if you have a SAM/CloudFormation template that can deploy your Lambdas and their dependencies, so you can easily set up development, smoke-test, and production stacks. You will want a parameter for a prefix/suffix that you can give the resources to differentiate the deployments.
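For example, a trimmed SAM template along those lines (resource names are hypothetical), where a Stage parameter suffixes each resource so the stacks can coexist:

```yaml
# template.yaml (sketch) -- one template, several stacks, e.g.
#   sam deploy --stack-name myapp-smoketest --parameter-overrides Stage=smoketest
Transform: AWS::Serverless-2016-10-31
Parameters:
  Stage:
    Type: String
    AllowedValues: [dev, smoketest, prod]
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Sub my-function-${Stage}
      Handler: index.handler
      Runtime: nodejs12.x
      Environment:
        Variables:
          TABLE_NAME: !Sub my-table-${Stage}   # stage-specific copy of the table
```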
When you are satisfied with your smoke testing results, you simply deploy the tested version of the lambda code to your production lambdas.
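As for actually exercising the function (step 5 in the question), the smoke test can simply invoke the staging copy through the SDK and assert on the response. A Node.js sketch for brevity, even though the functions in question are Java; the function name and payload shape are hypothetical:

```javascript
// smoke-test.js (sketch)
const AWS = require('aws-sdk');
const assert = require('assert');

const lambda = new AWS.Lambda({ region: 'us-east-1' });

(async () => {
  const res = await lambda
    .invoke({
      FunctionName: 'my-function-smoketest', // the staging deployment
      Payload: JSON.stringify({ orderId: 'smoke-test-001' }),
    })
    .promise();

  // FunctionError is set when the function threw an exception
  assert.strictEqual(res.FunctionError, undefined);

  const body = JSON.parse(res.Payload.toString());
  assert.strictEqual(body.status, 'OK'); // expected shape is hypothetical
  console.log('Smoke test passed');
})().catch((err) => {
  console.error('Smoke test failed:', err);
  process.exit(1);
});
```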

How can I automate the end-to-end testing of my serverless web app?

So my app stack looks like this in prod:
Backend: AWS API Gateway + Lambda + DynamoDB + ElastiCache(redis)
Backend (algo): long-running process; a dockerized Java app running on ECS (Fargate)
Frontend: Angular app, served from S3
I'd like to use https://www.cypress.io/ for end-to-end testing and https://circleci.com/ as my build server.
How do I go about creating an environment to allow the end-to-end tests to run?
Options:
1) Use Terraform to script the infrastructure and create/tear down a whole environment every time we run the end-to-end tests. This sounds like a huge overhead in terms of spin-up time. Also, fully scripting the environment creation and setup sounds like a lot of work!
2) Create a dedicated, long-lived environment that we deploy to incrementally. This sounds like it'll get messy - not ideal for a place to run tests.
3) Make it so we can run the environment locally. So perhaps use AWS's SAM or something like this project: https://github.com/gertjvr/serverless-plugin-simulate
That last option may also answer the question of the local dev environment setup; however, everything that mocks serverless tech locally seems to be in beta, and I'm concerned that if I go down that road I might hit some issues after investing a lot of time...
"Also the environment creation and setup being fully scripted sounds like a lot of work" - it is. its also the correct thing to do. it allows you to not only version your code but the environments that the code runs in. automating your deployment is more than just your code. i'd recommend this.
You can use the Serverless Framework to define your app as infrastructure as code and to create tests:
https://serverless.com
https://serverless.com/framework/docs/providers/aws/guide/testing
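Even a minimal serverless.yml gives you a deployable, testable unit (a sketch; deploy a separate stage with serverless deploy --stage test, or run a function locally with serverless invoke local -f hello):

```yaml
# serverless.yml (sketch)
service: my-app
provider:
  name: aws
  runtime: nodejs12.x
functions:
  hello:
    handler: handler.hello
```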
On my side, I split my testing strategy as below:
API:
- Unit tests: use your language's favorite framework.
- Integration tests: this depends on your infrastructure-as-code choice. If you use SAM or the Serverless Framework, you can inject events directly into your function locally. If you want to cover integrations such as DynamoDB or S3, consider using LocalStack (https://github.com/localstack/localstack) to emulate those services; see the sketch after this list.
Frontend:
- For that part, I always mock API requests using stubs and test only the frontend (the API part is already covered above). Then you can use Cypress or another framework.
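For the integration part, a sketch of a test pointed at LocalStack (assumes LocalStack is running on its default edge port 4566 and that the orders table has already been created there; all names are hypothetical):

```javascript
// test/integration/orders.test.js (sketch)
const AWS = require('aws-sdk');
const { expect } = require('chai');

const dynamodb = new AWS.DynamoDB.DocumentClient({
  endpoint: 'http://localhost:4566', // LocalStack edge port
  region: 'us-east-1',
  accessKeyId: 'test',               // LocalStack accepts any credentials
  secretAccessKey: 'test',
});

describe('orders table', () => {
  it('writes and reads an item', async () => {
    const item = { id: 'order-1', total: 42 };
    await dynamodb.put({ TableName: 'orders', Item: item }).promise();

    const res = await dynamodb.get({ TableName: 'orders', Key: { id: 'order-1' } }).promise();
    expect(res.Item).to.deep.equal(item);
  });
});
```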
How about using the endly e2e and automation runner?
It allows you to build a testing workflow that automates build, deployment, and data population, and that validates datastores (NoSQL: DynamoDB, Firebase; SQL: MySQL, BigQuery, PostgreSQL, etc.), logs (CloudWatch), and message buses (SNS, SQS, Cloud Pub/Sub), as well as triggering background processes or sending HTTP requests.
You can find some Lambda and Cloud Function examples here.
Or some more production-grade projects with e2e tests:
storage mirror
data ingestion
data sync