Postman Tests Runner - not all tests are run/executed

I am relatively new to Postman. I created test cases for 6 APIs.
When I click Run, I would expect the tests for all of the APIs to be executed, but only the test cases for 2 of the APIs run.
What could be the reason that the tests for the remaining 4 APIs are not executed?
I also noticed that if I do not execute the second POST, the tests for the rest of the APIs do run.
My second API (POST Creates the user group type.) contains the postman.setNextRequest() method in its Tests tab.
Could it be that postman.setNextRequest() interferes with the Runner?

I used postman.setNextRequest() to run multiple requests for one API.
I found out that if I remove postman.setNextRequest() and split the one API into two APIs, the requests are executed in the Runner. This is the only solution I have found so far; a sketch of why setNextRequest() has this effect is below.
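For anyone hitting the same behaviour, here is a minimal sketch (in the Tests tab) of how postman.setNextRequest() changes the Runner's order; the request names and the 201 check are placeholders, not from the original collection:

if (pm.response.code === 201) {
    // Jump straight to the named request. Every request that sits between this
    // one and the target in the collection is skipped for the rest of the run,
    // which is why only some of the requests appear to execute.
    postman.setNextRequest("Get user group type");
} else {
    // Passing null stops the collection run after this request.
    postman.setNextRequest(null);
}

So setNextRequest() does not just additionally run the named request: it redefines what the Runner executes next, and anything it jumps over is not revisited in that run.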

Related

How to perform integration testing on AWS Step Function

I have a REST API in API Gateway with lambda proxy integration. The Lambda will invoke a Step Function workflow asynchronously and will return an ID in the payload. These AWS resources are deployed and managed by AWS CDK.
My question is, is there a proper way to perform integration test? There are two approaches I have in mind:
Call the REST API endpoint, and make assertions on side effects. But since the workflow is executed asynchronously, the test needs to continuously poll until side effects become visible.
According to this blog, https://www.10printiamcool.com/step-function-integration-testing-with-cdk, it seems we can use CDK to deploy a test stack that mocks the dependent resources (e.g. Lambda). But this sounds more like a unit test.
I am not sure if there are any better options. Any thoughts?
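For reference, a rough sketch of what option 1 (poll until the workflow finishes, then assert on side effects) could look like in a Node test; the endpoint URL, request body, response shape and timings are assumptions about my app, and it relies on the global fetch available in Node 18+:

const { SFNClient, DescribeExecutionCommand } = require("@aws-sdk/client-sfn");

const sfnClient = new SFNClient({});
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function callApiAndWaitForWorkflow() {
  // 1. Call the REST API; the Lambda proxy is assumed to return the execution ARN.
  const res = await fetch("https://example.execute-api.eu-west-1.amazonaws.com/prod/orders", {
    method: "POST",
    body: JSON.stringify({ item: "integration-test-item" }),
  });
  if (res.status !== 200) throw new Error(`API returned ${res.status}`);
  const { executionArn } = await res.json();

  // 2. Poll the execution until it finishes (or give up after ~60 seconds).
  for (let attempt = 0; attempt < 30; attempt++) {
    const { status } = await sfnClient.send(new DescribeExecutionCommand({ executionArn }));
    if (status === "SUCCEEDED") return;               // now assert on the side effects
    if (status !== "RUNNING") throw new Error(`Execution ended with status ${status}`);
    await sleep(2000);
  }
  throw new Error("Timed out waiting for the Step Function to finish");
}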
I understand you want integration tests on your Step Function in the context of a serverless CDK app.
Your pass criteria for the Step Function include certain async backend side-effects in addition to a 200 API response.
Given that context, here are some ideas on two related topics:
How to engineer the Step Function tests
How about testing your Step Function's integration ... with another Step Function? A TestSfn would map through test cases, in turn calling the API with various inputs in one Task and checking for the expected side-effects in another Task.
After all, Step Functions are really good at orchestrating step-wise, async workflows in parallel, which is what your use case demands. The tests pass if TestSfn succeeds. The execution history console and logs give great visibility to diagnose test failures.
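A hedged sketch of what TestSfn could look like in CDK (JavaScript): the two Lambda functions (callApiFn, which calls the API with a test case's input, and checkSideEffectsFn, which throws if the expected side effects are missing) are assumptions about your app, not existing resources.

const { Stack, Duration } = require("aws-cdk-lib");
const sfn = require("aws-cdk-lib/aws-stepfunctions");
const tasks = require("aws-cdk-lib/aws-stepfunctions-tasks");

class TestSfnStack extends Stack {
  constructor(scope, id, props) {
    super(scope, id, props);
    const { callApiFn, checkSideEffectsFn } = props; // assumed Lambda functions

    // One iteration = call the API with a test case, wait, then assert.
    const perTestCase = new tasks.LambdaInvoke(this, "CallApiWithTestInput", {
        lambdaFunction: callApiFn,
        outputPath: "$.Payload",
      })
      .next(new sfn.Wait(this, "WaitForAsyncWork", {
        time: sfn.WaitTime.duration(Duration.seconds(15)),
      }))
      .next(new tasks.LambdaInvoke(this, "AssertSideEffects", {
        lambdaFunction: checkSideEffectsFn, // throws -> the iteration (and the run) fails
        outputPath: "$.Payload",
      }));

    // Fan out over all test cases supplied in the execution input.
    const forEachCase = new sfn.Map(this, "ForEachTestCase", {
      itemsPath: "$.testCases",
      maxConcurrency: 5,
    });
    forEachCase.iterator(perTestCase);

    new sfn.StateMachine(this, "TestSfn", { definition: forEachCase });
  }
}

module.exports = { TestSfnStack };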
Test environments
The serverless + CDK setup makes it easy, fast and cheap to adopt the best-practice multi-account strategy and spin up and spin down full, non-prod deployments of your entire app to test against.
You can perform ad hoc testing in a day-to-day dev environment and cdk destroy it at the end of the day. And/or build a CDK CI/CD pipeline that deploys to your prod environment on push to main if the tests pass (sketched below): [pull from github] -> [deploy stacks to TEST account] -> [seed test data] -> [run tests] -> [destroy TEST stacks] -> [deploy stacks to PROD account].
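A rough sketch of that pipeline with CDK Pipelines (JavaScript); AppStage (a Stage subclass that deploys your app's stacks), the GitHub repo, the account IDs and the npm scripts are all placeholders for your setup, and the destroy step is left out for brevity:

const { pipelines } = require("aws-cdk-lib");

// Inside the pipeline stack's constructor:
const pipeline = new pipelines.CodePipeline(this, "Pipeline", {
  synth: new pipelines.ShellStep("Synth", {
    input: pipelines.CodePipelineSource.gitHub("my-org/my-app", "main"),
    commands: ["npm ci", "npx cdk synth"],
  }),
});

// Deploy the whole app to the TEST account, then seed data and run the tests
// as a post step; PROD only deploys if everything before it succeeds.
const testStage = pipeline.addStage(
  new AppStage(this, "Test", { env: { account: "111111111111", region: "eu-west-1" } })
);
testStage.addPost(new pipelines.ShellStep("SeedAndRunTests", {
  commands: ["npm run seed-test-data", "npm run integration-tests"],
}));

pipeline.addStage(
  new AppStage(this, "Prod", { env: { account: "222222222222", region: "eu-west-1" } })
);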

How should I test my "Serverless" (API Gateway/Lambda/ECS) applications?

I am using AWS API Gateway with Lambda/ECS for compute and Cognito for users. But I find it really hard to test such applications. With AWS SAM Local I may be able to test simple Lambda and API Gateway functionality, but if I use things like API Gateway authorizers I find it hard to test these end to end.
It looks like, to test such applications, I need an entirely new setup just for testing? I mean a separate API Gateway with Lambda/ECS cluster/Cognito user pool just to enable testing? This seems very slow, and I think I will not be able to get things like a code coverage report anymore?
Disclaimer: I'm fairly new to AWS Lambda/ECS/Cognito so take this with a grain of salt.
Unit Tests: SAM Local or some other local docker hosting with a unit testing library (mocha) would be good for this because:
Speed: all your tests should execute quickly against a Lambda function.
Example: wildrydes with mocha
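For instance, a minimal mocha test that calls a Lambda handler directly; ../src/handler and the event shape are assumptions about your project layout, not taken from the wildrydes sample:

const assert = require("assert");
const { handler } = require("../src/handler");

describe("GET /rides handler", function () {
  it("returns a 200 with a JSON body", async function () {
    // A minimal API Gateway proxy-style event; extend it as your handler requires.
    const event = { httpMethod: "GET", path: "/rides" };
    const result = await handler(event);
    assert.strictEqual(result.statusCode, 200);
    assert.ok(JSON.parse(result.body));
  });
});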
Integration Tests: Once you stage your changes, there are a number of options for calling the API. I'd start off with Postman to run the API tests; you can chain them together or run them from the command line if needed (see the newman sketch below).
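If you want to drive those Postman collections from a script or a CI job, the newman library is one option; the exported file names and the reporters below are placeholders:

const newman = require("newman");

newman.run(
  {
    collection: require("./api-tests.postman_collection.json"),
    environment: require("./staging.postman_environment.json"),
    reporters: ["cli", "junit"],
  },
  (err, summary) => {
    if (err) throw err;
    // Fail the CI job if any request or test in the run failed.
    if (summary.run.failures.length > 0) process.exit(1);
  }
);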
End to End (E2E) tests: If the API is your front end then there might not be any difference between E2E and API tests. UI, voice, and chat front ends differ significantly, as do the tools, so I'll suggest some options:
UI : Selenium (has the most support and options available to you including docker images: Selenium Hub or standalone)
Voice: Suggestions?
Text: Suggestions?
Step Functions:
allows you to visualize each step
retries when there are errors
allows you to diagnose and debug problems
X-Ray: collects data about requests that your app serves, and provides tools you can use to view, filter, and gain insights into that data.
As for code coverage, I'm not sure how you currently generate it. Something like npm run coverage, maybe?
I am assuming you are using CloudFormation for deploying such an extensive stack, and the following answer is based on that assumption.
In addition to @lloyd's answer, I would like to add that you can add custom resources within your CloudFormation template for testing each individual Lambda or even API endpoints.
Also, for Lambda, you can use Deployment Preference Hooks to test your serverless Lambdas before and after moving your Lambda to the new version.
https://github.com/awslabs/serverless-application-model/blob/release/v1.8.0/docs/safe_lambda_deployments.rst

SOAPUI Data Driven

I am currently using SOAPUI (Free Version).
I am looking to automate the tests so that a value is placed in each test, without having to manually enter them.
Example (at a basic level):
The test web service I am using is http://www.webservicex.net/mortgage.asmx?WSDL.
http://www.webservicex.net/mortgage.asmx?op=GetMortgagePayment
The service has been set up as SOAP, a test suite has been built and a SOAP request test step has been assigned.
Out of the box, you can enter data manually into the test step. However, I want to achieve three things using the free edition:
Add a randomized value to the test, so that a value within a given range is inserted each time the test is run. I am aware you can use Groovy, but I am unsure whether the Groovy script needs to be added as a test step or in the Setup Script tab.
Pull a list of data from a .csv or similar, so the test runs through all of the data.
Repeat step 2, but set the assertions for each piece of test data to state whether the response should be valid or not.
If you could help it would be appreciated.
Cheers,
Tim

Webmethods mocking in flow services

In webMethods (Software AG), is there a way to mock objects during unit testing?
Or is there any available tool to test flow services?
You could have a look at the Open Source http://www.wmaop.org test framework that allows general mocking and unit testing along with a host of other functionality. The framework allows you to:
Create mocks of IS services
Apply conditions to mocks so that they only execute when the pipeline contents meet that condition
Raise an exception based on a condition or in place of a service
Capture the pipeline to file before or after a service is called
Modify or insert content into the pipeline
Have a series of conditions for a mocked service with a default if none of the conditions match
Create assertions that can apply before or after a service, so that it's possible to prove a service has been executed. Assertions can also have conditions to verify that the pipeline had the expected content.
Return either random or sequenced content from a mock to vary its output every time it's called
Create mocks using RESTful calls, so you can use alternative test tools, such as SoapUI, to create them as part of your integration tests
Use the JBehave functionality for Behaviour Driven Unit Testing within Designer and execute tests with the in-built JUnit.
WmTestSuite could be a good tool for you (why reinvent the wheel?). Your company chose webMethods to speed up development, so I advise you to keep going with it.
What WmTestSuite does:
Creates unit tests graphically for your flows in the Designer
Generates the related test class (you can complete it to add some asserts)
Adds a hook to the Integration Server to "register" data for creating test data
Mocks endpoints to ease testing (DB, WS, ...)
I got this slide from a Software AG guy. From version 9.10 (April 2016) you should be able to download it from Empower.
You cannot define mocks in webMethods directly, as doing so requires you to hook into the invoke chain. This is a set of methods that are called between every flow or Java service invocation. They take care of things like access control, input/output validation, updating statistics, auditing, etc.
There are various tools and products available that leverage this internal mechanism and let you create mocks (or stubs) for your unit or system test cases:
IwTest, commercial, from IntegrationWise
WmTestSuite, commercial, from SoftwareAG
CATE, commercial, from Cloudgensys
WmAOP, open source, www.wmaop.org
With all four you can create test cases for webMethods flow/java services and define mocks for services that access external systems. All four provide ways to define assertions that the results should satisfy.
By far the easiest to work with is IwTest, as it lets you generate test suites, including mocks (or stubs), based on input/output pipelines that it records for you. In addition to this it also supports pub/sub (asynchronous) scenarios.
Ask your Software AG liaison about webMethods Test Suite (WmTestSuite), which plugs into the Eclipse-based Designer and provides basic Unit testing capabilities.
Mocks per se are lightweight services that can be configured in the WmTestSuite dialog alongside the (test) input and (expected) output pipelines.

Postman - Ignore test execution based on environment

In Postman, is there a way to ignore a specific test execution based upon environment at runtime?
I have a collection which consists of around 30 tests, and I don't want 4-5 of those tests to execute at runtime in the production environment, because they are meant to be executed on staging only.
I don't want to have multiple collections for different environments.
Any way I can make use of pre-request scripts in this scenario?
I agree with @joroe that a simple way to conditionally run tests is to set a variable and check it before each test that is conditional.
If you want the requests not to be sent to the server at all, you probably want to explore the Collection Runner and group your requests into collections according to the environment. For example, you may group your requests into a collection called PROD Tests that runs requests 1-10. You can have a second collection called DEV Tests that contains requests 1-15 (the ten from PROD plus the five others you don't want to run in PROD). It's very easy to copy a request from one collection to another. You then run the collection for the specific environment in the Collection Runner. You can even automate this using Newman; I'm not super familiar with it, but there is documentation at the link posted. I've included a screen capture of the Collection Runner interface and how I have some of my tests set up to run.
Create a variable in each environment (e.g. "ENV") whose value is the name of that environment (e.g. "LOCAL").
if(environment.ENV === "LOCAL") tests["Run in local environment"] = true;
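The same idea with the newer pm.* API, plus a way to skip staging-only requests entirely; "ENV", the environment values and the request name are placeholders for your own setup:

// In the Tests tab: only register the assertions outside production.
if (pm.environment.get("ENV") !== "PROD") {
    pm.test("Status code is 200", function () {
        pm.response.to.have.status(200);
    });
}

// In the pre-request script of the request just before the staging-only block,
// you can jump over those requests when running against production:
if (pm.environment.get("ENV") === "PROD") {
    postman.setNextRequest("First request that should also run in PROD");
}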