Using Postman runner to call an API n times for benchmark testing

I am writing a new API and want to see how it fares when hit with n requests.
I have tried to set up environment variables and use the runner tool within Postman, to no avail.
The end goal is to run it n times, passing the current value of n into the request body so I can audit it (the value of that field is stored in the database).
I have set up two environment variables:
company=Bulk API Test
requestcount=0
My pre-request script is
let requestCount = +postman.getEnvironmentVariable("requestcount");
if (!requestCount) {
    requestCount = 0;
}
requestCount++;
postman.setEnvironmentVariable("requestcount", requestCount);
This should increment the environment variable requestcount by 1 on each run.
My test script is
var currentCount = +postman.getEnvironmentVariable("requestcount");
if (currentCount < 5) { // want it to run 5 times
    postman.setNextRequest("https://snipped");
} else {
    postman.setNextRequest(null);
}
When I run it through the runner, it takes much longer than a non-runner execution, and the result shows that the API was only hit once.

If your API call is always the same, try simply using the iteration count of the Postman runner: enter e.g. 5 there, and your collection will be repeated 5 times.
You can access the current iteration via the following property:
pm.info.iteration
to find out which iteration is running.
If you still need to increment variables, make sure they are parsed as integers:
var currentCount = parseInt(postman.getEnvironmentVariable("requestcount"), 10);
To be honest, the best approach for this benchmarking test would be a dedicated load-testing tool, e.g. LoadRunner, instead of Postman.
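The counter logic can be sketched in plain Node.js (a simulation only: the `env` object is a stand-in for Postman's environment store, which persists variable values as strings, and that is exactly why the parseInt step matters):

```javascript
// Simulation of the pre-request counter. `env` stands in for
// Postman's environment store; values are kept as strings to
// mimic how Postman persists variables.
const env = { requestcount: "0" };

function preRequest() {
  let requestCount = parseInt(env.requestcount, 10);
  if (Number.isNaN(requestCount)) {
    requestCount = 0;
  }
  requestCount++;
  env.requestcount = String(requestCount); // stored back as a string
}

// Simulate 5 runner iterations.
for (let i = 0; i < 5; i++) {
  preRequest();
}
console.log(env.requestcount); // "5"
```

Without the explicit parseInt, string concatenation ("0" + 1 === "01") can silently corrupt the counter.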

Related

How to terminate collection Iterations in postman if test fails

I am using Postman for API automation. I created a collection runner and, while executing, set Iterations to 5. I am trying to stop the whole run if any scenario fails.
I tried the options below, but while the current test fails, the runner still moves on to the next iteration. How can I stop all remaining iterations?
postman.setNextRequest(null);
throw new Error('halt')
As you mentioned, you can use postman.setNextRequest(null) and do something like this in your test script:
// I used 500 status code for stopping condition
if (pm.response.code === 500) {
    postman.setNextRequest(null)
}
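The control flow can be sketched in plain Node.js (a simulation, not Postman API calls; `responses` is made-up data standing in for the runner's iteration results):

```javascript
// Simulation of stop-on-failure: the runner keeps scheduling the
// next iteration until a response code signals failure.
const responses = [200, 200, 500, 200, 200];
let executed = 0;

for (const code of responses) {
  executed++;
  if (code === 500) {
    break; // the equivalent of postman.setNextRequest(null)
  }
}

console.log(executed); // 3: iterations 4 and 5 never run
```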

Asserting postman random variable

I have a collection of simple API requests, and I'm running into an issue with one of the tests I've created.
I have a request which creates some content, and I use the built-in '$randomMACAddress' variable to generate a random name for the list I'm creating. This is stored in an env var called 'listName'. In a test further down in the flow, I assert that the list I'm retrieving has a name which matches '$randomMACAddress':
pm.test("List details are correct"), function () {
    pm.expect(jsonData.name).equal($randomMACAddress);
}
This test PASSES.
I have the same check in another test, but this time it fails, and Postman tells me the following:
ReferenceError: $randomMACAddress is not defined
The test in that request is as follows:
pm.test("List details are correct", function () {
    pm.expect(jsonData.id).equal(pm.globals.get('listID'));
    pm.expect(jsonData.name).equal($randomMACAddress);
    pm.expect(jsonData.products[0].skuId).equal('xyz');
});
The requests/tests are run in the same collection runner execution, and I'm baffled as to why that assertion fails in the latter test.
I've tried initialising things differently, but that hasn't worked.
The way to access the stored data is via the pm.variables.get("variable_name") function, not by referencing $randomMACAddress directly.
More info here:
https://learning.getpostman.com/docs/postman/environments_and_globals/variables
Also, the correct Chai syntax is to.equal().

Test runners inconsistent with HttpClient and Mocking HttpMessageRequest XUnit

So let me start by saying I've seen all the threads on the wars between creating a wrapper and mocking the HttpMessageHandler. In the past, I've done the wrapper method with great success, but I thought I'd go down the path of mocking the HttpMessageHandler.
For starters, here is an example of the debate: Mocking HttpClient in unit tests. I want to add that this is not what this question is about.
What I've found is that I have tests upon tests that inject an HttpClient. I've been doing a lot of serverless AWS Lambdas, and the basic flow is like so:
// some pseudo code
public class Functions
{
    private readonly HttpClient _client;

    public Functions(HttpClient client)
    {
        _client = client;
    }

    public async Task<APIGatewayResponse> GetData(ApiGatewayRequest request, ILambdaContext context)
    {
        var result = await _client.GetAsync("http://example.com");
        return new APIGatewayResponse
        {
            StatusCode = (int)result.StatusCode,
            Body = await result.Content.ReadAsStringAsync()
        };
    }
}
...
[Fact]
public async Task ShouldDoCall()
{
    var expectedUri = new Uri("http://example.com");
    var mockResponse = new HttpResponseMessage(HttpStatusCode.OK) { Content = new StringContent("expected body") };

    var mockHandler = new Mock<HttpMessageHandler>();
    mockHandler
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>())
        .ReturnsAsync(mockResponse);

    var f = new Functions(new HttpClient(mockHandler.Object));
    var result = await f.GetData(null, null); // request/context elided (pseudo code)

    mockHandler.Protected().Verify(
        "SendAsync",
        Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req =>
            req.Method == HttpMethod.Get &&
            req.RequestUri == expectedUri // to this uri
        ),
        ItExpr.IsAny<CancellationToken>());
    Assert.Equal(200, result.StatusCode);
}
So here's where I have the problem!
When all my tests run in NCrunch, they pass, and pass fast!
When I run them all manually with ReSharper 2018, they fail.
Equally, when they run in the CI/CD platform, a Docker container with the .NET Core 2.1 SDK on a Linux distro, they also fail.
These tests should not be running in parallel (the xUnit defaults keep tests within a class sequential). I have about 30 tests around these methods combined, and each one randomly fails on the Moq Verify portion. Sometimes they pass, sometimes they fail. If I break the tests down per test class and run the groups that way instead of all in one go, they all pass in chunks. I'll also add that I have gone as far as isolating the variables per test method to make sure there is no overlap.
So, I'm really lost with trying to handle this through here and make sure this is testable.
Are there different ways to approach the HttpClient where it can consistently pass?
After lots of back and forth, I found two things.
I couldn't get parallel processing disabled within the Docker setup, which is where I thought the issue was (I even made the tests Thread.Sleep between runs to slow things down, which felt really icky to me).
I found that the tests run locally through the test runners were all reported as passing, while about half failed on the Docker test runner. What ended up being the issue was a magic string used when setting and getting environment variables.
A small caveat to call out: Amazon updated their .NET Core Lambda tools to install via the dotnet CLI, so this was updated in our Docker image.
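If parallel execution remains a suspect, one way to rule it out is to disable it assembly-wide with an xunit.runner.json (a configuration sketch, assuming xUnit 2.x; the file must sit next to the test project and be copied to the output directory):

```json
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "parallelizeAssemblies": false,
  "parallelizeTestCollections": false
}
```

With both settings false, test collections run strictly one at a time, which makes shared-state failures like the one above deterministic instead of flaky.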

Postman: Is it possible to stop a Postman call from being executed based on conditions detected in pre-request scripts?

I am using the pre-request script of the first call to dynamically generate essential environment variables for the entire run. I also want users to be notified of those failures when running via the collection runner, without having to look at the console. Is it possible to generate information in tests, or some other alternative, so that failures are explicit in the collection runner results?
E.g. if the IP has not been provided in the environment, it does not make sense to run the login call, so I would like to write in a pre-request script:
if (!environment['IP']) {
    // do not execute any further and do not send the REST call
}
I tried using:
if (!environment["xyz"]) {
    tests["condition1"] = false
}
but it gives the error:
There was an error in evaluating pre-requisite script: tests is not defined
Is there any workaround? I don't want to move this code to the Tests tab, as I don't want to clutter it with unrelated environment conditioning.
A throw works just fine (updated with an excellent tip from @Joe White):
if (!environment['X']) {
    throw new Error('No "X" set')
}
This prevents the REST call from going through. Note that in collection runner mode it stops the entire test suite, but when run through the newman collection runner it works just fine.
A thrown error works fine with this test:
var value = pm.environment.get('X')
if (value == undefined || value == null || value.length == 0) {
    throw new Error('No "X" set!')
}
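The same guard can be sketched in plain Node.js to show that throwing in the pre-request step prevents the request from ever being "sent" (`env` is a stand-in for pm.environment; nothing here is a Postman API):

```javascript
// Simulation: a throw in the pre-request phase aborts before the
// request would be dispatched.
const env = {}; // "X" intentionally missing

function preRequest() {
  const value = env.X;
  if (value === undefined || value === null || value.length === 0) {
    throw new Error('No "X" set!');
  }
}

let requestSent = false;
try {
  preRequest();
  requestSent = true; // only reached when the guard passes
} catch (e) {
  console.log(e.message); // No "X" set!
}
console.log("request sent:", requestSent); // request sent: false
```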

How to modify domain mapping in Grails before test execution

Thanks in advance for your help!
I need to change the ID generation of a domain object. Depending on an environment variable, the PK is either assigned by a sequence (its value will be over 100M), or, when working under another "scope", I have to set the PK of the same domain myself (the records come from a migration process, so the inserted PKs will be from 40M to 90M; it's an on-demand process).
As an example:
static mapping = {
    if (System.getenv("MIGRATOR")) {
        id generator: 'assigned'
    } else {
        id generator: 'sequence', params: [sequence: 'MY_SEQ']
    }
}
And in my integration test I would like to do something like:
void "test ..."() {
    System.metaclass.'static'.getenv = { it == "MIGRATOR" }
    // ...test the migration and insert ad hoc domain instances
}
But I realised that the environment is set up before the tests run, so I don't see another way.
Note: I use an integration test because the code is transactional (it uses withTransaction functions), so it doesn't work as a unit test. That's how I do it now, but I'm open to other suggestions that could change my approach to testing this.
If you just want to make sure that your mapping is correct for your env variables, you can do an integration test and inspect your domain class mapping through the org.codehaus.groovy.grails.orm.hibernate.cfg.Mapping instance:
Mapping mapping = new GrailsDomainBinder().getMapping(MyDomainClass)
println mapping.getIdentity() //id[generator:sequence, column:id, type:class java.lang.Long]
Another option is to set your variable in your cmd/console before running the test, taking advantage of Grails' ability to run a single test:
set MIGRATOR=true
grails test-app -integration package.TestSpec
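On Linux/macOS the equivalent of the Windows `set` is `export`, which makes the variable visible to child processes such as the JVM that Grails forks (a sketch; the `sh -c` line just demonstrates the inheritance, and the grails invocation assumes it is on the PATH):

```shell
# export makes MIGRATOR visible to any child process,
# e.g. the JVM started by `grails test-app`.
export MIGRATOR=true
sh -c 'echo "child sees MIGRATOR=$MIGRATOR"'
# then: grails test-app -integration package.TestSpec
```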