I would like to run a Postman test on just the final iteration of a test run - I am building a variable (array) of response time values across all of the iterations and then want to test this variable for extreme values once we've reached the last / final iteration.
I hoped pm.info.iteration would have my answer but didn't see anything relevant.
I'm using a data file - the test runner highlights how many iterations are applicable (rows in the csv) as soon as the file is chosen, so I'm guessing that Postman does know the 'final' iteration? I just haven't worked out how to get at it.
My workaround is to hard code the number of iterations per test run based on how many rows my csvs currently have (e.g. if (pm.info.iteration === 70)), but that's not ideal as the data file is likely to grow.
As @DannyDainton mentioned, you can use iterationCount.
The iteration index starts from 0, so use:
pm.info.iteration === (pm.info.iterationCount - 1)
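A minimal sketch of how that last-iteration guard could look in a test script. The collection variable name responseTimes and the 2000 ms threshold are assumptions for illustration, not from the original question:

```javascript
// The last-iteration check as a plain function.
// pm.info.iteration is 0-based; pm.info.iterationCount is the total.
function isLastIteration(iteration, iterationCount) {
  return iteration === iterationCount - 1;
}

// In a Postman test script this might be used as (sketch, names assumed):
// if (isLastIteration(pm.info.iteration, pm.info.iterationCount)) {
//   const times = JSON.parse(pm.collectionVariables.get("responseTimes") || "[]");
//   pm.test("no extreme response times", function () {
//     times.forEach(function (t) { pm.expect(t).to.be.below(2000); });
//   });
// }
```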
Related
I was given a task involving Postman, an application I had never used before today. I have to write requests that are repeated in a loop, in order to collect results over different periods of time to check performance (basically tests). I want to know how to write the test to check results in a loop based on duration, date, and size.
For testing a POST request to an API, I'm using postman. What I'm trying to achieve is that every request I do selects some data from a big file (randomly) and uses this data to populate the body of the request.
What I would like is to select, for example, 10 iterations and have each of those iterations pick some random data out of the file I provide.
I have this working for 1 iteration.
The problem is that in Postman I can't find a way to use the data for the first iteration for all iterations.
A workaround could be to copy-paste the data for every iteration I would like to do, but since this will be a rather large dataset I would like to avoid this.
In short, I'm looking for a way to provide a file to the postman runner and use the data in that file for every iteration I would run.
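One way to approximate this is to load the whole dataset into a variable once and then pick a random row yourself in a pre-request script, since Postman's data files normally expose one row per iteration. A sketch, where the variable name dataset and the use of a collection variable are assumptions:

```javascript
// Pick one random row from an array of data rows.
function pickRandomRow(rows) {
  return rows[Math.floor(Math.random() * rows.length)];
}

// In a Postman pre-request script this might look like (sketch):
// const rows = JSON.parse(pm.collectionVariables.get("dataset"));
// const row = pickRandomRow(rows);
// pm.variables.set("requestBody", JSON.stringify(row));
```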
You can try calling your request in a while loop with postman.setNextRequest('nameOfYourRequest'):
function recurrentCall(iterations){
    while(iterations !== 0){
        postman.setNextRequest('nameOfYourRequest');
        iterations--;
    }
}
And call the function with the number of iterations you want to make:
recurrentCall(5);
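One caveat: setNextRequest only records the most recent call per execution, so calling it repeatedly inside a loop does not queue multiple runs. A counter stored in a variable is usually more reliable; a sketch, where the variable name remaining is an assumption:

```javascript
// Decide the next request from a remaining-runs counter.
// Returning null tells Postman to end the run.
function nextRequest(remaining, requestName) {
  return remaining > 0 ? requestName : null;
}

// In a Postman test script (sketch):
// let remaining = Number(pm.collectionVariables.get("remaining") || 0) - 1;
// pm.collectionVariables.set("remaining", remaining);
// postman.setNextRequest(nextRequest(remaining, "nameOfYourRequest"));
```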
I'm getting a "The remote Process is out of memory" in SAS DIS (Data Integration Studio):
Since it is possible that my approach is wrong, I'll explain the problem I'm working on and the solution I've decided on:
I have a large list of customer names which need cleaning. In order to achieve this, I use a .csv file containing regular expression patterns and their corresponding replacements. (I use this approach since it is easier to add new patterns to the file and upload it to the server for the deployed job to read from, rather than hardcoding new rules and redeploying the job.)
In order to get my data step to make use of the rules in the file I add the patterns and their replacements to an array in the first iteration of my data step then apply them to my names. Something like:
DATA &_OUTPUT;
    ARRAY rule_nums{1:&NOBS} _temporary_;
    IF (_n_ = 1) THEN
        DO i = 1 TO &NOBS;
            SET WORK.CLEANING_RULES;
            rule_nums{i} = PRXPARSE(CATS('s/', rule_string_match, '/', rule_string_replace, '/i'));
        END;
    SET WORK.CUST_NAMES;
    customer_name_clean = customer_name;
    DO i = 1 TO &NOBS;
        customer_name_clean = PRXCHANGE(rule_nums{i}, 1, customer_name_clean);
    END;
RUN;
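For readers less familiar with SAS, the rule-table idea above can be sketched in JavaScript: compile each pattern/replacement pair once, then apply every rule in order to each name. The example rules here are made up for illustration:

```javascript
// A rule table, analogous to the rows of WORK.CLEANING_RULES (example rules only).
const rules = [
  { match: /\bLtd\b/gi, replace: "Limited" },  // expand an abbreviation
  { match: /\s{2,}/g, replace: " " },          // collapse repeated whitespace
];

// Apply all rules in order to a single name.
function cleanName(name) {
  return rules.reduce((acc, rule) => acc.replace(rule.match, rule.replace), name);
}
```

Note that rule order matters: each rule sees the output of the previous one, just as in the SAS DO loop.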
When I run this on around ~10K rows or less, it always completes and finishes extremely quickly. If I try on ~15K rows it chokes for a super long time and eventually throws an "Out of memory" error.
To try and deal with this I built a loop (using the SAS DIS loop transformation) wherein I number the rows of my dataset first, then apply the preceding logic in batches of 10000 names at a time. After a very long time I got the same out of memory error, but when I checked my target table (Teradata) I noticed that it ran and loaded the data for all but the last iteration. When I switched the loop size from 10000 to 1000 I saw exactly the same behaviour.
For testing purposes I've been working with only around ~500K rows but will soon have to handle millions and am worried about how this is going to work. For reference, the set of cleaning rules I'm applying is currently 20 rows but will grow to possibly a few hundred.
Is it significantly less efficient to use a file with rules rather than hard coding the regular expressions directly in my datastep?
Is there any way to achieve this without having to loop?
Since my dataset gets overwritten on every loop iteration, how can there be an out of memory error for datasets that are 1000 rows long (and like 3 columns)?
Ultimately, how do I solve this out of memory error?
Thanks!
The issue turned out to be that the log that the job was generating was too large. The possible solutions are to disable logging or to redirect the log to a location which can be periodically purged and/or has enough space.
I read some articles about SoapUI; one of them is "SoapUI getting request parameters in mock service script", and I think the solution I am looking for involves Groovy.
I have a SOAP Web Service that I want to run some tests on with a dynamically changing request. This request...
<soapenv:Body>
<req:MyrRquest>
<req:number>XPTO</req:number>
</req:MyrRquest>
</soapenv:Body>
My idea is to run a loop from a starting value increasing 1 until I reach my maximum. And I would like to replace XPTO with this changing value.
Has anyone ever attempted this? What is the best way to do it?
Here is how it can be done, using a Groovy script step.
Define a test case with two test steps:
Test Request step (SOAP, the one you have shown)
Groovy Script step (the additional one I am proposing)
Define the three test-case-level custom properties below. MIN_VALUE and MAX_VALUE hold the range of times the request should be repeated; provide values as per your test. Set CURRENT_VALUE equal to MIN_VALUE once, as a one-time job: CURRENT_VALUE is the property that gets incremented each time, and we do not want to alter MIN_VALUE on each run. That way, there is no need to reset anything after each test case execution.
MIN_VALUE
MAX_VALUE
CURRENT_VALUE
Note that this cannot be run as individual steps; i.e., the whole test case has to be executed in order to repeat the request the required number of times. Hope that is okay for you.
In the test request, you need to use the CURRENT_VALUE property as a placeholder.
Change: <req:number>XPTO</req:number>
To : <req:number>${#TestCase#CURRENT_VALUE}</req:number>
Here is the groovy script code:
//Read the test case level properties as integers
def min = context.testCase.getPropertyValue('MIN_VALUE') as Integer
def max = context.testCase.getPropertyValue('MAX_VALUE') as Integer
//Get the previous step name
def pStepName = context.testCase.testStepList[context.currentStepIndex-1].name
//min+1, because already test request is executed once
((min+1)..max).each {
    //update the current value incremented by 1
    context.testCase.setPropertyValue('CURRENT_VALUE', it.toString())
    log.info "Running step ${pStepName} for ${it} time"
    //run the previous test step
    testRunner.runTestStepByName(pStepName)
}
//finally resetting current value to min value as test finishes
context.testCase.setPropertyValue('CURRENT_VALUE', min.toString())
This Groovy script step basically takes care of running the first step n-1 times, where n is the total number of executions required, because the test request has already executed once before the Groovy script step runs.
And as mentioned earlier, just run the test case.
I'm using postman for API testing. I'm running a large number of tests and I want to print the iteration number to the console on some of them. Is there a way to get the iteration number as an environment-like variable?
According to Postman API Reference, pm.info.iteration - is the value of the current iteration being run.
Example:
console.log(pm.info.iteration);
It is possible now! You can access the iteration variable, in the same way you access other variables like responseBody.
I don't know if there is an internal way to get the iteration number but I believe you should be able to track this number through code yourself. Here's a quick code snippet:
// Treat a missing "count" variable as 0 on the first iteration
var value = Number(environment.count || 0);
value++;
postman.setEnvironmentVariable("count", value);
If you put this in the pre-request editor or the test editor of a collection that you are sure will run once per iteration it will effectively track the iteration count.
You can get the iteration number with
pm.info.iteration:Number
Is the value of the current iteration being run.
Postman Sandbox API reference
I got there like this:
const count = pm.info.iteration + 1;
console.log("======== ITERATION " + count + " ========");