Reuse data across iterations - Postman

For testing a POST request to an API, I'm using Postman. What I'm trying to achieve is that every request randomly selects some data from a big file and uses that data to populate the body of the request.
What I would like is to select, for example, 10 iterations and have each of those iterations pick some random data out of the file I provide.
I have this working for 1 iteration.
The problem is that I can't find a way in Postman to reuse the data from the first iteration for all iterations.
A workaround could be to copy-paste the data for every iteration I want to run, but since this will be a rather large dataset, I would like to avoid that.
In short, I'm looking for a way to provide a file to the Postman runner and use the data in that file for every iteration.

You can try calling your request in a while loop with postman.setNextRequest('nameOfYourRequest'):
function recurrentCall(iterations) {
    while (iterations !== 0) {
        postman.setNextRequest('nameOfYourRequest');
        iterations--;
    }
}
And call the function with the number of iterations you want to make:
recurrentCall(5);
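To also pick random data from the file on every iteration, one hedged option (assuming the whole file fits in a variable; "dataset" and "payload" are hypothetical variable names) is to store the file's contents in an environment or collection variable and sample it in a pre-request script:
// Assumption: the "dataset" variable holds the whole file as a JSON array.
const rows = JSON.parse(pm.variables.get('dataset'));
// Pick one random row for this iteration.
const row = rows[Math.floor(Math.random() * rows.length)];
// Expose it so the request body can reference it as {{payload}}.
pm.variables.set('payload', JSON.stringify(row));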

Related

Test request in Postman to evaluate performance

I was given a task using the Postman application, which I had never used before today. I have to write queries that are repeated in a loop so that I get results over different periods of time to check performance (basically tests). I want to know how to write the test query to check results in a loop based on duration, date, and size.

Dividing tasks into AWS Step Functions and then joining them back when all are completed

We have an AWS Step Function that processes CSV files. These CSV files can contain anywhere from 1 to 4000 records.
Now, I want to create another, inner AWS Step Function that will process these CSV records. The problem is that for each record I need to hit another API, and I want all of the records to be processed asynchronously.
For example: a CSV is received containing 2500 records.
The outer step function calls another step function 2500 times (the inner step function takes a CSV record as input), processes each record, and then stores the result in DynamoDB or some other place.
I have learnt about the callback pattern in AWS Step Functions, but in my case I would be passing 2500 tokens, and I want the outer step function to continue only when all 2500 records are done processing.
So my question is: is this possible using AWS Step Functions?
If you know of any article or guide I could reference, that would be great.
Thanks in advance.
It sounds like dynamic parallelism could work:
To configure a Map state, you define an Iterator, which is a complete sub-workflow. When a Step Functions execution enters a Map state, it will iterate over a JSON array in the state input. For each item, the Map state will execute one sub-workflow, potentially in parallel. When all sub-workflow executions complete, the Map state will return an array containing the output for each item processed by the Iterator.
This keeps the flow all within a single Step Function and allows for easier traceability.
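For illustration only, the Map state in the state machine definition might look roughly like this (the state names and ItemsPath are hypothetical, and the inner Task's ARN is elided):
"ProcessRecords": {
  "Type": "Map",
  "ItemsPath": "$.csvRecords",
  "MaxConcurrency": 40,
  "Iterator": {
    "StartAt": "CallApi",
    "States": {
      "CallApi": { "Type": "Task", "Resource": "arn:aws:lambda:...", "End": true }
    }
  },
  "End": true
}
Each element of $.csvRecords gets its own sub-workflow execution, and the Map state's output is the array of all their results, which is exactly the "join when all completed" behaviour being asked for.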
The limiting factor would be the amount of concurrency available (docs):
Concurrent iterations may be limited. When this occurs, some iterations will not begin until previous iterations have completed. The likelihood of this occurring increases when your input array has more than 40 items.
One additional thing to be aware of here is cost. You'll easily blow right through the free tier and start incurring actual cost (link).

How to know when Elasticsearch is ready to query after adding new data?

I am trying to write some unit tests using Elasticsearch. I first use the index API about 100 times to add new data to my index. Then I use the search API with aggs. The problem is that if I don't pause for 1 second after adding the data, I get inconsistent results. If I wait 1 second, I always get the same result.
I'd rather not have to wait a fixed amount of time in my tests; that seems like bad practice. Is there a way to know when the data is ready?
I am already waiting until I get a success response from the Elasticsearch index API, but that does not seem to be enough.
First, I'd suggest indexing your documents with a single bulk query: it saves some time thanks to less HTTP/TCP overhead.
To answer your question, you should consider using the refresh=true parameter (or refresh=wait_for) when indexing your 100 documents.
As stated in the documentation, it will:
Refresh the relevant primary and replica shards (not the whole index) immediately after the operation occurs, so that the updated document appears in search results immediately
More about it here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-refresh.html
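As a rough sketch combining both suggestions (assuming the official Elasticsearch JavaScript client v8 and a hypothetical index name my-index):
const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://localhost:9200' });

async function indexDocs(docs) {
  // One bulk request instead of ~100 individual index calls.
  await client.bulk({
    refresh: 'wait_for', // don't return until the new docs are visible to search
    operations: docs.flatMap((doc) => [{ index: { _index: 'my-index' } }, doc]),
  });
}
After this call resolves, a search with aggs should see all the documents, so the tests no longer need a fixed sleep.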

Is there a way to access the iteration number in Postman REST Client?

I'm using Postman for API testing. I'm running a large number of tests, and I want to print the iteration number to the console for some of them. Is there a way to get the iteration number as an environment-like variable?
According to the Postman API Reference, pm.info.iteration is the value of the current iteration being run.
Example:
console.log(pm.info.iteration);
It is possible now! You can access the iteration variable in the same way you access other variables like responseBody.
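For example, in the older sandbox this was available as a global (assuming your Postman version still exposes it):
console.log(iteration); // prints the current iteration number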
I don't know if there is an internal way to get the iteration number, but I believe you should be able to track this number through code yourself. Here's a quick code snippet:
var value = environment.count ? parseInt(environment.count, 10) : 0; // starts from 0 when unset
value++;
postman.setEnvironmentVariable("count", value);
If you put this in the pre-request editor or the test editor of a collection that you are sure will run once per iteration, it will effectively track the iteration count.
You can get the iteration number with pm.info.iteration (Number): the value of the current iteration being run. See the Postman Sandbox API reference.
I got there like this:
const count = pm.info.iteration + 1;
console.log("======== ITERATION " + count + " ========");

Function with two modes of functionality

I have a function which should have two modes of behaviour depending on where it's called from.
The core functionality is to do an insert into a table in my database, but it has to be done in two different ways.
Normal mode: whenever it's called only one time (outside of a loop)
For example:
//...
myfunc(param1, record); // it should insert a single record into the database
//...
Batch mode: whenever it's called from inside a loop
For example:
while(...){
myfunc(param1, record);
}
Inside the "while" loop, each call should only store the record in a list; when the loop reaches its end, the function should fetch all records from the list and prepare a "batch" query that inserts them all in one go.
I am wondering how to make the function detect where it's called from, so that it can switch to the corresponding mode, and also how to detect that it has reached the end of a loop, at which point it should start fetching records from the list, prepare the query, and execute it.
Any tips or suggestions will be highly appreciated!
Thanks heaps!
It is not, in general, possible to tell whether you are being called in a loop, even with full source code access.
You might be able to do something with caching and delaying the actual database insert for a limited time in all cases. Go on caching until you go for x microseconds without a new call, and then insert the cached data.
However, that could give strange effects if you are not in control of all accesses to the database. In particular, you should flush your cached inserts any time there is a query that might be affected by them, even in a loop.
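A rough JavaScript sketch of that time-based flush (idleMs and batchInsert are hypothetical; batchInsert stands in for whatever performs the actual batch INSERT):
let buffer = [];
let timer = null;
const idleMs = 50; // assumption: how long to wait for more calls before flushing

function myfunc(param1, record) {
  buffer.push(record);
  clearTimeout(timer); // a new call arrived, so restart the idle timer
  timer = setTimeout(flush, idleMs);
}

function flush() {
  if (buffer.length > 0) {
    batchInsert(buffer); // hypothetical helper that runs one batch INSERT
    buffer = [];
  }
}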
Sometimes it is useful to cache queries like this in order to minimize the number of database queries. You can have one function that builds a cache and a second function that sends the request and flushes the cache. If you are going to do that, I recommend using the same function for both single-entry and multiple-entries. The pseudocode will look something like this:
Single-entry usage:
myfunc(param1, record); # caches requests
sendRequests(); # sends all cached requests, flushes cache
Multiple-entry usage:
while(...){
myfunc(param1, record);
}
sendRequests();
sendRequests() will send as many queries as it finds: 1 or many. For efficiency, it can format the requests differently based on their size.
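A minimal JavaScript sketch of this cache-and-flush pattern, with db.insertOne and db.insertMany standing in for whatever your database layer provides:
const pending = [];

function myfunc(param1, record) {
  pending.push({ param1, record }); // only caches; nothing touches the database yet
}

function sendRequests(db) {
  if (pending.length === 0) return;
  if (pending.length === 1) {
    db.insertOne(pending[0]); // single-row INSERT
  } else {
    db.insertMany(pending); // one multi-row batch INSERT
  }
  pending.length = 0; // flush the cache
}
Because the same pair of functions serves both usages, the caller never has to signal which mode it is in; the batch simply has size 1 in the single-entry case.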