Reuse {{$randomInt}} in Postman

My 1st request is: GET http://example.com?int={{$randomInt}}.
I need to run a 2nd request (with other tests in it) to the same address, so I need to save the generated value. How can I do that?
I tried pm.variables.get("int") in the "Tests" sandbox after the 1st request, but that code cannot see the int variable.
Creating a random number in the Pre-req. sandbox of the 1st request:
postman.setGlobalVariable('int', Math.floor(Math.random() * 1000));
doesn't help either, because I need to use this param in the URL, while the "Pre-req." block is run after the request but before the tests.
So how can I generate a random value before the 1st request and store it for use in the 2nd request?

If you set this in the Pre-Request Script of the first request:
pm.globals.set('int', Math.floor(Math.random() * 1000))
Or
// Using the built-in Lodash module
pm.globals.set("int", _.random(0, 1000))
You will be able to reference it with the {{int}} syntax in any request. If you set it in the first request and use it in the URL http://first-example.com?int={{int}}, the value will persist and you can use it again in a second request: http://second-example.com?int={{int}}
Each time {{$randomInt}} is used, it generates a new value at run time, which is why you need to store a value yourself if you want to reuse it.
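If you also need the stored value inside a later script (for example to assert that the second request really reused it), you can read the same global back in that request's Tests tab. A minimal sketch, assuming the variable was set as above and that your API echoes the query string back in an args object (that field name is only an example):
// Tests tab of the second request
const expected = pm.globals.get('int');
pm.test('the stored random int was reused', function () {
    // "args.int" is a hypothetical response field - adjust it to your API
    pm.expect(String(pm.response.json().args.int)).to.eql(String(expected));
});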

How to specify the database in an ArangoDb AQL query?

If I have multiple databases defined on a particular ArangoDB server, how do I specify the database I'd like an AQL query to run against?
Running the query through the REST endpoint that includes the db name (substituted for [DBNAME] below), i.e.:
/_db/[DBNAME]/_api/cursor
doesn't seem to work. The error message says 'unknown path /_db/[DBNAME]/_api/cursor'
Is this something I have to specify in the query itself?
Also: The query I'm trying to run is:
FOR col in COLLECTIONS() RETURN col.name
Fwiw, I haven't found a way to set the "current" database through the REST API. Also, I'm accessing the REST API from C++ using fuerte.
Tom Regner deserves primary credit here for prompting the enquiry that produced this answer. I am posting my findings here as an answer to help others who might run into this.
I don't know if this is a fuerte bug, a shortcoming, or just an API caveat that wasn't clear to me... BUT...
In order for the '/_db/[DBNAME]/' prefix in an endpoint (e.g. the full endpoint '/_db/[DBNAME]/_api/cursor') to be registered and used in the header of a ::arangodb::fuerte::Request, it is NOT sufficient (as of ArangoDB 3.5.3 and the fuerte version available at the time of this answer) to simply call:
std::unique_ptr<fuerte::Request> request;
const char *endpoint = "/_db/[DBNAME]/_api/cursor";
request = fuerte::createRequest(fuerte::RestVerb::Post, endpoint);
// and adding any arguments to the request using a VPackBuilder...
// in this case the query (omitted)
To have the database name included as part of such a request, you must additionally call the following:
request->header.parseArangoPath(endpoint);
Failure to do so seems to result in an error about an 'unknown path'.
Note 1: Simply setting the database member variable, i.e.
request->header.database = "[DBNAME]";
does not work.
Note 2: Operations without the leading '/_db/[DBNAME]/' prefix seem to work fine against the 'current' database (which, at least for me, seems to be stuck at '_system', since as far as I can tell there is no endpoint to change it via the HTTP REST API).
The docs aren't very helpful right now, so in case someone is looking for a more complete example, please consider the following code.
EventLoopService eventLoopService;
// adjust the connection for your environment!
std::shared_ptr<Connection> conn = ConnectionBuilder().endpoint("http://localhost:8529")
        .authenticationType(AuthenticationType::Basic)
        .user(?)      // enter a user with access
        .password(?)  // enter the password
        .connect(eventLoopService);
// create the request
std::unique_ptr<Request> request = createRequest(RestVerb::Post, ContentType::VPack);
// enter the database name (ensure the user has access)
request->header.database = ?;
// API endpoint to submit AQL queries
request->header.path = "/_api/cursor";
// Create a payload to be submitted to the API endpoint
VPackBuilder builder;
builder.openObject();
// here is your query
builder.add("query", VPackValue("for col in collections() return col.name"));
builder.close();
// add the payload to the request
request->addVPack(builder.slice());
// send the request (blocking)
std::unique_ptr<Response> response = conn->sendRequest(std::move(request));
// check the response code - it should be 201
unsigned int statusCode = response->statusCode();
// slice has the response data
VPackSlice slice = response->slices().front();
std::cout << slice.get("result").toJson() << std::endl;

Postman: How to store an array from response and use it to make multiple requests

Let's say I have an API endpoint /bar that returns:
{
    "fooIds": [1, 2, 3]
}
and a /foo/<id> endpoint that I would like to call with those ids.
Is there a way of getting Postman to make a call to the /bar endpoint and then subsequent calls to /foo?
Create yourself a collection in Postman with two requests. The first one for /bar, second one for /foo/{{id}} where {{id}} is a Postman parameter stored in either the globals or environment variables (either place is fine, the example below uses the globals).
Then in the test script of the first request save fooIds to the globals with
pm.globals.set('fooIds', pm.response.json().fooIds.join(','));
In the Pre-request Script of the second request:
// fetch the fooIds into an array (leaves it undefined if not found)
const fooIds = pm.globals.has('fooIds') && pm.globals.get('fooIds').split(',');
// if fooIds was found, and has a first element, save it
fooIds && fooIds[0] && pm.globals.set('id',fooIds.shift()); // .shift() removes the first element
// save the updated fooIds back to the globals (only if the variable was found)
fooIds && pm.globals.set('fooIds', fooIds.join(','));
Note: when fooIds = [], then fooIds.join(',') returns "", and setting a global variable to "" deletes it.
and finally in the test script of the second request
pm.globals.has('fooIds') && postman.setNextRequest('nameOfFooRequest')
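If you also want a basic check on each /foo response while the loop runs, the whole Tests tab could look like this minimal sketch (the 200-status check is just an assumption about your /foo API):
// a simple check on each /foo/<id> response
pm.test('foo request succeeded for id ' + pm.globals.get('id'), function () {
    pm.response.to.have.status(200); // assumption: /foo/<id> answers with 200
});
// keep looping while there are ids left
pm.globals.has('fooIds') && postman.setNextRequest('nameOfFooRequest');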
If you run that in the Postman Collection Runner, it should (hopefully) iterate over the array of ids.
(be sure to save before you run, as you might otherwise end up in an infinite loop)
Let me know if you have any issues.

Postman - how to loop request until I get a specific response?

I'm testing API with Postman and I have a problem:
My request goes through a sort of middleware, so I either receive a full 1000+ line JSON, or I receive a PENDING status and an empty array of results:
{
    "meta": {
        "status": "PENDING",
        "missing_connectors_count": 0,
        "xxx_type": "INTERNATIONAL"
    },
    "results": []
}
The question is: how do I loop this request in Postman until I get status SUCCESS and a non-empty results array?
When I send these requests manually one by one it's OK, but when I run them through the Collection Runner, "PENDING" messes everything up.
I found an awesome post about retrying a failed request by Christian Baumann, which allowed me to find a suitable approach to the exact same problem of first polling the status of some operation and only running the actual tests once it's complete.
The code I'd end up with if I were you is:
const maxNumberOfTries = 3; // your max number of tries
const sleepBetweenTries = 5000; // your interval between attempts

if (!pm.environment.get("tries")) {
    pm.environment.set("tries", 1);
}

const jsonData = pm.response.json();

if ((jsonData.meta.status !== "SUCCESS" && jsonData.results.length === 0) && (pm.environment.get("tries") < maxNumberOfTries)) {
    const tries = parseInt(pm.environment.get("tries"), 10);
    pm.environment.set("tries", tries + 1);
    // the empty setTimeout delays script completion, acting as a sleep before the retry
    setTimeout(function() {}, sleepBetweenTries);
    postman.setNextRequest(request.name);
} else {
    pm.environment.unset("tries");
    // your actual tests go here...
}
What I liked about this approach is that the call postman.setNextRequest(request.name) doesn't have any hardcoded request names. The downside I see is that if you run such a request as part of a collection, it will be repeated a number of times, which might bloat your logs with unnecessary noise.
The alternative I was considering is writing a Pre-request Script which will do the polling (by sending a request) and spin until the status indicates some kind of completion. The downside of this approach is the need for much more code for the same logic.
When waiting for services to be ready, or when polling for long-running job results, I see 4 basic options:
1. Use the Postman Collection Runner or newman and set a per-step delay. This delay is inserted between every step in the collection. Two challenges here: it can be fragile unless you set the delay to a value the request duration will never exceed, and, frequently, only a small number of steps actually need the delay, so you increase total test run time, creating excessive build times on a shared build server and delaying other pending builds. (A minimal newman sketch follows this list.)
2. Use https://postman-echo.com/delay/10, where the last URI element is the number of seconds to wait. This is simple and concise and can be inserted as a single step after the long-running request. The challenge is that if the request duration varies widely, you may get false failures because you didn't wait long enough.
3. Retry the same step until success with postman.setNextRequest(request.name);. The challenge here is that Postman will execute the request as fast as it can, which can DDoS your service, get you black-listed (and cause false failures), and chew up a lot of CPU if run on a shared build server, slowing other builds.
4. Use setTimeout() in a Pre-request Script. The only downside I see in this approach is that if you have several steps needing this logic, you end up with some cut & paste code that you need to keep in sync.
Note: there are minor variations on these - like setting them on a collection, a collection folder, a step, etc.
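For option 1, a minimal sketch of that per-step delay using newman's Node.js API (the collection file name here is just a placeholder; the equivalent CLI flag is --delay-request):
// run-collection.js - minimal sketch, "my-collection.json" is a placeholder
const newman = require('newman');

newman.run({
    collection: require('./my-collection.json'), // exported Postman collection
    delayRequest: 5000,                          // pause 5s between every request
    reporters: 'cli'
}, function (err) {
    if (err) { throw err; }
    console.log('collection run complete');
});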
I like option 4 because it provides the right level of granularity for most of my cases. Note that this appears to be the only way to "sleep" in a Postman script: standard JavaScript sleep methods, like a Promise with async and await, are not supported, and using the sandbox's lodash _.delay(function() {}, delay, args[...]) does not hold up script execution in the Pre-request Script.
In the Postman standalone app v6.0.10, set your step's Pre-request Script to:
console.log('Waiting for job completion in step "' + request.name + '"');
// Construct our request URL from environment variables
var url = request['url'].replace('{{host}}', postman.getEnvironmentVariable('host'));
var retryDelay = 1000;
var retryLimit = 3;

function isProcessingComplete(retryCount) {
    pm.sendRequest(url, function (err, response) {
        if (err) {
            // hmmm. Should I keep trying or fail this run? Just log it for now.
            console.log(err);
        } else {
            // I could also check for response.json().results.length > 0, but that
            // would omit SUCCESS with empty results which may be valid
            if (response.json().meta.status !== 'SUCCESS') {
                if (retryCount < retryLimit) {
                    console.log('Job is still PENDING. Retrying in ' + retryDelay + 'ms');
                    setTimeout(function() {
                        isProcessingComplete(++retryCount);
                    }, retryDelay);
                } else {
                    console.log('Retry limit reached, giving up.');
                    postman.setNextRequest(null);
                }
            }
        }
    });
}

isProcessingComplete(1);
And you can do your standard tests in the same step.
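For example, the Tests tab of that same step could then just assert on the final (non-pending) response; the field names below are taken from the sample JSON in the question:
// standard tests for the same step - run against the final response
const jsonData = pm.response.json();
pm.test('job finished with results', function () {
    pm.expect(jsonData.meta.status).to.eql('SUCCESS');
    pm.expect(jsonData.results.length).to.be.above(0);
});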
Note: Standard caveats apply to making retryLimit large.
Try this:
var body = JSON.parse(responseBody);

if (body.meta.status !== "SUCCESS" && body.results.length === 0) {
    postman.setNextRequest("This_same_request_title");
} else {
    postman.setNextRequest("Next_request_title");
    /* you can also try postman.setNextRequest(null); */
}
I was searching for an answer to the same question and thought of a possible solution as I was reading your question.
Use a Postman workflow to rerun your request every time you don't get the response you're looking for. Anyway, that's what I'm going to try.
postman.setNextRequest("request_name");
https://www.getpostman.com/docs/workflows
I didn't manage to find complete guidelines for this issue, which is why I decided to invest some time and describe the whole process from A to Z.
I will walk through an example where we need to iterate over a list of transaction ids, changing a query param to the next transaction id on each iteration.
Step 1. Prepare your request
https://some url/{{transactionId}}
Add the {{transactionId}} variable to the URL so it can be set from the Pre-request Script.
If you need a token for the request, add it here in the Authorization tab.
Save the request to a collection (Save button in the right corner). For demonstration purposes I will use the name "Transactions Request". We will need this name later on.
Step 2. Prepare pre-request script
In Postman, use the Pre-request Script tab to set the transactionId variable to the next actual transaction id from the list.
let ids = pm.collectionVariables.get("TransactionIds");
ids = JSON.parse(ids);
const id = ids.shift();
console.log('id', id)
postman.setEnvironmentVariable("transactionId", id);
pm.collectionVariables.set("TransactionIds", JSON.stringify(ids));
pm.collectionVariables.get - gets the array of transaction ids from the collection variables. We will set it up in Step 4.
ids.shift() - removes the id we are about to use from the list (to prevent running the same id twice).
postman.setEnvironmentVariable("transactionId", id) - sets the transactionId variable used in the URL to the actual transaction id.
pm.collectionVariables.set("TransactionIds", JSON.stringify(ids)) - writes the updated list, which no longer includes the id that is being handled, back to the collection variable.
Step 3. Prepare Tests
In Postman, use the Tests tab to create the loop logic. Tests are executed after the request, so we can use them to trigger the next request.
let ids = pm.collectionVariables.get("TransactionIds");
ids = JSON.parse(ids);
if (ids && ids.length > 0) {
    console.log('length', ids.length);
    postman.setNextRequest("Transactions Request");
} else {
    postman.setNextRequest(null);
}
postman.setNextRequest("Transactions Request") - calls a new request, in this case it will call the "Transactions Request" request
Step 4. Run Collections
In Postman, choose Collections in the left sidebar (click on it), select your collection and open the Variables tab.
These are the collection variables. In our example we used TransactionIds as the variable, so put the array of transaction ids you want to loop over into its Current Value.
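Because the Pre-request Script calls JSON.parse on this value, the Current Value needs to be a JSON array string, for example (these ids are just placeholders):
["111", "222", "333"]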
Now you can click Run (the button in the right corner, near the Save button) to run our looped requests.
You will be asked to choose which request to run. Choose the "Transactions Request" request we created.
It will run our request with the Pre-request Script and the logic we set in Tests. At the end, Postman will open a new window with a summary of the run.

JMeter - Verify a Specific Cookie Value was Used?

So in my Test Plan I have a Cookie Manager set up inside my Thread Group which sets a specific value for one cookie. Let's call it MYID. I'm trying to figure out a way to verify that this specific cookie's value was used to complete this one HTTP Request, because if I set MYID to a specific value (which actually tells which web server to go to), say "Server1", but Server1 is down or unavailable, HAProxy should change this and send you to Server2.
So basically I want to make sure that the cookie MYID was equal to "Server1" all the way through the HTTP Request.
I am trying to use a BeanShell PostProcessor to verify the cookie's value after the request is run, but when I tried reusing some code from a PreProcessor that sets a cookie in a different Test Plan of mine, I get an error saying:
Error Message:
Typed variable declaration : Attempt to resolve method: getCookieManager() on undefined variable or class name: sampler
Below is the code, slightly modified from a BeanShell PreProcessor in another Test Plan I have...
CODE:
import org.apache.jmeter.protocol.http.control.Cookie;
import org.apache.jmeter.protocol.http.control.CookieManager;
CookieManager manager = sampler.getCookieManager();
for (int i = 0; i < manager.getCookieCount(); i++) {
    Cookie cookie = manager.get(i);
    if (cookie.getName().equals("MYID")) {
        if (cookie.getValue().equals("Server1")) {
            log.info("OK: The Cookie contained the Correct Server Number...");
        } else {
            log.info("ERROR: The Cookie did NOT contain the Correct Server Number...");
        }
        break;
    }
}
For the error, I was thinking the "sampler" object was no longer available since the Request was already run, or something along those lines, but I'm not sure...
Or, is there another JMeter object I should be using instead of the "BeanShell PostProcessor" in order to verify the Cookie's value was correct..?
Any thoughts or suggestion would be greatly appreciated!
Thanks in Advance,
Matt
If you are trying to get the cookie manager from the parent sampler in the Beanshell PostProcessor, you need to use ctx.getCurrentSampler(), not "sampler", as the latter is not exposed in the PostProcessor's script variables.
So just change this line:
CookieManager manager = sampler.getCookieManager();
to
CookieManager manager = ctx.getCurrentSampler().getCookieManager();
And your script should start working as you expect.
ctx is a shorthand for the JMeterContext instance, and the getCurrentSampler() method name is self-explanatory.
For more information on Beanshell scripting check out How to use BeanShell: JMeter's favorite built-in component guide.

How do I call render_template() once per minute flask

I am new to Flask. How do I call render_template('prices.html', stock_price) once every minute for a given page that is fed by constantly changing data?
I tried this:
throttle.Throttle(period=60)  # 60 second throttle
while True:
    stock_price = get_stock_price()
    render_template('prices.html', stock_price=stock_price)
    throttle.check()  # Check if 60 seconds have passed otherwise sleep
The only thing that does work is return render_template(...). Apparently render_template() must be part of a return-statement. Unfortunately, once return is called the game is over.
How do I accomplish this? I'm assuming it is just ignorance on my part.
When you render the template, Flask returns the HTML produced from the template file, and as you said, once you use return the game is over.
Even if you find a really hacky way to do it, it's not a good approach: you will waste too many server calls and too much of the user's waiting time.
The easy way to do it
Write a really simple Python function that returns only the stock price (I guess you already have one - get_stock_price()),
and decorate it with a route (let's say "/getprice"). Everyone who visits this page will get a blank page containing just the stock price text.
Now for the real magic - use jQuery AJAX on the HTML page to call this function:
$.ajax({
    type: "POST",
    url: "/getprice",
})
.done(function( price ) {
    $("#price-box").val(price)
});
Hey, but wait, what about my "once per minute"?
Of course, we can wrap this AJAX call in a setInterval() function like this:
setInterval(function() {
    $.ajax({
        type: "POST",
        url: "/getprice",
    })
    .done(function( price ) {
        $("#price-box").val(price)
    });
}, 1000 * 60);
Tell me if you need any help with that.
The Pro way to do it
Ok, so you managed to send a request for your get_stock_price() function once per minute.
Great.
But what about performance?
I guess your get_stock_price() is doing some other request or even some web scraping, which could be really hard on the server (think what happens when 10,000 users are using this page).
What i would do is:
Store the data from get_stock_price() in your DB every minute (a cron job would do the job); then, when a user asks for this data (the AJAX request), pull it out of the DB.
This way the server does the work in the background and the user won't notice any difference in data loading speed.