I'm testing an API with Postman and I have a problem:
My request goes to a sort of middleware, so I either receive a full 1000+ line JSON, or I receive a PENDING status and an empty results array:
{
    "meta": {
        "status": "PENDING",
        "missing_connectors_count": 0,
        "xxx_type": "INTERNATIONAL"
    },
    "results": []
}
The question is: how do I loop this request in Postman until I get status SUCCESS and a non-empty results array?
Sending these requests manually one by one works fine, but when I run them through the Collection Runner, the "PENDING" responses mess everything up.
I found an awesome post by Christian Baumann about retrying a failed request, which pointed me to a suitable approach for exactly this problem: first poll the status of the operation and only run the actual tests once it is complete.
The code I'd end up with if I were you is:
const maxNumberOfTries = 3; // your max number of tries
const sleepBetweenTries = 5000; // your interval between attempts

if (!pm.environment.get("tries")) {
    pm.environment.set("tries", 1);
}

const jsonData = pm.response.json();

if ((jsonData.meta.status !== "SUCCESS" && jsonData.results.length === 0) && (pm.environment.get("tries") < maxNumberOfTries)) {
    const tries = parseInt(pm.environment.get("tries"), 10);
    pm.environment.set("tries", tries + 1);
    // the empty setTimeout keeps the sandbox busy for the given interval,
    // which delays the retried request
    setTimeout(function() {}, sleepBetweenTries);
    postman.setNextRequest(request.name);
} else {
    pm.environment.unset("tries");
    // your actual tests go here...
}
What I liked about this approach is that the call postman.setNextRequest(request.name) doesn't have any hardcoded request names. The downside I see is that if you run such a request as part of a collection, it will be repeated a number of times, which might bloat your logs with unnecessary noise.
The alternative I was considering is writing a Pre-request Script which will do the polling (by sending a request) and spin until the status indicates completion. The downside of that approach is that it needs much more code for the same logic.
When waiting for services to be ready, or when polling for long-running job results, I see 4 basic options:
Use the Postman Collection Runner or newman and set a per-step delay. This delay is inserted between every step in the collection. Two challenges here: it can be fragile unless you set the delay to a value the request duration will never exceed, and since frequently only a small number of steps need that delay, you increase total test run time, creating excessive build times on a shared build server and delaying other pending builds.
Use https://postman-echo.com/delay/10 where the last URI element is number of seconds to wait. This is simple and concise and can be inserted as a single step after the long running request. The challenge is if the request duration varies widely, you may get false failures because you didn't wait long enough.
Retry the same step until success with postman.setNextRequest(request.name);. The challenge here is that Postman will execute the request as fast as it can which can DDoS your service, get you black-listed (and cause false failures), and chew up a lot of CPU if run on a common build server - slowing other builds.
Use setTimeout() in a Pre-request Script. The only downside I see in this approach is that if you have several steps needing this logic, you end up with some cut & paste code that you need to keep in sync.
Note: there are minor variations on these - like setting them on a collection, a collection folder, a step, etc.
I like option 4 because it provides the right level of granularity for most of my cases. Note that this appears to be the only way to "sleep" in a Postman script: standard JavaScript sleep methods like a Promise with async and await are not supported, and the sandbox's lodash _.delay(function() {}, delay, args[...]) does not keep script execution in the Pre-request Script.
In Postman standalone app v6.0.10, set your step Pre-request script to:
console.log('Waiting for job completion in step "' + request.name + '"');

// Construct our request URL from environment variables
var url = request['url'].replace('{{host}}', postman.getEnvironmentVariable('host'));
var retryDelay = 1000;
var retryLimit = 3;

function isProcessingComplete(retryCount) {
    pm.sendRequest(url, function (err, response) {
        if (err) {
            // hmmm. Should I keep trying or fail this run? Just log it for now.
            console.log(err);
        } else {
            // I could also check for response.json().results.length > 0, but that
            // would omit SUCCESS with empty results which may be valid
            if (response.json().meta.status !== 'SUCCESS') {
                if (retryCount < retryLimit) {
                    console.log('Job is still PENDING. Retrying in ' + retryDelay + 'ms');
                    setTimeout(function() {
                        isProcessingComplete(++retryCount);
                    }, retryDelay);
                } else {
                    console.log('Retry limit reached, giving up.');
                    postman.setNextRequest(null);
                }
            }
        }
    });
}

isProcessingComplete(1);
And you can do your standard tests in the same step.
Note: Standard caveats apply to making retryLimit large.
Try this:
var body = JSON.parse(responseBody);

if (body.meta.status !== "SUCCESS" && body.results.length === 0) {
    postman.setNextRequest("This_same_request_title");
} else {
    postman.setNextRequest("Next_request_title");
    /* you can also try postman.setNextRequest(null); */
}
I was searching for an answer to the same question and thought of a possible solution as I was reading your question.
Use a Postman workflow to re-run your request every time you don't get the response you're looking for. Anyway, that's what I'm going to try.
postman.setNextRequest("request_name");
https://www.getpostman.com/docs/workflows
I didn't manage to find complete guidelines for this issue, which is why I decided to invest some time and describe all the steps of the process from A to Z.
I will walk through an example where we need to iterate over a list of transaction ids, changing the query param to the next transaction id from the list on each iteration.
Step 1. Prepare your request
https://some url/{{queryParam}}
Add the {{queryParam}} variable so it can be changed from the Pre-request Script.
If you need a token for the request, add it here, in the Authorization tab.
Save the request to a collection (Save button in the right corner). For demonstration purposes I will use the name "Transactions Request". We will need this name later on.
Step 2. Prepare pre-request script
In Postman, use the Pre-request Script tab to change the transactionId variable from the query param placeholder to an actual transaction id.
let ids = pm.collectionVariables.get("TransactionIds");
ids = JSON.parse(ids);
const id = ids.shift();
console.log('id', id)
postman.setEnvironmentVariable("transactionId", id);
pm.collectionVariables.set("TransactionIds", JSON.stringify(ids));
pm.collectionVariables.get - gets the array of transaction ids from the collection variables. We will set it up in Step 4.
ids.shift() - removes the id we are about to use from the list (to prevent running twice on the same id).
postman.setEnvironmentVariable("transactionId", id) - replaces the query param placeholder with the actual transaction id.
pm.collectionVariables.set("TransactionIds", JSON.stringify(ids)) - stores the updated list back into the collection variable, now without the id that was just handled.
Step 3. Prepare Tests
In Postman, use the Tests tab to create the loop logic. Tests are executed after the request, so we can use them to trigger the next request.
let ids = pm.collectionVariables.get("TransactionIds");
ids = JSON.parse(ids);

if (ids && ids.length > 0) {
    console.log('length', ids.length);
    postman.setNextRequest("Transactions Request");
} else {
    postman.setNextRequest(null);
}
postman.setNextRequest("Transactions Request") - calls a new request, in this case it will call the "Transactions Request" request
Step 4. Run Collections
In Postman, choose Collections in the left sidebar, click your collection, and then open the Variables tab.
These are the collection variables. In our example we used TransactionIds as the variable, so put the array of transaction ids you want to loop over into its Current Value.
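Since both the Pre-request Script and the Tests call JSON.parse on this variable, the Current Value needs to be a JSON array string; a minimal sketch (these ids are made up):
["txn-1001", "txn-1002", "txn-1003"]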
Now you can click Run (the button in the right corner, near the Save button) to run the looped requests.
You will be asked to choose which request to run. Choose the request we created, "Transactions Request".
It will run our request with the Pre-request Script and the logic we set up in Tests. At the end, Postman will open a new window with a summary of the run.
I am new to Postman and running into a recurring issue that I can't figure out.
I am trying to run the same request multiple times using an array of data set up in the Pre-request Script; however, when I go to the Runner the request only runs once, rather than 3 times.
Pre-request script:
var uuids = pm.environment.get("uuids");

if (!uuids) {
    uuids = ["1eb253c6-8784", "d3fb3ab3-4c57", "d3fb3ab3-4c78"];
}

var currentuuid = uuids.shift();
pm.environment.set("uuid", currentuuid);
pm.environment.set("uuids", uuids);
Tests:
var uuids = pm.environment.get("uuids");

if (uuids && uuids.length > 0) {
    postman.setNextRequest(myurl/?userid={{uuid}});
} else {
    postman.setNextRequest();
}
I have looked over the relevant documentation and I cannot find what is wrong with my code.
Thanks!
A Pre-request Script is not a good way to test an API with different data. It's better to use the Postman Runner for that.
First, prepare a request in Postman that uses a data variable.
Then click the Runner tab.
Prepare a CSV file with the data:
uuids
1eb253c6-8784
d3fb3ab3-4c57
d3fb3ab3-4c78
Provide it as the data file and run the collection.
This lets you run the same API multiple times with different data and check your test cases on each iteration.
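For example, a minimal sketch of a test that reads the current CSV row via the data variables ("uuids" is the column name from the file above; the userid response field is a made-up placeholder):
// Hypothetical check: "userid" is an assumed field in the response
var uuid = pm.iterationData.get("uuids"); // value of the "uuids" column for the current row
pm.test("response matches the uuid for this iteration", function () {
    pm.expect(pm.response.json().userid).to.eql(uuid);
});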
You are so close! The issue is that you are not un-setting your environment variable for uuids, so it is an empty list at the start of each run. Simply add
pm.environment.unset("uuids") to your exit branch and it should run all three times. Also specify that your next request should stop the execution by setting it to null.
So your new "Tests" will become:
var uuids = pm.environment.get("uuids");
if (uuids && uuids.length>0) {
postman.setNextRequest(myurl/?userid={{uuid}});
} else {
postman.setNextRequest(null);
pm.environment.unset("uuids")
}
It seems as though the Runner tab has been removed now?
For generating 'real' data, I found this video a great help: Creating A Runner in Postman-API Testing
Sending 1000 responses to the db to simulate real usage has saved a lot of time!
I'm giving AWS Step Functions a try and I'm interested in using them to implement long-running procedures. One feature I would like to provide to my users is showing an execution's progress. Using describeExecution I can verify whether an execution is still running or done. But progress is a logical measure and Step Functions itself has no way to tell me how much of the process is left.
For that, I need to provide the logic myself. I can measure the progress in the tasks of the state machine, knowing the total number of steps to take and counting the number of steps already taken. I can store this information in the state of the machine, which is passed among steps while the machine is running. But how can I extract this state using the API? Of course, I could store this information in external storage like DynamoDB, but that's not very elegant!
The solution I have found myself (so far the only one) is using the getExecutionHistory API. It returns a list of events generated by the execution, and an event can include input or output (or neither) depending on whether it marks, for example, a Lambda function being started or exited. You can call the API like this:
var params = {
    executionArn: 'STRING_VALUE', /* required */
    maxResults: 10,
    reverseOrder: true
};
stepfunctions.getExecutionHistory(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
By reversing the order of the list of events, we get the latest ones first. Then we can look for the latest output in the list: the first one you find will be the most recent output, which is the current state of the execution.
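As a rough sketch of that scan (which event detail fields actually carry an output depends on your state machine; the progress counters in the final comment are made-up names):
stepfunctions.getExecutionHistory(params, function (err, data) {
    if (err) { console.log(err, err.stack); return; }

    // event detail objects that can carry an "output" field
    var detailKeys = [
        'stateExitedEventDetails',
        'lambdaFunctionSucceededEventDetails',
        'taskSucceededEventDetails'
    ];

    var latestOutput = null;
    // data.events is newest-first because we asked for reverseOrder: true
    for (var i = 0; i < data.events.length && latestOutput === null; i++) {
        var event = data.events[i];
        for (var j = 0; j < detailKeys.length; j++) {
            var details = event[detailKeys[j]];
            if (details && details.output) {
                latestOutput = JSON.parse(details.output);
                break;
            }
        }
    }

    // latestOutput now holds the most recent state passed between steps,
    // e.g. { stepsDone: 3, stepsTotal: 10 } if you keep such counters in the state
    console.log(latestOutput);
});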
I have this scenario where I have a WebApi and an endpoint that when triggered does a lot of work (around 2-5min). It is a POST endpoint with side effects and I would like to limit the execution so that if 2 requests are sent to this endpoint (should not happen, but better safe than sorry), one of them will have to wait in order to avoid race conditions.
I first tried to use a simple static lock inside the controller like this:
lock (_lockObj)
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
This is of course not possible because of the await inside the lock statement.
Another solution I considered was to use a SemaphoreSlim implementation like this:
await semaphore.WaitAsync();
try
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
finally
{
    semaphore.Release();
}
However, according to MSDN:
The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short.
Since in this scenario the wait times may even reach 5 minutes, what should I use for concurrency control?
EDIT (in response to plog17):
I do understand that passing this task onto a service might be the optimal way, however, I do not necessarily want to queue something in the background that still runs after the request is done.
The request involves other requests and integrations that take some time, but I would still like the user to wait for this request to finish and get a response regardless.
This request is expected to be only fired once a day at a specific time by a cron job. However, there is also an option to fire it manually by a developer (mostly in case something goes wrong with the job) and I would like to ensure the API doesn't run into concurrency issues if the developer e.g. double-sends the request accidentally etc.
If only one request of that sort can be processed at a given time, why not implement a queue?
With such a design, there is no need to lock or wait while processing the long-running request.
Flow could be:
The client POSTs /ResourcesToProcess and should quickly receive 202 Accepted
The HttpController simply queues the task (and returns the 202 Accepted)
Another service (a Windows service?) dequeues the next task
It processes the task
It updates the resource status
During this process, the client should easily be able to get the status of previously made requests:
If the task is not found: 404 Not Found. Resource not found for id 123
If the task is processing: 200 OK. 123 is processing.
If the task is done: 200 OK. Process response.
Your controller could look like:
public class TaskController : ApiController
{
    // constructor and private members

    [HttpPost, Route("")]
    public IHttpActionResult QueueTask(RequestBody body)
    {
        messageQueue.Add(body);
        return StatusCode(HttpStatusCode.Accepted); // 202-Accepted
    }

    [HttpGet, Route("{taskId}")]
    public IHttpActionResult GetTask(string taskId)
    {
        YourThing thing = tasksRepository.Get(taskId);

        if (thing == null)
        {
            return NotFound(); // thing does not exist
        }

        if (thing.IsProcessing)
        {
            return Ok("thing is processing");
        }

        // here we assume thing has been processed;
        // a separate flag would be needed to report "queued but not processing yet"
        return Ok(thing.ResponseContent);
    }
}
This design assumes that you do not handle the long-running process inside your WebApi. Indeed, doing so may not be the best design choice. If you still want to do so, you may want to read:
Long running task in WebAPI
https://blogs.msdn.microsoft.com/webdev/2014/06/04/queuebackgroundworkitem-to-reliably-schedule-and-run-background-processes-in-asp-net/
I'm developing a REST server in Play with Scala that at some point needs to request data from one or more other web services. Based on the responses from these services the server must compose a unified result to use later on.
Example:
Event C on www.someplace.com needs to be executed. In order to execute Event C, Event A on www.anotherplace.com and Event B on www.athirdplace.com must also be executed.
Event C has a Seq(www.anotherplace.com, www.athirdplace.com) over which I would like to iterate, sending a WS request to each URL respectively to check whether A and B have been executed.
It is assumed that a GET to these URLs returns either true or false
How do I collect the responses from each request (preferably combined to a list) and assert that each response is equal to true?
EDIT: An event may contain an arbitrary number of URLs, so I can't know beforehand how many WS requests I need to send.
Short Answer
You can use sequence method available on Future object.
For example:
import scala.concurrent.Future
val urls = Seq("www.anotherplace.com", "www.athirdplace.com")
val requests = urls.map(WS.url)
val futureResponses = Future.sequence(requests.map(_.get()))
Aggregated Future
Note that the type of futureResponses will be Future[Seq[WSResponse]]. Now you can work on the results:
futureResponses.map { responses =>
  responses.map { response =>
    val body = response.body
    // do something with response body
  }
}
More Details
From ScalaDocs of sequence method:
Transforms a TraversableOnce[Future[A]] into a
Future[TraversableOnce[A]]. Useful for reducing many Futures into a
single Future.
Note that if any of the Futures you pass to sequence fails, the resulting Future will be failed as well. Only when all Futures complete successfully does the result complete successfully. This is good for some purposes, especially if you want to send the requests at the same time, not one after another.
Have a look at the documentation and see if you can get their example to work.
Try something like this:
val futureResult: Future[Boolean] = for {
  responseOne   <- WS.url(urlOne).get()
  responseTwo   <- WS.url(urlTwo).get()   // urlTwo and urlThree stand for the other event URLs to check
  responseThree <- WS.url(urlThree).get()
} yield responseOne.body.toBoolean && responseTwo.body.toBoolean && responseThree.body.toBoolean
You probably need to parse the response of your WebService since they (probably) won't return booleans, but you'll get the idea.
(I may be using this in a totally incorrect manner, so feel free to challenge the premise of this post.)
I have a small RACTest app (sound familiar?) that I'm trying to unit test. I'd like to test MPSTicker, one of the most ReactiveCocoa-based components. It has a signal that sends a value once per second that accumulates, iff an accumulation flag is set to YES. I added an initializer to take a custom signal for its incrementing signal, rather than being only timer-based.
I wanted to unit test a couple of behaviours of MPSTicker:
Verify that its accumulation signal increments properly (i.e. monotonically increases) when accumulation is enabled and the input incrementing signal sends a new value.
Verify that it sends the same value (and not an incremented value) when the input signal sends a value.
I've added a test that uses the built-in timer to test the first increment, and it works as I expected (though I'm seeking advice on improving the goofy RACSequence initialization I did to get a signal with the @(1) value I wanted.)
I've had a very difficult time figuring out what input signal I can provide to MPSTicker that I can manually send values to. I'm envisioning a test like:
<set up ticker>
<send a tick value>
<verify accumulated value is 1>
<send another value>
<verify accumulated value is 2>
I tried using a RACSubject so I can use sendNext: to push in values as I see fit, but it's not working like I expect. Here's two broken tests:
- (void)testManualTimerTheFirst
{
    // Create a custom tick with one value to send.
    RACSubject *controlledSignal = [RACSubject subject];
    MPSTicker *ticker = [[MPSTicker alloc] initWithTickSource:controlledSignal];

    [ticker.accumulateSignal subscribeNext:^(id x) {
        NSLog(@"%s value is %@", __func__, x);
    }];

    [controlledSignal sendNext:@(2)];
}
- (void)testManualTimerTheSecond
{
    // Create a custom tick with one value to send.
    RACSubject *controlledSignal = [RACSubject subject];
    MPSTicker *ticker = [[MPSTicker alloc] initWithTickSource:controlledSignal];

    BOOL success = NO;
    NSError *error = nil;
    id value = [ticker.accumulateSignal asynchronousFirstOrDefault:nil success:&success error:&error];
    if (!success) {
        XCTAssertTrue(success, @"Signal failed to return a value. Error: %@", error);
    } else {
        XCTAssertNotNil(value, @"Signal returned a nil value.");
        XCTAssertEqualObjects(@(1), value, @"Signal returned an unexpected value.");
    }

    // Send a value.
    [controlledSignal sendNext:@(1)];
}
In testManualTimerTheFirst, I never see any value from controlledSignal's sendNext: come through to my subscribeNext: block.
In testManualTimerTheSecond, I tried using the asynchronousFirstOrDefault: call to get the first value from the signal, then manually sent a value on my subject, but the value didn't come through, and the test failed when asynchronousFirstOrDefault: timed out.
What am I missing here?
This may not answer your question exactly, but it may give you insights on how to effectively test your signals. I've used 2 approaches myself so far:
XCTestCase and TRVSMonitor
TRVSMonitor is a small utility which will pause the current thread for you while you run your assertions. For example:
TRVSMonitor *monitor = [TRVSMonitor monitor];

[[[self.service searchPodcastsWithTerm:@"security now"] collect] subscribeNext:^(NSArray *results) {
    XCTAssertTrue([results count] > 0, @"Results count should be > 0");
    [monitor signal];
} error:^(NSError *error) {
    XCTFail(@"%@", error);
    [monitor signal];
}];

[monitor wait];
As you can see, I'm telling the monitor to wait right after I subscribe and signal it to stop waiting at the end of subscribeNext and error blocks to make it continue executing (so other tests can run too). This approach has the benefit of not relying on a static timeout, so your code can run as long as it needs to.
Using CocoaPods, you can easily add TRVSMonitor to your project:
pod "TRVSMonitor", "~> 0.0.3"
Specta & Expecta
Specta is a BDD/TDD (behavior driven/test driven) test framework. Expecta is a framework which provides more convenient assertion matchers. It has built-in support for async tests. It enables you to write more descriptive tests with ReactiveCocoa, like so:
it(#"should return a valid image, with cache state 'new'", ^AsyncBlock {
[[cache imageForURL:[NSURL URLWithString:SECURITY_NOW_ARTWORK_URL]] subscribeNext:^(UIImage *image) {
expect(image).notTo.beNil();
expect(image.cacheState).to.equal(JPImageCacheStateNew);
} error:^(NSError *error) {
XCTFail(#"%#", error);
} completed:^{
done();
}];
});
Note the use of ^AsyncBlock {. Using simply ^ { would imply a synchronous test.
Here you call the done() function to signal the end of an asynchronous test. I believe Specta uses a 10 second timeout internally.
Using CocoaPods, you can easily add Expecta & Specta:
pod "Expecta", "~> 0.2.3"
pod "Specta", "~> 0.2.1"
See this question: https://stackoverflow.com/a/19127547/420594
The XCAsyncTestCase has some extra functionality to allow for asynchronous test cases.
Also, I haven't looked at it in depth yet, but could ReactiveCocoaTests be of some interest to you? At a glance, they appear to be using Expecta.