Unit-testing a simple usage of RACSignal with RACSubject

(I may be using this in a totally incorrect manner, so feel free to challenge the premise of this post.)
I have a small RACTest app (sound familiar?) that I'm trying to unit test. I'd like to test MPSTicker, one of the more ReactiveCocoa-based components. It exposes a signal that sends an accumulated value once per second, but only if an accumulation flag is set to YES. I added an initializer that takes a custom signal as its tick source, rather than relying only on the timer.
I wanted to unit test a couple of behaviours of MPSTicker:
Verify that its accumulation signal increments properly (i.e. monotonically increases) when accumulation is enabled and the input incrementing signal sends a new value.
Verify that it sends the same value (and not an incremented value) when accumulation is disabled and the input signal sends a value.
I've added a test that uses the built-in timer to test the first increment, and it works as I expected (though I'm seeking advice on improving the goofy RACSequence initialization I did to get a signal with the @(1) value I wanted).
I've had a very difficult time figuring out what input signal I can provide to MPSTicker that I can manually send values to. I'm envisioning a test like:
<set up ticker>
<send a tick value>
<verify accumulated value is 1>
<send another value>
<verify accumulated value is 2>
I tried using a RACSubject so I can use sendNext: to push in values as I see fit, but it's not working like I expect. Here's two broken tests:
- (void)testManualTimerTheFirst
{
    // Create a custom tick with one value to send.
    RACSubject *controlledSignal = [RACSubject subject];
    MPSTicker *ticker = [[MPSTicker alloc] initWithTickSource:controlledSignal];
    [ticker.accumulateSignal subscribeNext:^(id x) {
        NSLog(@"%s value is %@", __func__, x);
    }];
    [controlledSignal sendNext:@(2)];
}
- (void)testManualTimerTheSecond
{
    // Create a custom tick with one value to send.
    RACSubject *controlledSignal = [RACSubject subject];
    MPSTicker *ticker = [[MPSTicker alloc] initWithTickSource:controlledSignal];
    BOOL success = NO;
    NSError *error = nil;
    id value = [ticker.accumulateSignal asynchronousFirstOrDefault:nil success:&success error:&error];
    if (!success) {
        XCTAssertTrue(success, @"Signal failed to return a value. Error: %@", error);
    } else {
        XCTAssertNotNil(value, @"Signal returned a nil value.");
        XCTAssertEqualObjects(@(1), value, @"Signal returned an unexpected value.");
    }
    // Send a value.
    [controlledSignal sendNext:@(1)];
}
In testManualTimerTheFirst, I never see any value from controlledSignal's sendNext: come through to my subscribeNext: block.
In testManualTimerTheSecond, I tried using the asynchronousFirstOrDefault: call to get the first value from the signal, then manually sent a value on my subject, but the value didn't come through, and the test failed when asynchronousFirstOrDefault: timed out.
What am I missing here?

This may not answer your question exactly, but it may give you insights on how to effectively test your signals. I've used 2 approaches myself so far:
XCTestCase and TRVSMonitor
TRVSMonitor is a small utility which will pause the current thread for you while you run your assertions. For example:
TRVSMonitor *monitor = [TRVSMonitor monitor];
[[[self.service searchPodcastsWithTerm:@"security now"] collect] subscribeNext:^(NSArray *results) {
    XCTAssertTrue([results count] > 0, @"Results count should be > 0");
    [monitor signal];
} error:^(NSError *error) {
    XCTFail(@"%@", error);
    [monitor signal];
}];
[monitor wait];
As you can see, I'm telling the monitor to wait right after I subscribe, and I signal it at the end of the subscribeNext and error blocks so execution can continue (and other tests can run too). This approach has the benefit of not relying on a static timeout, so your code can run as long as it needs to.
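Applied to your ticker, a test might look roughly like this (just a sketch, reusing the initWithTickSource: initializer and accumulateSignal from your question; the test name is made up):

- (void)testAccumulatesOnManualTick
{
    RACSubject *controlledSignal = [RACSubject subject];
    MPSTicker *ticker = [[MPSTicker alloc] initWithTickSource:controlledSignal];
    TRVSMonitor *monitor = [TRVSMonitor monitor];

    __block NSNumber *accumulated = nil;
    [ticker.accumulateSignal subscribeNext:^(id x) {
        accumulated = x;
        [monitor signal];
    }];

    // Push a tick in manually, then wait for the subscription to fire.
    [controlledSignal sendNext:@(1)];
    [monitor wait];

    XCTAssertEqualObjects(@(1), accumulated, @"Ticker should have accumulated one tick.");
}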
Using CocoaPods, you can easily add TRVSMonitor to your project:
pod "TRVSMonitor", "~> 0.0.3"
Specta & Expecta
Specta is a BDD/TDD (behavior driven/test driven) test framework. Expecta is a framework which provides more convenient assertion matchers. It has built-in support for async tests. It enables you to write more descriptive tests with ReactiveCocoa, like so:
it(#"should return a valid image, with cache state 'new'", ^AsyncBlock {
[[cache imageForURL:[NSURL URLWithString:SECURITY_NOW_ARTWORK_URL]] subscribeNext:^(UIImage *image) {
expect(image).notTo.beNil();
expect(image.cacheState).to.equal(JPImageCacheStateNew);
} error:^(NSError *error) {
XCTFail(#"%#", error);
} completed:^{
done();
}];
});
Note the use of ^AsyncBlock {. Using simply ^ { would imply a synchronous test.
Here you call the done() function to signal the end of an asynchronous test. I believe Specta uses a 10 second timeout internally.
Using CocoaPods, you can easily add Expecta & Specta:
pod "Expecta", "~> 0.2.3"
pod "Specta", "~> 0.2.1"

See this answer: https://stackoverflow.com/a/19127547/420594
The XCAsyncTestCase has some extra functionality to allow for asynchronous test cases.
Also, I haven't looked at it in depth yet, but could ReactiveCocoaTests be of some interest to you? On a glance, they appear to be using Expecta.


How to make order-independent assertions on Flux output?

I have a test case for a Flux from Project Reactor roughly like this:
void testMultipleChunks(StepVerifier.FirstStep<Chunk> verifier, Chunk chunk1, Chunk chunk2) {
    verifier.then(() -> {
        worker.add(chunk1);
        worker.add(chunk2);
        worker.update();
    })
    .expectNext(chunk1, chunk2)
    .verifyTimeout(Duration.ofSeconds(5));
}
Thing is, my worker is encouraged to parallelize the work, which means the order of the output is undefined. chunk2, chunk1 would be equally valid.
How can I make assertions on the output in an order-independent way?
Properties I care about:
every element in the expected set is present
there are no unexpected elements
there are no extra (duplicate) events
I tried this:
void testMultipleChunks(StepVerifier.FirstStep<Chunk> verifier, Chunk chunk1, Chunk chunk2) {
    Set<Chunk> expectedOutput = Set.of(chunk1, chunk2);
    verifier.then(() -> {
        worker.add(chunk1);
        worker.add(chunk2);
        worker.update();
    })
    .recordWith(HashSet::new)
    .expectNextCount(expectedOutput.size())
    .expectRecordedMatches(expectedOutput::equals)
    .verifyTimeout(Duration.ofSeconds(5));
}
While I think that makes the assertions I want, it took a terrible dive in readability. A clear one-line, one-method assertion was replaced with four lines with a lot of extra punctuation.
expectRecordedMatches is also horribly uninformative when it fails, saying only “expected collection predicate match” without giving any information about what the expectation is or how close the result was.
What's a clearer way to write this test?
StepVerifier is not a good fit for that, because it verifies each signal as it gets emitted and materializes an expected order for asynchronous signals, by design.
It is especially tricky because (it seems) your publisher under test doesn't clearly complete.
If it were completing after N elements (N being the expected count here), I'd change the publisher passed to StepVerifier.create from flux to flux.collectList(). That way, you get a List view of the onNext values and you can assert the list as you see fit (e.g. using AssertJ, which I recommend).
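For instance, a sketch of that approach (assuming flux is the publisher under test and that it completes after emitting both chunks; Assertions is AssertJ's entry point):
StepVerifier.create(flux.collectList())
    .assertNext(list -> Assertions.assertThat(list)
        .containsExactlyInAnyOrder(chunk1, chunk2))
    .verifyComplete();
This keeps the one-assertion readability while ignoring emission order, and AssertJ's failure message lists the missing/unexpected elements.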
One alternative in recent versions of Reactor is the TestSubscriber, which can be used to drive request and cancel() without any particular opinion on blocking or on when to perform the assertions. Instead, it internally stores the events it sees (onNext values go into a List, onComplete and onError are stored as a terminal Signal...) and you can access these for arbitrary assertions.

Postman - how to loop request until I get a specific response?

I'm testing an API with Postman and I have a problem:
My request goes to a sort of middleware, so either I receive a full 1000+ line JSON, or I receive a PENDING status and an empty array of results:
{
    "meta": {
        "status": "PENDING",
        "missing_connectors_count": 0,
        "xxx_type": "INTERNATIONAL"
    },
    "results": []
}
The question is: how do I loop this request in Postman until I get status SUCCESS and a non-empty results array?
When I'm sending those requests manually one-by-one it's ok, but when I'm running them through Collection Runner, "PENDING" messes up everything.
I found an awesome post about retrying a failed request by Christian Baumann which allowed me to find a suitable approach to the exact same problem of first polling the status of some operation and only when it's complete run the actual tests.
The code I'd end up with if I were you is:
const maxNumberOfTries = 3; // your max number of tries
const sleepBetweenTries = 5000; // your interval between attempts
if (!pm.environment.get("tries")) {
    pm.environment.set("tries", 1);
}
const jsonData = pm.response.json();
if ((jsonData.meta.status !== "SUCCESS" && jsonData.results.length === 0) && (pm.environment.get("tries") < maxNumberOfTries)) {
    const tries = parseInt(pm.environment.get("tries"), 10);
    pm.environment.set("tries", tries + 1);
    setTimeout(function() {}, sleepBetweenTries);
    postman.setNextRequest(request.name);
} else {
    pm.environment.unset("tries");
    // your actual tests go here...
}
What I liked about this approach is that the call postman.setNextRequest(request.name) doesn't have any hardcoded request names. The downside I see with this approach is that if you run such request as a part of the collection, it will be repeated a number of times, which might bloat your logs with unnecessary noise.
The alternative I was considering is writing a Pre-request Script which will do the polling (by sending a request) and spin until the status indicates some kind of completion. The downside of this approach is the need for much more code for the same logic.
When waiting for services to be ready, or when polling for long-running job results, I see 4 basic options:
Use the Postman collection runner or newman and set a per-step delay. This delay is inserted between every step in the collection. Two challenges here: it can be fragile unless you set the delay to a value the request duration will never exceed, and, frequently, only a small number of steps need that delay, so you are increasing total test run time, creating excessive build times on a shared build server and delaying other pending builds.
Use https://postman-echo.com/delay/10 where the last URI element is number of seconds to wait. This is simple and concise and can be inserted as a single step after the long running request. The challenge is if the request duration varies widely, you may get false failures because you didn't wait long enough.
Retry the same step until success with postman.setNextRequest(request.name);. The challenge here is that Postman will execute the request as fast as it can which can DDoS your service, get you black-listed (and cause false failures), and chew up a lot of CPU if run on a common build server - slowing other builds.
Use setTimeout() in a Pre-request Script. The only downside I see in this approach is that if you have several steps needing this logic, you end up with some cut-and-paste code that you need to keep in sync.
Note: there are minor variations on these - like setting them on a collection, a collection folder, a step, etc.
I like option 4 because it provides the right level of granularity for most of my cases. Note that this appears to be the only way to "sleep" in a Postman script. Standard JavaScript sleep methods, like a Promise with async and await, are not supported, and the sandbox's lodash _.delay(function() {}, delay, args[...]) does not keep script execution in the Pre-request Script.
In Postman standalone app v6.0.10, set your step Pre-request script to:
console.log('Waiting for job completion in step "' + request.name + '"');

// Construct our request URL from environment variables
var url = request['url'].replace('{{host}}', postman.getEnvironmentVariable('host'));
var retryDelay = 1000;
var retryLimit = 3;

function isProcessingComplete(retryCount) {
    pm.sendRequest(url, function (err, response) {
        if (err) {
            // hmmm. Should I keep trying or fail this run? Just log it for now.
            console.log(err);
        } else {
            // I could also check for response.json().results.length > 0, but that
            // would omit SUCCESS with empty results which may be valid
            if (response.json().meta.status !== 'SUCCESS') {
                if (retryCount < retryLimit) {
                    console.log('Job is still PENDING. Retrying in ' + retryDelay + 'ms');
                    setTimeout(function() {
                        isProcessingComplete(++retryCount);
                    }, retryDelay);
                } else {
                    console.log('Retry limit reached, giving up.');
                    postman.setNextRequest(null);
                }
            }
        }
    });
}

isProcessingComplete(1);
And you can do your standard tests in the same step.
Note: Standard caveats apply to making retryLimit large.
Try this:
var body = JSON.parse(responseBody);
if (body.meta.status !== "SUCCESS" && body.results.length === 0) {
    postman.setNextRequest("This_same_request_title");
} else {
    postman.setNextRequest("Next_request_title");
    /* you can also try postman.setNextRequest(null); */
}
I was searching for an answer to the same question and thought of a possible solution as I was reading your question.
Use postman workflow to rerun your request every time you don't get the response you're looking for. Anyway, that's what I'm gonna try.
postman.setNextRequest("request_name");
https://www.getpostman.com/docs/workflows
I didn't manage to find complete guidelines for this issue, which is why I decided to invest some time and describe all the steps of the process from A to Z.
I will walk through an example where we need to iterate over a list of transaction ids, changing the query param to the next transaction id on each iteration.
Step 1. Prepare your request
https://some url/{{queryParam}}
Add the {{queryParam}} variable so it can be changed from the pre-request script.
If you need a token for the request, add it here, in the Authorization tab.
Save the request to a collection (Save button in the right corner). For demonstration purposes I will use the name "Transactions Request". We will need this name later on.
Step 2. Prepare pre-request script
In Postman, use the Pre-request Script tab to change the transactionId variable from the query param placeholder to an actual transaction id.
let ids = pm.collectionVariables.get("TransactionIds");
ids = JSON.parse(ids);
const id = ids.shift();
console.log('id', id)
postman.setEnvironmentVariable("transactionId", id);
pm.collectionVariables.set("TransactionIds", JSON.stringify(ids));
pm.collectionVariables.get - gets the array of transaction ids from the collection variables. We will set it up in Step 4.
ids.shift() - removes the id we are about to use from the ids list (to prevent running twice on the same id).
postman.setEnvironmentVariable("transactionId", id) - changes the transaction id query param to an actual transaction id.
pm.collectionVariables.set("TransactionIds", JSON.stringify(ids)) - writes the updated list back to the collection variable, now without the id that was just handled.
Step 3. Prepare Tests
In Postman, use the Tests tab to create the loop logic. Tests are executed after the request, so we can use them to trigger the next request.
let ids = pm.collectionVariables.get("TransactionIds");
ids = JSON.parse(ids);
if (ids && ids.length > 0) {
    console.log('length', ids.length);
    postman.setNextRequest("Transactions Request");
} else {
    postman.setNextRequest(null);
}
postman.setNextRequest("Transactions Request") - calls a new request, in this case it will call the "Transactions Request" request
Step 4. Run Collections
In Postman, choose Collections from the left sidebar (click on it) and then open the Variables tab.
These are the collection variables. In our example we used TransactionIds as a variable, so put the array of transaction ids you want to loop over into Current Value.
Now you can click Run (the button in the right corner, near the Save button) to run our looped requests.
You will be prompted to choose which request to run. Choose the request we created, "Transactions Request".
It will run our request with the pre-request script and the logic we set in Tests. At the end, Postman will open a new window with a summary of the run.

How to lock a long async call in a WebApi action?

I have this scenario where I have a WebApi and an endpoint that when triggered does a lot of work (around 2-5min). It is a POST endpoint with side effects and I would like to limit the execution so that if 2 requests are sent to this endpoint (should not happen, but better safe than sorry), one of them will have to wait in order to avoid race conditions.
I first tried to use a simple static lock inside the controller like this:
lock (_lockObj)
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
This is of course not possible because of the await inside the lock statement.
Another solution I considered was to use a SemaphoreSlim implementation like this:
await semaphore.WaitAsync();
try
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
finally
{
    semaphore.Release();
}
However, according to MSDN:
The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short.
Since in this scenario the wait times may even reach 5 minutes, what should I use for concurrency control?
EDIT (in response to plog17):
I do understand that passing this task onto a service might be the optimal way, however, I do not necessarily want to queue something in the background that still runs after the request is done.
The request involves other requests and integrations that take some time, but I would still like the user to wait for this request to finish and get a response regardless.
This request is expected to be only fired once a day at a specific time by a cron job. However, there is also an option to fire it manually by a developer (mostly in case something goes wrong with the job) and I would like to ensure the API doesn't run into concurrency issues if the developer e.g. double-sends the request accidentally etc.
If only one request of that sort can be processed at a given time, why not implement a queue?
With such a design, there is no more need to lock or wait while processing the long-running request.
Flow could be:
Client POSTs /ResourcesToProcess and should receive 202-Accepted quickly
HttpController simply queues the task (and returns the 202-Accepted)
Another service (a Windows service?) dequeues the next task
Processes the task
Updates the resource status
During this process, the client should easily be able to get the status of previously made requests:
If the task is not found: 404-NotFound. Resource not found for id 123
If the task is processing: 200-OK. 123 is processing.
If the task is done: 200-OK. Process response.
Your controller could look like:
public class TaskController : ApiController
{
    // constructor and private members

    [HttpPost, Route("")]
    public IHttpActionResult QueueTask(RequestBody body)
    {
        messageQueue.Add(body);
        return StatusCode(HttpStatusCode.Accepted);
    }

    [HttpGet, Route("{taskId}")]
    public IHttpActionResult GetTask(string taskId)
    {
        YourThing thing = tasksRepository.Get(taskId);
        if (thing == null)
        {
            return NotFound(); // thing does not exist
        }
        if (thing.IsProcessing)
        {
            return Ok("thing is processing");
        }
        if (!thing.IsProcessing)
        {
            return Ok("thing is not processing yet");
        }
        // here we assume thing has been processed
        return Ok(thing.ResponseContent);
    }
}
This design suggests that you do not handle the long-running process inside your WebApi. Indeed, that may not be the best design choice. If you still want to do so, you may want to read:
Long running task in WebAPI
https://blogs.msdn.microsoft.com/webdev/2014/06/04/queuebackgroundworkitem-to-reliably-schedule-and-run-background-processes-in-asp-net/
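For illustration, here is a minimal sketch of the second link's idea (keeping the work inside the ASP.NET process) combined with the 202/polling flow above. The taskId plumbing and tasksRepository methods are hypothetical placeholders; only LongRunningWithSideEffects comes from the question. Requires System.Web.Hosting (.NET 4.5.2+).

[HttpPost, Route("")]
public IHttpActionResult QueueLongRunningTask()
{
    var taskId = Guid.NewGuid().ToString();
    tasksRepository.MarkQueued(taskId); // hypothetical bookkeeping

    // QueueBackgroundWorkItem runs the work after the response is sent and
    // asks ASP.NET to delay app-domain shutdown while the item is in flight.
    HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
    {
        var results = await _service.LongRunningWithSideEffects();
        tasksRepository.MarkCompleted(taskId, results); // hypothetical bookkeeping
    });

    return Content(HttpStatusCode.Accepted, taskId); // client polls GET /{taskId}
}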

scalatra < squeryl < select ALL | Always

I want to read elements from the database and return them as JSON objects.
Scalatra is set up to return JSON.
Database schema is created.
Players are added.
The following code seems to be the main problem:
get("/") {
inTransaction {
List(from(MassTournamentSchema.players)(s => select(s)))
}
}
I get the following error:
"No session is bound to current thread, a session must be created via Session.create and bound to the thread via 'work' or 'bindToCurrentThread' Usually this error occurs when a statement is executed outside of a transaction/inTrasaction block "
I want to do it right, so simply adding something like "Session.create" may not really be the right way.
Can anyone help a scalatra-noob? :-)
I think that your comment is on the right track. The inTransaction block will bind a JDBC connection to a thread-local variable and start the transaction on it. If the select doesn't occur on the same thread, you'll see an error like the one you received. There are two things I would suggest you try:
Start your transaction later
List(inTransaction {
    from(MassTournamentSchema.players)(s => select(s))
})
I'm not familiar with Scalatra's List, but it's possible that it's accepting a by-name parameter and executing it later on a different thread.
Force an eager evaluation of the query
inTransaction {
    List(from(MassTournamentSchema.players)(s => select(s)).toList)
}
Here the .toList call will turn the Query object Squeryl returns into a Scala List immediately and guard against any lazy evaluation errors caused by later iteration.

create_task and return values

I need to call an async method within a method I declared. The method should return a value. I'm trying to wrap calls to the Windows Store into an easy-to-use class. My method should look like this:
bool Purchase(enum_InAppOption optionToPurchase);
enum_InAppOption is an enum consisting of all in-app options to purchase. At some point I need to call RequestProductPurchaseAsync. The result of this call determines whether the method should return true or false. I'm new to C++/CX (or at least it has been a long time since I last used C++), so maybe this is easier than I think.
The create_task looks like this:
create_task(CurrentAppSimulator::RequestProductPurchaseAsync(this->_LastProductId, false))
The options I considered / tried:
Returning the task would not abstract away the store.
I tried to call wait on the task, and got the exception "An invalid parameter was passed to a function that considers invalid parameters fatal."
I tried to use structured_task_group, but it seems it does not allow non-void-returning methods, or I'm misinterpreting it. The compiler returns error C2064 (I have googled but can't figure out what to change).
Using an array of tasks and when_all
Found the following code on http://msdn.microsoft.com/en-us/library/dd492427.aspx#when_all in the middle of the page:
array<task<void>, 3> tasks =
{
create_task([] { wcout << L"Hello from taskA." << endl; }),
create_task([] { wcout << L"Hello from taskB." << endl; }),
create_task([] { wcout << L"Hello from taskC." << endl; })
};
auto joinTask = when_all(begin(tasks), end(tasks));
// Print a message from the joining thread.
wcout << L"Hello from the joining thread." << endl;
// Wait for the tasks to finish.
joinTask.wait();
So I tried to translate it into the following code:
array<task<Platform::String^>,1> tasks = {
create_task(CurrentAppSimulator::RequestProductPurchaseAsync(this->_LastProductId, false))
};
Even though I included <array>, the compiler throws C2065 ('array': undeclared identifier), C2275 ('Concurrency::task<_ReturnType>': illegal use of this type as an expression), and some errors that seem to follow on from those two.
To sum up: How to make the method return after the async task has completed, so I can return a meaningful result based on the stuff going on asynchronously?
How to make the method return after the async task has completed, so I can return a meaningful result based on the stuff going on asynchronously?
This doesn't make much sense: the "stuff" isn't asynchronous if you want to wait for it to complete before returning. That's the definition of synchronous.
When using C++/CX, you cannot wait on a not-yet-completed task on an STA. Any attempt to do so will result in an exception being thrown. If you are going to call Purchase() on an STA and if it starts an asynchronous operation, you cannot wait for that operation to complete before returning.
Instead, you can use .then to perform another operation when the asynchronous operation completes. If the continuation needs to be performed on the invoking thread, make sure to pass the use_current() continuation context to ensure that the continuation is executed in the correct context.
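For example, a rough sketch of that shape (the wrapper class name and the OnPurchaseCompleted callback are made up for illustration; only RequestProductPurchaseAsync and _LastProductId come from the question):

// Start the purchase and handle the result in a continuation that runs back
// on the calling (UI) thread, instead of blocking until the operation completes.
void MyStoreWrapper::BeginPurchase(enum_InAppOption optionToPurchase)
{
    // Mapping optionToPurchase to _LastProductId is omitted in this sketch.
    concurrency::create_task(
        CurrentAppSimulator::RequestProductPurchaseAsync(this->_LastProductId, false))
    .then([this](Platform::String^ receipt)
    {
        // Hypothetical callback and success check, just to show the shape.
        this->OnPurchaseCompleted(receipt != nullptr);
    }, concurrency::task_continuation_context::use_current());
}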
Sascha,
Returning a task would abstract out the store, and I think that would be the most reasonable decision, since you are not restricting the users of your helper class to getting the results straight away, but are also allowing them to handle the results in their own way, asynchronously.
As @James correctly mentioned, you are not allowed to wait on the UI thread, as that would make the app unresponsive. There are different ways to avoid waiting:
create a continuation with concurrency::task::then;
you cannot wait on the UI thread, but you can wait for an operation running on the UI thread to complete: wrap the future result of the task in a task_completion_event, then wait on that event in another (background) thread and handle the result;
concurrency::task_completion_event<Platform::String^> purchaseCompleted;

create_task(CurrentAppSimulator::RequestProductPurchaseAsync(
    this->_LastProductId, false)).then(
    [purchaseCompleted](concurrency::task<Platform::String^> task)
    {
        try
        {
            purchaseCompleted.set(task.get());
        }
        catch (Platform::Exception^ exception)
        {
            purchaseCompleted.set_exception(exception);
        }
    });

// and somewhere on a non-UI thread you can do
Platform::String^ purchaseResult = create_task(purchaseCompleted).get();
you can achieve the previous trick using more WinRT-specific facilities rather than Concurrency Runtime, more precisely, IAsyncOperation<T>::Completed and IAsyncOperation<T>::GetResults;
structured_task_group and when_all seem irrelevant here, since you have only one real task, which is to make a purchase.