I am testing a coroutine that blocks. Here is my production code:
interface Incrementer {
fun inc()
}
class MyViewModel : Incrementer, CoroutineScope {
override val coroutineContext: CoroutineContext
get() = Dispatchers.IO
private val _number = MutableStateFlow(0)
fun getNumber(): StateFlow<Int> = _number.asStateFlow()
override fun inc() {
launch(coroutineContext) {
delay(100)
_number.tryEmit(1)
}
}
}
And my test:
class IncTest {
@BeforeEach
fun setup() {
Dispatchers.setMain(StandardTestDispatcher())
}
@AfterEach
fun teardown() {
Dispatchers.resetMain()
}
@Test
fun incrementOnce() = runTest {
val viewModel = MyViewModel()
val results = mutableListOf<Int>()
val resultJob = viewModel.getNumber()
.onEach(results::add)
.launchIn(CoroutineScope(UnconfinedTestDispatcher(testScheduler)))
launch(StandardTestDispatcher(testScheduler)) {
viewModel.inc()
}.join()
assertEquals(listOf(0, 1), results)
resultJob.cancel()
}
}
How would I go about testing my inc() function? (The interface is carved in stone, so I can't turn inc() into a suspend function.)
There are two problems here:
1. You want to wait for the work done in the coroutine that viewModel.inc() launches internally.
2. Ideally, the 100ms delay should be fast-forwarded during tests so that it doesn't actually take 100ms to execute.
Let's start with problem #2 first: for this, you need to be able to modify MyViewModel (but not inc), and change the class so that instead of using a hardcoded Dispatchers.IO, it receives a CoroutineContext as a parameter. With this, you could pass in a TestDispatcher in tests, which would use virtual time to fast-forward the delay. You can see this pattern described in the Injecting TestDispatchers section of the Android docs.
class MyViewModel(coroutineContext: CoroutineContext) : Incrementer {
private val scope = CoroutineScope(coroutineContext)
private val _number = MutableStateFlow(0)
fun getNumber(): StateFlow<Int> = _number.asStateFlow()
override fun inc() {
scope.launch {
delay(100)
_number.tryEmit(1)
}
}
}
Here, I've also done some minor cleanup:
Made MyViewModel contain a CoroutineScope instead of implementing the interface, which is an officially recommended practice
Removed the coroutineContext parameter passed to launch, as it doesn't do anything in this case - the same context is in the scope anyway, so it'll already be used
For problem #1, waiting for work to complete, you have a few options:
If you've passed in a TestDispatcher, you can manually advance the coroutine created inside inc using testing methods like advanceUntilIdle. This is not ideal, because you're relying on implementation details a lot, and it's something you couldn't do in production. But it'll work if you can't use the nicer solution below.
viewModel.inc()
advanceUntilIdle() // Returns when all pending coroutines are done
The proper solution would be for inc to let its callers know when it's done performing its work. You could make it a suspending method instead of launching a new coroutine internally, but you stated that you can't modify the method to make it suspending. An alternative - if you're able to make this change - would be to create the new coroutine in inc using the async builder, returning the Deferred object that it creates, and then await()-ing at the call site.
override fun inc(): Deferred<Unit> {
return scope.async {
delay(100)
_number.tryEmit(1)
}
}
// In the test...
viewModel.inc().await()
If you're not able to modify either the method or the class, there's no way to avoid the delay() call causing a real 100ms delay. In this case, you can force your test to wait for that amount of time before proceeding. A regular delay() within runTest won't help here, because it runs on a TestDispatcher and is fast-forwarded in virtual time, but you can get away with one of these solutions:
// delay() on a different dispatcher
viewModel.inc()
withContext(Dispatchers.Default) { delay(100) }
// Use blocking sleep
viewModel.inc()
Thread.sleep(100)
For some final notes about the test code:
Since you're doing Dispatchers.setMain, you don't need to pass in testScheduler into the TestDispatchers you create. They'll grab the scheduler from Main automatically if they find a TestDispatcher there, as described in its docs.
Instead of creating a new scope to pass in to launchIn, you could simply pass in this, the receiver of runTest, which points to a TestScope.
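Putting those notes together, a sketch of how the whole test could look, assuming the constructor-injected MyViewModel from above and the same Dispatchers.setMain setup (the TestDispatcher created here picks up the Main scheduler automatically):
@Test
fun incrementOnce() = runTest {
    val viewModel = MyViewModel(StandardTestDispatcher()) // no testScheduler needed
    val results = mutableListOf<Int>()
    val resultJob = viewModel.getNumber()
        .onEach(results::add)
        .launchIn(this) // the TestScope of runTest

    viewModel.inc()
    advanceUntilIdle() // fast-forwards the 100ms delay in virtual time

    assertEquals(listOf(0, 1), results)
    resultJob.cancel()
}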
Context:
I'm writing unit tests for a gRPC service. I want to verify that the method of the mock on the server side is called. I'm using EasyMock. To be sure we get the response of the gRPC call (whatever it is), I need to suspend the thread before EasyMock verifies the calls.
So I tried something like this using LockSupport:
@Test
public void alphaMethodTest() throws Exception
{
Dummy dummy = createNiceMock(Dummy.class);
dummy.alphaMethod(anyBoolean());
expectLastCall().once();
EasyMock.replay(dummy);
DummyServiceGrpcImpl dummyServiceGrpc = new DummyServiceGrpcImpl();
dummyServiceGrpc.setDummy(dummy);
DummyServiceGrpc.DummyServiceStub stub = setupDummyServiceStub();
Thread thread = Thread.currentThread();
stub.alphaMethod(emptyRequest, new StreamObserver<X>(){
@Override
public void onNext(X value) {
LockSupport.unpark(thread);
}
});
Instant expirationTime = Instant.now().plus(pDuration);
LockSupport.parkUntil(expirationTime.toEpochMilli());
verify(dummy);
}
But I have many tests like this one (around 40) and I suspect a threading issue. I usually get one or two failing the verify step; sometimes all of them pass. I tried using a ReentrantLock with a Condition instead. But again some are failing (IllegalMonitorStateException on the signalAll):
@Test
public void alphaMethodTest() throws Exception
{
Dummy dummy = createNiceMock(Dummy.class);
dummy.alphaMethod(anyBoolean());
expectLastCall().once();
EasyMock.replay(dummy);
DummyServiceGrpcImpl dummyServiceGrpc = new DummyServiceGrpcImpl();
dummyServiceGrpc.setDummy(dummy);
DummyServiceGrpc.DummyServiceStub stub = setupDummyServiceStub();
ReentrantLock lock = new ReentrantLock();
Condition conditionPromiseTerminated = lock.newCondition();
stub.alphaMethod(emptyRequest, new StreamObserver<X>(){
@Override
public void onNext(X value) {
conditionPromiseTerminated.signalAll();
}
});
Instant expirationTime = Instant.now().plus(pDuration);
conditionPromiseTerminated.awaitUntil(new Date(expirationTime.toEpochMilli()));
verify(dummy);
}
I'm sorry for not providing a runnable example; my current code uses a private API :/.
Do you think LockSupport may cause trouble because of the multiple tests running? Am I missing something in my use of LockSupport or ReentrantLock? Can you think of any other class in the concurrency API that would better suit my needs?
LockSupport is a bit dangerous; you will need to read the documentation closely to find out that:
The call spuriously (that is, for no reason) returns.
So when you think your code will do some "waiting", it might simply return immediately; a spurious wakeup is one possible reason, but there could be others too (for example, the thread's permit may already be available because unpark was called first).
When using ReentrantLock, all of them should fail with IllegalMonitorStateException, because you never acquire the lock via ReentrantLock::lock before calling signalAll() or awaitUntil(). As a side note, you can drop new Date(...) entirely by using await(long, TimeUnit) instead of awaitUntil(Date).
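For completeness, here's a sketch of what the ReentrantLock variant would have to look like to be legal - both awaiting and signalling must happen while holding the lock, and await(long, TimeUnit) gives you the timeout without a Date:
ReentrantLock lock = new ReentrantLock();
Condition responded = lock.newCondition();

// in the StreamObserver's onNext:
lock.lock();
try {
    responded.signalAll();
} finally {
    lock.unlock();
}

// in the test body, instead of awaitUntil(new Date(...)):
lock.lock();
try {
    responded.await(pDuration.toMillis(), TimeUnit.MILLISECONDS);
} finally {
    lock.unlock();
}
Note that this still has the same race as your original code (the signal can fire before the test thread starts waiting); a flag-based variant that avoids it is sketched after the synchronized example below.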
I think you are over-complicating things, you could do the same signaling with a plain lock, a simplified example:
public static void main(String[] args) {
Object lock = new Object();
Thread first = new Thread(() -> {
synchronized (lock) {
System.out.println("Locked");
try {
System.out.println("Sleeping");
lock.wait();
System.out.println("Waked up");
} catch (InterruptedException e) {
// these are your tests, no one should interrupt
// unless it's yourself
throw new RuntimeException(e);
}
}
});
first.start();
sleepOneSecond();
Thread second = new Thread(() -> {
synchronized (lock) {
System.out.println("notifying waiting threads");
lock.notify();
}
});
second.start();
}
private static void sleepOneSecond() {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
Notice the output:
Locked
Sleeping
notifying waiting threads
Waked up
It should be obvious how the "communication" (signaling) between threads happens.
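To map that back onto your test, a sketch of the same idea using the names from your snippet (assuming pDuration is a java.time.Duration; onError/onCompleted are stubbed out here, unlike in your abbreviated code):
final Object lock = new Object();
final boolean[] responded = {false};

stub.alphaMethod(emptyRequest, new StreamObserver<X>() {
    @Override
    public void onNext(X value) {
        synchronized (lock) {
            responded[0] = true;
            lock.notifyAll();
        }
    }
    @Override
    public void onError(Throwable t) { }
    @Override
    public void onCompleted() { }
});

// Wait (with a deadline) until onNext has signalled, then verify.
synchronized (lock) {
    long deadline = System.currentTimeMillis() + pDuration.toMillis();
    while (!responded[0]) {
        long remaining = deadline - System.currentTimeMillis();
        if (remaining <= 0) {
            break; // timed out, let verify(dummy) report the failure
        }
        lock.wait(remaining);
    }
}
verify(dummy);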
I follow the MVP pattern + UseCases to interact with a Model layer. This is a method in a Presenter I want to test:
fun loadPreviews() {
launch(UI) {
val items = previewsUseCase.getPreviews() // a suspending function
println("[method] UseCase items: $items")
println("[method] View call")
view.showPreviews(items)
}
}
My simple BDD test:
fun <T> givenSuspended(block: suspend () -> T) = BDDMockito.given(runBlocking { block() })
infix fun <T> BDDMockito.BDDMyOngoingStubbing<T>.willReturn(block: () -> T) = willReturn(block())
@Test
fun `load previews`() {
// UseCase and View are mocked in a `setUp` method
val items = listOf<PreviewItem>()
givenSuspended { previewsUseCase.getPreviews() } willReturn { items }
println("[test] before Presenter call")
runBlocking { presenter.loadPreviews() }
println("[test] after Presenter call")
println("[test] verify the View")
verify(view).showPreviews(items)
}
The test passes successfully but there's something weird in the log. I expect it to be:
"[test] before Presenter call"
"[method] UseCase items: []"
"[method] View call"
"[test] after Presenter call"
"[test] verify the View"
But it turns out to be:
[test] before Presenter call
[test] after Presenter call
[test] verify the View
[method] UseCase items: []
[method] View call
What's the reason for this behaviour, and how should I fix it?
I've found out that it's because of the CoroutineDispatcher. I used to mock the UI context with EmptyCoroutineContext; switching to Unconfined solved the problem.
Update 02.04.20
The name of the question suggests that there'll be an exhaustive explanation of how to unit test a suspending function, so let me explain a bit more.
The main problem with testing a suspending function is threading. Let's say we want to test this simple function that updates a property's value in a different thread:
class ItemUpdater(val item: Item) {
fun updateItemValue() {
launch(Dispatchers.Default) { item.value = 42 }
}
}
We need to somehow replace Dispatchers.Default with another dispatcher for testing purposes only. There are two ways we can do that. Each has its pros and cons, and which one to choose depends on your project & style of coding:
1. Inject a Dispatcher.
class ItemUpdater(
val item: Item,
val dispatcher: CoroutineDispatcher // can be a wrapper that provides multiple dispatchers but let's keep it simple
) {
fun updateItemValue() {
launch(dispatcher) { item.value = 42 }
}
}
// later in a test class
@Test
fun `item value is updated`() = runBlocking {
val item = Item()
val testDispatcher = Dispatchers.Unconfined // can be a TestCoroutineDispatcher but we still keep it simple
val updater = ItemUpdater(item, testDispatcher)
updater.updateItemValue()
assertEquals(42, item.value)
}
2. Substitute a Dispatcher.
class ItemUpdater(val item: Item) {
fun updateItemValue() {
launch(DispatchersProvider.Default) { item.value = 42 } // DispatchersProvider is our own global wrapper
}
}
// later in a test class
// -----------------------------------------------------------------------------------
// --- This block can be extracted into a JUnit Rule and replaced by a single line ---
// -----------------------------------------------------------------------------------
@Before
fun setUp() {
DispatchersProvider.Default = Dispatchers.Unconfined
}
@After
fun cleanUp() {
DispatchersProvider.Default = Dispatchers.Default
}
// -----------------------------------------------------------------------------------
@Test
fun `item value is updated`() = runBlocking {
val item = Item()
val updater = ItemUpdater(item)
updater.updateItemValue()
assertEquals(42, item.value)
}
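For reference, DispatchersProvider is not defined in the answer; a minimal sketch of what such a global wrapper could look like (the exact shape is my assumption):
object DispatchersProvider {
    var Default: CoroutineDispatcher = Dispatchers.Default
    var IO: CoroutineDispatcher = Dispatchers.IO
}
Because the properties are mutable, a test (or a JUnit Rule) can swap them for Dispatchers.Unconfined or a TestCoroutineDispatcher and restore them afterwards, exactly as the setUp/cleanUp methods above do.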
Both of them are doing the same thing - they replace the original Dispatchers.Default in test classes. The only difference is how they do that. It's really up to you which of them to choose, so don't get biased by my own thoughts below.
IMHO: The first approach is a little too cumbersome. Injecting dispatchers everywhere will result in polluting most of your classes' constructors with an extra DispatchersWrapper purely for testing purposes. However, Google recommends this way, at least for now. The second style keeps things simple and doesn't complicate the production classes. It's like RxJava's way of testing, where you substitute schedulers via RxJavaPlugins. By the way, kotlinx-coroutines-test will bring the exact same functionality someday in the future.
I see you found out on your own, but I'd like to explain a bit more for the people that might run into the same problem.
When you do launch(UI) {}, a new coroutine is created and dispatched to the "UI" dispatcher, which means that your coroutine now runs on a different thread.
Your runBlocking{} call creates a new coroutine, and runBlocking{} will wait for this coroutine to end before continuing. Your loadPreviews() function creates a coroutine, starts it and then returns immediately, so runBlocking{} just waits for that and returns.
So while runBlocking{} has returned, the coroutine that you created with launch(UI){} is still running on a different thread; that's why the order of your log is messed up.
The Unconfined context is a special CoroutineContext whose dispatcher simply executes the coroutine right there on the current thread, so now when you execute runBlocking{}, it has to wait for the coroutine created by launch{} to end, because that coroutine is running on the same thread and thus blocking it.
I hope my explanation was clear, have a good day
My team is designing a scalable solution with a micro-services architecture and planning to use gRPC as the transport between layers. We've decided to use the async gRPC model. The design that the example (greeter_async_server.cc) provides doesn't seem viable if I scale up the number of RPC methods, because then I'd have to create a new class for every RPC method and create their objects in HandleRpcs() like this:
Pastebin (Short example code).
void HandleRpcs() {
new CallDataForRPC1(&service_, cq_.get());
new CallDataForRPC2(&service_, cq_.get());
new CallDataForRPC3(&service_, cq_.get());
// so on...
}
It'll be hard-coded, all the flexibility will be lost.
I have around 300-400 RPC methods to implement, and having 300-400 classes would be cumbersome and inefficient when I have to handle more than 100K RPC requests/sec; this solution is a very bad design. I can't bear the overhead of creating objects this way on every single request. Can somebody kindly provide me a workaround for this? Can async gRPC C++ not be as simple as its sync companion?
Edit: To make the situation clearer, and for those who might be struggling to grasp the flow of this async example, I'm writing down what I've understood so far; please correct me if I'm wrong somewhere.
In async gRPC, each time we have to bind a unique tag to the completion queue, so that when we poll, the server can hand it back to us when the particular RPC is hit by the client, and we can infer the type of the call from the returned unique tag.
service_->RequestRPC2(&ctx_, &request_, &responder_, cq_, cq_, this); Here we're using the address of the current object as the unique tag. This is like registering for our RPC call on the completion queue. Then we poll down in HandleRpcs() to see if the client has hit the RPC; if so, cq_->Next(&tag, &ok) will fill in the tag. The polling code snippet:
while (true) {
GPR_ASSERT(cq_->Next(&tag, &ok));
GPR_ASSERT(ok);
static_cast<CallData*>(tag)->Proceed();
}
Since the unique tag that we registered onto the queue was the address of the CallData object, we're able to call Proceed(). This was fine for one RPC with its logic inside Proceed(). But with more RPCs, we'd have all of them inside CallData, and then on polling we'd be calling the single Proceed(), which would contain the logic for (say) RPC1 (postgres calls), RPC2 (mongodb calls), and so on. That's like writing my whole program inside one function. So, to avoid this, I used a GenericCallData class with a virtual void Proceed() and made derived classes out of it, one class per RPC with its own logic inside its own Proceed() (a sketch of this approach is shown below). This is a working solution, but I want to avoid writing many classes.
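For illustration, a minimal sketch of that per-RPC approach, in the spirit of greeter_async_server.cc (the service and message type names here are placeholders, not taken from the real code):
// GenericCallData: one abstract state machine per in-flight call.
class GenericCallData {
public:
    virtual ~GenericCallData() = default;
    virtual void Proceed() = 0; // called for every tag popped off the completion queue
};

// One derived class per RPC, each with its own logic in Proceed().
class CallDataForRPC1 : public GenericCallData {
public:
    CallDataForRPC1(MyService::AsyncService* service, grpc::ServerCompletionQueue* cq)
        : service_(service), cq_(cq), responder_(&ctx_), status_(CREATE) {
        Proceed();
    }

    void Proceed() override {
        if (status_ == CREATE) {
            status_ = PROCESS;
            // register this object's address as the unique tag for RPC1
            service_->RequestRPC1(&ctx_, &request_, &responder_, cq_, cq_, this);
        } else if (status_ == PROCESS) {
            new CallDataForRPC1(service_, cq_); // serve the next request of this type
            // ... RPC1-specific logic (e.g. the postgres calls) goes here ...
            status_ = FINISH;
            responder_.Finish(reply_, grpc::Status::OK, this);
        } else {
            delete this;
        }
    }

private:
    MyService::AsyncService* service_;
    grpc::ServerCompletionQueue* cq_;
    grpc::ServerContext ctx_;
    RPC1Request request_;
    RPC1Reply reply_;
    grpc::ServerAsyncResponseWriter<RPC1Reply> responder_;
    enum { CREATE, PROCESS, FINISH } status_;
};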
Another solution I tried was keeping all the per-RPC logic out of Proceed(), in functions of their own, and maintaining a global std::map<long, std::function</*some params*/>>. So whenever I register an RPC with a unique tag onto the queue, I store its corresponding logic function (hard-coded, with all the required parameters bound), keyed by that unique tag. On polling, when I get the tag, I look it up in the map and call the corresponding saved function. Now there's one more hurdle: I have to do this inside the function logic:
// pseudo code
void function(reply, responder, context, service)
{
// register this RPC with another unique tag so to serve new incoming request of the same type on the completion queue
service_->RequestRPC1(/*params*/, new_unique_id);
// now again save this new_unique_id and current function into the map, so when tag will be returned we can do lookup
map.emplace(new_unique_id, function);
// now you're free to do your logic
// do your logic
}
As you can see, the code has now spread into another module, and it's still organised per RPC.
Hope that clears up the situation.
I was wondering whether somebody has implemented this type of server in an easier way.
This post is pretty old by now, but I have not seen any answer or example regarding this, so I will show other readers how I solved it. I have around 30 RPC calls and was looking for a way to reduce the footprint when adding and removing RPC calls. It took me some iterations to figure out a good way to solve it.
So my interface for getting RPC requests from my (g)RPC library is a callback interface that the recipient needs to implement. The interface looks like this:
class IRpcRequestHandler
{
public:
virtual ~IRpcRequestHandler() = default;
virtual void onZigbeeOpenNetworkRequest(const smarthome::ZigbeeOpenNetworkRequest& req,
smarthome::Response& res) = 0;
virtual void onZigbeeTouchlinkDeviceRequest(const smarthome::ZigbeeTouchlinkDeviceRequest& req,
smarthome::Response& res) = 0;
...
};
And some code for setting up/register each RPC method after the gRPC server is started:
void ready()
{
SETUP_SMARTHOME_CALL("ZigbeeOpenNetwork", // Alias that is used for debug messages
smarthome::Command::AsyncService::RequestZigbeeOpenNetwork, // Generated gRPC service method for async.
smarthome::ZigbeeOpenNetworkRequest, // Generated gRPC service request message
smarthome::Response, // Generated gRPC service response message
IRpcRequestHandler::onZigbeeOpenNetworkRequest); // The callback method to call when request has arrived.
SETUP_SMARTHOME_CALL("ZigbeeTouchlinkDevice",
smarthome::Command::AsyncService::RequestZigbeeTouchlinkDevice,
smarthome::ZigbeeTouchlinkDeviceRequest,
smarthome::Response,
IRpcRequestHandler::onZigbeeTouchlinkDeviceRequest);
...
}
This is all that you need to care about when adding and removing RPC methods.
The SETUP_SMARTHOME_CALL is a home-cooked macro which looks like this:
#define SETUP_SMARTHOME_CALL(ALIAS, SERVICE, REQ, RES, CALLBACK_FUNC) \
new ServerCallData<REQ, RES>( \
ALIAS, \
std::bind(&SERVICE, \
&mCommandService, \
std::placeholders::_1, \
std::placeholders::_2, \
std::placeholders::_3, \
std::placeholders::_4, \
std::placeholders::_5, \
std::placeholders::_6), \
mCompletionQueue.get(), \
std::bind(&CALLBACK_FUNC, requestHandler, std::placeholders::_1, std::placeholders::_2))
I think the ServerCallData class looks like the one from gRPC's examples, with a few modifications. ServerCallData is derived from a non-template class with an abstract function void proceed(bool ok) for the CompletionQueue::Next() handling. When a ServerCallData is created, it calls the SERVICE method to register itself on the CompletionQueue, and on its first proceed(ok) call it clones itself, which registers another instance. I can post some sample code for that as well if someone is interested.
EDIT: Added some more sample code below.
GrpcServer
class GrpcServer
{
public:
explicit GrpcServer(std::vector<grpc::Service*> services);
virtual ~GrpcServer();
void run(const std::string& sslKey,
const std::string& sslCert,
const std::string& password,
const std::string& listenAddr,
uint32_t port,
uint32_t threads = 1);
private:
virtual void ready(); // Called after gRPC server is created and before polling CQ.
void handleRpcs(); // Function that polls from CQ, can be run by multiple threads. Casts object to CallData and calls CallData::proceed().
std::unique_ptr<ServerCompletionQueue> mCompletionQueue;
std::unique_ptr<Server> mServer;
std::vector<grpc::Service*> mServices;
std::list<std::shared_ptr<std::thread>> mThreads;
...
};
And the main part of the CallData object:
template <typename TREQUEST, typename TREPLY>
class ServerCallData : public ServerCallMethod
{
public:
explicit ServerCallData(const std::string& methodName,
std::function<void(ServerContext*,
TREQUEST*,
::grpc::ServerAsyncResponseWriter<TREPLY>*,
::grpc::CompletionQueue*,
::grpc::ServerCompletionQueue*,
void*)> serviceFunc,
grpc::ServerCompletionQueue* completionQueue,
std::function<void(const TREQUEST&, TREPLY&)> callback,
bool first = false)
: ServerCallMethod(methodName),
mResponder(&mContext),
serviceFunc(serviceFunc),
completionQueue(completionQueue),
callback(callback)
{
requestNewCall();
}
void proceed(bool ok) override
{
if (!ok)
{
delete this;
return;
}
if (callStatus() == ServerCallMethod::PROCESS)
{
callStatus() = ServerCallMethod::FINISH;
new ServerCallData<TREQUEST, TREPLY>(callMethodName(), serviceFunc, completionQueue, callback);
try
{
callback(mRequest, mReply);
}
catch (const std::exception& e)
{
mResponder.Finish(mReply, Status::CANCELLED, this);
return;
}
mResponder.Finish(mReply, Status::OK, this);
}
else
{
delete this;
}
}
private:
void requestNewCall()
{
serviceFunc(
&mContext, &mRequest, &mResponder, completionQueue, completionQueue, this);
}
ServerContext mContext;
TREQUEST mRequest;
TREPLY mReply;
ServerAsyncResponseWriter<TREPLY> mResponder;
std::function<void(ServerContext*,
TREQUEST*,
::grpc::ServerAsyncResponseWriter<TREPLY>*,
::grpc::CompletionQueue*,
::grpc::ServerCompletionQueue*,
void*)>
serviceFunc;
std::function<void(const TREQUEST&, TREPLY&)> callback;
grpc::ServerCompletionQueue* completionQueue;
};
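The non-template base class ServerCallMethod isn't shown above; inferring from how ServerCallData uses it, a minimal sketch of it might look roughly like this (my guess, not the author's actual code):
class ServerCallMethod
{
public:
    enum CallStatus { PROCESS, FINISH };

    explicit ServerCallMethod(const std::string& methodName)
        : mMethodName(methodName), mCallStatus(PROCESS) {}
    virtual ~ServerCallMethod() = default;

    // Called by GrpcServer::handleRpcs() for every tag popped off the completion queue.
    virtual void proceed(bool ok) = 0;

protected:
    const std::string& callMethodName() const { return mMethodName; }
    CallStatus& callStatus() { return mCallStatus; }

private:
    std::string mMethodName;
    CallStatus mCallStatus;
};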
Although the thread is old, I wanted to share a solution I am currently implementing. It mainly consists of templated classes inheriting from CallData to keep things scalable. This way, each new RPC only requires specializing the templates of the required CallData methods.
CallData header:
class CallData {
protected:
enum Status { CREATE, PROCESS, FINISH };
Status status;
virtual void treat_create() = 0;
virtual void treat_process() = 0;
public:
void Proceed();
};
CallData Proceed implementation:
void CallData::Proceed() {
switch (status) {
case CREATE:
status = PROCESS;
treat_create();
break;
case PROCESS:
status = FINISH;
treat_process();
break;
case FINISH:
delete this;
}
}
Inheriting from CallData header (simplified):
template <typename Request, typename Reply>
class CallDataTemplated : public CallData {
static_assert(std::is_base_of<google::protobuf::Message, Request>::value,
"Request and reply must be protobuf messages");
static_assert(std::is_base_of<google::protobuf::Message, Reply>::value,
"Request and reply must be protobuf messages");
private:
Service,Cq,Context,ResponseWriter,...
Request request;
Reply reply;
protected:
void treat_create() override;
void treat_process() override;
public:
...
};
Then, for specific RPCs, in theory you should be able to do things like:
template<>
void CallDataTemplated<HelloRequest, HelloReply>::treat_process() {
...
}
It's a lot of templated methods, but preferable to creating a class per RPC from my point of view.
I am trying to unit test a method that uses the Wait() method on an IObservable; however, my test never completes - the Wait never finishes. My test contains the following:
var scheduler = new TestScheduler();
var input1 = scheduler.CreateColdObservable<List<string>>(
new Recorded<Notification<List<string>>>(100, Notification.CreateOnNext(new List<string> { "John", "Harry" })),
new Recorded<Notification<List<string>>>(200, Notification.CreateOnCompleted<List<string>>())
);
I am using Moq to set up the response of my method by returning input1. For example:
myObj.Setup(f => f.GetStrings()).Returns(input1);
It doesn't actually matter what the details of myObj are. I start the scheduler and call my method, which contains a Wait(). E.g. somewhere in my method I call
var results = myObj.GetStrings().Wait();
But this never returns. I suspect I am using the scheduler wrong but I am not sure.
Summary
The problem is that you are creating a cold observable and advancing the scheduler before you have subscribed to it.
Detail
If you call the blocking Wait() operation on a single threaded test, you are dead in the water at that point. This is because the TestScheduler's internal clock only advances when you call Start() or one of the AdvanceXXX() methods and, since you have a cold observable, the event times you specify are relative to the point of subscription. There are also some nuances to calling Start() which I will explain below.
So, as Wait will block, you might try to call it on another thread, but it's still tricky. Consider the following code, which is similar to yours:
void Main()
{
var scheduler = new TestScheduler();
var source = scheduler.CreateColdObservable(
new Recorded<Notification<int>>(100, Notification.CreateOnNext(1)),
new Recorded<Notification<int>>(200, Notification.CreateOnCompleted<int>()));
// (A)
int result = 0;
var resultTask = Task.Run(() => { result = source.Wait(); });
// (B)
resultTask.Wait();
Console.WriteLine(result);
}
This code tries to wait on a background thread. If we insert a call to scheduler.Start() at point (A), then source.Wait() will block forever.
This is because Start() will ONLY advance the internal clock of the TestScheduler until all currently scheduled events are executed. With a cold observable, events are scheduled relative to the virtual time of subscription. Since there are no subscribers at point (A), you will find that TestScheduler.Now.Ticks will report 0 even after the call to Start().
Hmmm. Things get even worse if we move the call to scheduler.Start() to point B. Now we have a race condition! It's a race condition that will almost always result in the test hanging at the call to resultTask.Wait(). This is because the chances are that the resultTask will not have had time to execute its action and subscribe to source before the scheduler.Start() call executes - and so time once again will not advance.
A deterministic execution is therefore very hard to achieve - there is no nice way to announce that the Wait() call has been issued before advancing time, since the Wait() call itself will block. Inserting a long enough delay before calling Start() will work, but kind of defeats the object of using the TestScheduler:
// (B)
Task.Delay(2000).Wait();
scheduler.AdvanceBy(200);
What this question really demonstrates to me (IMHO) is that calling Wait() and blocking a thread is almost always a bad idea. Look at using methods like LastAsync() instead, and/or using continuations to get hold of the results of asynchronous operations.
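For example, something along these lines avoids blocking entirely (a sketch; ToTask() is from System.Reactive.Threading.Tasks and subscribes as soon as it is called, so the TestScheduler can be started afterwards):
async Task Main()
{
    var scheduler = new TestScheduler();
    var source = scheduler.CreateColdObservable(
        new Recorded<Notification<int>>(100, Notification.CreateOnNext(1)),
        new Recorded<Notification<int>>(200, Notification.CreateOnCompleted<int>()));

    var resultTask = source.LastAsync().ToTask(); // subscribes here, before time advances
    scheduler.Start();                            // runs the scheduled notifications

    Console.WriteLine(await resultTask);          // prints 1, no blocking Wait() needed
}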
I can't recommend the approach due to the complexity, but here is a deterministic solution that makes use of an extension method to signal when a subscription has been made.
void Main()
{
var scheduler = new TestScheduler();
var source = scheduler.CreateColdObservable(
new Recorded<Notification<int>>(100, Notification.CreateOnNext(1)),
new Recorded<Notification<int>>(200, Notification.CreateOnCompleted<int>()));
// (A)
var waitHandle = new AutoResetEvent(false);
int result = 0;
var resultTask = Task.Run(() =>
{
result = source.AnnounceSubscription(waitHandle).Wait();
});
// (B)
waitHandle.WaitOne();
scheduler.Start();
resultTask.Wait();
Console.WriteLine(result);
}
public static class ObservableExtensions
{
public static IObservable<T> AnnounceSubscription<T>(
this IObservable<T> source, AutoResetEvent are)
{
return Observable.Create<T>(o =>
{
var sub = source.Subscribe(o);
are.Set();
return sub;
});
}
}
Recommended approach for testing Rx
A more idiomatic use of the TestScheduler is to create an observer to collect results, and then assert they meet expectations. Something like:
void Main()
{
var scheduler = new TestScheduler();
var source = scheduler.CreateColdObservable(
new Recorded<Notification<int>>(100, Notification.CreateOnNext(1)),
new Recorded<Notification<int>>(200, Notification.CreateOnCompleted<int>()));
var results = scheduler.CreateObserver<int>();
// here you would append to source the Rx calls that do something interesting
source.Subscribe(results);
scheduler.Start();
results.Messages.AssertEqual(
new Recorded<Notification<int>>(100, Notification.CreateOnNext(1)),
new Recorded<Notification<int>>(200, Notification.CreateOnCompleted<int>()));
}
Finally, if you derive a unit test class from ReactiveTest, you can take advantage of the OnNext, OnCompleted and OnError helper methods to create Recorded<Notification<T>> instances in a more readable fashion.
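For example, the last test could be written roughly like this (a sketch assuming an NUnit-style test class; ReactiveTest is what provides the OnNext/OnCompleted helpers):
public class SourceTests : ReactiveTest
{
    [Test]
    public void Source_produces_expected_messages()
    {
        var scheduler = new TestScheduler();
        var source = scheduler.CreateColdObservable(
            OnNext(100, 1),
            OnCompleted<int>(200));

        var results = scheduler.CreateObserver<int>();
        source.Subscribe(results);
        scheduler.Start();

        results.Messages.AssertEqual(
            OnNext(100, 1),
            OnCompleted<int>(200));
    }
}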