Unit testing with timeouts - unit-testing

I am unit testing a class with a property whose value changes often, depending on communication it receives from another component. If the class does not receive any communication for 5 seconds, the property reverts to a default value.
It is easy for me to stub and mock out the communicating component in order to trigger the values I want to test for. The problem is that if I run my unit tests on a machine which is busy (like a build machine), and there is a significant-enough delay to cause the property to default, then my unit test will fail.
How would you test to be sure that this property has the proper value when simulating various communication conditions?
One idea is to restructure my code so that I can stub out the part of the class which controls the timeout. Another is to write my unit test so that it can detect whether it failed due to a timeout and indicate that in the test results.

I'd try a different approach. Game developers often need a way to control the game time, e.g. for fast-forward functionality or to synchronize frame rates. They introduce a Timer object, which reads ticks from either a hardware clock, or from a simulated clock.
In your case, you could provide a controllable timer for your unit tests and a timer which delegates to system time in production mode. That way, you can control the time which passes for your test case and thus how the class-under-test has to react under certain timeout conditions.
Pseudo-code:
public void testTimeout() throws Exception {
    MockTimerClock clock = new MockTimerClock();
    ClassUnderTest cut = new ClassUnderTest();
    cut.setTimerClock(clock);

    cut.beginWaitingForCommunication();
    assertTrue(cut.hasDefaultValues());

    cut.receiveOtherValues();
    assertFalse(cut.hasDefaultValues());

    clock.tick(5, TimeUnit.SECONDS);
    assertTrue(cut.hasDefaultValues());

    cut.shutdown();
}
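A rough sketch of what the clock abstraction itself could look like in C# (ITimerClock, SystemTimerClock and MockTimerClock are illustrative names, not from any particular library):

public interface ITimerClock
{
    DateTime UtcNow { get; }
}

// Production implementation: delegates to the system clock.
public class SystemTimerClock : ITimerClock
{
    public DateTime UtcNow { get { return DateTime.UtcNow; } }
}

// Test implementation: time only advances when the test says so.
public class MockTimerClock : ITimerClock
{
    private DateTime _now = new DateTime(2000, 1, 1);

    public DateTime UtcNow { get { return _now; } }

    public void Tick(TimeSpan amount)
    {
        _now = _now + amount;
    }
}

The class under test then compares the clock's current time against the time of the last communication whenever the property is read, instead of relying on a real timer firing.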

You could make the timeout property configurable, then set it to a high enough value in your unit tests (or low enough, if you want to unit test the reset behaviour).
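For example (a minimal sketch, assuming the timeout can be passed in at construction; the parameter name is illustrative):

// In most tests: a timeout that will never fire, even on a busy build machine.
var cut = new ClassUnderTest(communicationTimeout: TimeSpan.FromMinutes(10));

// In the test for the reset behaviour: a very short timeout.
var resetting = new ClassUnderTest(communicationTimeout: TimeSpan.FromMilliseconds(50));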

There is a similar problem when using DateTime.Now.
Ayende described a trick to deal with it that I liked:
public static class SystemTime
{
    public static Func<DateTime> Now = () => DateTime.Now;
}
and then in your test:
[Test]
public void Should_calculate_length_of_stay_from_today_when_still_occupied()
{
    var startDate = new DateTime(2008, 10, 1);
    SystemTime.Now = () => new DateTime(2008, 10, 5);

    var occupation = new Occupation { StartDate = startDate };

    occupation.LengthOfStay().ShouldEqual(4);
}
Maybe you can use the same kind of trick for your timeout?
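A sketch of how that could look for the timeout (a guess at the shape of the class under test; _lastCommunication, ReceiveValue and HasDefaultValues are illustrative names):

public bool HasDefaultValues
{
    // Compare against SystemTime.Now() instead of DateTime.Now,
    // so tests control what "now" means.
    get { return SystemTime.Now() - _lastCommunication > TimeSpan.FromSeconds(5); }
}

// In the test, advance time explicitly instead of sleeping:
SystemTime.Now = () => new DateTime(2008, 10, 1, 12, 0, 0);
cut.ReceiveValue(42);
Assert.IsFalse(cut.HasDefaultValues);

SystemTime.Now = () => new DateTime(2008, 10, 1, 12, 0, 6); // six seconds "later"
Assert.IsTrue(cut.HasDefaultValues);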

Related

NUnit parallel lifecycle and parallel tests failing sometimes

I am having an issue with tests sporadically failing when using NUnit 3 with parallel test running.
We have a number of tests that are currently structured as follows:
[TestFixture]
public class CalculateShipFromStoreShippingCost
{
    private IService _service;
    private IClient _client;

    [SetUp]
    public void SetUp()
    {
        _service = Substitute.For<IService>();
        _client = new Client(_service);
    }

    [Test]
    public async Task WhenScenario1()
    {
        _service.Apply(Arg.Any<int>()).Returns(1);
        var result = _client.DoTheThing();
        Assert.AreEqual(1, result);
    }

    [Test]
    public async Task WhenScenario2()
    {
        _service.Apply(Arg.Any<int>()).Returns(2);
        var result = _client.DoTheThing();
        Assert.AreEqual(2, result);
    }
}
Sometimes the tests fail because one of the substitutes returns the value set up by the other test.
How should these tests be structured so that NUnit will run them reliably in parallel?
You haven't shown any Parallelizable attributes in your example, so I assume you are using the attribute at a higher level, most likely on the assembly. Otherwise, no parallel execution would occur. Further, since you say the test cases are running in parallel, you have apparently specified ParallelScope.Children.
The two test cases shown in your fixture cannot run in parallel. You should bear in mind that the SetUp method runs for each of the tests. So each of your two tests sets the value of _service, which is part of the state of the single instance of CalculateShipFromStoreShippingCost, which is shared by both tests. That is why you are seeing the "wrong" substitute being returned at times.
It is not possible for two test cases to run reliably in parallel if they both change the state of the fixture. Note that it does not matter whether the assignment to _service takes place in the test method itself or in the SetUp method - both are executed as part of the test case. So, you have to either stop running these two cases in parallel or stop changing the state.
To stop running the tests in parallel, you simply add [NonParallelizable] to each test method. If you are not using the latest framework version, use [Parallelizable(ParallelScope.None)] instead. Your other tests will continue to run in parallel, but these two will not.
Alternatively, use ParallelScope.Fixture at the assembly level. This will cause fixtures to run in parallel by default, while the individual test cases within them each run sequentially. When using ParallelizableAttribute at the assembly level, it is sometimes best to take a more conservative approach, adding in more parallelism within some fixtures where it is useful.
An entirely different approach is to make your tests stateless. Eliminate the _service member and use local variables within the test method itself. Each of your tests would add two lines like...
var service = Substitute.For<IService>();
var client = new Client(service);
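For instance, a sketch of the first test rewritten statelessly (using the same hypothetical IService/Client types as in your example):

[Test]
public void WhenScenario1()
{
    // Everything the test needs is local, so no shared fixture state can
    // leak between parallel test cases.
    var service = Substitute.For<IService>();
    var client = new Client(service);

    service.Apply(Arg.Any<int>()).Returns(1);

    var result = client.DoTheThing();

    Assert.AreEqual(1, result);
}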
As shown in your example, I would imagine you are getting very little performance gain from running the two methods in parallel, so I would not use that last approach unless I saw a specific performance reason to do so.
As a final note... if you make your fixtures run in parallel by default (either with an assembly-level attribute or with attributes on each fixture) and place no Parallelizable attribute on the test cases, NUnit uses an optimization whereby all the tests within a fixture run on the same thread. The saving in context switches often makes up for the loss of any performance improvement you had hoped to gain by running them in parallel.

Wrapper around TASKs in C#

I am using Tasks in WinForms (.NET 4.0) to perform lengthy operations such as WCF calls. The application is already in production and makes heavy use of Tasks (almost all of the methods that use Tasks return void).
During unit testing we have used AutoResetEvents (in the actual code) to find out when a given task has completed, and then perform the asserts.
This gives me the feeling that almost all of the AutoResetEvents are a waste of effort; they exist only to satisfy the unit tests.
Can we create a wrapper around Tasks so that in the actual code they run in the background, but under unit tests they run synchronously?
Similar to the approach in the link below for BackgroundWorker:
http://si-w.co.uk/blog/2009/09/11/unit-testing-code-that-uses-a-backgroundworker/
Why can't you simply use a continuation on the task in your wrapper, like this:
var task = ...
task.ContinueWith(t => check task results here)
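A small sketch of that idea (ServiceWrapper and CallServiceAsync are hypothetical names): attach the checking continuation, then Wait() on it so any assertion failure inside the continuation surfaces in the test.

[Test]
public void WcfCall_ReturnsExpectedResult()
{
    var wrapper = new ServiceWrapper();

    Task<string> task = wrapper.CallServiceAsync();

    // The continuation runs once the task completes; Wait() rethrows any
    // assertion failure as an AggregateException and fails the test.
    task.ContinueWith(t => Assert.AreEqual("expected", t.Result))
        .Wait();
}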
Also, unit tests can be marked async if they return Task, so you can await inside them and then do your asserts:
[Test]
public async Task SynchronizeTestWithRecurringOperationViaAwait()
{
    var sut = new SystemUnderTest();

    // Execute code to set up timer with 1 sec delay and interval.
    var firstNotification = sut.StartRecurring();

    // Wait until the operation has finished two times.
    var secondNotification = await firstNotification.GetNext();
    await secondNotification.GetNext();

    // Assert outcome.
    Assert.AreEqual("Init Poll Poll", sut.Message);
}
Another approach (from the same article) is to use a custom task scheduler, which runs synchronously when unit testing:
[Test]
public void TestCodeSynchronously()
{
    var dts = new DeterministicTaskScheduler();
    var sut = new SystemUnderTest(dts);

    // Execute code to schedule first operation and return immediately.
    sut.StartAsynchronousOperation();

    // Execute all operations on the current thread.
    dts.RunTasksUntilIdle();

    // Assert outcome of the two operations.
    Assert.AreEqual("Init Work1 Work2", sut.Message);
}
The same MSDN Magazine issue contains a nice article about best practices for async unit testing. Also, async void should be used only for event handlers; all other methods should have an async Task signature.
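To illustrate the difference (hypothetical names):

// Awaitable: a test can await this and assert afterwards.
public async Task SaveAsync()
{
    await SaveChangesToServerAsync();
}

// async void: fire-and-forget, cannot be awaited, and exceptions bypass the caller.
// Acceptable only as a UI event handler.
private async void saveButton_Click(object sender, EventArgs e)
{
    await SaveAsync();
}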

How to unit test an Akka actor that sends a message to itself, without using Thread.sleep

I have a Scala unit test for an Akka actor. The actor is designed to poll a remote system and update a local cache. Part of the actor's design is that it doesn't attempt to poll while it's still processing or awaiting the result of the last poll, to avoid flooding the remote system when it experiences a slowdown.
I have a test case (shown below) which uses Mockito to simulate a slow network call, and checks that when the actor is told to update, it won't make another network call until the current one is complete. It checks the actor has not made another call by verifying a lack of interactions with the remote service.
I want to eliminate the call to Thread.sleep. Waiting for a hardcoded time in every test run is brittle and wastes time. The test could instead poll or block on a condition, with a timeout; that would be more robust and would not waste time when the test is passing. I also have the added constraint that I want to keep the state used to prevent extra polling (the var allowPoll) scoped to the internals of the PollingActor.
Is there a way to force a wait until the actor has finished messaging itself? If there is, I can wait for that before trying to assert.
Is it necessary to send the internal message at all? Couldn't I maintain the internal state with a thread-safe data structure, such as java.util.concurrent.atomic.AtomicBoolean? I have done this and the code appears to work, but I'm not knowledgeable enough about Akka to know whether it's discouraged; a colleague recommended the self-message style.
Is there better out-of-the-box functionality with the same semantics? If so, I would opt for an integration test instead of a unit test, though I'm not sure whether it would solve this problem.
The current actor looks something like this:
class PollingActor(val remoteService: RemoteServiceThingy) extends ActWhenActiveActor {
  private var allowPoll: Boolean = true

  def receive = {
    case PreventFurtherPolling => {
      allowPoll = false
    }
    case AllowFurtherPolling => {
      allowPoll = true
    }
    case UpdateLocalCache => {
      if (allowPoll) {
        self ! PreventFurtherPolling
        remoteService.makeNetworkCall.onComplete {
          result => {
            self ! AllowFurtherPolling
            // process result
          }
        }
      }
    }
  }
}

trait RemoteServiceThingy {
  def makeNetworkCall: Future[String]
}
private case object PreventFurtherPolling
private case object AllowFurtherPolling
case object UpdateLocalCache
And the unit test, in specs2, looks like this:
"when request has finished a new requests can be made" ! {
val remoteService = mock[RemoteServiceThingy]
val actor = TestActorRef(new PollingActor(remoteService))
val slowRequest = new DefaultPromise[String]()
remoteService.makeNetworkCall returns slowRequest
actor.receive(UpdateLocalCache)
actor.receive(UpdateLocalCache)
slowRequest.complete(Left(new Exception))
// Although the test calls the actor synchronously, the actor calls *itself* asynchronously, so we must wait.
Thread.sleep(1000)
actor.receive(UpdateLocalCache)
there was two(remoteService).makeNetworkCall
}
The way we have chosen to solve this for now is to inject the equivalent of an observer into the actor (piggybacking on an existing logger which wasn't included in the listing in the question). The actor can then tell the observer when it has transitioned from various states. In the test code we perform an action then wait for the relevant notification from the actor, before continuing and making assertions.
In the test we have something like this:
actor.receive(UpdateLocalCache)
observer.doActionThenWaitForEvent(
  { actor.receive(UpdateLocalCache) }, // run this action
  "IgnoredUpdateLocalCache"            // then wait for the actor to emit an event
)
// assert on number of calls to remote service
I don't know whether there's a more idiomatic way; this seems like a reasonable approach to me.

Application Service Layer: Unit Tests, Integration Tests, or Both?

I've got a bunch of methods in my application service layer that are doing things like this:
public void Execute(PlaceOrderOnHoldCommand command)
{
    var order = _repository.Load(command.OrderId);
    order.PlaceOnHold();
    _repository.Save(order);
}
And at present, I have a bunch of unit tests like this:
[Test]
public void PlaceOrderOnHold_LoadsOrderFromRepository()
{
    var repository = new Mock<IOrderRepository>();
    const int orderId = 1;
    var order = new Mock<IOrder>();
    repository.Setup(r => r.Load(orderId)).Returns(order.Object);

    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository.Object);

    service.Execute(command);

    repository.Verify(r => r.Load(It.Is<int>(x => x == orderId)), Times.Exactly(1));
}

[Test]
public void PlaceOrderOnHold_CallsPlaceOnHold()
{
    /* blah blah */
}

[Test]
public void PlaceOrderOnHold_SavesOrderToRepository()
{
    /* blah blah */
}
It seems to be debatable whether these unit tests add value that's worth the effort. I'm quite sure that the application service layer should be integration tested, though.
Should the application service layer be tested to this level of granularity, or are integration tests sufficient?
I'd write a unit test despite there also being an integration test. However, I'd likely make the test much simpler by eliminating the mocking framework, writing my own simple mock, and then combining all those tests to check that the order in the mock repository was on hold.
[Test]
public void PlaceOrderOnHold_PlacesOrderOnHold()
{
    const int orderId = 1;
    var repository = new MyMockRepository();
    repository.Save(new MyMockOrder(orderId));

    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository);

    service.Execute(command);

    Assert.IsTrue(repository.GetOrder(orderId).IsOnHold);
}
There's really no need to check to be sure that load and/or save is called. Instead I'd just make sure that the only way that MyMockRepository will return the updated order is if load and save are called.
This kind of simplification is one of the reasons that I usually don't use mocking frameworks. It seems to me that you have much better control over your tests, and a much easier time writing them, if you write your own mocks.
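For illustration, a minimal sketch of what such hand-rolled mocks could look like (MyMockOrder and MyMockRepository are made up here, assuming IOrder exposes an Id, PlaceOnHold() and an IsOnHold flag):

using System.Collections.Generic;

public class MyMockOrder : IOrder
{
    public MyMockOrder(int orderId) { Id = orderId; }

    public int Id { get; private set; }
    public bool IsOnHold { get; private set; }

    public void PlaceOnHold() { IsOnHold = true; }
}

public class MyMockRepository : IOrderRepository
{
    private readonly Dictionary<int, IOrder> _store = new Dictionary<int, IOrder>();
    private readonly HashSet<int> _loadedIds = new HashSet<int>();

    public IOrder Load(int orderId)
    {
        _loadedIds.Add(orderId);
        return _store[orderId];
    }

    public void Save(IOrder order)
    {
        _store[order.Id] = order;
    }

    // GetOrder only returns an order that went through Load, so the single
    // assertion in the test fails if the service never touched the repository,
    // and the IsOnHold check verifies that PlaceOnHold actually ran.
    public IOrder GetOrder(int orderId)
    {
        return _loadedIds.Contains(orderId) ? _store[orderId] : null;
    }
}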
Exactly: it's debatable! It's really good that you are weighing the expense/effort of writing and maintaining your test against the value it will bring you - and that's exactly the consideration you should make for every test you write. Often I see tests written for the sake of testing and thereby only adding ballast to the code base.
As a guideline, I usually aim for a full integration test of every important successful scenario/use case. Other tests I write are for parts of the code that are likely to break with future changes, or have broken in the past. And that is definitely not all code. That's where your judgement and insight into the system and requirements come into play.
Assuming that you have an (integration) test for service.Execute(placeOrderOnHoldCommand), I'm not really sure if it adds value to test if the service loads an order from the repository exactly once. But it could be! For instance when your service previously had a nasty bug that would hit the repository ten times for a single order, causing performance issues (just making it up). In that case, I'd rename the test to PlaceOrderOnHold_LoadsOrderFromRepositoryExactlyOnce().
So for each and every test you have to decide for yourself ... hope that helps.
Notes:
The tests you show can be perfectly valid and look well written.
Your sequence of test methods seems to be inspired by the way the Execute(...) method is currently implemented. When you structure your tests this way, you may be tying yourself to a specific implementation; such tests can actually make the code harder to change. Make sure you're only testing the important external behaviour of your class.
I usually write a single integration test of the primary scenario. By primary scenario I mean the successful path through all of the code being tested. Then I write unit tests of all the other scenarios, like checking all the cases in a switch, testing exceptions, and so forth.
I think it is important to have both. Yes, it is possible to test everything with integration tests only, but that makes your tests long-running and harder to debug. On average I have about 10 unit tests per integration test.
I don't bother testing one-liner methods unless something business-logic-like happens in that line.
Update: just to make it clear, because I'm doing test-driven development I always write the unit tests first and typically write the integration test at the end.

Unit Testing.... a data provider?

Given problem:
I like unit tests.
I develop connectivity software for external systems, which pretty much always means going through a C++ library.
The output of these systems is nondeterministic. Data is received while running, but making sure it is all correctly interpreted is hard.
How can I test this properly?
I can run a unit test that does a connect. Sadly, it will then process a live data stream. I can say I run the test for 30 or 60 seconds before disconnecting, but getting code coverage is impossible; I simply don't come close to hitting all code paths even once per day (error code paths are rarely run).
I also cannot really assert every result. Depending on the time of day we are talking about 20,000 data callbacks per second, and they are not deterministic enough to validate each one for consistency.
Mocking? Well, that would leave me testing an empty shell, because the code handling the events basically is the code under test, and in many cases we are dealing with complex C-level structures; it is hard to find mocking frameworks that bridge from C# to C++.
Any ideas? I am close to giving up on unit tests for this part of the application.
Unit testing is good, but it shouldn't be your only weapon against bugs. Look into the difference between unit tests and integration tests: it sounds to me like the latter is your best choice.
Also, automated tests (unit tests and integration tests) are only useful if your system's behavior isn't going to change. If you're breaking backward compatibility with every release, the automated tests of that functionality won't help you.
You may also want to see a previous discussion on how much unit testing is too much.
Does your external data source implement an interface, or can you decouple your class under test from the data source by using a combination of an interface and a wrapper around the data source? If either of these is true, then you can mock out the data source in your unit tests and provide the data from the mock instance.
public interface IDataSource
{
    List<DataObject> All();
    ...
}

public class DataWrapper : IDataSource
{
    public DataWrapper( RealDataSource source )
    {
        this.Source = source;
    }

    public RealDataSource Source { get; set; }

    public List<DataObject> All()
    {
        return this.Source.All();
    }
}
Now make your class under test depend on the interface and inject an instance; then, in your unit tests, provide a mock instance that implements the interface.
public void DataSourceAllTest()
{
    var dataSource = MockRepository.GenerateMock<IDataSource>();
    dataSource.Expect( s => s.All() ).Return( ... mock data ... );

    var target = new ClassUnderTest( dataSource );
    var actual = target.Foo();

    // assert something about actual
    dataSource.VerifyAllExpectations();
}
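A minimal sketch of the class under test depending only on the interface (ClassUnderTest and Foo are placeholder names, matching the test above):

public class ClassUnderTest
{
    private readonly IDataSource _dataSource;

    public ClassUnderTest(IDataSource dataSource)
    {
        _dataSource = dataSource;
    }

    public int Foo()
    {
        // Works purely against IDataSource: production wires in a DataWrapper
        // around the real C++-backed source, while tests inject a mock.
        return _dataSource.All().Count;
    }
}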