I am just starting to learn unit testing, and I think it's a really good tool that I want to start using for all my projects; however, I'm still not sure how to test some things.
As an example, I am implementing a queue, and one of its methods is AddNode. I also have a mock object representing the Node in the queue. After writing the method, I don't know what I should test for. The method is void, so I can't test for a return value. Maybe I should test for an out-of-memory exception or some other exception? Or maybe there is no need to do any testing in this case.
For AddNode you could be testing the following:
the queue is not empty afterwards
the size has increased by one
if the queue checks for duplicates, that the size has not increased after adding a duplicate
a round trip: new Queue, AddNode, GetNode returns the same Node again
if the queue checks for invalid objects (no nulls, wrong node type or something), that there is an exception when you try to add these things.
Note that some of these tests do not test AddNode in isolation (they also need to call other methods). That is not a problem. You are unit-testing the class, not individual methods.
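As a rough illustration, a few of those checks might look like this in an MSTest-style test class; Queue, Node, Count and GetNode are assumed names here, not taken from your code:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class QueueTests
{
    [TestMethod]
    public void AddNode_IncreasesSizeByOne()
    {
        var queue = new Queue();

        queue.AddNode(new Node());

        Assert.AreEqual(1, queue.Count);
    }

    [TestMethod]
    public void AddNode_ThenGetNode_ReturnsTheSameNode()
    {
        var queue = new Queue();
        var node = new Node();

        queue.AddNode(node);

        Assert.AreSame(node, queue.GetNode());
    }

    [TestMethod]
    [ExpectedException(typeof(ArgumentNullException))]
    public void AddNode_Null_Throws()
    {
        // only meaningful if the queue is specified to reject nulls
        new Queue().AddNode(null);
    }
}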
Assume we have an Order class with a method called Approve. When this method is called, it checks certain conditions and either puts the Order in the state of Approved or throws an exception. In the service layer, we've got something like this:
var order = _repository.Single(o => o.ID == orderID);
order.Approve();
_context.SaveChanges(); // or _session.SaveChanges();
There are 2 ways to test this method and I'd like to hear your insight on this:
Solution 1: Stub the repository to return an Order object. Then assert the Order is in the state of "Approved".
Solution 2: Stub the repository to return a Mock Order object. Assert that Approve() method was called.
Solution 1 is easier and I personally favor state-based testing to interaction-based testing, as the latter can target implementation details and should be avoided. However, I believe testing that the given Order is in the state of Approved is not the concern of this service method. I think we need a separate test method for the Order class to test whether an exception is thrown or the Order's state is changed to Approved.
Solution 2 may sound logical as we are delegating the responsibility of Approving an Order to the Order class itself. So perhaps we need 2 tests for this service method: One to ensure it delegates the task of Approving an Order to the Order class and one to ensure it saves the changes.
What's your insight on this? Which solution do you prefer?
Cheers
Unit tests are there to check whether the observed behavior conforms to expectations/specification.
The answer to your question boils down to what you consider "expected behavior" in this case: a) if the expected behavior is that the Order is in approved state after calling the service method, then test the state; b) if the expected behavior is that the approve action is delegated, then test the method call.
You will need to test the Order object's behavior as well (so that calling Approve() changes the state to approved) in either case.
The second solution plays well because it decouples the behavior of the two objects, but if there is more than one way the order can end up in the approved state (and that's what you are testing -- case a)), then you limit the accepted behavior needlessly.
Also, I would create a separate test for the saving part, if that is not essential to the approval part.
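To make the two cases concrete, here is a rough sketch of both styles using hand-rolled test doubles instead of a mocking framework; OrderService, IOrderRepository, OrderState and the simplified repository signature (taking the ID directly) are all assumptions for illustration, not your actual types:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Stub: always hands back the order prepared by the test.
class StubRepository : IOrderRepository
{
    private readonly Order _order;
    public StubRepository(Order order) { _order = order; }
    public Order Single(int orderId) { return _order; }
}

// Hand-rolled mock: records whether Approve() was called (assumes Approve is virtual).
class MockOrder : Order
{
    public bool ApproveWasCalled;
    public override void Approve() { ApproveWasCalled = true; }
}

[TestClass]
public class OrderServiceTests
{
    // Solution 1: state-based -- assert on the resulting state of the Order.
    [TestMethod]
    public void Approve_PutsTheOrderInTheApprovedState()
    {
        var order = new Order();
        var service = new OrderService(new StubRepository(order));

        service.Approve(42);

        Assert.AreEqual(OrderState.Approved, order.State);
    }

    // Solution 2: interaction-based -- assert that the work was delegated to the Order.
    [TestMethod]
    public void Approve_DelegatesToTheOrder()
    {
        var order = new MockOrder();
        var service = new OrderService(new StubRepository(order));

        service.Approve(42);

        Assert.IsTrue(order.ApproveWasCalled);
    }
}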
I have an object which has a public void method on it that just modifies internal state. Part of me feels as though it should have tests, as it is a public method, but how do you go about testing this? Do you bother, or do you rely on the other tests which make use of the fact that the internal state has changed?
To add an example: I am implementing a gap buffer, which has an insert method and a moveCursorForward method that just modifies the internal state.
Thanks,
Aly
I think injection is your answer. Create a Cursor object and inject it into your Buffer. Unit test the cursor object so that you know it works perfectly ;). Since the cursor is stateful and you've passed it into your buffer, you can assert on the cursor object when you test moveCursorForward.
Check out Michael Feathers' Working Effectively with Legacy Code.
This also is where mocking objects might be beneficial.
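A bare-bones sketch of that idea, where GapBuffer, Cursor and their members are assumed names for illustration only:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class GapBufferTests
{
    [TestMethod]
    public void MoveCursorForward_AdvancesTheInjectedCursor()
    {
        var cursor = new Cursor();                 // starts at position 0
        var buffer = new GapBuffer("ab", cursor);  // cursor injected into the buffer

        buffer.MoveCursorForward();

        // Because the cursor was injected, the test can observe it directly.
        Assert.AreEqual(1, cursor.Position);
    }
}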
I prefer separate tests for such methods. For example, there are two cases for the moveCursorForward method:
1. the cursor is already at the end of the buffer
2. the cursor is not at the end of the buffer
So it is likely that case 1 will be skipped if you don't create a special test for it.
In other words, you can miss some boundary cases.
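For example, something along these lines (again with assumed names, and assuming the specified behavior is that the cursor stays put at the end rather than throwing):

[TestMethod]
public void MoveCursorForward_InsideTheBuffer_AdvancesByOne()
{
    var buffer = new GapBuffer("ab");   // cursor assumed to start at position 0

    buffer.MoveCursorForward();

    Assert.AreEqual(1, buffer.CursorPosition);
}

[TestMethod]
public void MoveCursorForward_AtTheEndOfTheBuffer_DoesNotMovePastTheEnd()
{
    var buffer = new GapBuffer("ab");
    buffer.MoveCursorForward();
    buffer.MoveCursorForward();         // cursor is now at the end

    buffer.MoveCursorForward();         // case 1: already at the end

    Assert.AreEqual(2, buffer.CursorPosition);
}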
I had to start writing some unit tests, using QualityTools.UnitTestFramework, for a web service layer we have developed, but my approach seemed to be incorrect from the beginning.
It seems that unit tests should be able to run in any order and not rely on other tests.
My initial thought was to have something similar to the following tests (a simplified example), which would run as an ordered test in the same order.
AddObject1SuccessTest
AddObject2WithSameUniqueCodeTest (relies on the first test having created object1, then expects a failure)
AddObject2SuccessTest
UpdateObject2WithSameUniqueCodeTest (relies on the first test having created object1 and the third test having created object2, then expects a failure)
UpdateObject2SuccessTest
GetObjectListTest
DeleteObjectsTest (using the added IDs)
However, there is no state shared between tests and no apparent way of passing, say, the added IDs to the delete test.
So, is it then the case that the correct approach for unit testing complex interactions is by scenario?
For example (a sketch of the second scenario follows the list):
AddObjectSuccessTest
(which creates an object, gets it to validate the data, and then deletes it)
AddObjectWithSameUniqueCodeTest
(which creates object 1, then attempts to create object 2 expecting a failure, and then deletes object 1)
UpdateObjectWithSameUniqueCodeTest
(which creates object 1, then creates object 2, then attempts to update object 2 to have the same unique code as object 1 expecting a failure, and then deletes object 1 and object 2)
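For instance, the second scenario might end up looking roughly like this; the _service field, Thing and DuplicateCodeException are placeholder names, not the actual API:

[TestMethod]
public void AddObjectWithSameUniqueCodeTest()
{
    int firstId = _service.AddObject(new Thing { UniqueCode = "X" });
    try
    {
        _service.AddObject(new Thing { UniqueCode = "X" });
        Assert.Fail("Expected the duplicate unique code to be rejected.");
    }
    catch (DuplicateCodeException)
    {
        // expected: the duplicate was rejected
    }
    finally
    {
        _service.DeleteObject(firstId);   // the scenario cleans up after itself
    }
}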
Am I coming at this wrong?
Thanks
It is a tenet of unit testing that each test case should be independent of any other test case. MSTest (as well as all other unit testing frameworks) enforces this by not guaranteeing the order in which tests are run - some (xUnit.NET) even go so far as to randomize the order between each test run.
It is also a recommended best practice that units are condensed into simple interactions. Although no hard and fast rule can be provided, it's not a unit test if the interaction is too complex. In any case, complex tests are brittle and have a very high maintenance overhead, which is why simple tests are preferred.
It sounds like you have a case of shared state between your tests. This leads to interdependent tests and should be avoided. Instead you can write reusable code that sets up the pre-condition state for each test, ensuring that this state is always correct.
Such a pre-condition state is called a Fixture. The book xUnit Test Patterns contains lots of information and guidance on how to manage Fixtures in many different scenarios.
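In MSTest (QualityTools.UnitTestFramework) terms, that reusable set-up code usually lives in a [TestInitialize]/[TestCleanup] pair, so every test starts from the same known Fixture; the fake repository and service below are only illustrative:

[TestClass]
public class ObjectServiceTests
{
    private FakeRepository _repository;
    private ObjectService _service;

    [TestInitialize]
    public void SetUp()
    {
        // Runs before every test: each test gets a fresh, known starting state.
        _repository = new FakeRepository();
        _service = new ObjectService(_repository);
    }

    [TestCleanup]
    public void TearDown()
    {
        // Runs after every test: nothing leaks into the next test.
        _repository.Clear();
    }
}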
As a complement to what Mark said, yes, each test should be completely independent from the others, and, to use your terms, each test should be a self-contained scenario, which can run independently of the others.
I assume from what you describe that you are testing persistence, because your steps include deleting the entities created during the test in order to clean up the state. Ideally, a unit test runs completely in memory, with no shared state between tests. One way to achieve that is to use Mocks. I assume you have something like a Repository in place, so that your class calls Repository.Add(myNewObject), which calls something like Repository.ValidateObjectCanBeAdded(myNewObject). Rather than testing against the real repository, which would add objects to the database and require deleting them to clean up the state after the test, you can create an interface IRepository with the same two methods, and use a Mock to check that when your class calls IRepository it exercises the right methods, with the right arguments, in the right order. It also gives you the ability to set the "fake" repository to any state you want, in memory, without having to physically add or delete records from real storage.
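A bare-bones version of that might look like the following; IRepository matches the two methods described above, while MyObject, ObjectService and the recording fake are just placeholders for illustration:

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public interface IRepository
{
    void ValidateObjectCanBeAdded(MyObject obj);
    void Add(MyObject obj);
}

// In-memory fake that simply records the calls made to it.
public class FakeRepository : IRepository
{
    public readonly List<string> Calls = new List<string>();
    public void ValidateObjectCanBeAdded(MyObject obj) { Calls.Add("Validate"); }
    public void Add(MyObject obj) { Calls.Add("Add"); }
}

[TestClass]
public class AddObjectTests
{
    [TestMethod]
    public void AddObject_ValidatesBeforeAdding()
    {
        var fake = new FakeRepository();
        var service = new ObjectService(fake);

        service.AddObject(new MyObject());

        // The right methods, in the right order, with nothing touching a real database.
        CollectionAssert.AreEqual(new[] { "Validate", "Add" }, fake.Calls);
    }
}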
Hope this helps!
Background: I have some classes implementing a subject/observer design pattern that I've made thread-safe. A subject will notify its observers with a simple method call, observer->Notified( this ), if the observer was constructed in the same thread that the notification is being made from. But if the observer was constructed in a different thread, then the notification will be posted onto a queue to be processed later by the thread that constructed the observer, and the simple method call can be made when the notification event is processed.
So… I have a map associating threads and queues which gets updated when threads and queues are constructed and destroyed. This map itself uses a mutex to protect multi-threaded access to it.
The map is a singleton.
I've been guilty of using singletons in the past because "there will be only one in this application", and believe me - I have paid my penance!
One part of me can't help thinking that there really will be only one queue/thread map in an application. The other voice says that singletons are not good and you should avoid them.
I like the idea of removing the singleton and being able to stub it for my unit tests. Trouble is, I'm having a hard time trying to think of a good alternative solution.
The "usual" solution which has worked in the past is to pass in a pointer to the object to use instead of referencing the singleton. I think that would be tricky in this case, since observers and subjects are 10-a-penny in my application and it would very awkward to have to pass a queue/thread map object into the constructor of every single observer.
What I appreciate is that I may well have only one map in my application, but it shouldn't be in the bowels of the subject and observer class code where that decision is made.
Maybe this is a valid singleton, but I'd also appreciate any ideas on how I could remove it.
Thanks.
PS. I have read What's Alternative to Singleton and this article mentioned in the accepted answer. I can't help thinking that the ApplicationFactory is just yet another singleton by another name. I really don't see the advantage.
If the only reason you're trying to get rid of the singleton is unit testing, maybe just replace the singleton getter with something you can swap a stub into:
class QueueThreadMapBase
{
    // virtual functions
};

class QueueThreadMap : public QueueThreadMapBase
{
    // your real implementation
};

class QueueThreadMapTestStub : public QueueThreadMapBase
{
    // your test implementation
};

static QueueThreadMapBase* pGlobalInstance = new QueueThreadMap;

QueueThreadMapBase* getInstance()
{
    return pGlobalInstance;
}

void setInstance(QueueThreadMapBase* pNew)
{
    pGlobalInstance = pNew;
}
Then in your test just swap out the queue/thread map implementation. At the very least this exposes the singleton a little more.
Some thoughts towards a solution:
Why do you need to enqueue notifications for observers that were created on a different thread? My preferred design would be to have the subject just notify the observers directly, and put the onus on the observers to make themselves thread-safe, with the knowledge that Notified() might be called at any time from another thread. The observers know which parts of their state need to be protected with locks, and they can handle that better than the subject or the queue.
Assuming that you really have a good reason for keeping the queue, why not make it an instance? Just do queue = new Queue() somewhere in main, and then pass around that reference. There may only ever be one, but you can still treat that as an instance and not a global static.
What's wrong with putting the queue inside the subject class? What do you need the map for?
You already have a thread reading from the singleton queue map. Instead of doing that, simply put the map inside the subject class and provide two methods: one to subscribe an observer and one to fetch the next pending notification:
class Subject
{
    // Assume this is thread-safe and so on
    private QueueMap queue;

    void Subscribe(NotifyCallback callback, ThreadId threadId)
    {
        // If the observer was created on another thread, add it to the map
        if (threadId != this.ThreadId)
            queue[threadId].Add(callback);
    }

    public NotifyCallback GetNext()
    {
        return queue[CallerThread.Id].Pop();
    }
}
Now any thread can call the GetNext method to start dispatching... of course it is all overly simplified, but it's just the idea.
Note: I'm working with the assumption that you already have an architecture around this model so that you already have a bunch of Observers, one or more subjects and that the threads already go to the map to do the notifications. This gets rid of the singleton but I'd suggest you notify from the same thread and let the observers handle concurrency issues.
My approach was to have the observers provide a queue when they registered with the subject; the observer's owner would be responsible for both the thread and the associated queue, and the subject would associate the observer with the queue, with no need for a central registry.
Thread-safe observers could register without a queue, and be called directly by the subject.
Your observers may be cheap, but they depend on the notification-queue-thread map, right?
What's awkward about making that dependency explicit and taking control of it?
As for the application factory Miško Hevery describes in his article, the biggest advantages are that 1) the factory approach doesn't hide dependencies and 2) the single instances you depend on aren't globally available such that any other object can meddle with their state. So using this approach, in any given top-level application context, you know exactly what's using your map. With a globally accessible singleton, any class you use might be doing unsavory things with or to the map.
What about adding a Reset method that returns a singleton to its initial state that you can call between tests? That might be simpler than a stub. It might be possible to add a generic Reset method to a Singleton template (deletes the internal singleton pimpl and resets the pointer). This could even include a registry of all singletons with a master ResetAll method to reset all of them!
The question may be a little vague but here's an example of what I want to know (pseudocode):
//start test-case for CreateObject function
{
    // initialization of parameters
    MyObject *obj = CreateObject();
    // test results
}
//end test-case for CreateObject function
Is it necessary in this case to also deallocate the memory by calling "DestroyObject" function? [this is the particular case that gave birth to this question]
My personal opinion would be no, that I shouldn't undo what the function did, but if many tests were carried out I could run out of memory/resources for that test suite (not likely to happen, but ...).
What do you think? In this case and also in a general case.
Thanks,
Iulian
In this case, you should deallocate the memory your test case has allocated. That way, you can use a tool that lets you run your tests and confirms that no memory was leaked. Letting your test code leak memory means that this would fail, and you wouldn't be able to tell for certain that the leak was in the test and not in your production code.
As to the more general situation, tests should clean up what they've done. Most unit testing frameworks let you implement a tearDown() method to do this. That way, if one test fails you'll know it's an issue with that test and not an interaction with another test.
You should really try to create all the mock objects on the stack (or use smart pointers). That way they get automatically destructed when the test function goes out of scope.
Not directly to do with testing, but if you have C++ code that does stuff like:
MyObject *obj = CreateObject();
where "obj" is not a smart pointer or is not being managed by a class, then you have problems. If I were writing the test, I would say:
MyObject obj;
// tests on obj here
No matter what the results of your tests, obj will be correctly destroyed. Never create an object dynamically in C++ if you can possibly avoid it.
Typically, you want to test an isolated code path and an isolated piece of functionality, and you want to make it a fair test each time. That means starting fresh, setting up exactly what you need, and then discarding the testing environment when you're through. This avoids the problem of different test cases leaving behind side effects that might alter the results or behaviour of subsequent runs. It also means you can guarantee that your tests are all independent of each other, and that you can run any subset, in any order.
You'll find, however, that it's also quite common to have pre and post-suite set-up and tear-down methods, which set up a whole testing environment (such as a database or whatever) that a whole bunch of unit tests can execute against.