The question may be a little vague but here's an example of what I want to know (pseudocode):
//start test-case for CreateObject function
{
// initialization of parameters
MyObject *obj = CreateObject();
// test results
}
//end test-case for CreateObject function
Is it necessary in this case to also deallocate the memory by calling the "DestroyObject" function? [this is the particular case that gave birth to this question]
My personal opinion would be no, that I shouldn't undo what the function did. But if many tests were carried out, the suite could run out of memory or other resources (not likely to happen, but ...).
What do you think? In this case and also in a general case.
Thanks,
Iulian
In this case, you should deallocate the memory your test case has allocated. That way, you can run your tests under a tool that confirms no memory was leaked. If your test code itself leaks memory, that check fails, and you can't tell for certain whether the leak is in the test or in your production code.
As to the more general situation, tests should clean up what they've done. Most unit testing frameworks let you implement a tearDown() method to do this. That way, if one test fails you'll know it's an issue with that test and not an interaction with another test.
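A minimal sketch of that pattern in plain C++ (CreateObject, DestroyObject, and the live-object counter are stand-ins for your own API, not real library calls): the test allocates, checks, and then tears down what it created, so a leak checker run after the suite sees nothing outstanding.

```cpp
#include <cassert>

// Stand-ins for the API under test; the counter mimics what a leak
// checker would track.
struct MyObject { int value = 0; };
static int g_live_objects = 0;

MyObject* CreateObject() { ++g_live_objects; return new MyObject(); }
void DestroyObject(MyObject* obj) { --g_live_objects; delete obj; }

// A test with explicit teardown: every allocation the test makes is
// released before it returns.
void TestCreateObject() {
    MyObject* obj = CreateObject();   // setup
    assert(obj != nullptr);           // the actual check
    DestroyObject(obj);               // teardown: undo what the test did
}
```

With this discipline, any nonzero leak count reported after the suite is known to come from production code rather than from the tests themselves.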
You should really try to create all the mock objects on the stack (or use smart pointers). That way they get destroyed automatically when the test function goes out of scope.
Not directly to do with testing, but if you have C++ code that does stuff like:
MyObject *obj = CreateObject();
where "obj" is not a smart pointer or is not being managed by a class, then you have problems. If I were writing the test, I would say:
MyObject obj;
// tests on obj here
No matter what the results of your tests, obj will be correctly destroyed. Never create an object dynamically in C++ if you can possibly avoid it.
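When the factory really does hand you a raw pointer and stack allocation isn't an option, a std::unique_ptr with a custom deleter still guarantees cleanup on every exit path. This is a sketch with hypothetical CreateObject/DestroyObject functions standing in for the real API:

```cpp
#include <cassert>
#include <memory>

struct MyObject { int value = 42; };
static int g_destroyed = 0;

// Hypothetical C-style factory/destructor pair.
MyObject* CreateObject() { return new MyObject(); }
void DestroyObject(MyObject* obj) { ++g_destroyed; delete obj; }

void TestWithSmartPointer() {
    // DestroyObject runs automatically when obj goes out of scope,
    // even if an assertion or exception fires mid-test.
    std::unique_ptr<MyObject, void(*)(MyObject*)> obj(CreateObject(), DestroyObject);
    assert(obj->value == 42);
}
```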
Typically, you want to test an isolated code path and an isolated piece of functionality, and you want to make it a fair test each time. That means starting fresh, setting up exactly what you need, and then discarding the testing environment when you're through. This avoids the problem of different test cases leaving behind side effects that might alter the results or behaviour of subsequent runs. It also means you can guarantee that your tests are all independent of each other, and that you can run any subset, in any order.
You'll find, however, that it's also quite common to have pre and post-suite set-up and tear-down methods, which set up a whole testing environment (such as a database or whatever) that a whole bunch of unit tests can execute against.
Related
I have two unit tests that share some state (unfortunately I can't change this since the point is to test the handling of this very state).
TEST(MySuite, test1)
{
shared_ptr<MockObject> first(make_shared<MockObject>());
SubscribeToFooCallsGlobal(first);
EXPECT_CALL(*first, Foo(_));//.RetiresOnSaturation();
TriggerFooCalls(); // will call Foo in all subscribed
}
TEST(MySuite, test2)
{
shared_ptr<MockObject> second(make_shared<MockObject>());
SubscribeToFooCallsGlobal(second);
EXPECT_CALL(*second, Foo(_)).Times(1);
TriggerFooCalls(); // will call Foo in all subscribed
}
If I run the tests separately, both are successful. If I run them in the order test1, test2, I will get the following error in test2:
mytest.cpp(42): error: Mock function called more times than expected - returning directly.
Function call: Foo(0068F65C)
Expected: to be called once
Actual: called twice - over-saturated and active
The expectation that fails is the one in test1. The call does take place, but I would like to tell GoogleMock to not care after test1 is complete (in fact, I only want to check expectations in a test while the test is running).
I was under the impression that RetiresOnSaturation would do this, but with it I get:
Unexpected mock function call - returning directly.
Function call: Foo(005AF65C)
Google Mock tried the following 1 expectation, but it didn't match:
mytest.cpp(42): EXPECT_CALL(first, Foo(_))...
Expected: the expectation is active
Actual: it is retired
Expected: to be called once
Actual: called once - saturated and retired
Which I have to admit, confuses me. What does it mean? How can I solve this?
The Google Mock documentation describes your case almost literally:
Forcing a Verification
When it's being destroyed, your friendly mock object will automatically
verify that all expectations on it have been satisfied, and will
generate Google Test failures if not. This is convenient as it leaves
you with one less thing to worry about. That is, unless you are not
sure if your mock object will be destroyed.
How could it be that your mock object won't eventually be destroyed?
Well, it might be created on the heap and owned by the code you are
testing. Suppose there's a bug in that code and it doesn't delete the
mock object properly - you could end up with a passing test when
there's actually a bug.
So you should not expect expectations to be magically "deactivated" at the end of the test case. As cited above, the mock's destructor is the point of verification.
In your case, your mocks are not local variables: they are created in dynamic memory (the heap, in the cited doc) and kept alive by your tested code via SubscribeToFooCallsGlobal(). So a mock created in one test is certainly still alive in the next test.
The easy and proper solution is to unsubscribe at the end of each test case. I do not know whether you have an UnsubscribeToFooCallsGlobal(); if not, create such a function. To be sure it is always called, use the ScopeGuard pattern.
There is also a function to enforce verification manually, Mock::VerifyAndClearExpectations(&mock_object), but use it only when you need the verification somewhere other than the last line of your test case, because that is where destruction should happen anyway.
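A minimal ScopeGuard for this case might look like the following. UnsubscribeToFooCallsGlobal is the hypothetical counterpart to your subscribe function (both are stubbed out here with a counter), and the guard runs it on every exit path, including stack unwinding after a failed expectation:

```cpp
#include <functional>

static int g_subscribers = 0;
void SubscribeToFooCallsGlobal() { ++g_subscribers; }     // stand-in
void UnsubscribeToFooCallsGlobal() { --g_subscribers; }   // hypothetical

// Runs the stored action when it goes out of scope.
class ScopeGuard {
public:
    explicit ScopeGuard(std::function<void()> f) : f_(std::move(f)) {}
    ~ScopeGuard() { f_(); }
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
private:
    std::function<void()> f_;
};

void TestWithGuard() {
    SubscribeToFooCallsGlobal();
    ScopeGuard unsubscribe(UnsubscribeToFooCallsGlobal);
    // ... EXPECT_CALLs and TriggerFooCalls() would go here ...
}   // guard fires: the mock is unsubscribed before the next test runs
```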
I have a class structured like this:
class Server
{
private:
SOCKET listener;
public:
Server(char const * const address, unsigned short int port);
~Server();
void Start();
};
Is there an alternative to relying on the user of the library to delete the object if the Start method throws an exception during, say, a call to CreateIoCompletionPort or listen?
A little bit subjective I know, but is there a best practice for this kind of situation?
I wanted to avoid duplicating the cleanup code and potentially causing problems with double freeing resources and the added complexity of also having to track what is cleaned up and what is not.
EDIT
To clear up some of the questions asked, I am referring to when a user of my code will create an instance of the Server class. I am trying to decide whether or not I should go the route of protecting the class from executing when it's in an invalid state due to an exception being encountered within Start. If the Start method fails because of some issue, then it's an unrecoverable error and the class is in a bad state and cannot go any further. This would be something like a configuration error or a system level error that prevents Start from succeeding, but at the same time leaves the SOCKET in a state that can't be reverted without closing and creating a new socket.
It is entirely unreasonable to expect a programmer to remember deleting everything he allocates, especially in the presence of exceptions: there is a limit to how much a human being can keep in mind, and in a sufficiently large system you run against that limit pretty quickly.
However, deleting everything is what he must do in order to avoid memory leaks. To achieve any degree of success, programmers need to do two things:
Follow defensive coding practices - Prefer objects in automatic storage to pointers. Use smart pointers when you must allocate objects dynamically. Adhere to RAII techniques.
Write exhaustive unit tests, and profile them for memory leaks - Using a memory profiler helps you spot leaks that are otherwise hard to find.
The first part is prevention; the second part is the "safety net". If you are disciplined about using smart pointers and running your unit tests, your code will be exception-safe at the basic level (i.e. it will provide a leak-free guarantee). You could go a step further, do your allocations upfront, and change state only when all allocations have succeeded, to implement transactional semantics for strong exception safety.
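Applied to the Server question above, that approach might be sketched as follows. The handle type and open/close functions are stand-ins for the real socket API; the point is that the raw handle lives in a RAII member, so the destructor releases it even when Start throws:

```cpp
#include <stdexcept>

// Stand-ins for the platform socket API.
using Handle = int;
static int g_open_handles = 0;
Handle OpenHandle() { ++g_open_handles; return 1; }
void ReleaseHandle(Handle) { --g_open_handles; }

// RAII wrapper: owns the handle for its whole lifetime.
class Socket {
public:
    Socket() : h_(OpenHandle()) {}
    ~Socket() { ReleaseHandle(h_); }
    Socket(const Socket&) = delete;
    Socket& operator=(const Socket&) = delete;
private:
    Handle h_;
};

class Server {
public:
    void Start() { throw std::runtime_error("listen failed"); } // simulated failure
private:
    Socket listener;  // cleaned up by ~Server, whatever Start does
};

bool RunServer() {
    try {
        Server s;
        s.Start();
        return true;
    } catch (const std::runtime_error&) {
        return false;   // ~Server has already closed the socket
    }
}
```

The user of the library never has to remember any cleanup: destruction of the Server, however it happens, releases the socket.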
I have an object which has a public void method on it that just modifies internal state. Part of me feels as though it should have tests as it is a public method but how do you go about testing this? Do you bother or do you rely on the other tests which make use of the fact that the internal state has changed?
To add an example I am implementing a gap buffer which has an insert method on it and a moveCursorForward method which just modifies the internal state.
Thanks,
Aly
I think injection is your answer. Create a Cursor object and inject it into your Buffer. Unit test the cursor object so that you know it works perfectly ;). Since cursor is stateful and you've passed it into your buffer, you can assert on the cursor object when you test moveCursorForward.
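A sketch of that injection (Cursor and Buffer here are illustrative, not your actual classes): the buffer operates on the cursor it was handed, so the test can assert on the cursor object directly after calling moveCursorForward.

```cpp
// An independently testable cursor object.
class Cursor {
public:
    int position() const { return pos_; }
    void advance() { ++pos_; }
private:
    int pos_ = 0;
};

// The buffer receives its cursor instead of hiding one internally.
class Buffer {
public:
    explicit Buffer(Cursor& cursor) : cursor_(cursor) {}
    void moveCursorForward() { cursor_.advance(); }
private:
    Cursor& cursor_;
};

bool TestMoveCursorForward() {
    Cursor cursor;
    Buffer buffer(cursor);
    buffer.moveCursorForward();
    return cursor.position() == 1;   // assert on the injected cursor
}
```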
Check out Michael Feathers' Working Effectively with Legacy Code.
This also is where mocking objects might be beneficial.
I prefer separate tests for such methods. For example, there are two cases for the "moveCursorForward" method:
1. cursor is already in the end of the buffer
2. cursor is not in the end of the buffer
So it is likely that case 1 will be skipped if you don't create a special test for it.
In other words you can miss some boundary cases.
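With a buffer of known length (the class here is illustrative), those two cases become two separate tests: advancing in the middle moves the cursor; advancing at the end must leave it clamped.

```cpp
#include <string>
#include <cstddef>

// Illustrative buffer: the cursor cannot move past the end of the text.
class GapBuffer {
public:
    explicit GapBuffer(std::string text) : text_(std::move(text)) {}
    void moveCursorForward() {
        if (cursor_ < text_.size()) ++cursor_;   // case 2: not at end
        // case 1: already at end -> no change
    }
    std::size_t cursor() const { return cursor_; }
private:
    std::string text_;
    std::size_t cursor_ = 0;
};

bool TestMoveInMiddle() {
    GapBuffer b("ab");
    b.moveCursorForward();
    return b.cursor() == 1;      // moved one position
}

bool TestMoveAtEnd() {
    GapBuffer b("");
    b.moveCursorForward();       // boundary case: must not move
    return b.cursor() == 0;
}
```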
I am just starting to learn unit testing and I think it's a really good tool which I want to start using for all my projects, however I'm still not sure how to test some things.
As an example, I am implementing a queue and one of its methods is AddNode; I also have a mock object representing the Node in the queue. After writing the method I don't know what I should test for. The method is void, so I can't test for a return value. Maybe I should test for an out-of-memory exception or some other exception? Or maybe there is no need to do any testing in this case.
For AddNode you could be testing the following:
queue is not empty afterwards
size has increased by one
if the queue checks for duplication, that the size has not increased after adding a duplicate
a roundtrip: new Queue, addNode, getNode returns the same Node again
if the queue checks for invalid objects (no nulls, wrong node type or something), that there is an exception when you try to add these things.
Note that some of these tests do not test AddNode in isolation (they also need to call other methods). That is not a problem. You are unit-testing the class, not individual methods.
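A few of those checks might look like this against a minimal queue (the class and names are illustrative, not the asker's actual code):

```cpp
#include <deque>
#include <string>
#include <cstddef>

struct Node { std::string name; };

// Minimal queue with just the operations the tests need.
class Queue {
public:
    void AddNode(const Node& n) { nodes_.push_back(n); }
    Node GetNode() { Node n = nodes_.front(); nodes_.pop_front(); return n; }
    std::size_t Size() const { return nodes_.size(); }
    bool Empty() const { return nodes_.empty(); }
private:
    std::deque<Node> nodes_;
};

bool TestAddNode() {
    Queue q;
    std::size_t before = q.Size();
    q.AddNode(Node{"a"});
    if (q.Empty()) return false;               // not empty afterwards
    if (q.Size() != before + 1) return false;  // size grew by one
    return q.GetNode().name == "a";            // roundtrip returns same node
}
```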
I had to start writing some unit tests, using QualityTools.UnitTestFramework, for a web service layer we have developed, but my approach seemed to be incorrect from the beginning.
It seems that unit tests should be able to run in any order and not rely on other tests.
My initial thought was to have something similar to the following tests (a simplified example), which would run as an ordered test in the same order.
AddObject1SuccessTest
AddObject2WithSameUniqueCodeTest(relies on first test having created object1 first then expects fail)
AddObject2SuccessTest
UpdateObject2WithSameUniqueCodeTest(relies on the first test having created object1 and the third test having created object2 first, then expects fail)
UpdateObject2SuccessTest
GetObjectListTest
DeleteObjectsTest(using added IDs)
However, there is no state between tests and no apparent way of passing say added IDs to the deletetest for example.
So, is it then the case that the correct approach for unit testing complex interactions is by scenario?
For example
AddObjectSuccessTest
(which creates an object, gets it to validate the data and then deletes it)
AddObjectWithSameUniqueCodeTest
(which creates object 1 then attempts to create object 2 with a fail and then deletes object 1)
UpdateObjectWithSameUniqueCodeTest
(which creates object 1 then creates object 2 and then attempts to update object 2 to have the same unique code as object 1 with a fail and then deletes object 1 and object 2)
Am I coming at this wrong?
Thanks
It is a tenet of unit testing that each test case should be independent of any other test case. MSTest (like all other unit testing frameworks) enforces this by not guaranteeing the order in which tests are run - some (xUnit.NET) even go so far as to randomize the order between each test run.
It is also a recommended best practice that units are condensed into simple interactions. Although no hard and fast rule can be provided, it's not a unit test if the interaction is too complex. In any case, complex tests are brittle and have a very high maintenance overhead, which is why simple tests are preferred.
It sounds like you have a case of shared state between your tests. This leads to interdependent tests and should be avoided. Instead you can write reusable code that sets up the pre-condition state for each test, ensuring that this state is always correct.
Such a pre-condition state is called a Fixture. The book xUnit Test Patterns contains lots of information and guidance on how to manage Fixtures in many different scenarios.
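In code, that reusable setup is just a function (or a fixture base class) every test calls to get a fresh, known state. Sketched here with a simple in-memory store standing in for the real persistence layer:

```cpp
#include <map>
#include <string>

using Store = std::map<int, std::string>;

// The Fixture: builds the same known pre-condition state for every test,
// so no test depends on what another test left behind.
Store MakeFixture() {
    return Store{{1, "object1"}};
}

bool TestAddObject() {
    Store s = MakeFixture();      // fresh state, independent of test order
    s[2] = "object2";
    return s.size() == 2;
}

bool TestDeleteObject() {
    Store s = MakeFixture();      // unaffected by TestAddObject
    s.erase(1);
    return s.empty();
}
```

Because each test builds its own state, the two tests pass in any order, or alone.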
As a complement to what Mark said, yes, each test should be completely independent from the others, and, to use your terms, each test should be a self-contained scenario, which can run independently of the others.
I assume from what you describe that you are testing persistence, because your steps delete the entities you created at the end of each test to clean up the state. Ideally, a unit test runs completely in memory, with no shared state between tests. One way to achieve that is to use Mocks.
I assume you have something like a Repository in place, so that your class calls Repository.Add(myNewObject), which calls something like Repository.ValidateObjectCanBeAdded(myNewObject). Rather than testing against the real repository, which adds objects to the database and requires deleting them afterwards to clean up the state, you can create an interface IRepository with the same two methods, and use a Mock to check that when your class calls IRepository it is exercising the right methods, with the right arguments, in the right order. It also gives you the ability to set the "fake" repository to any state you want, in memory, without having to physically add or delete records from real storage.
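A hand-rolled sketch of that idea (IRepository and the method names come from the description above; a mocking framework would generate the fake for you): the test verifies the interaction with the repository instead of touching a database.

```cpp
#include <string>
#include <vector>

struct MyObject { std::string code; };

// The interface the class under test depends on.
class IRepository {
public:
    virtual ~IRepository() = default;
    virtual void Add(const MyObject& obj) = 0;
};

// In-memory fake: records calls so the test can assert on them.
class FakeRepository : public IRepository {
public:
    void Add(const MyObject& obj) override { added.push_back(obj.code); }
    std::vector<std::string> added;
};

// Class under test, with the repository injected.
class Service {
public:
    explicit Service(IRepository& repo) : repo_(repo) {}
    void Create(const std::string& code) { repo_.Add(MyObject{code}); }
private:
    IRepository& repo_;
};

bool TestCreateAddsToRepository() {
    FakeRepository repo;
    Service service(repo);
    service.Create("ABC");
    // The interaction, not the database, is what gets verified.
    return repo.added.size() == 1 && repo.added[0] == "ABC";
}
```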
Hope this helps!