Consider the following code:
#include <vector>

class ToBeTested {
public:
    void doForEach() {
        for (std::vector<Contained>::iterator it = m_contained.begin(); it != m_contained.end(); ++it) {
            doOnce(*it);
            doTwice(*it);
            doTwice(*it);
        }
    }
    void doOnce(Contained& c) {
        // do something
    }
    void doTwice(Contained& c) {
        // do something
    }
    // other methods
private:
    std::vector<Contained> m_contained;
};
I want to test that if I fill the vector with three values, my functions will be called in the proper order and the proper number of times. For example, my test could look something like this:
tobeTested.AddContained(one);
tobeTested.AddContained(two);
tobeTested.AddContained(three);
BEGIN_PROC_TEST()
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
tobeTested.doForEach()
END_PROC_TEST()
How do you recommend testing this? Is there any way to do it with CppUnit or GoogleTest? Maybe some other unit test framework allows such tests?
I understand that this is probably impossible without calling some debug hooks from within these methods, but can it at least be done automatically by some test framework? I don't want to scan trace logs and check their correctness by hand.
UPD: I'm trying to check not only the state of the objects, but also the execution order, so that I can catch performance issues at the earliest possible stage (and in general I want to know that my code is executed exactly as I expect).
You should be able to use any good mocking framework to verify that calls to a collaborating object are done in a specific order.
However, you don't generally test that one method makes some calls to other methods on the same class... why would you?
Generally, when you're testing a class, you only care about testing its publicly visible state. If you test
anything else, your tests will prevent you from refactoring later.
I could provide more help, but I don't think your example is consistent (Where is the implementation for the AddContained method?).
If you're interested in performance, I recommend that you write a test that measures performance.
Check the current time, run the method you're concerned about, then check the time again. Assert that the total time taken is less than some value.
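For instance, here is a minimal sketch of such a timing test using GoogleTest and <chrono>. The sorted vector merely stands in for whatever operation you are worried about, and the 50 ms budget is an arbitrary number you would tune to your real requirement:
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <vector>
#include <gtest/gtest.h>

TEST(PerformanceTest, OperationStaysUnderBudget) {
    // Illustrative workload: a reverse-sorted vector to sort.
    std::vector<int> values(100000);
    for (std::size_t i = 0; i < values.size(); ++i)
        values[i] = static_cast<int>(values.size() - i);

    const auto start = std::chrono::steady_clock::now();
    std::sort(values.begin(), values.end());   // the operation under test
    const auto elapsed = std::chrono::steady_clock::now() - start;

    // Fail if the operation took longer than the (illustrative) budget.
    EXPECT_LT(elapsed, std::chrono::milliseconds(50));
}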
The problem with checking that methods are called in a certain order is that your code will eventually change, and you don't want to have to update your tests every time it does. You should focus on testing the actual requirement instead of testing the implementation detail that happens to meet that requirement.
That said, if you really want to test that your methods are called in a certain order, you'll need to do the following:
Move them to another class, call it Collaborator
Add an instance of this other class to the ToBeTested class
Use a mocking framework to set the instance variable on ToBeTested to be a mock of the Collaborator class
Call the method under test
Use your mocking framework to assert that the methods were called on your mock in the correct order.
I'm not a native cpp speaker so I can't comment on which mocking framework you should use, but I see some other commenters have added their suggestions on this front.
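For illustration, here is a rough sketch of those steps using Google Mock (gMock), which ships alongside GoogleTest. Everything below is an assumption layered on top of the question's code: ToBeTested is rewritten to receive a Collaborator through its constructor, and Contained is reduced to an empty stand-in.
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <vector>

struct Contained {};   // stand-in for the real Contained type

// Steps 1-2: the per-element work now lives behind an interface
// that ToBeTested receives from the outside.
class Collaborator {
public:
    virtual ~Collaborator() {}
    virtual void doOnce(Contained& c) = 0;
    virtual void doTwice(Contained& c) = 0;
};

class ToBeTested {
public:
    explicit ToBeTested(Collaborator& collaborator) : m_collaborator(collaborator) {}
    void AddContained(const Contained& c) { m_contained.push_back(c); }
    void doForEach() {
        for (std::vector<Contained>::iterator it = m_contained.begin(); it != m_contained.end(); ++it) {
            m_collaborator.doOnce(*it);
            m_collaborator.doTwice(*it);
            m_collaborator.doTwice(*it);
        }
    }
private:
    Collaborator& m_collaborator;
    std::vector<Contained> m_contained;
};

// Step 3: a gMock mock of the collaborator.
class MockCollaborator : public Collaborator {
public:
    MOCK_METHOD(void, doOnce, (Contained&), (override));
    MOCK_METHOD(void, doTwice, (Contained&), (override));
};

// Steps 4-5: call the method under test and verify the call order.
TEST(ToBeTestedTest, CallsCollaboratorInExpectedOrder) {
    MockCollaborator mock;
    ToBeTested tested(mock);
    tested.AddContained(Contained());
    tested.AddContained(Contained());
    tested.AddContained(Contained());

    ::testing::InSequence seq;   // expectations below must be satisfied in this order
    for (int i = 0; i < 3; ++i) {
        EXPECT_CALL(mock, doOnce(::testing::_)).Times(1);
        EXPECT_CALL(mock, doTwice(::testing::_)).Times(2);
    }

    tested.doForEach();
}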
You could check out mockpp.
Instead of trying to figure out how many times your functions were called, and in what order, find a set of inputs that can only produce the expected output if the calls happen in the right order.
Some mocking frameworks allow you to set up ordered expectations, which lets you say exactly which function calls you expect in a certain order. For example, RhinoMocks for C# allows this.
I am not a C++ coder so I'm not aware of what's available for C++, but that's one type of tool that might allow what you're trying to do.
http://msdn.microsoft.com/en-au/magazine/cc301356.aspx
This is a good article about Context Bound Objects. It contains some fairly advanced material, but if you are willing to dig in and really want to understand this kind of thing, it will be very helpful.
At the end you will be able to write something like:
[CallTracingAttribute()]
public class TraceMe : ContextBoundObject
{...}
You could use ACE (or similar) debug frameworks, and in your test, configure the debug object to stream to a file. Then you just need to check the file.
Related
I'm new to test-driven development and this is the first time I'm trying to use it in a simple project.
I have a class, and I need to test creation, insertion and deletion of objects of this class. If I write three separate test functions, I have to duplicate the initialization code in each of them. On the other hand, if I put all the tests in one test function, that contradicts "one test per function". What should I do?
Here's the situation:
tst_create()
{
createHead(head);
createBody(body);
createFoot(foot);
}
tst_insert()
{
createHead(head);
createBody(body);
createFoot(foot);
obj_id=insert(obj); //Also I need to delete obj_id somehow in order to preserve old state
}
tst_delete()
{
createHead(head);
createBody(body);
createFoot(foot);
obj_id=insert(obj);
delete(obj_id);
}
vs
tstCreateInsertDelete()
{
createHead(head);
createBody(body);
createFoot(foot);
obj_id=insert(obj);
delete(obj_id);
}
Rather than "One test per function", try thinking about it as, "One aspect of behaviour per function".
What does inserting an object give you? How about deleting an object? Why are these valuable? How can you tell you've done them? Write an example of how the code might be used, and why that behaviour is valuable. That then becomes your test.
When you've worked out what the behaviour is that you're interested in, extract out the duplication only if it makes the test more readable. TDD isn't just about testing; it's also about providing documentation, and helping you think about the responsibility of each element of code and the design of that code. The tests will probably be read far more than they're written, so readability has to come first.
If necessary, put all the behaviour you're interested in in one method, and just make sure it's readable. You can add comments if required.
Factor out the duplication in your tests.
Depending on your test framework, there may be support for defining a setup method that's called before each test execution and a teardown method that's called after each test.
Regardless, you can extract the common stuff so that you only have to repeat a call to a single shared setup.
If you tell us what language and test framework you use, we might be able to give more specific advice.
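For example, with GoogleTest (C++, as in the first question) a fixture's SetUp() runs before every test, so the shared initialization lives in one place. Here is a self-contained sketch where a std::vector stands in for the object that needs the repeated setup:
#include <vector>
#include <gtest/gtest.h>

class ContainerTest : public ::testing::Test {
protected:
    void SetUp() override {
        // Runs before every TEST_F in this fixture: the shared initialization.
        values = {1, 2, 3};
    }
    std::vector<int> values;
};

TEST_F(ContainerTest, InsertAddsAnElement) {
    values.push_back(4);
    EXPECT_EQ(4u, values.size());
}

TEST_F(ContainerTest, EraseRemovesAnElement) {
    values.erase(values.begin());
    EXPECT_EQ(2u, values.size());
}

Most frameworks offer the same idea under different names (setUp/tearDown in CppUnit and JUnit, [TestInitialize] in MSTest, [SetUp] in NUnit).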
Suppose I have several unit tests in a test class ([TestClass] in VSUnit in my case). I'm trying to test just one thing in each test (which doesn't mean just one Assert, though). Imagine there's one test (e.g. Test_MethodA()) that tests a method used in other tests as well. I do not want to put an assert on this method in the other tests that use it, to avoid duplication and maintainability issues, so the assert lives only in this one test. Now when this test fails, all tests that depend on correct execution of that method fail as well. I want to be able to locate the problem faster, so I want to be pointed to Test_MethodA somehow. It would help, for example, if I could make some of the tests in the test class execute in a particular order; when they fail, I'd start looking for the cause of the failure in the first failing test. Do you have any idea how to do this?
Edit: By suggesting that a solution would be to execute the tests in a particular order I have probably gone too far, and in the wrong direction. I don't care about the order of the tests. It's just that some of the tests will always fail if a prerequisite isn't valid. E.g. I have a test class that tests a DAO class (OK, probably not a unit test, but there's logic in the database stored procedures that needs to be tested, and that's not the point here I think). I need to insert some records into a table in order to test that a method responsible for retrieving the records (let's call it GetAll()) gets them all in the correct order. I do the insert by using a method on the DAO class; let's call it Insert(). I have tests in place that verify that the Insert() method works as expected. Now I want to test the GetAll() method. In order to get the database into the desired state, I use the Insert() method. If Insert() doesn't work, most tests for GetAll() will fail. I'd prefer to mark the tests that can't pass because Insert() doesn't work as inconclusive rather than failed. It would ease finding the cause of the problem if I knew which method/test to look into first.
You can't (and shouldn't) execute unit tests in a specific order. The underlying reason for this is to prevent Interacting Tests - I realize that your motivation for requesting such a feature is different, but that's the reason why unit test frameworks don't allow you to order tests. In fact, last time I checked, xUnit.net even randomizes the order.
One could argue that the fact that some of your tests depend on a different method call on the same class is a symptom of tight coupling, but that's not always the case (state machines come to mind).
However, if possible, consider using a Back Door instead of the other method in question.
If you can't do either that or decouple the interdependency (e.g. by making the first method virtual and using the Extract and Override technique), you will have to live with it.
Here's an example:
public class MyClass
{
public virtual void FirstMethod() { /* do something... */ }
public void SecondMethod() {}
}
Since FirstMethod is virtual, you can derive from MyClass and override its behavior. You can also use a dynamic mock to do that for you. With Moq, it would look like this:
var sutStub = new Mock<MyClass>();
// by default, Moq overrides all virtual methods without calling base
// Now invoke both methods in sequence:
sutStub.Object.FirstMethod(); // overriden by Moq, so it does nothing
sutStub.Object.SecondMethod();
I think I would indeed have the assertion on the method_A() result in every test relying on its result, even if this introduces some duplication. Then I would use the assertion message to point to the method_A() failure.
assert("method_A() returned true", true, rc);
Perhaps I will end up extracting the method_A() call and the assertion into a helper function to remove the duplication.
Now let's imagine method_A() queries an object and returns it, or NULL when no object is found. Then this assertion is a guard; it is necessary in languages such as C and C++ that do not have a NullPointerException.
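To make the guard-helper idea concrete, here is a small sketch in C++ with GoogleTest assertions (staying with the first question's setting). The Dao stand-in, its Insert() return convention, and the helper name are all invented for illustration:
#include <string>
#include <vector>
#include <gtest/gtest.h>

// Hypothetical stand-in for the DAO under test.
struct Dao {
    std::vector<std::string> rows;
    int Insert(const std::string& row) {
        rows.push_back(row);
        return static_cast<int>(rows.size()) - 1;   // returns the new id
    }
};

// Helper: every test that needs a record goes through this guard, so a broken
// Insert() produces one clear message instead of many mysterious failures.
int insertOrFail(Dao& dao, const std::string& row) {
    int id = dao.Insert(row);
    EXPECT_LE(0, id) << "Insert() failed; look at the Insert() tests first";
    return id;
}

TEST(DaoGetAllTest, ReturnsAllInsertedRows) {
    Dao dao;
    insertOrFail(dao, "first");
    insertOrFail(dao, "second");
    EXPECT_EQ(2u, dao.rows.size());   // the assertion this test actually cares about
}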
I'm afraid you can't do this. The only solution is to redesign your code and break it up into smaller methods so that unit tests can call these one by one. Of course this isn't always desirable.
With Visual Studio you can order your tests: see here. But I'd like to advise you to stay away from this technique as much as possible: unit tests are meant to be run anywhere, anytime and in every order.
EDIT: why is this a problem for you? All failing tests point to the same method anyway...
If a function just calls another function or performs some action, how do I test it? Currently, I require all functions to return a value so that I can assert on the return values. However, I think this approach messes up the API, because in production code I don't need those functions to return anything. Are there any good solutions?
I think mock objects might be a possible solution. When should I use asserts and when should I use mock objects? Is there any general guideline?
Thank you
Let's use BufferedStream.Flush() as an example method that doesn't return anything; how would we test this method if we had written it ourselves?
There is always some observable effect, otherwise the method would not exist. So the answer can be to test for the effect:
[Test]
public void FlushWritesToUnderlyingStream()
{
var memory = new byte[10];
var memoryStream = new MemoryStream(memory);
var buffered = new BufferedStream(memoryStream);
buffered.WriteByte(0xFF);
Assert.AreEqual(0x00, memory[0]); // not yet flushed, memory unchanged
buffered.Flush();
Assert.AreEqual(0xFF, memory[0]); // now it has changed
}
The trick is to structure your code so that these effects aren't too hard to observe in a test:
explicitly pass collaborator objects, just like how the memoryStream is passed to the BufferedStream in the constructor. This is called dependency injection.
program against an interface, just like how BufferedStream is programmed against the Stream interface. This enables you to pass simpler, test-friendly implementations (like MemoryStream in this case) or use a mocking framework (like MoQ or RhinoMocks), which is all great for unit testing.
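The same two ideas can be sketched in C++ (the Sink and BufferedWriter names here are invented for illustration): the writer takes its output channel through an abstract interface, so a test can pass an in-memory implementation and observe the effect of Flush() directly.
#include <cassert>
#include <string>

// Abstract collaborator: production code would pass a file- or socket-backed
// implementation; the test passes the in-memory one below.
struct Sink {
    virtual ~Sink() {}
    virtual void Write(const std::string& data) = 0;
};

struct MemorySink : Sink {
    std::string contents;   // the observable effect lives here
    void Write(const std::string& data) override { contents += data; }
};

// The class under test: buffers writes and pushes them to the Sink on Flush().
class BufferedWriter {
public:
    explicit BufferedWriter(Sink& sink) : m_sink(sink) {}   // constructor injection
    void Write(const std::string& data) { m_buffer += data; }
    void Flush() { m_sink.Write(m_buffer); m_buffer.clear(); }
private:
    Sink& m_sink;
    std::string m_buffer;
};

int main() {
    MemorySink memory;
    BufferedWriter writer(memory);
    writer.Write("hello");
    assert(memory.contents.empty());     // not yet flushed
    writer.Flush();
    assert(memory.contents == "hello");  // now the effect is observable
    return 0;
}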
Sorry for not answering directly, but... are you sure you have the right balance in your testing?
I wonder if you are not testing too much?
Do you really need to test a function that merely delegates to another?
Returns only for the tests
I agree with you when you write you don't want to add return values that are useful only for the tests, not for production. This clutters your API, making it less clear, which is a huge cost in the end.
Also, the return value could look correct to the test while not actually reflecting what the implementation did, so the test is probably not proving much anyway...
Costs
Note that testing has an initial cost, the cost of writing the test.
If the implementation is very simple, the risk of failure is ridiculously low, but the time spent testing still accumulates (over hundreds or thousands of cases, it ends up being pretty serious).
But more than that, each time you refactor your production code, you will probably have to refactor your tests also. So the maintenance cost of your tests will be high.
Testing the implementation
Testing what a method does internally (what other methods it calls, etc.) is criticized, just like testing a private method... There are several points to make:
This is fragile and costly: any code refactoring will break the tests, so it increases the maintenance cost.
Testing a private method does not bring much safety to your production code, because your production code is not making that call. It's like verifying something you won't actually need.
When code simply delegates to another method, the implementation is so simple that the risk of mistakes is very low, and the code almost never changes, so what works once (when you write it) will rarely break...
Yes, mock is generally the way to go, if you want to test that a certain function is called and that certain parameters are passed in.
Here's how to do it in Typemock (C#):
Isolate.Verify.WasCalledWithAnyArguments(()=> myInstance.WeatherService("","", null,0));
Isolate.Verify.WasCalledWithExactArguments(()=> myInstance.StockQuote("","", null,0));
In general, you should use Assert as much as possible, until you can't (for example, when you have to test whether you call an external web service API properly; in that case you can't, or don't want to, communicate with the web service directly). In that case, you use a mock to verify that a certain web service method is called with the correct parameters.
"I want to know when should I use assert and when should I use mock objects? Is there any general guide line?"
There's an absolute, fixed and important rule.
Your tests must contain asserts. The presence of an assert is how you tell whether the test passed or failed. A test is a method that calls the "component under test" (a function, an object, whatever) in a specific fixture and makes specific assertions about the component's behavior.
A test asserts something about the component being tested. Every test must have an assert, or it isn't a test. If it doesn't have an assert, it's not clear what you're doing.
A mock is a replacement for a component to simplify the test configuration. It is a "mock" or "imitation" or "false" component that replaces a real component. You use mocks to replace something and simplify your testing.
Let's say you're going to test function a. And function a calls function b.
The tests for function a must have an assert (or it's not a test).
The tests for a may need a mock for function b. To isolate the two functions, you test a with a mock for function b.
The tests for function b must have an assert (or it's not a test).
The tests for b may not need anything mocked. Or perhaps b makes an OS API call, or writes to a file; those may need to be mocked.
So, I'm starting to write some logic for a simple program (toy game on the side). You have a specific ship (called a setup) that is a ship + modules. You start with an empty setup based off a ship and then add modules to that setup. Ships also have a numbered array of module positions.
var setup = new Setup(ship); // ship is a stub (IShip) defined someplace else
var module = new Mock<IModule>().Object;
setup.AddModule(module, 1); // 1 = which position
So, this is the code in my test method. I now need to assert on this code. Well, I need a getter method right?
Assert.AreEqual(module, setup.GetModule(1));
This might sound really dumb and I'm worrying about nothing, but for some stupid reason I'm concerned with adding a method just to assert that a test passed.
Is this fine, and in fact part of the design process that TDD encourages? For instance, I know I need an AddModule method because I want to test it, and the fact that this requires a GetModule method to test it is simply an evolution of my design via TDD.
Or is this kind of a smell because I don't even know if I'll really need GetModule in my code and it will only be used in a test?
For example, adding a module is going to ultimately affect different stats of a setup (armor, shield, firepower, etc). The thing is those are going to be complex, and I wanted to start with a simple test. But in the end, those are the public attributes I care about -- a setup is defined by its stats, not by a list of modules.
Interesting question. I'm glad to hear you're writing the tests first.
If you let the design manifest itself through the tests, you're more likely to build only the parts you'll need. But is this the best design? Maybe not, but don't let that discourage you -- your add method works!
It may be too early to tell if you'll need the GetModule method later. For now, build up the functionality you need and go green, then slowly refactor it (going from red to green again) to get the design you want.
Part of evolving the design is to start with baby steps like a simple method and then grow into the complex stats (eventually dropping this method and changing the test) when enough supports it. When doing TDD, don't expect that the first test you write is targeting the ideal interface. It is OK to have some messiness that will get dropped as you evolve the design.
That being said, if you see no public purpose to the method, try to limit its visibility as much as is reasonable to the test code. Although even that should eventually go away as you get to build out the rest of the system and have something real to test as a side effect of the set method.
I would be wary of introducing a public method in my class that is only used for testing.
There are various ways how you could test this:
Reflection: The GetModule method is a private method in your class (this could also work if your 'stats' are private) and you can access it in your test method via reflection. This will work well; the only trouble is that you will not get any compiler errors if you change the name of the private method or add/delete some variables (but, of course, your test will fail and you will know early).
Inheritance: The GetModule method could be protected (only inheritance visible) and your test class could inherit from the main class. This way your test class gets access to this method, but this is not really exposed to the outside world.
Assert the side-effect: This is where you really think about what it means to add a module to the system. If it is going to affect some 'stats' as you put it, you could write tests which assert that the stats are appropriately modified.
So I have a factory class and I'm trying to work out what the unit tests should do. From this question I could verify that the interface returned is of a particular concrete type that I would expect.
What should I check for if the factory is returning concrete types (because there is no need - at the moment - for interfaces to be used)? Currently I'm doing something like the following:
[Test]
public void CreateSomeClassWithDependencies()
{
// m_factory is instantiated in the SetUp method
var someClass = m_factory.CreateSomeClassWithDependencies();
Assert.IsNotNull(someClass);
}
The problem with this is that the Assert.IsNotNull seems somewhat redundant.
Also, my factory method might be setting up the dependencies of that particular class like so:
public SomeClass CreateSomeClassWithDependencies()
{
return new SomeClass(CreateADependency(), CreateAnotherDependency(),
CreateAThirdDependency());
}
And I want to make sure that my factory method sets up all these dependencies correctly. Is there no other way to do this then to make those dependencies public/internal properties which I then check for in the unit test? (I'm not a big fan of modifying the test subjects to suit the testing)
Edit: In response to Robert Harvey's question, I'm using NUnit as my unit testing framework (but I wouldn't have thought that it would make too much of a difference)
Often, there's nothing wrong with creating public properties that can be used for state-based testing. Yes: It's code you created to enable a test scenario, but does it hurt your API? Is it conceivable that other clients would find the same property useful later on?
There's a fine line between test-specific code and Test-Driven Design. We shouldn't introduce code that has no other potential than to satisfy a testing requirement, but it's quite alright to introduce new code that follow generally accepted design principles. We let the testing drive our design - that's why we call it TDD :)
Adding one or more properties to a class to give the user a better possibility of inspecting that class is, in my opinion, often a reasonable thing to do, so I don't think you should dismiss introducing such properties.
Apart from that, I second nader's answer :)
If the factory is returning concrete types, and you're guaranteeing that your factory always returns a concrete type and not null, then no, there isn't too much value in the test. It does allow you to make sure, over time, that this expectation isn't violated and that things like exceptions aren't thrown.
This style of test simply makes sure that, as you make changes in the future, your factory behaviour won't change without you knowing.
If your language supports it, for your dependencies, you can use reflection. This isn't always the easiest to maintain, and couples your tests very tightly to your implementation. You have to decide if that's acceptable. This approach tends to be very brittle.
But you really seem to be trying to separate which classes are constructed, from how the constructors are called. You might just be better off with using a DI framework to get that kind of flexibility.
By new-ing up all your types as you need them, you don't give yourself many seams (a seam is a place where you can alter behaviour in your program without editing in that place) to work with.
With the example as you give it though, you could derive a class from the factory. Then override / mock CreateADependency(), CreateAnotherDependency() and CreateAThirdDependency(). Now when you call CreateSomeClassWithDependencies(), you are able to sense whether or not the correct dependencies were created.
Note: the definition of "seam" comes from Michael Feathers' book, "Working Effectively with Legacy Code". It contains examples of many techniques for adding testability to untested code. You may find it very useful.
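A rough C++ sketch of that derive-and-override idea (the Factory, its virtual Create methods, and the dependency types here are invented stand-ins for the code in the question):
#include <cassert>
#include <memory>

struct Dependency {};
struct AnotherDependency {};
struct SomeClass {
    SomeClass(std::unique_ptr<Dependency>, std::unique_ptr<AnotherDependency>) {}
};

class Factory {
public:
    virtual ~Factory() = default;
    std::unique_ptr<SomeClass> CreateSomeClassWithDependencies() {
        return std::make_unique<SomeClass>(CreateADependency(), CreateAnotherDependency());
    }
protected:
    // The seams: virtual, so a testing subclass can sense or replace them.
    virtual std::unique_ptr<Dependency> CreateADependency() {
        return std::make_unique<Dependency>();
    }
    virtual std::unique_ptr<AnotherDependency> CreateAnotherDependency() {
        return std::make_unique<AnotherDependency>();
    }
};

// Testing subclass: records that each creation method was actually used.
class SensingFactory : public Factory {
public:
    int dependencyCalls = 0;
    int anotherCalls = 0;
protected:
    std::unique_ptr<Dependency> CreateADependency() override {
        ++dependencyCalls;
        return Factory::CreateADependency();
    }
    std::unique_ptr<AnotherDependency> CreateAnotherDependency() override {
        ++anotherCalls;
        return Factory::CreateAnotherDependency();
    }
};

int main() {
    SensingFactory factory;
    std::unique_ptr<SomeClass> product = factory.CreateSomeClassWithDependencies();
    assert(product != nullptr);
    assert(factory.dependencyCalls == 1);   // each dependency was created exactly once
    assert(factory.anotherCalls == 1);
    return 0;
}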
What we do is create the dependencies with factories, and we use a dependency injection framework to substitute mock factories for the real ones when the test is run. Then we set up the appropriate expectations on those mock factories.
You can always check stuff with reflection. There is no need to expose something just for unit tests. I find it quite rare that I need to reach in with reflection and it may be a sign of bad design.
Looking at your sample code, yes, the Assert.IsNotNull seems somewhat redundant; it depends on the way you designed your factory, since some factories return null objects rather than throwing an exception.
As I understand it you want to test that the dependencies are built correctly and passed to the new instance?
If I was not able to use a framework like google guice, I would probably do it something like this (here using JMock and Hamcrest):
@Test
public void CreateSomeClassWithDependencies()
{
dependencyFactory = context.mock(DependencyFactory.class);
classAFactory = context.mock(ClassAFactory.class);
myDependency0 = context.mock(MyDependency0.class);
myDependency1 = context.mock(MyDependency1.class);
myDependency2 = context.mock(MyDependency2.class);
myClassA = context.mock(ClassA.class);
context.checking(new Expectations(){{
oneOf(dependencyFactory).createDependency0(); will(returnValue(myDependency0));
oneOf(dependencyFactory).createDependency1(); will(returnValue(myDependency1));
oneOf(dependencyFactory).createDependency2(); will(returnValue(myDependency2));
oneOf(classAFactory).createClassA(myDependency0, myDependency1, myDependency2);
will(returnValue(myClassA));
}});
builder = new ClassABuilder(dependencyFactory, classAFactory);
assertThat(builder.make(), equalTo(myClassA));
}
(if you cannot mock ClassA you can assign a non-mock version to myClassA using new)