Unit testing Visitor pattern architecture - C++

I've introduced visitors as one of the core architecture ideas in one of my apps. I have several visitors that operate on the same structures. Now, how should I test them? Some tests I'm thinking of are a bit larger than a unit test should be (integration tests? whatever), but I still want to write them. How would you test code like the C++ sample from the Wikipedia article on the Visitor pattern?

Unit testing isn't really about testing patterns, it is about testing the correct implementation of methods and functions. The visitor pattern is a specific class structure, and for each of the classes (ConcreteVisitor and ConcreteElement, specifically) involved you'll want unit tests.
When you've developed confidence that your class methods are behaving OK, you could use your unit test framework to develop integration tests as well. Do not start integration testing right away: you'll find that you develop a lot of integration tests that are actually testing the behavior of a specific class, i.e. unit tests.
Whether you need mock objects or can use 'real' objects is a different matter. This depends a lot on whether the objects behave nicely enough for unit test purposes (i.e. they do not pull in a lot of additional dependencies etc.), and whether the objects themselves are unit tested (i.e. you need to be able to trust these objects 100%). The mock vs. real objects issue has been addressed on Stack Overflow before, so search the unit-testing tags.

Make a test visitor object and make it visit things, then test that it visited the right things.

You can create mock objects and have your visitors visit them, and then create mock visitors, and test that the right actions were performed.
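For example, here is a minimal sketch of that idea in C# (the element names loosely mirror the car example from the Wikipedia article, but they are illustrative, not the exact sample). A recording visitor verifies that double dispatch reaches the right overloads:

public interface ICarElementVisitor
{
    void Visit(Wheel wheel);
    void Visit(Body body);
}

public interface ICarElement
{
    void Accept(ICarElementVisitor visitor);
}

public class Wheel : ICarElement
{
    public readonly string Name;
    public Wheel(string name) { Name = name; }
    public void Accept(ICarElementVisitor visitor) { visitor.Visit(this); }
}

public class Body : ICarElement
{
    public void Accept(ICarElementVisitor visitor) { visitor.Visit(this); }
}

// Test-only visitor: records what it visited so a test can assert on it.
public class RecordingVisitor : ICarElementVisitor
{
    public readonly List<string> Visited = new List<string>();
    public void Visit(Wheel wheel) { Visited.Add("Wheel:" + wheel.Name); }
    public void Visit(Body body) { Visited.Add("Body"); }
}

[TestClass]
public class VisitorDispatchTests
{
    [TestMethod]
    public void ElementsDispatchToTheCorrectVisitOverload()
    {
        var visitor = new RecordingVisitor();
        new Wheel("front-left").Accept(visitor);
        new Body().Accept(visitor);
        CollectionAssert.AreEqual(new[] { "Wheel:front-left", "Body" }, visitor.Visited);
    }
}

Each concrete visitor's own logic can then be unit tested separately, by handing it real or fake elements and asserting on its output.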


How to organize integration tests?

When writing unit tests, I usually have one test class per production class, so my hierarchy will look something like this:
src/main
 -package1
  -classA
  -classB
 -package2
  -classC
src/test
 -package1
  -classATests
  -classBTests
 -package2
  -classCTests
However when doing integration tests the organization becomes less rigid. For example, I may have a test class that tests classA and classB in conjunction. Where would you put it? What about a test class that tests classA, classB and classC together?
Also, integration tests usually require external properties or configuration files. Where do you place them and do you use any naming convention for them?
Our integration tests tend to be organised the same way our specifications are, and they tend to be grouped by category and/or feature.
I'd concur with f4's answer. Tests at this level (higher than unit tests) usually have no correlation with particular classes. Your tests should stick to project requirements and specifications.
In case you really need to develop a testing project tailored to your test requirements, I'd recommend the following approach: a separate project with packages per requirement or user story (depending on how you manage requirements).
For example:
src/itest
 -package1 (corresponds to story #1)
  -classA (test case 1)
  -classB (test case 2)
 -package2 (corresponds to story #2)
  -classC (test case 3)
 -packageData (common test data and utilities)
However, keep in mind that integration or system-level testing is usually a complicated task, and its scope can easily grow beyond what a testing project can cover. You should be ready to consider third-party test automation tools, because at the integration or system level they are often a more efficient approach than developing a tailored testing package.
Maybe create an integration tests directory under src/test? Sure, for integration tests the separation becomes less clear, but there's something that groups A, B and C together, no? I'd start with this and see how things go. It's tough to come up with a perfect solution right away, and an "OK" solution is better than no solution.
It seems that your integration tests are higher-level unit tests, since you still bind them to one or more classes. Try to pick the class that depends (transitively) on all the others in the group and associate the test with that class.
If you have true integration tests, then association with concrete classes is of little value. Such tests are better classified by application subject area (domain) and by type of functionality. For example, domains are orders, shipments, invoices, entitlements, etc., and functionality types are transactional, web, messaging, batch, etc. Their permutations would give you a nice first cut of how to organize integration tests.
I have found that when doing TDD there is not always a 1:1 relationship between classes and unit tests. If you force one, you will have a hard time refactoring. In fact, after some refactoring I usually end up with about 50% 1:1 couplings, and 50% tests that link to several classes or clusters of tests that link to a single class.
Integration tests happen if you try to prove that something is or isn't working. This happens either when you're worried because you need to deliver something, or if you find a bug. Trying to get full coverage from integration tests is a bad idea (to put it mildly).
The most important thing is that a test needs to tell a story. In BDD'ish terms: given you have such, when doing this, that should happen. The tests should be examples of how you intend people to use the unit, API, application, service, ...
The granularity and organisation of your tests will follow from your storyline. It should not be designed with simplistic rules up front.
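In BDD terms, such a story-telling test might look like the following sketch (the Customer/Order/Item types and the 10% gold discount rule are invented purely for illustration):

[TestMethod]
public void GivenAGoldCustomer_WhenOrderingAnItem_ThenTheGoldDiscountIsApplied()
{
    // Given a gold-level customer (hypothetical domain types)
    var customer = new Customer(CustomerLevel.Gold);
    var order = new Order(customer);

    // When an item worth 100 is ordered
    order.AddItem(new Item(100m));

    // Then the (assumed) 10% gold discount applies
    Assert.AreEqual(90m, order.Total);
}

The test name and body together read as the story: given, when, then.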

How to avoid duplicating test code

I have written my unit tests, and where external resources are needed they are dealt with by using fakes.
All is good so far. Now I am faced with the other test phases, mainly integration, where I want to repeat the unit test methods against real external resources, e.g. the database.
So, what are the recommendations for structuring test projects for unit vs. integration testing? I understand some people prefer separate assemblies for unit and integration tests?
How would one share common test code between the two assemblies? Should I create a third assembly which contains all the abstract test classes and let the unit and integration tests inherit from them? I am looking for maximum reusability...
I hear a lot of noise about Dependency Injection (StructureMap). How could one utilise such a tool in the given unit + integration test setup?
Can anyone share some wisdom? Thanks
I don't think you should physically separate the two. A good solution is to put the Microsoft.TeamFoundation.PowerTools.Tasks.CategoryAttribute above your tests to distinguish regular and integration tests. When running tests (even with MSBuild) you can then decide to run only the tests you're interested in.
Alternatively you can put them in separate namespaces.
For code that will be executed in the setup and teardown phases, the base class approach works well. For integration tests, you can extract the functionality of your unit tests into well-parameterized non-test methods (preferably placed in another namespace) and call these "common" methods from both unit and integration tests. Putting unit tests, integration tests and common methods into separate namespaces would suffice; there would be no need for extra assemblies.
One approach would be to create a separate file with helper methods that would be used across multiple testing contexts, and then include that file in both your unit tests and your functional tests. For the parts that vary, you could use dependency injection - for example, by passing in different factories. In the unit tests, the factory could build a fake object, and in the functional tests it could insert a real object into your test database.
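A sketch of that idea (IOrderRepository, FakeOrderRepository and SqlOrderRepository are hypothetical names, not from the thread):

public static class RepositoryTestHelper
{
    // Shared test logic; each test project supplies its own factory.
    public static void AssertAddedOrderCanBeRetrieved(Func<IOrderRepository> createRepository)
    {
        var repository = createRepository();
        var order = new Order { Id = Guid.NewGuid() };
        repository.Add(order);
        Assert.AreEqual(order.Id, repository.GetById(order.Id).Id);
    }
}

// In the unit test assembly:
//   RepositoryTestHelper.AssertAddedOrderCanBeRetrieved(() => new FakeOrderRepository());
// In the integration test assembly:
//   RepositoryTestHelper.AssertAddedOrderCanBeRetrieved(() => new SqlOrderRepository(connectionString));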
Whether you split the tests into two projects or keep them in one might depend on the number of classes/tests you have. Too many classes in a single project would make it difficult to dig through. If you do split them out, helper/common methods could be thrown into a third assembly, or you could make them public in the unit test assembly, and let the integration assembly reference that one. Make things only as complex as you have to.
On our project we have both integration and unit tests together but in separate folders. Our project layout is such that we have separate assemblies for the main sections (Domain, Services, etc). Each assembly has a matching test assembly. Test assemblies are allowed to reference other test assemblies.
This means Services.Test can reference Domain.Test which makes sense to us because Services references Domain in the actual code.
In terms of reusable pieces we have
Builders - These provide a fluent interface for creating the most important/complex objects in our domain. These live in the main test folder for our domain. Our domain test assembly is referenced by all other test assemblies.
Mothers - These insert data into the database. They give back an Id for the inserted row which can be used to load the object if required. These live in the main test folder for our services.
Helpers - These are guys that do small things throughout our testing. For instance, we prefer to allow access to collections via IEnumerable, so we have a CollectionHelper.AssertCountIsEqualTo<T>(int count, IEnumerable<T> collection, string message) which wraps the IEnumerable in a List and asserts the count (sketched below). Our Helpers all live in a common test assembly which every other test assembly references.
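Such a helper can be as small as the following sketch (assuming MSTest's Assert; the signature is taken from the description above):

public static class CollectionHelper
{
    // Wraps the IEnumerable in a List so it can be counted, then asserts.
    public static void AssertCountIsEqualTo<T>(int count, IEnumerable<T> collection, string message)
    {
        Assert.AreEqual(count, new List<T>(collection).Count, message);
    }
}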
As for an IoC container: if you can use one on your project it can be a huge help, not only in testing (via auto-mocking) but also in general development. With the overhead of registering everything with the container, though, it might be a bit much for just testing.
After some experimenting this is how you can re-use test methods:
public abstract class TestBase
{
    // Test methods on the base class run once for every derived [TestClass].
    [TestMethod]
    public void BaseTestMethod()
    {
        Assert.IsTrue(true);
    }
}

[TestClass]
public class UnitTest : TestBase
{
}

[TestClass]
public class IntegrationTest : TestBase
{
}
The unit and integration test class will pick up the base class test methods and run them as two separate tests.
You should be able to create a parameterised constructor on the base class to inject your mocks or resources.
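For instance (a sketch only: IDatabase, FakeDatabase, SqlDatabase and TestSettings are invented names, not part of the answer above):

public abstract class TestBase
{
    protected readonly IDatabase Db;

    protected TestBase(IDatabase db) { Db = db; }

    [TestMethod]
    public void CanRoundTripAThing()
    {
        var id = Db.Save(new Thing());
        Assert.IsNotNull(Db.Load(id));
    }
}

[TestClass]
public class UnitTest : TestBase
{
    public UnitTest() : base(new FakeDatabase()) { }
}

[TestClass]
public class IntegrationTest : TestBase
{
    // Hypothetical: the connection string comes from the test configuration.
    public IntegrationTest() : base(new SqlDatabase(TestSettings.ConnectionString)) { }
}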
I think this method can only be used with classes within the same assembly, so it looks like the single-assembly approach will have to do for now.
Thanks for the tips ppl!
If the only difference between many of your unit tests and the corresponding integration tests is that the latter use "real" resources rather than fakes (mocks), one approach is the following:
Make a flag is_unit_test available to your test class from the outside
In the class setup, make fake or real resources available depending on the flag. For instance, if you need to use a DB API that is either real (an instance of class DBreal) or fake (an instance of class DBfake), your initialization may look like: if is_unit_test then this.dbapi = new DBfake else this.dbapi = new DBreal. (DBreal and DBfake need to conform to the same interface; let's call it DBapi.)
From the point of view of your test methods, step 2 amounts to (manual) Dependency Injection: The method does not know what class actually implements its dependency (the DB API). Rather, the dependency is injected into the method from the outside.
Where your test cases require the DB API, they use this.dbapi
Now you execute one and the same test class with the flag set for unit testing and without the flag set for integration testing. (How to make the flag available depends on your unit testing framework.)
Obviously, the same approach can be used if you need more than one resource in a test class.
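In C#-flavoured code the setup boils down to something like this sketch (DBapi, DBreal and DBfake are the names from the steps above; TestConfig.IsUnitTest stands in for however your framework exposes the flag):

[TestClass]
public class OrderQueryTests
{
    private DBapi dbapi;

    [TestInitialize]
    public void Setup()
    {
        // The flag decides which implementation the whole test class runs against.
        dbapi = TestConfig.IsUnitTest ? (DBapi)new DBfake() : new DBreal();
    }

    [TestMethod]
    public void FindsOrdersByCustomer()
    {
        var orders = dbapi.FindOrdersByCustomer(42); // hypothetical query method
        Assert.IsNotNull(orders);
    }
}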
Some people will find the explicit if in step 2 ugly. To make it more "elegant", you could employ an Inversion of Control (IoC) container (in Java for instance Spring or PicoContainer) to semi-automate the Dependency Injection instead. The initialization would then look like this.dbapi = myContainer.create(DBapi).
In simple cases, an IoC container will only complicate things, because configuring the container is not trivial, involves learning, opens the possibility of a new class of mistakes, and involves additional files.
In more complex cases, however, the container makes things easier, because if the creation of your resources requires still other resources, the container will take care of their initialization as well, and complexity would indeed go down. But unless you really get there, I suggest keeping it simple (KISS).
Unless you have an important reason for separate assemblies, they violate KISS. I suggest waiting for that reason first.
(Note that some people may tell you that Dependency Injection is only done at the class level.
I consider this unwarranted dogmatism. Injection simply means that a caller does not know the exact class it is calling, no matter how it obtained the object. It often becomes more useful when applied at the class level, but depending on your test framework this may make things overly complicated in the above case. Note that some test frameworks have their own injection capabilities, though.)

How do I write useful unit tests for a mostly service-oriented app?

I've used unit tests successfully for a while, but I'm beginning to think they're only useful for classes/methods that actually perform a fair amount of logic - parsers, doing math, complex business logic - all good candidates for testing, no question. I'm really struggling to figure out how to use testing for another class of objects: those which operate mostly via delegation.
Case in point: my current project coordinates a lot of databases and services. Most classes are just collections of service methods, and most methods perform some basic conditional logic, maybe a for-each loop, and then invoke other services.
With objects like this, mocks are really the only viable strategy for testing, so I've dutifully designed mocks for several of them. And I really, really don't like it, for the following reasons:
Using mocks to specify expectations for behavior makes things break whenever I change the class implementation, even if it's not the sort of change that ought to make a difference to a unit test. To my mind, unit tests ought to test functionality, not specify "the method needs to do A, then B, then C, and nothing else, in that order." I like tests because I am free to change things with the confidence that I'll know if something breaks - but mocks just make it a pain in the ass to change anything.
Writing the mocks is often more work than writing the classes themselves, if the intended behavior is simple.
Because I'm using a completely different implementation of all the services and component objects in my test, in the end, all my tests really verify is the most basic skeleton of the behavior: that "if" and "for" statements still work. Boring. I'm not worried about those.
The core of my application is really how all the pieces work together, so I'm considering ditching unit tests altogether (except for places where they're clearly appropriate) and moving to external integration tests instead - harder to set up, coverage of fewer possible cases, but actually exercising the system as it is meant to be run.
I'm not seeing any cases where using mocks is actually useful.
Thoughts?
If you can write integration tests that are fast and reliable, then I would say go for it.
Use mocks and/or stubs only where necessary to keep your tests that way.
Notice, though, that using mocks is not necessarily as painful as you described:
Mocking APIs let you use loose/non-strict mocks, which will allow all invocations from the unit under test to its collaborators. Therefore, you don't need to record all invocations, but only those which need to produce some required result for the test, such as a specific return value from a method call.
With a good mocking API, you will have to write little test code to specify mocking. In some cases you may get away with a single field declaration, or a single annotation applied to the test class.
You can use partial mocking so that only the necessary methods of a service/component class are actually mocked for a given test. And this can be done without specifying said methods in strings.
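With Moq, for example, a loose mock (the default) only needs the one expectation the test cares about; IOrderService, Order and OrderController are hypothetical names for the sketch:

var service = new Mock<IOrderService>(); // loose by default: unstubbed calls just return defaults
service.Setup(s => s.GetOrder(42)).Returns(new Order { Id = 42 });

var controller = new OrderController(service.Object);
var result = controller.Show(42);

Assert.AreEqual(42, result.Order.Id);
// Verify only the interaction that matters to this test:
service.Verify(s => s.GetOrder(42), Times.Once());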
"To my mind, unit tests ought to test functionality, not specify 'the method needs to do A, then B, then C, and nothing else, in that order.'"
I agree. Behavior testing with mocks can lead to brittle tests, as you've found. State-based testing with stubs reduces that issue. Fowler weighs in on this in Mocks Aren't Stubs.
"Writing the mocks is often more work than writing the classes themselves"
For mocks or stubs, consider using an isolation (mocking) framework.
"in the end, all my tests really verify is the most basic skeleton of the behavior: that 'if' and 'for' statements still work"
Branches and loops are logic; I would recommend testing them. There's no need to test getters and setters, one-line pure delegation methods, and so forth, in my opinion.
Integration tests can be extremely valuable for a composite system such as yours. I would recommend them in addition to unit tests, rather than instead of them.
You'll definitely want to test the classes underlying your low-level or composing services; that's where you'll see the biggest bang for the buck.
EDIT: Fowler doesn't use the "classical" term the way I think of it (which likely means I'm wrong). When I talk about state-based testing, I mean injecting stubs into the class under test for any dependencies, acting on the class under test, then asserting against the class under test. In the pure case I would not verify anything on the stubs.
Writing integration tests is a viable option here, but they should not replace unit tests. Since you stated you're writing mocks yourself, I suggest using an isolation framework (aka mocking framework), which I am pretty sure will be available for your environment too.
Being that you've posted several questions in one I'll answer them one by one.
How do I write useful unit tests for a mostly service-oriented app?
Do not rely on unit tests for a "mostly service-oriented app"! Yes, I said that in a sentence. These types of apps are meant to do one thing: integrate services. It's therefore more pressing that you write integration tests instead of unit tests to verify that the integration is working correctly.
I'm not seeing any cases where using mocks is actually useful.
Mocks can be extremely useful, but I wouldn't use them on controllers. Controllers should be covered by integration tests. Services can be covered by unit tests but it may be wise to have them as separate modules if the amount of testing slows down your project.
Thoughts?
For me, I tend to think about a few things:
What is my application doing?
How expensive would it be to perform system level / integration tests?
Can I split my application up into modules that can be tested separately?
In the scenario you've provided, I'd say your application is an integration of many services. Therefore, I'd lean heavily on integration tests over unit tests. I'd bet most of the mocks you've written have been for HTTP-related classes and the like.
I'm a bigger fan of integration / system level tests wherever possible for the following reasons:
In this day and age of "moving fast", refactoring the designs of yesterday happens at an ever increasing rate. Integration tests aren't concerned with implementation details at all, so this facilitates rapid change. Dynamic languages are in full swing, making mocks even more dangerous/brittle. With a static language, mocks are much safer because your tests won't compile if they're trying to stub out a non-existent or misspelled method name.
The amount of code written in an integration test is usually 60% less than the amount written in unit tests to achieve the same level of coverage, so development time is less. "Yes, but it takes longer to run integration tests..." - be pragmatic about that until running them actually slows you down.
Integration tests catch more bugs. Mocking is often contrived and removes the developer from the realities of what their changes will do to the application as a whole. I've allowed way more bugs into production under the "safety net" of 100% unit test coverage than I would have with integration tests.
If integration testing is slow for my application, then I haven't split it up into separate modules. This is often an early indicator that I need to extract some modules.
Integration tests do way more for you than reach code coverage; they're also an indicator of performance issues or network problems, etc.

Automated tests: mocking vs creating a test object graph (using an IoC container), what is better under what conditions?

How do you decide what to choose:
use mock objects for a test, OR
create a test object/object graph using an IoC framework and run the test on that data
It depends what you are trying to test. Unit tests with collaborators mocked out are great because
They are really, really fast
They are small and easy to understand
They don't have dependencies on the wider world which makes them easy to run
They provide excellent defect localisation
However, pure unit tests cannot tell you if you have configured your objects correctly in your IoC container, if the database connection string works etc. You need a test which runs up your IoC container and really reaches out to the Db to prove these things.
If you write as many of your tests as pure, standalone unit tests as possible, then your build will stay fast. This is crucial, as a slow build gets run less. Even so, don't forget to add a sprinkling of wired tests to prove that your application 'hangs together'.
For example, we have a (single) test for every service in our container that proves we can request it from the IoC container. This proves we're wired up; from then on it is unit tests all the way. We have lots of pure unit tests.
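Such a wiring test can be as simple as the following sketch (ApplicationBootstrapper and the service interfaces are hypothetical; GetInstance is the StructureMap-style resolution call):

[TestClass]
public class ContainerWiringTests
{
    private static readonly Type[] ServiceTypes =
    {
        typeof(IOrderService), typeof(IShipmentService), typeof(IInvoiceService)
    };

    [TestMethod]
    public void EveryServiceResolvesFromTheContainer()
    {
        // Build the container exactly the way the application itself does.
        var container = ApplicationBootstrapper.BuildContainer();
        foreach (var serviceType in ServiceTypes)
        {
            Assert.IsNotNull(container.GetInstance(serviceType), "Could not resolve " + serviceType.Name);
        }
    }
}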
The whole lot is then wrapped in some application level functional tests to prove that the app itself does what the user wants.
The thing to bear in mind is the time cost of each test type. Moving from pure unit -> wired -> functional tests costs an order of magnitude of execution time and complexity when they break.
I am very happy with using IoC for much of my app, and especially I appreciate that test-datasources can be injected for testing.
For more problematic backend connections (currently a single ESB call) or functions that need complicated state I mock.
For unit tests: If an object is not the object tested, mock or stub it.
In that way, you can directly control it so it returns the data that you want.
If you create a test object/object-graph, you have to set it up so that it provides the data that you want. That is probably a lot more work than you want.
For integration tests, of course you'd test a whole object graph at a time.
If you need to write a lot of initialization code, a mocking framework will probably help you write better, easier to understand unit tests.
There is no need to write code that a mocking framework can save you from writing.

Why would I write a fake class and unit test it?

I understand the need to test a class that has logic (for instance, one that can calculate discounts), where you can test the actual class.
But I just started writing unit tests for a project that will act as a repository (get objects from a database). I find myself writing a 'fake' repository that implements an ISomethingRepository interface. It uses a Dictionary<Guid, Something> for storage internally. It implements the Add(Something) and GetById(Guid) methods of the interface.
Why am I writing this? Nothing I'm writing will actually be used by the software when it's deployed, right? I don't really see the value of this exercise.
I also got the advice to use a mock object that I can setup in advance to meet certain expectations. That seems even more pointless to me: of course the test will succeed, I have mocked/faked it to succeed! And I'm still not sure the actual software will perform as it should when connecting to the database...
confused...
Can someone point me in the right direction to help me understand this?
Thank you!
You are not testing your mock object but some other class that is interacting with it. So you could, for example, test that a controller forwards a save method call to your fake repository. There is something wrong if you are "testing your fake objects".
Don't test the mock class. Do test the production class using the mock class.
The whole point of the test support class is to have something that you can predict its behavior. If you need to test the test support class in order to predict its behavior, there is a problem.
In the fake database article you linked in a comment, the author needs to unit test his fake database because it is his product (at least in the context of the article).
Edit: updated terms to be more consistent.
Mock - created by mocking framework
Fake - created manually; may actually have some working functionality.
Test Support - Mocks, Fakes, Stubs, and all the rest. Not production.
The purpose of the mock/stub object is not to be tested instead of the unit you're trying to test, it's to allow you to test that unit without needing other classes.
It's basically so that you can test classes one at a time without having to test all the classes they're also dependent on.
You should not be testing the mock class.
What you normally do is: you create mock classes for all the classes that the class you are testing interacts with.
Let's say you are testing a class called Bicycle which takes, in its constructor, objects of classes Wheel, Saddle, HandleBar, etc.
And then within the class Bicycle you want to test its method GetWeight, which probably iterates through each part, calls the property/method Weight on each, and then returns the total.
What you do:
- you write a mock class for each part (Wheel, Saddle, etc.) which simply implements the Weight bit
- then you pass those mock classes to the Bicycle
- test the GetWeight method on the Bicycle class
In that way you can focus on testing GetWeight on the Bicycle class, in a manner that is independent of the other classes (say they are not implemented yet, or are not deterministic, etc.).
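In C# the whole arrangement might look like this sketch (the individual Wheel/Saddle mocks are collapsed into one generic MockPart for brevity):

public interface IPart
{
    double Weight { get; }
}

// Hand-written mock: all it does is report a fixed weight.
public class MockPart : IPart
{
    private readonly double weight;
    public MockPart(double weight) { this.weight = weight; }
    public double Weight { get { return weight; } }
}

public class Bicycle
{
    private readonly IPart[] parts;
    public Bicycle(params IPart[] parts) { this.parts = parts; }

    public double GetWeight()
    {
        double total = 0;
        foreach (var part in parts) total += part.Weight;
        return total;
    }
}

[TestClass]
public class BicycleTests
{
    [TestMethod]
    public void GetWeightSumsThePartWeights()
    {
        var bike = new Bicycle(new MockPart(1.5), new MockPart(1.5), new MockPart(0.8));
        Assert.AreEqual(3.8, bike.GetWeight(), 0.0001);
    }
}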
Who watches the watchers?
It is interesting, for example, if the mock implementation throws specific exceptions for the corner cases, so you know that the classes that use or depend on the ISomethingRepository can handle the exceptions that are thrown in real life. Some of these exceptions you can't easily generate with a test database.
You do not test the Mock object with a unit test, but you use it to test classes that depend on it.
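Building on the fake repository from the question, a corner case can be scripted directly into the fake (the TimeoutId convention and the Id property on Something are assumptions made for this sketch):

public class FakeSomethingRepository : ISomethingRepository
{
    // Hypothetical convention: asking for this id simulates a database timeout.
    public static readonly Guid TimeoutId = new Guid("00000000-0000-0000-0000-000000000001");

    private readonly Dictionary<Guid, Something> store = new Dictionary<Guid, Something>();

    public void Add(Something item) { store[item.Id] = item; }

    public Something GetById(Guid id)
    {
        if (id == TimeoutId)
            throw new TimeoutException("Simulated database timeout");
        return store[id];
    }
}

A test can then assert that a service depending on ISomethingRepository degrades gracefully when GetById times out.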
Instead of writing a fake class by yourself, you can use a tool (like Rhino Mocks or Typemock) to mock it. It is much easier than writing all the mocks yourself. And like others said, there's no need to test fake code, which is no code at all if you use the tool.
I have actually found two uses for the mock classes that we use in repository implementation testing.
The first is to test the services that use an implementation of the "ISomethingRepository" equivalent that you mention. However, our repository implementations are created by a factory. This means that we do write tests against the "ISomethingRepository", but not against the "MockSomethingRepository" directly. By testing against the interface, we can easily assert that the code coverage for our tests cover 100% of the interface. Code reviews provide simple verification that new interface members are tested. Even if the developers are running against the mock that the factory returns, the build server has a different configuration that tests against the concrete implementation that the factory returns within the nightly builds. It provides the best of both worlds, in terms of test coverage and local performance.
The second use is one that I am surprised that no one else has mentioned. My team is responsible for the middle tier. Our web developers are responsible for the front end of the web products. By building out mock repository implementations, there is not the artificial obstacle of waiting for the database to be modeled and implemented prior to the front-end work starting. Views can be written that will be built off of the mock to provide a minimal amount of "real" data to meet the expectations of the web developers, as well. For example, data can be provided to contain minimum and maximum length string data to verify that neither break their implementation, etc.
Since the factories we use are wired as to which "ISomethingRepository" to return, we have local testing configurations, build testing configurations, production configurations, etc. We purposely are trying to make sure that no team on the project has unreasonable wait times because of another team's implementation time. The largest chunk of wait time is still provided by the development team, but we are able to crank out our domain objects, repositories, and services at a faster pace than the front-end development.
Of course, YMMV. ;-)
You write "fake" class called Stub or Mock object because you want to test an implementation in a simple way without testing the real concrete class. The purpose is to simplify the testing by testing only the Interface (or abstract class).
In your example, you are testing something that has a dictionary. In reality it might be filled by the database or have a lot of logic behind it. In your "fake" object, you can simplify everything by keeping all the data constant. This way you test only the behavior of the interface and not how the concrete object is built.
There is generally no need to run classical unit tests in the Data Access Layer.
Perhaps you can write integration-style tests for your Data Access classes, that is, integration tests (= integrating your Data Access Layer code with the DB) using the features of unit testing frameworks.
For example, in a Spring project you can use the Spring TestContext framework to start your Spring context inside a unit test, and then connect to a real database and test that the queries return correct results. You probably need a dedicated database for these tests, or perhaps you can point them at a developer DB.
Have a look at the following article for a good explanation of this:
https://web.archive.org/web/20110316193229/http://msdn.microsoft.com/en-us/magazine/cc163358.aspx
Basically, if you write a fake object and it turns out to be fairly complex, it is sometimes worth it to unit test the fake to make sure it works as expected.
Since a repository can be complex, writing unit tests for it often makes sense.