Which is better? Unit-test project per solution or per project?

Is it better to have a unit-test project per solution or a unit-test project per project?
With one per solution, if you have 5 projects in the solution you end up with 1 unit-test project containing tests for all 5 projects.
With one per project, if you have 5 projects in the solution you end up with 5 unit-test projects.
What is the right way?
I don't think this is the same question as "Write unit tests into an assembly or in a separate assembly?"

Assemblies are a packaging/deployment concern, so we usually split tests out because we don't want to deploy them with our product. Whether you split them out per library or per solution, there are merits to both.
Ultimately, you want tests to be immediately available to all developers, so that developers know where to find them when needed. You also want an obstacle-free environment with minimal overhead to writing new tests, so that you aren't arming the cynics who don't want to write tests. Tests must also compile and execute quickly - project structure can play a part in all of this.
You may also want to consider that different levels of testing are possible, such as unit, integration, or UI-automation tests. Segregating these types of tests is possible in some tools by using test categories, but sometimes it's easier for execution or reporting if they are separate libraries.
If you have special packaging considerations such as a modular application where modules should not be aware of one another, your test projects should also reflect this.
In small solutions where there aren't many projects, a 1:1 ratio is usually the preferred approach. However, Visual Studio performance degrades quickly as the number of projects increases. Around the 40-project mark, compilation becomes an obstacle to building and running the tests, so larger solutions may benefit from consolidating test projects.
I tend to prefer a pragmatic approach, so that complexity is appropriate to the problem. Typically, an application is composed of several layers, where each layer may have multiple projects. I like to start with a single test library per layer, and I mimic the solution structure using folders, as sketched below. Divide when complexity warrants it. If you design your test projects for flexibility then the changeover is usually painless.
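As a rough illustration of that starting point (the layer and project names are only placeholders), a solution whose domain layer spans two projects might pair with a single test library whose folders mirror those projects:

    MyApp.Domain.Orders         (production project)
    MyApp.Domain.Shipping       (production project)
    MyApp.Domain.Tests          (single test library for the layer)
        Orders\                 (tests for MyApp.Domain.Orders)
        Shipping\               (tests for MyApp.Domain.Shipping)

If one folder later grows large enough to slow the build or the test run, it can be promoted to its own test project without touching the production code.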

I would say a separate unit-test project for each project, rather than one test project per solution. I think this is better because it will save you a lot of hassle if you decide to take a particular project out of a solution and move it to another solution.

We have a one-to-one ratio of test projects to projects in a very large system, and we have reached the point where the build takes more than 90 minutes every time we check in. I have created a new solution configuration that builds only the test projects; it runs once a day to make sure all test cases are working, and developers can switch to the unit-test configuration to test their code in their development environment. Please let me know your feedback.

I am going with one of those "it depends" answers. Personally, I tend to put all of the tests in a single project, with separate folders within the project for each assembly (plus further sub-folders as necessary). This makes it easy to run the entire set, either from within Visual Studio or CruiseControl.NET.
If you have thousands of tests, a single project might prove too difficult to maintain. Also, as Peter Kelly mentioned in his response, being able to easily split the tests if you move a project can be useful.

Personally, I write one test assembly per project. I then create one NUnit project per solution and reference all of the relevant test assemblies from it. This reflects the organisation of the projects and means that if projects are reused in different solutions, then only the relevant unit tests need to be run.


Should Test Code be Separate from Source/Production Code

Should test code be kept separate from source/production code, even if we have a Makefile or something similar to strip out the test code when shipping the product?
In my opinion they should be separate, but I am not entirely convinced as to WHY.
Yes, they should be separate (folders and preferably projects). Some reasons:
GREP. Searching for a string in production source is easier.
Code coverage. Imagine trying to specify which files to include for coverage.
Different standards. You may want to run static analysis, etc. only on production code.
Simplified makefiles/build scripts.
Modern IDEs will allow you to work on code from separate projects/folders as if they were adjacent.
The worst thing you can do is to include test and production code in the same file (with conditional compilation, different entry points, etc.). Not only can this confuse developers trying to read the code, but you also run the risk of accidentally shipping test code.
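To make that risk concrete, here is a hedged C# sketch of the pattern being warned against (the class, symbol, and test names are invented for illustration). If the INCLUDE_TESTS symbol is ever defined in a release configuration, the test fixture ships with the product:

    // PriceCalculator.cs - production class with a test fixture embedded in the same file
    public class PriceCalculator
    {
        public decimal Total(decimal price, int quantity)
        {
            return price * quantity;
        }
    }

    #if INCLUDE_TESTS   // compiled in or out depending on the build symbols
    [NUnit.Framework.TestFixture]
    public class PriceCalculatorEmbeddedTests
    {
        [NUnit.Framework.Test]
        public void Total_MultipliesPriceByQuantity()
        {
            NUnit.Framework.Assert.AreEqual(20m, new PriceCalculator().Total(10m, 2));
        }
    }
    #endif

Even if the symbol is undefined by default, one misconfigured build configuration is enough to ship the embedded fixture; keeping tests in a separate assembly removes that failure mode entirely.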
Since I have had the chance to work with both approaches (tests separated and tests alongside project code), here are a few small things I noticed getting in the way (C#, Visual Studio, MSBuild).
Same project approach
References/external library dependencies: unit-testing and mocking frameworks usually come with a few dependencies of their own; combine that with the libraries you need for the actual project and the list grows very quickly (and nobody likes huge lists, right?)
Naming collisions: for a class named MyClass, the common approach is to name the test class MyClassTest - this causes small annoyances when using navigation/name-completion tools (since there's a bigger chance you'll have more than one result to choose from for quick navigation)
An overall feeling of ubiquitous mess
Naming collisions can actually get even more tiresome, considering how classes related to similar functionality usually share a prefix (e.g. ListManager ... Converter, Formatter, Provider). Navigating between a comprehensible number of items (usually 3-7) is not a problem - enter tests, and you're back to long lists again.
Separated approach
Project count: you'll have to count the number of libraries you produce twice - once for the project code alone, and again for the tests. When bigger systems are involved (200-300+ sub-projects/libraries), having that number doubled by test projects extends IDE startup time in a way you never want to experience.
Of course, modern machines will mitigate the project-count issue. Unless it really becomes a problem, I'd always go for the separated-projects approach - it's just neater, cleaner, and much easier to manage.
In a fully automated build and unit-testing setup, you can easily keep them separate.
It makes more sense to fire up the automated unit tests after the nightly build is done.
Keeping them separate makes it easier for maintenance purposes.

Increasing testability when coding with the Bold for Delphi framework

Background
I work in a team of 7 developers and 2 testers who work on a logistics system.
We use Delphi 2007 and model-driven development with Bold for Delphi as the framework.
The system has been in production for about 7 years now and has about 1.7 million lines of code.
We release to production every 4-5 weeks, and after almost every release we have to issue patches for bugs we didn't find. This is of course irritating, both for us and for the customers.
Current testing
The solution is of course more automated testing. Currently we have manual testing, plus a Testdbgenerator that starts with an empty database and adds data through the modelled methods. We also have TestComplete, which runs some very basic scripts for testing the GUI. Lack of time stops us from adding more tests, and the scripts are also sensitive to changes in the application. Some years ago I really tried unit testing with DUnit, but I gave up after a few days - the units have too strong connections.
Unit testing preconditions
I think I know some preconditions for unit testing:
Write small methods that do one thing, but do it well.
Don’t repeat yourself.
First write a test that fails, then write the code so the test passes.
The connections between units should be loose. They should not know much about each other.
Use dependency injection.
Framework to use
We may upgrade to Delphi XE2, mainly because of the 64-bit compiler.
I have looked at Spring a bit, but this requires an upgrade from D2007, and that will not happen now. Maybe next year.
The question
Most code is still not tested automatically. So what is the best path to take for increasing the testability of old code? Or is it perhaps best to start writing tests for new methods only?
I'm not sure what the best way to increase automated testing is, and comments about it are welcome. Can we use D2007 + DUnit now and then easily change to Delphi XE2 + Spring later?
EDIT: The current methodology for manual testing is just "pound on it and try to break it", as Chris calls it.
You want the book by Michael Feathers, Working Effectively with Legacy Code. It shows how to introduce (unit) tests to code that wasn't written with testability in mind.
Some of the chapters are named for excuses a developer might give for why testing old code is hard, and they contain case studies and suggested ways to address each problem:
I don't have much time and I have to change it
I can't run this method in a test harness
This class is too big and I don't want it to get any bigger
I need to change a monster method and I can't write tests for it.
It also covers many techniques for breaking dependencies; some might be new to you, and some you might already know but just haven't thought to use yet.
The requirements for automated unit testing are exactly this:
use a unit-testing framework (for example, DUnit).
use some kind of mocking framework.
Item 2 is the tough one.
DRY, small methods, start with a test, and DI are all sugar. First you need to start unit testing. Add DRY, etc. as you go along. Reduced coupling helps to make stuff more easily unit tested, but without a giant refactoring effort, you will never reduce coupling in your existing code base.
Consider writing tests for stuff that is new and stuff that is changed in the release. Over time you will build up a reasonable base of unit tests. New and changed stuff can also be refactored (or written nicely).
Also, consider an automated build process that runs unit tests and sends email when the build breaks.
This only covers unit testing. For QA testers, you will need a tool (they exist, but I can't think of any) that allows them to run automated tests (which are not unit tests).
Your testing team is too small, IMO. I've worked in teams where the QA department outnumbers the developers. Consider working in "sprints" of manageable chunks (features, fixes) that fit into smaller cycles. "Agile" would encourage 2-week sprints, but that may be too tight. Anyway, it would keep QA constantly busy, working further ahead of the release window. Right now, I suspect that they are idle until you give them a huge amount of code, and then they're swamped. With shorter release cycles, you could keep more testers busy.
Also, you didn't say much about their testing methodology. Do they have standard scripts that they run, where they verify appearance and behavior against expected appearance and behavior? Or do they just "pound on it and try to break it"?
IMO, DUnit testing is hard to do with lots of dependencies like databases, communications, etc., but it's doable. I've created DUnit classes that automatically run database setup scripts (they look for a .sql file with the same name as the class being tested, run the SQL, and then the test proceeds), and it's been very effective. For SOAP communications, I have a SoapUI mock service running that returns canned results, so I can test my communication code.
It does take work, but it's worth it.
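The answer describes this in Delphi/DUnit terms; as a rough sketch of the same convention in C#/NUnit (the base class, the ExecuteSql helper, and the file naming are illustrative, not taken from the original answer), it might look like this:

    using System.IO;
    using NUnit.Framework;

    // Base fixture: before each test, look for a .sql file named after the
    // concrete test class and run it against the (initially empty) test database.
    public abstract class DatabaseFixture
    {
        [SetUp]
        public void RunSetupScript()
        {
            // e.g. OrderRepositoryTests -> "OrderRepositoryTests.sql"
            string script = GetType().Name + ".sql";
            if (File.Exists(script))
            {
                ExecuteSql(File.ReadAllText(script));
            }
        }

        // DB-specific execution is left to the concrete fixture (hypothetical helper).
        protected abstract void ExecuteSql(string sql);
    }

A concrete fixture then only needs to inherit from DatabaseFixture and keep a matching .sql script next to it; the SoapUI mock service plays the same role for the communication layer.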

Where do you put your unit test?

I have found several conventions for organizing unit tests in a project, and I'm not sure which approach would be suitable for our next PHP project. I am trying to find the convention that best encourages easy development and accessibility of the tests when reviewing the source code. I would be very interested in your experience/opinion regarding each:
One folder for production code, another for unit tests: this separates the unit tests from the logic files of the project. This separation of concerns is as much a nuisance as it is an advantage: someone looking into the source code of the project will - so I suppose - browse either the implementation or the unit tests (or, more commonly, the implementation only). The advantage of unit tests providing another viewpoint on your classes is lost - those two viewpoints are just too far apart, IMO.
Annotated test methods: any modern unit-testing framework I know allows developers to create dedicated test methods, annotate them (#test), and embed them in the project code. The big drawback I see here is that the project files get cluttered. Even if these methods are separated using a comment header (like "UNIT TESTS below this line"), it just bloats the class unnecessarily.
Test files within the same folders as the implementation files: our file naming convention dictates that PHP files containing classes (one class per file) should end with .class.php. I could imagine that putting the unit tests for a class file into another file ending in .test.php would make the tests much more visible to other developers without tainting the class (see the small sketch below). Although it bloats the project folders instead of the implementation files, this is my favourite so far, but I have my doubts: I would think others have come up with this already and discarded the option for some reason (e.g. I have not seen a Java project with the files Foo.java and FooTest.java in the same folder). Maybe it's because Java developers make heavier use of IDEs that give them easier access to the tests, whereas in PHP no big editors have emerged (like Eclipse for Java) - many devs I know use Vim/Emacs or similar editors with little support for PHP development per se.
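For what it's worth, the convention described above would simply look like this on disk (the directory name is arbitrary, and Foo is the placeholder name used in the question itself):

    lib/
        Foo.class.php
        Foo.test.php

so a reviewer opening the folder sees the class and its tests side by side.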
What is your experience with any of these unit-test placements? Do you have another convention I haven't listed here? Or am I just overrating unit-test accessibility to reviewers?
I favour keeping unit-tests in separate source files in the same directory as production code (#3).
Unit tests are not second-class citizens, their code must maintained and refactored just like production code. If you keep your unit tests in a separate directory, the next developer to change your production code may miss that there are unit tests for it and fail to maintain the tests.
In C++, I tend to have three files per class:
MyClass.h
MyClass.cpp
t_MyClass.cpp
If you're using Vim, then my toggle_unit_tests plug-in for toggling between source and unit test files may prove useful.
The current best practice is to separate the unit tests into their own directory, i.e. #1. All of the "convention over configuration" systems do it like this, e.g. Maven, Rails, etc.
I think your alternatives are interesting and valid, and the tool support is certainly there to support them. But it's just not that popular (as far as I know). Some people object to having tests interspersed with production code. But it makes sense to me that if you always write unit tests, that they be located right with your code. It just seems simpler.
I always go for #1, even though it's nice when they are close together. My reasons are as follows:
I feel there's a difference between the core codebase and the unit tests. I need a real separation.
End users rarely need to look at unit tests; they are just interested in the API. While it's nice that unit tests provide a separate view of the code, in practice I feel they won't be used to understand it better (more descriptive documentation + examples do that).
Because end users rarely need the unit tests, I don't want to confuse them with more files and/or methods.
My coding standards are not half as strict for unit tests as they are for the core library. This is maybe just my opinion, but I don't care as much about coding standards in my tests.
Hope this helps.

Unit testing. File structure

I have a C++ legacy codebase with 10-15 applications, all sharing several components.
While setting up unit tests for both the shared components and the applications themselves, I was wondering whether there are accepted/common file structures for this.
Because my unit tests have several base classes to simplify project/customer-specific test setups, there are a lot of files that are common to all tests.
To me it seems natural to create a new directory that contains all the test-related files, mocks, etc. - to have it all centralized, and also to keep testing-related definitions out of the main makefiles.
On the other hand I see that it is common practice to have the test files reside together with the code files that they test.
Is there a more/less accepted way of doing this?
Out of sight, out of mind; if you keep the test files together with the code files it may be more obvious to the developers that when they update a code file they should update the tests as well.
As you noted, there are two common ways to locate unit-test files: near the implementation code they are testing, or in a separate file hierarchy. The choice is a matter of what the common practice is in your organisation, and of personal taste.
Regarding the location of common test code, just organize your test code as you would organize implementation code.
In your particular case, if some test infrastructure is common to several independent components, it would be a good idea to create a new component (call it "testing", for example) that other components depend on for their tests, instead of adding dependencies between existing components.
I usually organize such code in a file structure that looks (in a simple case) like this:
apps
    app1
        app1module1
        app1module2
        app1tests
    app2
        app2module1
        app2tests
components
    comp1
        comp1module1
        comp1module2
        comp1tests
    common_test_stuff
There is no single right way to do this, but this seems to be a common practice that keeps production and test code separate and attempts to remove the out-of-sight, out-of-mind problem (mentioned by zac) at the same time.
Keep the test code close to the product code, and arrange your Makefile (or whatever you're using) so that the tests compile at the same time as the code they test, to keep them visible - especially if not everyone on the team is writing tests.

Should I mix my UnitTests and my Integration tests in the same project?

I am using NUnit to test my C# code and have so far been keeping unit tests (fast-running ones) and integration tests (longer-running ones) separate, in separate project files. I use NUnit for both the unit tests and the integration tests. I just noticed the category attribute that NUnit provides, so that tests can be categorized. This raises the question: should I mix them together and simply use the category attribute to distinguish between them?
If it is not too difficult to separate them, do so now:
unit tests should be run early and often (e.g. every time you change something, before check-in, after check-in), and should complete in a short time-span.
integration tests should be run periodically (daily, for example) but may take significant time and resources to complete
therefore it is best to keep them separate
Separate them if possible, because integration tests normally take much longer than unit tests.
Maybe your project grows and you end up with very many tests, all of which take a short amount of time - except the integration tests - and you want to run your unit tests as often as possible...
I find that using separate projects for unit tests and integration tests tends to create a few too many top-level artifacts in the solution. Even though we're doing TDD and all, I still think the code being developed deserves at least half of the top level of my project structure.
I don't think it really matters that much, but separating them sounds like the better idea, since isolation and automation will be easier. The category feature is nice, but not that good from a usability point of view.
The original motivation behind [Category] was to solve the problem you mention. It was also intended to create broader test suites but that is kind of what you are doing.
Do be careful with [Category]. Not all test runners support it the same way the NUnit GUI does (or did - I haven't upgraded in a while). In the past, some runners would ignore the attribute if it was on the class itself, or just ignore it altogether. Most seem to work now.
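For reference, a minimal sketch of how [Category] is typically applied in NUnit (the fixture and test names here are invented); it can sit on an individual test or on the whole fixture, which is exactly the case some older runners handled inconsistently:

    using NUnit.Framework;

    [TestFixture]
    [Category("Integration")]          // categorizes every test in this fixture
    public class OrderRepositoryTests
    {
        [Test]
        public void SavesAndReloadsAnOrder() { /* talks to a real database */ }
    }

    [TestFixture]
    public class OrderCalculatorTests
    {
        [Test]
        [Category("Unit")]             // categorizes just this test
        public void AppliesDiscounts() { /* fast, in-memory */ }
    }

Runners that do support categories can then include or exclude them at execution time (the NUnit console runner, for example, has historically offered include/exclude options for categories), which is what makes mixing unit and integration tests in one project workable.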
I would stick with whatever method you're currently using. It's more of an opinion thing, and you wouldn't want to have to re-tool your whole testing approach.