I have found several conventions for organizing unit tests in a project and I'm not sure which approach would be suitable for our next PHP project. I am trying to find the best convention to encourage easy development and to make the tests accessible to anyone reviewing the source code. I would be very interested in your experience/opinion regarding each:
One folder for production code, another for unit tests: This separates unit tests from the logic files of the project. This separation of concerns is as much a nuisance as it is an advantage: someone looking into the source code of the project will - so I suppose - browse either the implementation or the unit tests (or, more commonly, the implementation only). The advantage of unit tests offering another viewpoint on your classes is lost - those two viewpoints are just too far apart, IMO.
Annotated test methods: Any modern unit testing framework I know allows developers to create dedicated test methods, annotate them (#test) and embed them in the project code. The big drawback I see here is that the project files get cluttered. Even if these methods are separated by a comment header (like "UNIT TESTS below this line"), it just bloats the class unnecessarily.
Test files within the same folders as the implementation files: Our file naming convention dictates that PHP files containing classes (one class per file) should end with .class.php. I could imagine that putting the unit tests for a class file into another file ending in .test.php would make the tests much more visible to other developers without tainting the class. Although it bloats the project folders instead of the implementation files, this is my favorite so far, but I have my doubts: I would think others have come up with this already and discarded it for some reason (e.g. I have not seen a Java project with the files Foo.java and FooTest.java in the same folder). Maybe it's because Java developers make heavier use of IDEs that give them easy access to the tests, whereas in PHP no big editors have emerged (like Eclipse for Java) - many devs I know use vim/emacs or similar editors with little support for PHP development per se.
What is your experience with any of these unit test placements? Do you have
another convention I haven't listed here? Or am I just overrating unit test
accessibility to reviewers?
I favour keeping unit-tests in separate source files in the same directory as production code (#3).
Unit tests are not second-class citizens; their code must be maintained and refactored just like production code. If you keep your unit tests in a separate directory, the next developer to change your production code may miss that there are unit tests for it and fail to maintain them.
In C++, I tend to have three files per class:
MyClass.h
MyClass.cpp
t_MyClass.cpp
If you're using Vim, then my toggle_unit_tests plug-in for toggling between source and unit test files may prove useful.
The current best practice is to separate the unit tests into their own directory, #1. All of the "convention over configuration" systems do it like this, e.g. Maven, Rails, etc.
I think your alternatives are interesting and valid, and the tool support is certainly there to support them. But it's just not that popular (as far as I know). Some people object to having tests interspersed with production code. But it makes sense to me that if you always write unit tests, that they be located right with your code. It just seems simpler.
I always go for #1, even though it's nice to have them close together. My reasons are as follows:
I feel there's a difference between the core codebase and the unit tests. I need a real separation.
End users rarely need to look at unit tests. They are just interested in the API. While it's nice that unit tests provide a separate view on the code, in practice I feel they won't be used to understand it better (more descriptive documentation and examples do that).
Because end users rarely need unit tests, I don't want to confuse them with more files and/or methods.
My coding standards are not half as strict for unit tests as they are for the core library. This is maybe just my opinion, but I don't care as much about coding standards in my tests.
Hope this helps.
Even if we have a Makefile or something similar to separate out the test code when shipping the product, should the tests live apart from the production code? In my opinion they should be separate, but I am not entirely convinced as to why.
Yes, they should be separate (folders and preferably projects). Some reasons:
GREP. Searching for a string in production source is easier.
Code coverage. Imagine trying to specify which files to include for coverage.
Different standards. You may want to run static analysis, etc. only on production code.
Simplified makefiles/build scripts.
Modern IDEs will allow you to work on code from separate projects/folders as if they were adjacent.
The worst thing you can do is include test and production code in the same file (with conditional compilation, different entry points, etc.). Not only can this confuse developers trying to read the code, but you also run the risk of accidentally shipping test code.
Since I have had a chance to work with both approaches (tests separated and tests alongside the project code), here are a few small things worth noting that got in the way (C#, Visual Studio, MsBuild).
Same project approach
References/external library dependencies: unit testing and mocking frameworks usually come with a few dependencies of their own; combine that with the libraries you need for the actual project and the list grows very quickly (and nobody likes huge lists, right?)
Naming collisions: for a class named MyClass, the common approach is to name the test class MyClassTest - this causes small annoyances when using navigation/name-completion tools (since there's a bigger chance you'll have more than one result to choose from for quick navigation)
An overall feeling of ubiquitous mess
Naming collisions can actually get even more tiresome, considering how classes relating to similar functionality usually share a prefix (e.g. ListManager ... Converter, Formatter, Provider). Navigating between a comprehensible number of items (usually 3-7) is not a problem - enter tests, and you get long lists again.
Separated approach
Project count: you'll have to count the number of libraries you produce twice - once for the project code alone, and again for the tests. When bigger projects are involved (200-300+ sub-projects/libraries), having that number doubled by test projects extends IDE startup time in a way you never want to experience.
Of course, modern machines will mitigate the project-count issue. Unless this really becomes a problem, I'd always go for the separated-projects approach - it's just more neat and clean and way easier to manage.
With a fully automated build and unit testing framework, you can essentially separate them out.
It makes more sense to fire up the automated unit tests after the nightly build is done.
Keeping them separate makes maintenance easier.
Is it better to have a unit-test project per solution or a unit-test project per project?
With per solution, if you have 5 projects in the solution you end-up with 1 unit-test project containing tests for each of the 5 projects.
With per project, if you have 5 projects in the solution you end-up with 5 unit-test projects.
What is the right way?
I think it's not the same question as "Write unit tests into an assembly or in a separate assembly?"
Assemblies are a packaging/deployment concern, so we usually split them out because we don't want to deploy them with our product. Whether you split them out per library or per solution there are merits to both.
Ultimately, you want tests to be immediately available to all developers, so that developers know where to find them when needed. You also want an obstacle free environment with minimal overhead to writing new tests so that you aren't arming the cynics who don't want to write tests. Tests must also compile and execute quickly - project structure can play a part in all of this.
You may also want to consider that different levels of testing are possible, such as tests for unit, integration or UI automation. Segregating these types of tests is possible in some tools by using test categories, but sometimes it's easier for execution or reporting if they are separate libraries.
If you have special packaging considerations such as a modular application where modules should not be aware of one another, your test projects should also reflect this.
In small projects where there aren't a lot of projects, a 1:1 ratio is usually the preferred approach. However, Visual Studio performance quickly degrades as the number of projects increases. Around the 40-project mark, the build becomes an obstacle to compiling and running the tests, so larger projects may benefit from consolidating test projects.
I tend to prefer a pragmatic approach so that complexity is appropriate to the problem. Typically, an application will be comprised of several layers where each layer may have multiple projects. I like to start with a single test library per layer and I mimic the solution structure using folders. Divide when complexity warrants it. If you design your test projects for flexibility then changeover is usually painless.
I would say a separate unit-test project for each project rather than one test project per solution. I think this is better because it will save you a lot of hassle if you decide to take a particular project out of a solution and move it to another solution.
We have a one-to-one ratio of test projects to projects in the solution in a very large system, and we have reached a point where the build takes more than 90 minutes every time we check in. I have created a new solution configuration which builds only the test projects; that configuration runs once a day to make sure all test cases are working, and developers can switch to the unit-test configuration to test their code in their development environment. Please let me know your feedback.
I am going with one of those "it depends" answers. Personally, I tend to put all of the tests in a single project, with separate folders within the project for each assembly (plus further sub-folders as necessary). This makes it easy to run the entire set, either from within Visual Studio or from CruiseControl.NET.
If you have thousands of tests, a single project might prove too difficult to maintain. Also, as Peter Kelly mentioned in his response, being able to easily split the tests if you move a project can be useful.
Personally I write one test assembly per project. I then create one NUnit project per solution and reference all of the relevant test assemblies from it. This reflects the organisation of the projects and means that if projects are reused in different solutions then only the relevant unit tests need to be run.
I have a C++ legacy codebase with 10-15 applications, all sharing several components.
While setting up unit tests both for the shared components and for the applications themselves, I was wondering whether there are accepted/common file structures for this.
Because my unit tests have several base classes in order to simplify project/customer-specific test setups, there are a lot of files that are common to all tests.
To me it seems natural here to create a new directory that contains all the test-related files, mocks, etc. - to have it all centralized, and also to keep testing-related definitions out of the main makefiles.
On the other hand I see that it is common practice to have the test files reside together with the code files that they test.
Is there a more/less accepted way of doing this?
Out of sight, out of mind; if you keep the test files together with the code files it may be more obvious to the developers that when they update a code file they should update the tests as well.
As you noted, there are two common ways to locate unit test files: near the implementation code they are testing, and in a separate file hierarchy. The choice is a matter of what is the common practice in your organisation and personal taste.
Regarding the location of common test code, just organize your test code as you would organize implementation code.
In your particular case, if some test infrastructure is common to several independent components, it would be a good idea to create a new component (call it "testing", for example) that the other components depend on for their tests, instead of adding dependencies between existing components.
I usually organize such code in a file structure that looks (in a simple case) like this:
apps
  app1
    app1module1
    app1module2
    app1tests
  app2
    app2module1
    app2tests
components
  comp1
    comp1module1
    comp1module2
    comp1tests
common_test_stuff
There is no single right way to do this, but this seems to be a common practice that keeps production and test code separate and attempts to remove the out-of-sight, out-of-mind problem (mentioned by zac) at the same time.
Keep the test code close to the product code, and arrange your Makefile (or whatever you're using) so that the tests compile at the same time as the production code, to keep them visible, especially if not everyone on the team is writing tests.
I have an existing VS 2005 Std .NET Compact Framework application that I want to do some major refactoring on. Currently there is no unit testing in place, but I want to add this before messing with the code. I have no practical experience with unit testing, even though I know the theory (I just never got around to actually implementing it; I know: shame on me :-))
Here are some questions I am pondering at the moment:
a) As a beginner, should I use NUnit or NUnitLite (which claims to be easier to use)?
b) Should I aim for running the tests on the mobile device or on the desktop (except for device-specific code of course)? Currently the desktop looks more appealing, especially for including the tests in automated builds...
c) How is the class that I want to test usually included in the test project? My application is a .EXE file, i.e. I cannot just reference it like a .DLL assembly from the test project (or can I? Never tried this ...). I checked various NUnit tutorials but either found no mention of this, or the tutorial suggested copying and pasting the class that I want to test into the test project (yuk!). Should I link to the original source code file in my test project? What about private methods, or dependencies on other classes?
d) Should I start modifying my original code to allow better testability, e.g. make private methods public, decouple etc.? This is a bit like refactoring before being able to test, which does not sound good to me ... Or is it better practice to not touch the original code at all in the beginning, even if this means less code coverage etc.?
e) Should I look into any other tools or addons that most people use?
Thanks in advance for any answers (I also appreciate answers if they are only to one or some of the above items).
First, I would recommend a good book on unit testing: Pragmatic Unit Testing in C#.
It will introduce you to NUnit, but what's more important, the author provides a lot of advice on how to write good unit tests in general. The xUnit test frameworks are not very complex and you'll get used to their API/workflow very quickly. The challenging part is the actual process of identifying boundary conditions, decreasing coupling and designing for testability. It's available as an eBook (PDF) or a printed copy.
Regarding your actual questions (the book will give you some answers, too):
#a) I have no experience with NUnitLite, so I cannot give you any advice on this point.
#b) Unit tests are supposed to be very local with regard to their dependencies. You aim to test classes independently of each other, so there is no need to deploy to a mobile device first. You won't run the full app, just test components in isolation. Hence, I would recommend using your desktop machine as the target for your unit test environment. You'll get better turn-around times, too.
#c) You have to reference the assembly that contains the classes you want to test in your test project. The test project will be an assembly itself (DLL). A test runner executes this assembly and uses the stored meta information to run the contained test cases.
#d) It depends a lot on the state and design of your software. But in general I would use a divide and conquer strategy: Introduce interfaces between classes and start to refactor step by step. Write unit tests before you start to change the implementation. The interfaces keep the contracts up and running, but you can change the underlying implementation if necessary. Don't make private methods public just to make them testable. Private methods are internal helpers of the class that support public methods in doing their job. Since you test your public methods you'll assert that your private methods do the right thing.
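To illustrate the step-by-step approach in #d, here is a minimal sketch (all names here are invented for illustration, not taken from the question's code): a class that reads the current time directly gets an interface introduced as a seam, so it becomes testable without making any private members public.

using System;

public interface IClock                       // the introduced seam (contract between classes)
{
    DateTime Now { get; }
}

public class SystemClock : IClock             // production implementation, unchanged behaviour
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class GreetingService
{
    private readonly IClock _clock;

    public GreetingService(IClock clock) { _clock = clock; }

    public string Greet()                     // public behaviour, exercised by the tests
    {
        return _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
    }
}

A test can then pass in a fake IClock that returns a fixed time, while production code keeps using SystemClock; the underlying implementation can be refactored freely as long as the contract holds.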
#e) A helpful add-in for Visual Studio is TestDriven.Net. It allows you to run NUnit tests directly from the IDE without switching to NUnit's GUI or console runner.
#c) I haven't tried it, but I would think Visual Studio would let you add a project reference from your test assembly to your actual code assembly, even if it's an exe.
As for private methods and such, generally I don't test private methods. In theory, all the private stuff should be used by public or internal methods, so testing those should indirectly test the privates.
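To make that concrete, here is a small sketch (the class is invented for illustration): the private helper is never called by the test directly, yet it is fully exercised through the public method that uses it.

using NUnit.Framework;

public class TemperatureConverter
{
    // Private helper: no test references it directly...
    private double CelsiusToFahrenheit(double c) { return c * 9.0 / 5.0 + 32.0; }

    // ...but it is covered indirectly by testing this public method.
    public string Describe(double celsius)
    {
        return CelsiusToFahrenheit(celsius) >= 100.0 ? "hot" : "ok";
    }
}

[TestFixture]
public class TemperatureConverterTests
{
    [Test]
    public void FortyDegreesCelsius_IsHot()
    {
        Assert.AreEqual("hot", new TemperatureConverter().Describe(40.0));
    }
}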
I do test public and internals though. One thing I find very helpful is the InternalsVisibleTo attribute:
[assembly:InternalsVisibleTo("MyTestAssembly")]
which you can use to make the internals of one assembly visible to another. I use this attribute to expose the internals to my test assembly, so that it can directly reference them.
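As a sketch of how that plays out end to end (the class and assembly names here are made up): the attribute lives in the production assembly, and the test assembly can then use the internal type directly.

// In the production assembly (e.g. in AssemblyInfo.cs):
using System.Runtime.CompilerServices;
[assembly: InternalsVisibleTo("MyTestAssembly")]

// An internal class in the production assembly:
internal class PriceCalculator
{
    internal decimal ApplyDiscount(decimal price) { return price * 0.9m; }
}

// In MyTestAssembly - the internal type is now directly usable:
using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Discount_IsTenPercent()
    {
        Assert.AreEqual(9m, new PriceCalculator().ApplyDiscount(10m));
    }
}

Note that if the production assembly is strong-named, the InternalsVisibleTo string must include the test assembly's full public key.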
I am using NUnit to test my C# code and have so far been keeping unit tests (fast running ones) and integration tests (longer running) separate, and in separate project files. I use NUnit for doing both the unit tests and the integration tests. I just noticed the category attribute that NUnit provides, so that tests can be categorized. This begs the question, should I mix them together and simply use the category attribute to distinguish between them?
if it is not too difficult to separate them, do so now
unit tests should be run early and often (e.g. every time you change something, before check-in, after check-in), and should complete in a short time-span.
integration tests should be run periodically (daily, for example) but may take significant time and resources to complete
therefore it is best to keep them separate
Separate them if possible, because integration tests normally take much longer than unit tests.
Maybe your project grows and you end up with a great many tests, all of which take a short amount of time - except the integration tests - and you want to run your unit tests as often as possible...
I find that using separate projects for unit tests and integration tests tends to create a few too many top-level artifacts in the projects. Even though we're doing TDD and all, I still think the code being developed deserves at least half of the top level of my project structure.
I don't think it really matters that much, but separating them sounds like a better idea, since isolation and automation will be easier. The category feature is nice, but not that good from a usability point of view.
The original motivation behind [Category] was to solve the problem you mention. It was also intended to create broader test suites but that is kind of what you are doing.
Do be careful with [Category]. Not all test runners support it the same way the NUnit GUI does (or did; I haven't upgraded in a while). In the past some runners would ignore the attribute if it was on the class itself, or just ignore it altogether. Most seem to work now.
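For reference, here is a minimal sketch of how the attribute is typically applied (test names are invented). Runners that support categories can then include or exclude them when launching a run - check your runner's documentation for the exact switch.

using NUnit.Framework;

[TestFixture]
public class OrderTests
{
    [Test]                              // fast unit test: no category, runs on every build
    public void Totals_AddUp()
    {
        Assert.AreEqual(4, 2 + 2);
    }

    [Test, Category("Integration")]     // long-running test: filtered out of the quick runs
    public void TalksToTheRealDatabase()
    {
        Assert.Ignore("Placeholder for a long-running integration test.");
    }
}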
I would stick with whatever method you're currently using. It's more of an opinion thing, and you wouldn't want to have to re-tool your whole testing setup.