In MVC/MVP-style applications that have Controller/Presenter classes in the Client Application assembly and a separate Services layer assembly containing the Service classes, where do people recommend storing their unit tests?
For ease of execution it would be nice to have a single test assembly that references both the Client and Services assemblies and contains tests for both. But is a dual-responsibility test assembly considered bad design?
Where are other people storing their unit test code?
It's pretty important that you keep unit test projects isolated to only one System Under Test (SUT). If required, you can have more than one unit test project that exercises the same SUT, but not the other way around.
The reason for this is that if you have a single unit test project that exercises multiple SUTs, you are artificially coupling those two SUTs together. This means that you will never be able to migrate one of the SUTs without the other.
In the case of a web application, you may find such concerns irrelevant if you don't plan to reuse any of your libraries, but imagine that you are unexpectedly asked to implement a web service based on the same business logic as the web site currently uses. Your service layer sounds like a good candidate for code reuse, so you may want to reuse it without the MVC-based UI.
The problem now is that you can't, because the single unit test project drags the unwanted UI project along with it.
Having a one-to-one relationship between SUTs and their unit tests just makes your life a lot easier.
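As an illustration (project and solution names are hypothetical), that one-to-one layout might look like:
MySolution
-MyApp.Client (the MVC/MVP UI, references MyApp.Services)
-MyApp.Services
-MyApp.Client.Tests (references MyApp.Client only)
-MyApp.Services.Tests (references MyApp.Services only)
If the service layer is later reused without the UI, MyApp.Services.Tests simply moves along with MyApp.Services.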
I usually create a unit test project per .NET assembly tested.
I have a more general question:
Assume I have a web application, for example one using the Struts2 framework.
It becomes quite complicated to write unit tests for its functions, as you have to mock every aspect of the framework:
the database connection, the session, an LDAP connection, or whatever else is needed, none of which I have written myself.
It would be much easier to write the tests so that they run in a web interface inside the base application, since all of these things would already exist there.
The question:
Would you guys still consider this unit testing?
Some thoughts...
The question is very general. My suggestion is that you still want to write some sort of unit tests, for a number of reasons. Firstly, you can run them as an automated test suite, so if something breaks you know quickly. Secondly, you get a better-designed system: your objects are loosely coupled. And you become more confident in the code you write.
If the framework makes your code harder to test:
a. Try abstracting away some dependencies, so that test doubles can be injected in place of the real instances (see the sketch below).
b. Use a mocking framework that can break apart tightly coupled dependencies.
It's hard to give a comprehensive answer, but this is the general direction I would suggest.
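As a rough sketch of point (a) in Java, with entirely hypothetical names: the framework-provided session is hidden behind a small interface, so a unit test can inject a fake instead of a real container-managed session.

// Narrow interface the action depends on, instead of the Struts2/servlet session API.
interface SessionStore {
    String get(String key);
}

class GreetingAction {
    private final SessionStore session;

    GreetingAction(SessionStore session) {
        this.session = session;
    }

    String execute() {
        // Pure logic; nothing here needs a running container.
        return "Hello, " + session.get("username");
    }
}

// In a unit test, a trivial fake replaces the real HTTP session:
// new GreetingAction(key -> "alice").execute() returns "Hello, alice"

In production code a thin adapter would implement SessionStore on top of the real session; that adapter is the only part you cannot unit test.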
You should consider what you really want to test first. A framework, by definition, uses the classes you provide to do some "magic". Do you want to test that already-tested "magic", or the business core of the app you wrote?
Also consider where to stop testing. You probably don't want to test the connection to the database (given what you wrote), so just mock it.
Keep in mind that you should test just one piece of functionality at a time; don't put, for example, the database connection and the LDAP connection in the same test, since that wouldn't be unit testing.
Also take a look at this tutorial: http://tutorials.jenkov.com/java-unit-testing/index.html
I would like to start doing more unit testing in my applications, but it seems to me that most of the stuff I do is just not suitable to be unit tested. I know how unit tests are supposed to work in textbook examples, but in real-world applications they do not seem of much use.
Some applications I write have very simple logic and complex interactions with things that are outside my control. For instance I would like to write a daemon which reacts to signals sent by some applications, and changes some user settings in the OS. I can see three difficulties:
first I have to be able to talk with the applications and be notified of their events;
then I need to interact with OS whenever I receive a signal, in order to change the appropriate user settings;
finally all of this should work as a daemon.
All these things are potentially delicate: I will have to browse possibly complex APIs and I may introduce bugs, say by misinterpreting some parameters. What can unit testing do for me? I can mock both the external application and the OS, and check that given a signal from the application, I will call the appropriate API method on the OS. This is... well, the trivial part of the application.
Actually most of the things I do involve interaction with databases, the filesystem or other applications, and these are the most delicate parts.
For another example, look at my build tool PHPmake. I would like to refactor it, as it is not very well written, but I am afraid to because I have no tests. So I would like to add some. The point is that the things that may be broken by refactoring may not be caught by unit tests:
One of the things to do is deciding which targets are to be built and which ones are already up to date, and this depends on the last modification times of the files. These times are actually changed by external processes when some build command is fired.
I want to be sure that the output of external processes is displayed correctly. Sometimes the build commands require some input, and that should also be handled correctly. But I do not know a priori which processes will be run; it may be anything.
Some logic is involved in pattern matching, and this may seem to be a testable part. But the functions that do the pattern matching use (in addition to their own logic) the PHP function glob, which works against the real filesystem. If I just mock a tree in place of the actual filesystem, glob will not work.
I could go on with more examples, but the point is the following: unless I have some delicate algorithms, most of what I do involves interaction with external resources, and this is not suitable for unit testing. More than that, this interaction is often the actual non-trivial part. Still, many people see unit testing as a basic tool. What am I missing? How can I learn to be a better tester?
I think you open a number of issues in your question.
Firstly, when your application integrates with external environments such as the OS, other threads, etc., you have to separate (1) the logic that is tied to the external environment from (2) your business code, that is, the stuff your application actually does. This is no different from how you would separate the GUI and the server side in an application (or web application).
Secondly, you ask if you should test simple logic. I'd say it depends. Often simple fetch/store functionality is nice to have tests for; it's the foundation of your application, hence it's important. For other very simple business code built upon that foundation, you may easily feel that you are wasting your time writing tests, and mostly you are :-)
Thirdly, refactoring an existing program and testing it in its existing state may be a problem. If your PHP program produces a set of files on the basis of some input, maybe that's your entry point for tests. Sure, the tests may be high-level, but it's an easy way to ensure that after the refactoring your program produces the same output. Hence, aim for higher-level tests in the starting phase of your refactoring efforts.
I'd like to recommend some literature, but I can only come up with one title: "Working Effectively with Legacy Code" by Michael Feathers. It's a good start. Another would be "xUnit Test Patterns: Refactoring Test Code" by Gerard Meszaros (although that book is much sloppier and FULL of copy-pasted text).
As regards your issue about existing code bases that aren't currently covered by tests in which you would like to start refactoring, I would suggest reading:
Working Effectively with Legacy Code by Michael Feathers.
That book gives you techniques on how to deal with the issues you might be facing with PHPMake. It provides ways to introduce seams for testing, where there previously weren't any.
Additionally, with code that touches, say, the file system, you can abstract the file system calls behind a thin wrapper using the Adapter pattern. The unit tests would then run against a fake implementation of the abstract interface that the wrapping class implements.
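A minimal sketch of that idea in Java (all names are illustrative, not from PHPMake):

// Production code depends on this interface instead of touching the filesystem directly.
interface FileSystem {
    boolean exists(String path);
    long lastModified(String path);
}

// Production implementation: a thin wrapper with no logic of its own.
class RealFileSystem implements FileSystem {
    public boolean exists(String path) { return new java.io.File(path).exists(); }
    public long lastModified(String path) { return new java.io.File(path).lastModified(); }
}

// Test implementation: an in-memory fake that the unit tests fully control.
class FakeFileSystem implements FileSystem {
    private final java.util.Map<String, Long> files = new java.util.HashMap<>();
    public void add(String path, long mtime) { files.put(path, mtime); }
    public boolean exists(String path) { return files.containsKey(path); }
    public long lastModified(String path) { return files.getOrDefault(path, 0L); }
}

The up-to-date logic can then be unit tested against FakeFileSystem, while RealFileSystem stays so thin that only integration tests need to touch it.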
At some point you get to a low enough level where a unit of code can't be isolated for unit testing, because it depends on library or API calls (such as in the production implementation of the wrapper). Once this happens, integration tests are really the only automated developer tests you can write.
I recommend this Google tech talk on unit testing.
The video boils down to:
Write your code so that it knows as little as possible about how it will be used. The fewer assumptions your code makes, the easier it is to test. Avoid complex logic in constructors, the use of singletons, static class members, and so on.
Isolate your code from the external world (comms, databases, real time), and make sure that your code only talks to your isolation layer. Otherwise, writing tests will be a nightmare in terms of 'fake environment' setup.
Unit tests should test stories; that is what we really understand and care about. Given a class with a method foo(), testFoo() is uninformative. They actually recommend test names like itShouldCloseConnectionEvenWhenExceptionThrown(). Ideally, your stories should cover enough functionality that you can rebuild the spec from the stories.
NOTE: the video and this post use Java as an example; however, the main points stand for any language.
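For instance, a story-named test in JUnit might look like this (a deliberately trivial example):

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class StackTest {
    // The name states the expected behaviour, not the method being exercised.
    @Test
    public void itShouldBeEmptyAfterTheLastElementIsPopped() {
        java.util.Deque<Integer> stack = new java.util.ArrayDeque<>();
        stack.push(42);
        stack.pop();
        assertTrue(stack.isEmpty());
    }
}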
"Unit tests" tests one unit of your code. No external tools should be involved. This seems to be complicated for your first app (without knowing to much about it ;)) but the phpMake is unit-testable - I'm sure ... because ant, gradle and maven are unit-testable too ;)!
But of course you can test your first application automated too. There are several different layers one could test an application.
So the task for you is to find an automated way to test your app - be it integration testing or whatever.
E.g. you could write shell scripts, which asserts some output! With that you make sure your application behaves correctly ...
Tests of interactions with external resources are integration tests, not unit tests.
Tests of your code to see how it would behave if particular external interactions had occurred can be unit tests. These should be done by writing your code to use dependency injection, and then, in the unit test, injecting mock objects as dependencies.
For example, consider a piece of code that adds the results of a call to one service to the results of a call to another service:
// IService1 and IService2 each declare a single method: int Call(int parameter).
public int AddResults(IService1 svc1, IService2 svc2, int parameter)
{
    return svc1.Call(parameter) + svc2.Call(parameter);
}
You can test this by passing in mock objects for the two services:
// Hand-rolled stubs: each fake service simply returns a constant.
private class Service1Returns1 : IService1
{
    public int Call(int parameter) { return 1; }
}

private class Service2Returns1 : IService2
{
    public int Call(int parameter) { return 1; }
}

[Test] // assuming NUnit; use the equivalent attribute in your framework
public void Test1And1()
{
    // Each stub contributes 1, so the sum must be 2 regardless of the parameter.
    Assert.AreEqual(2, AddResults(new Service1Returns1(), new Service2Returns1(), 0));
}
First of all, if unit testing doesn't seem like it would be much use in your applications, why do you even want to start doing more of it? What is motivating you to care about it? It is definitely a waste of time if a) you do everything perfectly the first time and nothing ever changes, or b) you decide it's a waste of time and do it poorly.
If you do think that you really want to do unit testing, the answer to your questions is the same in every case: encapsulation. In your daemon example, you could create an ApplicationEventObservationProxy with a very narrow interface that just implements pass-through methods. The purpose of this class is to do nothing but completely decouple the rest of your code from the third-party event-observing library (nothing means nothing: no logic here). Do the same thing for the OS settings. Then you can completely unit test the class that performs actions based on events. I'd also recommend having a separate class for the daemon-ness that just wraps your main class; it will make the testing easier.
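A rough sketch of that shape in Java (all names are hypothetical):

// A narrow pass-through interface that completely hides the OS settings API;
// a similar one would hide the third-party event library. The wrappers hold no logic.
interface OsSettings {
    void set(String key, String value);
}

// The class that is actually worth unit testing: given a signal,
// does it request the right settings change?
class SettingsUpdater {
    private final OsSettings os;

    SettingsUpdater(OsSettings os) {
        this.os = os;
    }

    void handle(String signal) {
        if ("night-mode".equals(signal)) {
            os.set("theme", "dark");
        }
    }
}

// A unit test hands SettingsUpdater a fake OsSettings that records calls,
// fires handle("night-mode"), and asserts that set("theme", "dark") was requested.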
There are a couple of benefits to this approach outside of unit testing. One is that if you encapsulate the code that interacts directly with the OS, it's easier to swap it out. This kind of code is particularly prone to breakage outside of your control (e.g., MS patch sets). You will also probably want to support more than one OS, and if the OS-specific logic is not tangled up with the rest of your logic, that will be easier. The other benefit is that you'll be forced to realize that there is more business logic in your app than you think. :)
Finally, don't forget that unit testing is a foundation for a good product, but not the only ingredient. Having a set of tests that explore and verify the OS API calls you'll be using is a good strategy for the "hard" parts of this problem. You should also have end-to-end tests that ensure the events in your applications actually cause the OS setting changes.
As other answers have suggested, Working Effectively with Legacy Code by Michael Feathers is a good read. If you have to deal with legacy code and want to make sure that the system's interactions work as expected, try writing integration tests first. Then it is more appropriate to write unit tests for the behaviour of methods that are valued from the requirements point of view. Your unit tests serve a whole different purpose than the integration tests: unit tests are more likely to improve the design of your system than to verify how everything hangs together.
I have a 3-tier application with data access, service and presentation tiers. Where should I place the unit tests? In each layer? Only in the presentation layer?
Some of this depends on your approach. If you've already got code in place and you're adding unit tests after the creation of the code, I'd start by adding some end-to-end acceptance tests. These will exercise all layers.
Then, as you go in to make changes to any part of the system, I'd start putting unit tests around the component being changed: first just testing the existing functionality, then moving on to adding unit tests for the new functionality. This works well if your code is loosely coupled and split into well-defined components. If not, you've got a lot of work to do.
Depending on how thin your data access layer is, I would be less inclined to write unit tests there, as errors should be picked up quite cost-effectively without them.
If you're starting fresh, read up on BDD/TDD and use that approach to ensure quality.
Typically you would use unit tests - assuming you think JUnit, NUnit, etc. - for small, isolated portions of your code, ideally single classes (it doesn't always work that way, and that's why someone invented mocks).
Of the three layers you mention, you can find this type of test in all three. With proper design (e.g. MVC) you can test even large portions of the presentation layer. When testing the service layer or the domain layer (= business logic), it helps to mock the data access layer. When testing the data access layer, try using an in-memory database for speed, or use an off-the-shelf ORM (Object-Relational Mapping) tool in the first place.
In general I'd recommend using unit tests, wherever they provide value. Most development teams are not at risk to write too many unit tests. But also be aware that unit tests are just one aspect and that other types of tests and/or tools might be a better fit.
I was about to answer "each layer, you can't have too much test code", but that's not necessarily true.
I would have to say that it depends on how tightly coupled the layers are. If you can get full coverage using only the top layer, which is ideal anyway, you would not have to create separate tests for the lower layers.
If not, for instance if the data access layer is a generic library that's only partially used for the moment, you would at least add tests in that layer to get the coverage up and be a bit more confident that you don't have problems there that will show up later.
Typically you will have all your unit tests in the middle tier.
You can certainly have integration tests in the DAL, and manual or automated UI testing of the presentation layer. If you find parts of those layers that can be unit tested, then by all means do so, although in that case you should also question whether that code belongs in those layers.
Where should I place the unit tests?
Frankly, everywhere.
If possible, every piece of code you write should have an automated test around it. If you're going the object oriented approach, every object should have a suite of unit tests. If your program is procedural, every function should have unit tests that exercise it.
Just make sure your unit tests run fast enough that other developers won't mind using them.
I'm developing with Grails. Since the framework will bootstrap data and a fully fleshed-out Spring context, I find that I write a lot of integration tests for services. Let me rephrase that: I find I write no unit tests for services, only integration tests. Is this a bad idea? The only downside I see is that my tests take a bit longer to run.
I do use unit testing on controllers, as in a controller I am testing the various application flows, result types, redirect logic, etc. But the majority of tests I write are integration tests. This seems a break from traditional J2EE testing, where mostly unit tests are written.
Edit: to be clear, I'm not writing integration tests because the code is so complicated that only integration tests will do. I'm writing integration tests because it is easier to test everything together, since the framework gives you so much. I do mock certain things: if a service collaborates with the Acegi authenticationService, I mock that. I also mock any interaction with a web service, because you have to in order to get a test that runs without special setup.
I clearly see a trend towards more functional tests and fewer unit tests, in particular for heavily connected components that are low on logic. If I implement a complicated algorithm in a particular class, I will usually write unit tests for it, but if the complexity is due to integration with other components (which is very frequently the case), unit tests are not worth the hassle.
In general, the important measure is code coverage.
It can be advantageous to do integration tests if the global interface is more stable (less subject to change).
You may be able to use a mocking framework to isolate different parts of your services for individual testing. Instead of relying on the massive Spring context, instantiate your service class directly and fill in the dependencies not directly relevant to the test with mocks. You may need a little refactoring to decouple your dependencies, but it will usually be for the best. This can lead to writing more and smaller tests instead of relying on a few large integration tests.
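As a sketch in plain Java with Mockito (the service and its collaborator are hypothetical; a Groovy/Spock test would have the same shape):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class OrderServiceTest {
    @Test
    public void appliesTheDiscountReportedByTheRateService() {
        // Instantiate the service directly; no Spring context is loaded.
        // Only the collaborator this test cares about is stubbed.
        RateService rates = mock(RateService.class);
        when(rates.discountFor("gold")).thenReturn(0.10);

        OrderService service = new OrderService(rates);

        assertEquals(90.0, service.priceFor("gold", 100.0), 0.001);
        verify(rates).discountFor("gold");
    }
}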
I am in a similar situation where many established tests are really integration tests, and it can be difficult, but not impossible, to change this going forward.
You should test as soon as possible, because you want errors to show up as soon as possible, ideally while the code is still under your own control. The later an error is discovered, the higher the cost of correcting it.
If you are exposing non-trivial logic, you should test the main decision paths of that logic with unit tests, and test the exposure of that logic and its interaction with external dependencies in integration tests.
If you are a lone developer or working in a small team, sometimes it's difficult to see the distinction between unit and integration tests. If you are in an environment where multiple teams are working on a single product, the assumption is that the logic inside your code has already been verified (unit test) and now you want to see how the modules play with each other (integration test).
It is not a good pattern to load too much into the integration tests, because you are deferring testing to a later phase of your project or development environment, and sooner or later you will find that you are discovering errors only after the code has left your desk. That will mean holding up the entire team, and possibly the product release, while you fix something you could and should have discovered earlier.
When writing unit tests, I usually have one test class per production class, so my hierarchy looks something like this:
src/main
-package1
-classA
-classB
-package2
-classC
src/test
-package1
-classATests
-classBTests
-package2
-classCTests
However, when doing integration tests, the organization becomes less rigid. For example, I may have a test class that tests classA and classB in conjunction. Where would you put it? What about a test class that tests classA, classB and classC together?
Also, integration tests usually require external properties or configuration files. Where do you place them and do you use any naming convention for them?
Our integration tests tend to be organised the same way our specifications are, and they tend to be grouped by category and/or feature.
I'd concur with f4's answer. Tests at that level (higher than unit tests) usually have no correlation with particular classes; your tests should stick to the project requirements and specifications.
In case you really need to develop a testing project tailored to your test requirements, I'd recommend the following approach: a separate project with packages per requirement or user story (depending on your approach to managing requirements).
For example:
src/itest
-package1 - corresponds to story#1
-classA - test case1
-classB - test case2
-package2 - corresponds to story#2
-classC - test case2
-packageData - your common test data and utilities
However, keep in mind that integration or system-level testing is usually a complicated task, and its scope can easily be broader than a testing software project can cover. You should be ready to consider third-party test automation tools, because at the level of integration or system tests they are often a more efficient approach than developing a tailored testing package.
Maybe create an integration tests directory under src/test? Sure, for integration tests the separation becomes less clear, but there's something that groups A, B and C together, no? I'd start with this and see how things go. It's tough to come up with a perfect solution right away, and an "OK" solution is better than no solution.
It seems that your integration tests are higher-level unit tests, since you still bind them to one or more classes. Try to pick the class that (transitively) depends on all the others in the group, and associate the test with that class.
If you have true integration tests, then association with concrete classes is of little value. Such tests are classified by application subject area (domain) and by type of functionality. For example, the domains might be orders, shipments, invoices, entitlements, etc., and the functionality types transactional, web, messaging, batch, etc. Their permutations would give you a nice first cut of how to organize the integration tests.
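For example, such a first cut (directory names are purely illustrative) might look like:
src/itest
-orders
 -transactional
 -web
-shipments
 -messaging
 -batch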
I have found that when doing TDD there is not always a 1:1 relationship between classes and unit tests. If you insist on one, you will have a hard time refactoring. In fact, after some refactoring I usually end up with about 50% 1:1 couplings and 50% tests that link to several classes, or clusters of tests that link to a single class.
Integration tests happen if you try to prove that something is or isn't working. This happens either when you're worried because you need to deliver something, or if you find a bug. Trying to get full coverage from integration tests is a bad idea (to put it mildly).
The most important thing is that a test needs to tell a story. In BDD'ish terms: given some context, when doing this, that should happen. The tests should be examples of how you intend people to use the unit, API, application, service, ...
The granularity and organisation of your tests will follow from your storyline; they should not be designed with simplistic rules up front.