Should model classes be tested?
Model classes mostly do not contain logic, but they usually have some initialization logic in the constructor.
These classes are also shared, so it is very important that they initialize correctly.
What is the standard practice here: should model classes be tested, or do the classes that make use of them have the responsibility to test them?
If you care whether code works or not, it should be tested. That applies to all code.
Given that, if your model code is getting exercised when you run other tests, it may not be necessary to have tests specific to your models. However, tests are relatively cheap to write and potentially pay off big dividends, so there's no reason not to write a few quick tests for your models.
The point is that model classes should contain all the logic, according to the Model-View-Controller design pattern. Remember -
Thinner views, thin controllers, fat models.
Tutorial
So, by that principle, all business logic should be contained in the models, and therefore models should be thoroughly tested.
Besides, data is the most important component of any web project, so ensuring that models have sufficient validations and don't let junk data into the database is crucial.
That's what MVC says. Although I agree that trying to fit everything into MVC constructs is a very common antipattern, an even better approach would be to keep business logic that doesn't fall into any MVC construct in distinct classes, with the models still encapsulating them.
Besides, as far as testing in general is concerned, I believe any working piece of code should have at least a modest test suite. Tests are working specifications of the code, or rather they should be. They guide someone else on what your code is doing and how to change it without breaking anything.
Note: don't test your libraries, though, i.e. don't test Django or MongoEngine code.
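For instance, here is a minimal sketch of what such a model test could look like in Django (the Customer model, its fields, and the full_name() helper are hypothetical, purely for illustration):

```python
# Hypothetical example: myapp.models.Customer with first_name, last_name,
# a required email field, and a full_name() helper.
from django.core.exceptions import ValidationError
from django.test import TestCase

from myapp.models import Customer  # assumed app and model, not from the original post


class CustomerModelTest(TestCase):
    def test_full_name_combines_first_and_last_name(self):
        customer = Customer(first_name="Ada", last_name="Lovelace",
                            email="ada@example.com")
        self.assertEqual(customer.full_name(), "Ada Lovelace")

    def test_blank_email_does_not_validate(self):
        customer = Customer(first_name="Ada", last_name="Lovelace", email="")
        # full_clean() runs the field and model validators without saving.
        with self.assertRaises(ValidationError):
            customer.full_clean()
```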
No.
Such tests provide little to no value, as you will be creating model instances in other tests for the components that work with or use those models. As such, you don't need dedicated model tests because the models will be tested indirectly many, many times.
Related
I've recently started practising TDD and unit testing, with my main primers being the excellent GOOSGBT and a perusal of TDD-tagged questions here on SO.
Occasionally, the process I use creates a "controller" class - generally, a class which is a facade over a fairly complex subsystem where, as the number of features implemented in the subsystem grows, responsibilities are continually driven out into helper classes until the original class has essentially no responsibilities beyond making correct calls to a small set of collaborator classes and shunting the returned information (if any) to its other collaborator classes.
Originally, the tests for the soon-to-be controller classes were written at the level of intention of end-users of the class: "If I make this call, what should be the observable effects that I, as an end-user of the class, actually care about?". But as more and more responsibilities and tests for edge-cases were driven out into helper classes (which are replaced by Test Doubles in the tests for the controller class), these tests began to seem really ... vague and non-specific: they were invariably "happy-path" tests that didn't really seem to get to the heart of the matter. It's hard to explain what I mean, but reading the tests back left me with a kind of "So what? Why did you choose this particular happy-path test over any other? What is the significance? If someone breaks the code, will this test pinpoint the exact reason why the code is now broken?" As time went by, I was more and more strongly inclined to instead write the tests in terms of how the classes' collaborators were supposed to be used together: "the class should call this method on this collaborator, and pass the result to this other collaborator" which gave a much more focussed, descriptive and clearly-motivated set of tests (and generally, the resulting set of tests is small).
This obviously has its flaws: the tests are now strongly coupled to the specifics of the implementation of the controller class, rather than the much more flexible "what would an end-user of this class see that he would care about?". But really, the tests are already quite coupled to it by virtue of the fact that they must configure all of the Test Double collaborators to behave in the exact way required by the implementation to give the correct results from an end-user of the classes' point of view.
So my question is: do fellow TDD'ers find that a minority of classes do little but marshal their (small) set of collaborators around? Do you find that keeping the tests for such classes written from an end-user of the classes' point of view is imprecise and unsatisfactory, and if so, is it acceptable to write tests for such classes explicitly in terms of how they call and transfer data between their collaborators?
Hope it's reasonably clear what I'm driving at, here! :)
As a concrete example: one practice project I was working on was a TV listings downloader/viewer (if you've ever seen "Digiguide", you'll know the kind of thing I mean), and I was implementing a core part of the app - the part that actually updates the listings over the net and integrates the newly downloaded listings into the current set of known TV programs. The interface to this (surprisingly complex when all requirements are taken on board) functionality was a class called ListingsUpdater, which had a method called "updateListings".
Now, end-users of ListingsUpdater only really care about a few things: after updateListings has been called, is the database of TV listings now correct, and were the changes made to the database (adding TV programs, changing them if broadcast changes occurred etc) described to the provided change listeners? When the implementation was a very, very simple "fake it till you make it" type of deal, this worked fine: but as I progressively drove the implementation towards one that would work in the real world, the "real work" got driven further and further away from ListingsUpdater, until it mainly just marshalled a few collaborators: a ListingsRequestPreparer for assessing the current state of the listings and building HTTP requests for a ListingsDownloader, and a ListingsIntegrator which unpacked the newly downloaded listings and incorporated them (it too delegating to collaborators) into the listings database. Now, note that in order to fulfil the contract of ListingsUpdater from a user's point of view, I must, in the test, instruct its ListingsIntegrator Test Double to populate the (fake) database with the correct data(!), which seems bizarre. It seems much more sensible to drop the "from the end-user of ListingsUpdater's point of view" tests and instead add a test that says "when the ListingsDownloader has downloaded the new listings, ensure they are handed over to the ListingsIntegrator".
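For what it's worth, here is a rough Python sketch of the kind of collaborator-focused test I mean (the classes are the ones named above, stripped down to the bare minimum; the method names are made up):

```python
# Bare-bones sketch: the controller only marshals its collaborators,
# and the test pins down the hand-offs between them with unittest.mock.
from unittest.mock import Mock


class ListingsUpdater:
    """Simplified stand-in for the real class in the example above."""

    def __init__(self, preparer, downloader, integrator):
        self._preparer = preparer
        self._downloader = downloader
        self._integrator = integrator

    def update_listings(self):
        request = self._preparer.prepare_request()
        listings = self._downloader.download(request)
        self._integrator.integrate(listings)


def test_downloaded_listings_are_handed_to_the_integrator():
    preparer = Mock(name="ListingsRequestPreparer")
    downloader = Mock(name="ListingsDownloader")
    integrator = Mock(name="ListingsIntegrator")

    downloaded = object()  # stand-in for the freshly downloaded listings
    preparer.prepare_request.return_value = "request"
    downloader.download.return_value = downloaded

    ListingsUpdater(preparer, downloader, integrator).update_listings()

    # The test describes the collaboration, not the end-user-visible outcome.
    downloader.download.assert_called_once_with("request")
    integrator.integrate.assert_called_once_with(downloaded)
```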
I'll repeat what I said in answer to another question:
I need to create either a mock a stub or a dummy object [a test double] for each dependency
This is commonly stated. But I think it is wrong. If a Car is associated with an Engine object, why not use a real Engine object when unit testing your Car class?
But, someone will declare, if you do that you are not unit testing your code; your test depends on both the Car class and the Engine class: two units, so an integration test rather than a unit test. But do those people mock the String class too? Or HashSet<String>? Of course not. The line between unit and integration testing is not so clear.
More philosophically, you can not create good mock objects [test doubles] in many cases. The reason is that, for most methods, the manner in which an object delegates to associated objects is undefined. Whether it does delegate, and how, is left by the contract as an implementation detail. The only requirement is that, on delegating, the method satisfies the preconditions of its delegate. In such a situation, only a fully functional (non-mock) delegate will do. If the real object checks its preconditions, failure to satisfy a precondition on delegating will cause a test failure. And debugging that test failure will be easy.
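As a small illustration of that point in Python (Car and Engine are invented here), the Car test uses a real Engine rather than a test double, and the Engine's own precondition check is what would catch a bad hand-off:

```python
class Engine:
    def __init__(self, horsepower):
        if horsepower <= 0:                       # precondition check
            raise ValueError("horsepower must be positive")
        self.horsepower = horsepower
        self.running = False

    def start(self):
        self.running = True


class Car:
    def __init__(self, engine):
        self.engine = engine

    def start(self):
        self.engine.start()
        return self.engine.running


def test_starting_the_car_starts_its_engine():
    # No mock: Engine is cheap to construct and enforces its own preconditions,
    # so a failure here points straight at the class that broke the contract.
    car = Car(Engine(horsepower=90))
    assert car.start() is True
```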
And I'll add in response to
they were invariably "happy-path" tests that didn't really seem to get to the heart of the matter
This is a more general testing problem, not specific to TDD or unit testing: how do you select a good set of test cases, given that comprehensive testing is impossible? I rely on equivalence partitioning. When I start work on some code, I use equivalence partitioning to select the set of test cases I want the code to pass, then work on each in turn in a TDD manner; if passing one of the test cases does not require a code change (because earlier work has already produced code that satisfies it), I still add the test case to my test suite. My test suite therefore has better coverage of potential error paths.
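As a toy illustration of that selection process (the shipping_cost function and its pricing bands are invented purely for the example), I pick one representative test case per partition plus the boundaries between them:

```python
import pytest


def shipping_cost(weight_kg):
    """Invented pricing bands: light, medium and heavy parcels."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5
    if weight_kg <= 10:
        return 9
    return 20


@pytest.mark.parametrize("weight, expected", [
    (0.5, 5),   # partition: light parcel
    (1, 5),     # boundary between light and medium
    (5, 9),     # partition: medium parcel
    (10, 9),    # boundary between medium and heavy
    (25, 20),   # partition: heavy parcel
])
def test_shipping_cost_per_partition(weight, expected):
    assert shipping_cost(weight) == expected


def test_non_positive_weight_is_rejected():
    with pytest.raises(ValueError):
        shipping_cost(0)
```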
I'm starting a new project and I want to use unit testing.
So I wrote my service classes, which implement interfaces and take interfaces as their parameters, so that I can easily mock these classes.
My question: there is absolutely no code in my business classes (like Customer)!
Is this normal? Is it normal even without unit tests? What kind of code would you put in a class like Customer?
No, it doesn't sound normal to me - unless you are at the very beginning of your project and Customer is as yet just a skeleton, and you know it will get more functionality over time.
Otherwise it may be a sign of a design issue, such as an anemic domain model.
It is not the unit tests' fault. Unit tests don't in any way force you to create dumb classes without real functionality.
I don't know if normal is the right word here, I'd rather say that the situation you have found yourself in is very common.
I see this happen most often with people starting in on Domain-Driven Design and also when people use design patterns such as MVVM - all the logic falls into services and controllers and managers (which are themselves a smell, IMO), and the core domain model becomes a very anaemic set of DTOs.
What I would suggest is returning to your object modelling and looking at your services and seeing where you have removed logic from your Customer object which is actually a core concern of the customer. That is - what does the customer object do? Some of this will belong in external services, but there will also be key processes which are the domain of the customer.
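As a rough sketch in Python (Customer, Order and the specific rules are invented), the idea is that behaviour which is genuinely the customer's concern moves back onto the class instead of living in a CustomerService:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Order:
    total: float


@dataclass
class Customer:
    name: str
    signup_date: date
    orders: list = field(default_factory=list)

    def lifetime_value(self) -> float:
        # Behaviour that belongs to the customer, not to a service class.
        return sum(order.total for order in self.orders)

    def is_long_standing(self, today: date, years: int = 2) -> bool:
        return (today - self.signup_date).days >= years * 365


# Such methods are also trivial to unit test in isolation:
customer = Customer("Ada", date(2020, 1, 1), [Order(10.0), Order(15.5)])
assert customer.lifetime_value() == 25.5
assert customer.is_long_standing(today=date(2023, 1, 1))
```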
When you design cleanly, there may be cases where some classes are just aggregates of data. This is part of the MVC pattern, where the models should not contain much logic. However, if you have absolutely no code in your classes, there is something seriously wrong.
To me it sounds like you are attempting some kind of dependency injection, but you are injecting not only the dependencies but everything else as well. This is taking the pattern too far, to the point where it becomes its own anti-pattern.
I have a project I am trying to learn unit testing and TDD practices with. I'm finding that I'm running into quite confusing cases where I am spending a long time setting up mocks for a utility class that's used practically everywhere.
From what I've read about unit testing, if I am testing MyClass, I should be mocking any other functionality (such as provided by UtilityClass). Is it acceptable (assuming that UtilityClass itself has a comprehensive set of tests) to just use the UtilityClass rather than setting up mocks for all the different test cases?
Edit: One of the things I am making a lot of setup for.
I am modelling a map, with different objects in different locations. One of the common methods on my utility class is GetDistanceBetween. I am testing methods that have effects on things depending on their individual properties, so for example a test that selects all objects within 5 units of a point and an age over 3 will need several tests (gets old objects in range, ignores old objects out of range, ignores young objects in range, works correctly with multiples of each case) and all of those tests need setup of the GetDistanceBetween method. Multiply that out by every method that uses GetDistanceBetween (almost every one) and the different results that the method should return in different circumstances, and it gets to be a lot of setup.
I can see as I develop this further, there may be more utility class calls, large numbers of objects and a lot of setup on those mock utility classes.
The rule is not "mock everything" but "make tests simple". Mocking should be used if
You can't create an instance with reasonable effort (read: you need a single method call but to create the instance, you need a working database, a DB connection, and five other classes).
Creation of the additional classes is expensive.
The additional classes return unstable values (like the current time or primary keys from a database).
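A small sketch of that third case in Python (Shop and its opening hours are invented): the current time is the unstable value, so the clock is the one collaborator worth replacing with a mock.

```python
from datetime import datetime
from unittest.mock import Mock


class Shop:
    def __init__(self, clock=datetime):
        self._clock = clock             # the unstable dependency, made injectable

    def is_open(self):
        return 9 <= self._clock.now().hour < 17


def test_shop_is_open_mid_morning():
    clock = Mock()
    clock.now.return_value = datetime(2024, 1, 8, 10, 30)
    assert Shop(clock).is_open() is True


def test_shop_is_closed_at_night():
    clock = Mock()
    clock.now.return_value = datetime(2024, 1, 8, 23, 0)
    assert Shop(clock).is_open() is False
```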
TDD isn't really about testing. Its main benefit is to help you design clean, easy-to-use code that other people can understand and change. If its main benefit was to test then you would be able to write tests after your code, rather than before, with much of the same effect.
If you can, I recommend you stop thinking of them as "unit tests". Instead, think of your tests as examples of how you can use your code, together with descriptions of its behaviour which show why your code is valuable.
As part of that behaviour, your class may want to use some collaborating classes. You can mock these out.
If your utility classes are a core part of your class's behaviour, and your class has no value or its behaviour makes no sense without them, then don't mock them out.
Aaron Digulla's answer is pretty good; I'd rephrase each of his answers according to these principles as:
The behaviour of the collaborating class is complex and independent of the behaviour of the class you're interested in.
Creation of the collaborating class is not a valuable aspect of your class and does not need to be part of your class's responsibility.
The collaborating class provides context which changes the behaviour of your class, and therefore plays into the examples of how you can use it and what kind of behaviour you might expect.
Hope that makes sense! If you liked it, take a look at BDD which uses this kind of vocabulary far more than "test".
In theory you should try to mock all dependencies, but in reality it's never possible. E.g. you are not going to mock the basic classes from the standard library. In your case if the utility class just contains some basic helper methods I think I wouldn't bother to mock it.
If it's more complicated than that, or it connects to some external resources, you have to mock it. You could consider creating a dedicated mock builder class that creates a standard mock for you (with some standard stubs defined, etc.), so that you can avoid duplicating mocking code across all your test classes.
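A minimal sketch of that mock-builder idea with Python's unittest.mock (UtilityClass and get_distance_between stand in for the utility described in the question): one helper builds a standard mock with default stubs, so each test only spells out the distances it actually cares about.

```python
from unittest.mock import create_autospec


class UtilityClass:
    """Stand-in for the real utility class from the question."""

    def get_distance_between(self, a, b):
        raise NotImplementedError


def make_utility_mock(default_distance=1.0, overrides=None):
    """Build a UtilityClass mock with standard stubs already defined."""
    utility = create_autospec(UtilityClass, instance=True)
    overrides = overrides or {}

    def fake_distance(a, b):
        # Specific pairs can be pinned per test; everything else
        # falls back to the default distance.
        return overrides.get((a, b), default_distance)

    utility.get_distance_between.side_effect = fake_distance
    return utility


# In a test, only the interesting distances need to be set up:
utility = make_utility_mock(default_distance=10.0,
                            overrides={("dog", "bone"): 2.0})
assert utility.get_distance_between("dog", "bone") == 2.0
assert utility.get_distance_between("dog", "cat") == 10.0
```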
No, it is not acceptable, because you are no longer testing the class in isolation, which is one of the most important aspects of a unit test. You are testing it together with its dependency on this utility, even if the utility has its own set of tests. To simplify the creation of mock objects you could use a mocking framework. Here are some popular choices:
Rhino Mocks
Moq
NSubstitute
Of course if this utility class is private and can only be used within the scope of the class under test then you don't need to mock it.
Yes, it is acceptable. What's important is to have the UtilityClass thoroughly unit tested and to be able to differentiate if a test is failing because of the Class under test or because of the UtilityClass.
Testing a class in isolation means testing it in a controlled environment, an environment in which you control how the objects behave.
Having to create too many objects in a test setup is a sign that the environment is getting too large and thus is not controlled enough. That is when it is time to resort to mock objects.
All the previous answers are very good and really match my point of view about static utility classes and mocking.
You have two types of utility classes: your own classes that you write, and third-party utility classes.
As the purpose of a utility class is to provide a small set of helper methods, both your own utility classes and any third-party utility classes should be very well tested.
First case: the condition for using your own utility class (even a static one) without mocking is that it comes with a set of valid unit tests.
Second case: if you use a third-party utility library, you should have enough confidence in it. Most of the time, those libraries are well tested and well maintained, and you can use them without mocking their methods.
What unit tests generally tend to be hard to write and why? I am particularly interested in methods which don't need mocking.
Thanks
Two cases where unit testing is made difficult:
Methods that invoke static methods that belong to other classes, particularly when those other classes have static state, or do significant work. Being stuck trying to "unit" test a method that, through transitive closure, does database queries can suck.
Methods that create instances of other classes directly (i.e., via new), particularly when the constructor of the other class itself requires static state, or does significant work.
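A contrived Python sketch of that second case (SmtpMailer and the notifier classes are invented): the first notifier constructs its own collaborator, so a test has to live with whatever that constructor does; the second accepts it, so a throwaway fake is enough.

```python
class SmtpMailer:
    """Imagine this opens a real network connection when used."""

    def send(self, to, body):
        raise RuntimeError("talks to a real SMTP server")


class HardToTestNotifier:
    def __init__(self):
        self.mailer = SmtpMailer()      # created directly: cannot be swapped out

    def notify(self, user):
        self.mailer.send(user, "hello")


class TestableNotifier:
    def __init__(self, mailer):
        self.mailer = mailer            # injected: a fake will do in tests

    def notify(self, user):
        self.mailer.send(user, "hello")


class FakeMailer:
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))


def test_notify_sends_exactly_one_mail():
    mailer = FakeMailer()
    TestableNotifier(mailer).notify("alice")
    assert mailer.sent == [("alice", "hello")]
```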
A great A to Z guide of testability concerns with side by side code examples of easy/hard to test code can be found in Misko's extensive testability guide.
Click on the "flaw #x" links (they look like plain text but they're separate links).
Big, complex methods that do lots of things at the same time that really should've been separated. (example: get something from a configuration object, create a URL based on some variables, encode the URL, send a request, do something with the response... you get the drill).
Everything static. Things created with New, although I haven't found a proper way to avoid it without spamming the entire application with factories.
It's almost always about dependencies.
Most code depends on external systems such as databases, file systems, email clients, networks, etc. It's also common to have dependencies on major internal systems (e.g., the spell-checking module, or the recalc engine...).
If these dependences are not easily substitutable, then the system becomes hard to test.
Classes that call statics and singletons are the worst offenders, but any class that doesn't accept its dependencies via constructor or properties will be hard to test.
There are some legitimate situations that are hard to test:
Concurrency
User Interface - this is why the trend is towards MVC architectures that create ViewModels which can be easily tested. The actual rendering is minimized - this is called the humble dialog or humble object pattern in the test literature.
I realize there have already been a number of posts on n-tier design, and this could possibly be me overthinking things and going round in circles, but I have myself all confused now and would like to get some clarity from the community, please.
I am trying to separate a project I created (and didn't design architecturally very well to start with) into different layers, each in their own project:
UI
Business Objects
Logic / Business
DAL
The UI should only call the Logic layer to get its stuff
The Business Objects should not call or have references to anything else, just be a way of storing the data
The Logic / BUSINESS layer should hold all of the methods to get, create, update, delete (CRUD) objects in the system and would have references to both the BO and the DAL. It would apply the business logic to the operations then delegate the actual CRUD to the DAL.
The DAL would just do the CRUD operations on the DB. It would have a reference to the BO's as it would return them for the Gets etc.
My question is: should the Logic classes only call their own equivalent DAL class, and go through other Logic classes for everything else? In other words, the CompanyLogic class should only call the CompanyDAL class, so if it wanted to get a Client object by ID it would call ClientLogic.GetClientByID(int) rather than ClientDAL.GetClientByID(int).
The reasons I thought it should maybe stay within its own layer were that:
It would seem to loosen the coupling between projects
What about the logic? If getting a client object involved some validation logic, going through the Logic class would keep that in one place (possibly not the best example, but I hope it gets the point across).
EDIT:
I am not sure if this is bad design on my part, but at the moment the BUSINESS layer has a number of classes, including ClientBULL and CompanyBULL, and the two classes call one another. I use an interface for each class and have a factory to build the objects to try to reduce coupling, but they cannot exist without each other now because each calls methods on the other. Is this a bad idea?
Well, here are my comments on your design:
Logic is a bad name for what essentially is a layer assigned to abstract persistence. I would probably call it "Repository" or "Persistence" or DAO (data access objects) instead of "Logic", which is ambiguous and could absolutely mean anything.
If you really want to decouple your business layer from your DAL, your Logic layer should only accept interfaces to the DAL, and not concrete DAL classes (see the sketch after these points).
There are two schools of thought as to where validation should reside. Some are completely fine with validation sitting at the UI layer; others would rather throw exceptions or pass messages from the business layer. Whichever way you go, just be consistent, don't duplicate validations in multiple places, and you'll be fine.
"Go ahead and try coding it" would probably be the best piece of advice I could give you. It's all well and fine thinking it through, but at some point you'll need to see it while you're coding it, and only then will subtle quirks and pitfalls reveal themselves. Whatever prototypes you can come up with will definitely be valuable to the direction your development and design take.
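To make the second point concrete, here is a small sketch in Python (the original context is .NET, but the shape carries over; ClientLogic and the DAL names are borrowed from the question, everything else is invented): the logic layer depends only on an abstract DAL, so a fake can be injected in tests.

```python
from abc import ABC, abstractmethod


class ClientDal(ABC):
    """Interface the logic layer depends on."""

    @abstractmethod
    def get_client_by_id(self, client_id: int):
        ...


class SqlClientDal(ClientDal):
    def get_client_by_id(self, client_id: int):
        raise NotImplementedError("real database access lives here")


class ClientLogic:
    def __init__(self, dal: ClientDal):
        self._dal = dal                       # only the interface is known here

    def get_client_by_id(self, client_id: int):
        if client_id <= 0:                    # business-layer validation
            raise ValueError("client_id must be positive")
        return self._dal.get_client_by_id(client_id)


class FakeClientDal(ClientDal):
    def get_client_by_id(self, client_id: int):
        return {"id": client_id, "name": "Test Client"}


# In tests (or any other composition root) the concrete DAL is swappable:
logic = ClientLogic(FakeClientDal())
assert logic.get_client_by_id(42)["name"] == "Test Client"
```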
Good luck!
Update
Re your edit: within the same namespace or assembly, calls to concrete classes are definitely fine. I think it would be overly convoluted for you to put up interfaces for business logic -- I mean, is there more than one set of rules you should follow?
I'm a believer of keeping things simple and following YAGNI. Don't make an interface until there are more than two classes that are going to implement/already implementing that interface (the DAL is always an exception to this though).