I have this CacheManager class which keeps a static dictionary with all sorts of cached data. However, because this dictionary is static, it gets filled up with data from the other unit tests. This keeps me from unit testing whether the CacheManager is empty on init, and it breaks the principle that unit tests should run in isolation.
Any ideas how to create a proper unit test for this?
Code
public class CacheManager
{
    private static readonly Dictionary<ICacheKey, ListCacheItem> cacheEntries =
        new Dictionary<ICacheKey, ListCacheItem>();

    public static Dictionary<ICacheKey, ListCacheItem> CacheEntries
    {
        get
        {
            lock (cacheEntries)
            {
                return cacheEntries;
            }
        }
    }
}
Generally, this is not a good idea from a testing perspective. By making the members of CacheManager static, you will never be able to isolate it in such a way as to make it nice to unit test.
Perhaps a better solution is the Singleton Pattern. To do this, get rid of the static modifiers on CacheManager's members. Then you can have one static instance in your app that is used by everyone else. Therefore, in your unit test, you could create a new instance of the class that you can test in isolation, but still have the desired functionality.
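For illustration, here is a minimal sketch of that refactoring, reusing the asker's types (ICacheKey, ListCacheItem); the locking detail is kept as-is and the rest of the wiring is an assumption, not the asker's actual code:

public class CacheManager
{
    // One shared instance for production code; tests simply new up their own.
    public static CacheManager Instance { get; } = new CacheManager();

    private readonly Dictionary<ICacheKey, ListCacheItem> cacheEntries =
        new Dictionary<ICacheKey, ListCacheItem>();

    public Dictionary<ICacheKey, ListCacheItem> CacheEntries
    {
        get { lock (cacheEntries) { return cacheEntries; } }
    }
}

A test can then verify the empty-on-init behavior in isolation: var manager = new CacheManager(); Assert.That(manager.CacheEntries, Is.Empty);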
Short answer: you cannot do it properly. Unit testing and statics do not play well together; you will (almost) always run into problems like the one you mentioned.
Longer answer: the best solution would be to refactor your code. Even if you need the singleton behavior you have several options (e.g. dependency injection). David's recommendation is of course also an option that would at least let you test your cache, but you may still have problems when you want to test the rest of the system.
If for some reason you want to stick to your current design you can still have some (not necessarily nice) workarounds. Some examples:
The easiest might be to add a "cleanCache" method. In some situations it might even be useful for the rest of the system, and each of your tests could also call it as the first step (in setup/beforeTest or similar methods); a sketch follows below.
You could also play with visibility and let your tests do cleanup that is not allowed for the rest of the code.
These hacks will probably work as long as you do not run your tests in parallel.
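As a rough sketch of the clear-cache workaround (the method name and the NUnit usage are assumptions):

// Added inside CacheManager; might even be useful to the rest of the system.
public static void Clear()
{
    lock (cacheEntries)
    {
        cacheEntries.Clear();
    }
}

// In each test fixture:
[SetUp]
public void ResetCache()
{
    CacheManager.Clear();
}

As noted above, this only stays safe while the tests run sequentially.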
Related
I have a class that looks similar to this:
public class DiscoveryDocumentCache
{
    static DiscoveryDocumentCache()
    {
        cache = new MemoryCache(discoveryDocumentCacheName);
    }

    // Declared here so the snippet is self-contained; the actual name
    // is not shown in the original post.
    private const string discoveryDocumentCacheName = "DiscoveryDocumentCache";

    private static MemoryCache cache;

    // Other stuff...
}
I want the memory cache to be static so that it is shared across all instances of the class.
It is working great for the actual use cases. Where I am struggling is with my unit tests. Because the cache variable is static, it accumulates all the different values that get put in it.
My use case does not have a "remove" option. (They just expire out of the cache.)
I have added this to my DiscoveryDocumentCache class:
public void __UnitTesting__CleanUpMemoryCacheBetweenTests()
{
    cache.Dispose();
    cache = new MemoryCache(discoveryDocumentCacheName);
}
But it feels dirty to add a method that is just to facilitate unit testing.
.NET Framework had some ways to access a class's private variables if you really thought you needed to. Is there something similar I can do to reset this static variable in my TearDown unit test method?
Not sure if you'll think this is an answer, or just a long comment. :-)
Your tests are working well because they are telling you something. They're saying "This is a bad use of a static instance."
Statics are always suspicious, although not always bad. What's bad here is that you're using a static to implement the singleton pattern in a way that is subsequently out of your control. That is, your individual instances are deciding for themselves which cache (the singleton) to use, rather than being told what to use.
As #Vaccano suggests, you are better off using some sort of dependency injection. Translate that to "Tell each object what cache to use rather than having it figure that out for itself." This is one of those things that requires a bit of up-front investment in order to save trouble later.
As an aside... there is nothing wrong with changing your application to make it more testable. Testability is just another of those "ilities" that define good software design. The trick is to change it so as to improve the software, usually simplifying it rather than complicating it.
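To make "tell each object what cache to use" concrete, here is a hedged sketch of constructor injection for the class above; the wiring shown is invented for illustration:

using System.Runtime.Caching;

public class DiscoveryDocumentCache
{
    private readonly MemoryCache cache;

    // The caller decides which cache to use: production code passes the
    // shared instance, each test passes a fresh one and disposes it after.
    public DiscoveryDocumentCache(MemoryCache cache)
    {
        this.cache = cache;
    }

    // Other stuff...
}

// Production wiring (composition root):
// var shared = new MemoryCache("DiscoveryDocuments");
// var docCache = new DiscoveryDocumentCache(shared);

The sharing still happens, but at the composition root, where it is under your control, rather than hidden inside the class itself.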
Consider unit testing a dictionary object. The first unit tests you might write are a few that simply add items to the dictionary and check for exceptions. The next tests may be something like testing that the count is accurate, or that the dictionary returns a correct list of keys or values.
However, each of these later cases requires that the dictionary can first reliably add items. If the tests which add items fail, we have no idea whether our later tests fail because what they're testing is implemented incorrectly, or because the assumption that we can reliably add items is incorrect.
Can I declare a set of unit tests which cause a given unit test to be inconclusive if any of them fail? If not, how should I best work around this? Have I set up my unit tests wrong, that I'm running into this predicament?
It's not as hard as it might seem. Let's rephrase the question a bit:
If I test my piece of code which requires System.Collections.Generic.List<T>.Add to work, what should I do when one day Microsoft decides to break .Add on List<T>? Do I make my tests that depend on it inconclusive?
The answer to the above is obvious: you don't. You let them fail for one simple reason - your assumptions have failed, and the test should fail. It's the same here. Once you get your add tests to work, from that point on you assume add works. You shouldn't treat your tested code any differently from third-party tested code. Once it's proven to work, you assume it indeed does.
On a different note, you can utilize a concept called guard assertions. In your remove test, after the arrange phase, you introduce an additional assert phase which verifies your initial assumptions (in this case, that add is working). More information about this technique can be found here.
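A minimal NUnit sketch of a guard assertion, using a plain Dictionary so the example is self-contained:

[Test]
public void Remove_DeletesPreviouslyAddedItem()
{
    // Arrange
    var dictionary = new Dictionary<string, int>();
    dictionary.Add("answer", 42);

    // Guard assertion: verify the assumption that Add worked
    // before exercising Remove at all.
    Assert.That(dictionary.ContainsKey("answer"), Is.True);

    // Act
    dictionary.Remove("answer");

    // Assert
    Assert.That(dictionary.ContainsKey("answer"), Is.False);
}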
To add an example, NUnit uses the concept above disguised under the name Theory. It does exactly what you proposed (though it seems more related to data-driven testing than to general utility):
The theory itself is responsible for ensuring that all data supplied meets its assumptions. It does this by use of the Assume.That(...) construct, which works just like Assert.That(...) but does not cause a failure. If the assumption is not satisfied for a particular test case, that case returns an Inconclusive result, rather than a Success or Failure.
However, I think what Mark Seemann states in an answer to the question I linked makes the most sense:
There may be many preconditions that need to be satisfied for a given test case, so you may need more than one Guard Assertion. Instead of repeating those in all tests, having one (and only one) test for each precondition keeps your test code more maintainable, since you will have less repetition that way.
Nice question; I often ponder this, and I had this problem the other day. What I did was get the basics of our collection working using a dictionary behind the scenes. For example:
public class MyCollection
{
    private IDictionary<string, int> backingStore;

    public MyCollection(IDictionary<string, int> backingStore)
    {
        this.backingStore = backingStore;
    }
}
Then we test-drove the addition implementation. As we had the dictionary by reference, we could assert that after adding items our business logic was correct.
For example, the pseudo code for the addition was something like:
public void Add(Item item)
{
    // Check we have not added before
    // More business logic...
    // Add
}
Then the test could be written such as:
var subject = new MyCollection(backingStore);
subject.Add(new Item());
Assert.That(backingStore.Contains(itemThatWeAdded));
We then went on to drive out the other methods such as retrieval, and deletion.
Your question is what you should do when the addition breaks, in turn breaking the retrieval. This is a catch-22 scenario. Personally I'd rather ditch the exposed backing store and treat it as an implementation detail. So this is what we did: we refactored the tests to use the system under test, rather than the backing store, for the asserts. The great thing about the backing store being public initially is that it allows you to test-drive small parts of the codebase, rather than having to implement both addition and retrieval in one go.
The test for addition then looked like the following after we refactored the collection to not expose the backing store.
var subject = new MyCollection();
var item = new Item();
subject.Add(item);
Assert.That(subject.Has(item), Is.True);
In this case I think this is fine. If you cannot add items successfully, then you sure as hell cannot retrieve anything, because you've not added them. As long as your tests are named well, a failing test such as "CanOnlyAddUniqueItemsToCollection" will point future developers in the right direction: in other words, the addition is broken. Well-named tests give as much help as possible.
I don't see this as too much of a problem. If your Dictionary class is not too big, and the unit test for that class is the only unit test testing that code, then when your add method is broken and multiple tests fail, you still know the problem is in the Dictionary class and can identify it, debug and fix it easily.
Where it becomes a problem is when you have other code smells or design problems such as:
unit tests are testing many application classes; using mocks instead can help here.
unit tests are actually system tests creating and testing many application classes at once.
the Dictionary class is too big and complex so when it breaks and tests fail it's difficult to figure out what part is broken.
This is very interesting. We use NUnit, and as best I can tell it runs test methods alphabetically. That might be an overly artificial way to order your tests, but if you built up your test classes such that alphabetically/numerically named prerequisite methods came first, you might accomplish what you want.
I find myself writing a test method, firing just it to watch it fail, and then writing the code to make it pass. When I'm all done I can run the whole class and everything passes - it doesn't matter what order the tests ran in, because everything 'works' thanks to the incremental dev I did.
Now later on, if I break something in the thing I'm testing, who knows what all will fail in the harness. I guess it doesn't really matter to me - I've got a long list of failures and I can tease out what went wrong.
I've been reading that static methods tend to be avoided when using TDD because they tend to be hard to mock. I find, though, that the easiest thing to unit test is a static method with simple functionality: you don't have to instantiate any classes, and it encourages methods that are simple, do one thing, are "standalone", etc.
Can someone explain this discrepancy between TDD best practices and pragmatic ease?
A static method is easy to test, but something that directly calls a static method generally is not easy to test independently of the static method it depends on. With a non-static method you can use a stub/mock/fake instance to ease testing, but if the code you're testing calls static methods, it's effectively "hard-wired" to those static methods.
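A small sketch of that hard-wiring; all names here are invented for illustration:

public static class PriceCatalog
{
    // Imagine this reads shared, possibly external, state.
    public static decimal LookUp(string sku) { return 0m; }
}

public class OrderServiceHardWired
{
    public decimal PriceOf(string sku)
    {
        // Hard-wired: every test of PriceOf exercises the real static call.
        return PriceCatalog.LookUp(sku);
    }
}

public interface IPriceCatalog
{
    decimal LookUp(string sku);
}

public class OrderServiceInjected
{
    private readonly IPriceCatalog catalog;

    public OrderServiceInjected(IPriceCatalog catalog)
    {
        this.catalog = catalog; // a test can pass a stub or mock here
    }

    public decimal PriceOf(string sku)
    {
        return catalog.LookUp(sku);
    }
}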
The answer to the asked question is, in my opinion "Object Oriented seems to be all that TDD people think about."
Why? I don't know. Maybe they are all Java programmers who've been infected with the disease of making everything rely on six indirection layers, dependency injection and interface adapters.
Java programmers seem to love to make everything difficult up front in order to "save time later."
I advise applying some Agile principles to your TDD: If it isn't causing a problem then don't fix it. Don't over design.
In practice I find that if the static methods are tested well first then they are not going to be the cause of bugs in their callers.
If the static methods execute quickly then they don't need a mock.
If the static methods work with stuff from outside the program, then you might need a mock method. In this case you'd need to be able to simulate many different kinds of function behavior.
If you do need to mock a static method remember that there are ways to do it outside of OO programming.
For example, you can write scripts to process your source code into a test form that calls your mock function. You could link different object files that have different versions of the function into the test programs. You could use linker tricks to override the function definition (if it didn't get inlined). I am sure there are some more tricks I haven't listed here.
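In C#, one such non-OO trick is to route the static call through a swappable delegate; a sketch (the Clock example is invented, not from the original posts):

using System;

public static class Clock
{
    // Production code uses the default; a test can swap the delegate
    // and restore it afterwards, mocking a static without interfaces.
    public static Func<DateTime> Now = () => DateTime.UtcNow;
}

// In a test:
// var original = Clock.Now;
// Clock.Now = () => new DateTime(2000, 1, 1);
// try { /* exercise code that calls Clock.Now() */ }
// finally { Clock.Now = original; }

Like the linker tricks above, this trades compile-time safety for a test seam, so use it sparingly.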
It's easy to test the static method. The problem is that there is no way to isolate your other code from that static method when testing the other code. The calling code is tightly coupled to the static code.
A reference to a static method cannot be mocked by most mocking frameworks, nor can it be overridden.
If you have a class that is making lots of static calls, then to test it you have to configure the global state of the application for all of those static calls - so maintenance becomes a nightmare. And if your test fails, then you don't know which bit of code caused the failure.
Getting this wrong, is one of the reasons that many developers think TDD is nonsense. They put in a huge maintenance effort for test results that only vaguely indicate what went wrong. If they'd only reduced the coupling between their units of code, then maintenance would be trivial and the test results specific.
That advice is true for the most part... but not always. My comments are not C++ specific.
Writing tests for static methods which are pure/stateless functions: i.e. they work off the inputs to produce a consistent result, e.g. Add below will always give the same value for a particular set of inputs. There is no problem writing tests for these, or for code that calls such pure static methods.
Writing tests for static methods which consume static state: e.g. GetAddCount() below. Calling it in multiple tests can yield different values, so one test can potentially harm the execution of another - tests need to be independent. So now we need to introduce a method to reset the static state so that each test can start from a clean slate (e.g. something like ResetCount()).
Writing tests for code that accesses static methods but with no source-code access to the dependency: Once again, this depends on the properties of the static methods themselves. However, if they are gnarly, you have a difficult dependency. If the dependency is an object, you could add a setter to the dependent type and set/inject a fake object for your tests. When the dependency is static, you may need a sizable refactoring before you can get tests running reliably (e.g. add an object middle-man dependency that delegates to the static method, then plug in a fake middle-man for your tests; a sketch appears after the example below).
Let's take an example:
public class MyStaticClass
{
    static int __count = 0;

    public static int GetAddCount()
    { return ++__count; }

    public static int Add(int operand1, int operand2)
    { return operand1 + operand2; }

    // needed for testability
    internal static void ResetCount()
    {
        __count = 0;
    }
}
...
//test1
MyStaticClass.Add(2,3); // => 5
MyStaticClass.GetAddCount(); // => 1
// test2
MyStaticClass.Add(2,3); // => 5
//MyStaticClass.ResetCount(); // needed for tests
MyStaticClass.GetAddCount(); // => unless Reset is done, it can differ from 1
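For the third bullet above, here is a sketch of the object middle-man that delegates to the static method; the interface and class names are invented:

public interface ICounter
{
    int Add(int a, int b);
    int GetAddCount();
}

// Production middle-man: simply forwards to the static class.
public class StaticCounterAdapter : ICounter
{
    public int Add(int a, int b) { return MyStaticClass.Add(a, b); }
    public int GetAddCount() { return MyStaticClass.GetAddCount(); }
}

Code that depends on ICounter can now be handed a fake whose state lives per test instance instead of per process, so no ResetCount() call is needed between tests.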
So, I'm starting to write some logic for a simple program (toy game on the side). You have a specific ship (called a setup) that is a ship + modules. You start with an empty setup based off a ship and then add modules to that setup. Ships also have a numbered array of module positions.
var setup = new Setup(ship); // ship is a stub (IShip) defined someplace else
var module = new Mock<IModule>().Object;
setup.AddModule(module, 1); // 1 = which position
So, this is the code in my test method. I now need to assert on this code. Well, I need a getter method, right?
Assert.AreEqual(module, setup.GetModule(1));
This might sound really dumb and I'm worrying about nothing, but for some stupid reason I'm concerned with adding a method just to assert that a test passed.
Is this fine and is in fact part of the design process that TDD is pushing out? For instance I know I need an AddModule method because I want to test it, and the fact that this requires a GetModule method to test is simply an evolution of my design via TDD.
Or is this kind of a smell because I don't even know if I'll really need GetModule in my code and it will only be used in a test?
For example, adding a module is going to ultimately affect different stats of a setup (armor, shield, firepower, etc). The thing is those are going to be complex, and I wanted to start with a simple test. But in the end, those are the public attributes I care about -- a setup is defined by its stats, not by a list of modules.
Interesting question. I'm glad to hear you're writing the tests first.
If you let the design manifest itself through the tests, you're more likely to build only the parts you'll need. But is this the best design? Maybe not, but don't let that discourage you -- your add method works!
It may be too early to tell whether you'll need the GetModule method later. For now, build up the functionality you need and go green, then slowly refactor (cycling from red back to green as you go) to get the design you want.
Part of evolving the design is to start with baby steps like a simple method and then grow into the complex stats (eventually dropping this method and changing the test) once enough of the design supports it. When doing TDD, don't expect that the first test you write is targeting the ideal interface. It is OK to have some messiness that will get dropped as you evolve the design.
That being said, if you see no public purpose to the method, try to limit its visibility as much as is reasonable to the test code, although even that should eventually go away as you build out the rest of the system and have something real to test as a side effect of the set method.
I would be wary of introducing a public method in my class that is only used for testing.
There are various ways you could test this:
Reflection: The GetModule method stays private in your class (this could also work if your 'stats' are private), and you access it in your test method via reflection; a sketch follows this list. This works well; the only trouble is that you will not get any compiler errors if you change the name of the private method or add/delete some parameters (but, of course, your test will fail and you will know early).
Inheritance: The GetModule method could be protected (visible only to subclasses) and your test class could inherit from the main class. This way your test class gets access to the method, but it is not exposed to the outside world.
Assert the side-effect: This is where you really think about what it means to add a module to the system. If it is going to affect some 'stats' as you put it, you could write tests which assert that the stats are appropriately modified.
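As an illustration of the first option, here is a hedged NUnit/Moq sketch of reaching a private GetModule via reflection, assuming the signature implied by the question:

using System.Reflection;

[Test]
public void AddModule_StoresModuleAtPosition()
{
    var setup = new Setup(ship); // ship is a stub (IShip), as in the question
    var module = new Mock<IModule>().Object;
    setup.AddModule(module, 1);

    // Reach the private GetModule via reflection; renaming it breaks this
    // at run time (a failing test), not at compile time.
    MethodInfo getModule = typeof(Setup).GetMethod(
        "GetModule", BindingFlags.Instance | BindingFlags.NonPublic);
    object stored = getModule.Invoke(setup, new object[] { 1 });

    Assert.AreEqual(module, stored);
}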
Suppose you have a method:
public void Save(Entity data)
{
    this.repositoryIocInstance.EntitySave(data);
}
Would you write a unit test at all?
public void TestSave()
{
    // arrange
    Mock<EntityRepository> repo = new Mock<EntityRepository>();
    repo.Setup(m => m.EntitySave(It.IsAny<Entity>()));

    // act
    MyClass c = new MyClass(repo.Object);
    c.Save(new Entity());

    // assert
    repo.Verify(m => m.EntitySave(It.IsAny<Entity>()), Times.Once());
}
Because later on if you do change method's implementation to do more "complex" stuff like:
public void Save(Entity data)
{
    if (this.repositoryIocInstance.Exists(data))
    {
        this.repositoryIocInstance.Update(data);
    }
    else
    {
        this.repositoryIocInstance.Create(data);
    }
}
...your unit test would fail but it probably wouldn't break your application...
Question
Should I even bother creating unit tests on methods that don't have any return types or don't change anything outside of an internal mock?
Don't forget that unit testing isn't just about testing code. It's about allowing you to determine when behaviour changes.
So you may have something that's trivial. However, if your implementation changes, you may introduce a side effect, and you want your regression test suite to tell you.
e.g. Often people say you shouldn't test setters/getters since they're trivial. I disagree, not because they're complicated methods, but because someone may inadvertently change them through ignorance, fat-finger scenarios, etc.
Given all that I've just said, I would definitely implement tests for the above (via mocking, and/or perhaps it's worth designing your classes with testability in mind and having them report status etc.)
It's true your test depends on your implementation, which is something you should avoid (though it is not really that simple sometimes) and is not necessarily bad. But these kinds of tests are expected to break even when your change doesn't break the code.
You could have many approaches to this:
Create a test that really goes to the database and checks whether the state was changed as expected (it won't be a unit test anymore).
Create a test object that fakes a database and does its operations in-memory (another implementation of your repositoryIocInstance), and verify the state was changed as expected; a sketch of such a fake follows this list. Changes to the repository interface would require changes to this object as well, but your interfaces shouldn't be changing much, right?
Regard all of this as too expensive, and use your approach, which may cause tests to break unnecessarily later (but since the chance is low, it is OK to take the risk).
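A sketch of the second approach, treating EntityRepository as a mockable interface (as the Moq code in the question implies) and showing only EntitySave:

public class InMemoryEntityRepository : EntityRepository
{
    private readonly List<Entity> saved = new List<Entity>();

    public void EntitySave(Entity data)
    {
        saved.Add(data);
    }

    // Test helper so the assert phase can inspect the resulting state.
    public IReadOnlyList<Entity> Saved
    {
        get { return saved; }
    }
}

// In the test:
// var repo = new InMemoryEntityRepository();
// new MyClass(repo).Save(new Entity());
// Assert.That(repo.Saved, Has.Count.EqualTo(1));

The assertion now targets observable state ("the entity ended up in the repository") rather than a particular call sequence, so it survives implementation changes inside Save; interface changes would still require updating the fake, as noted above.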
Ask yourself two questions. "What is the manual equivalent of this unit test?" and "is it worth automating?". In your case it would be something like:
What is the manual equivalent?
- start debugger
- step into "Save" method
- step into next, make sure you're inside IRepository.EntitySave implementation
Is it worth automating? My answer is "no". It is 100% obvious from the code.
Out of hundreds of similar wasteful tests, I haven't seen a single one that turned out to be useful.
The general rule of thumb is that you test everything that could plausibly break. If you are sure the method is simple enough (and will stay simple enough) not to be a problem, then leave it out of testing.
The second thing is that you should test the contract of the method, not the implementation. If the test fails after a change but the application doesn't, then your test is testing the wrong thing. The test should cover the cases that are important for your application. This should ensure that every change to the method that doesn't break the application also doesn't fail the test.
A method that does not return any result still changes the state of your application. Your unit test, in this case, should be testing whether the new state is as intended.
"your unit test would fail but it probably wouldn't break your application"
This is -- actually -- really important to know. It may seem annoying and trivial, but when someone else starts maintaining your code, they may make a really bad change to Save and (improbable as it seems) break the application.
The trick is to prioritize.
Test the important stuff first. When things are slow, add tests for trivial stuff.
When there isn't an assertion in a method, you are essentially asserting that exceptions aren't thrown.
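In NUnit, that implicit assertion can be made explicit if you want a named regression test; reusing the Save example from the question:

Assert.DoesNotThrow(() => c.Save(new Entity()));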
I'm also struggling with the question of how to test public void myMethod(). I guess if you do decide to add a return value for testability, the return value should represent all salient facts necessary to see what changed about the state of the application.
public void myMethod()

becomes

public ComplexObject myMethod()
{
    DoLotsOfSideEffects();
    return new ComplexObject { /* rows changed, primary key, value of each column, etc. */ };
}

and not

public bool myMethod()
{
    DoLotsOfSideEffects();
    return true;
}
The short answer to your question is: Yes, you should definitely test methods like that.
I assume that it is important that the Save method actually saves the data. If you don't write a unit test for this, then how do you know?
Someone else may come along and remove that line of code that invokes the EntitySave method, and none of the unit tests will fail. Later on, you are wondering why items are never persisted...
For your method, you could say that anyone deleting that line would only be doing so with malign intent, but the thing is: simple things don't necessarily stay simple, and you had better write the unit tests before things get complicated.
It is not an implementation detail that the Save method invokes EntitySave on the Repository - it is part of the expected behavior, and a pretty crucial part, if I may say so. You want to make sure that data is actually being saved.
Just because a method does not return a value doesn't mean that it isn't worth testing. In general, if you observe good Command/Query Separation (CQS), any void method should be expected to change the state of something.
Sometimes that something is the class itself, but other times, it may be the state of something else. In this case, it changes the state of the Repository, and that is what you should be testing.
This is called testing Indirect Outputs, instead of the more normal Direct Outputs (return values).
The trick is to write unit tests so that they don't break too often. When using Mocks, it is easy to accidentally write Overspecified Tests, which is why most Dynamic Mocks (like Moq) default to Stub mode, where it doesn't really matter how many times you invoke a given method.
All this, and much more, is explained in the excellent xUnit Test Patterns.