For example, I'm writing tests against a CsvReader. It's a simple class that enumerates and splits rows of text. Its only raison d'être is ignoring commas within quotes. It's less than a page.
By "black box" testing the class, I've checked things like
What if the file doesn't exist?
What if I don't have permission on the file?
What if the file has non-Windows line-breaks?
But in fact, all of these things are the StreamReader's business. My class works without doing anything about these cases. So in essence, my tests are catching errors thrown by StreamReader, and testing behavior handled by the framework. It feels like a lot of work for nothing.
I've seen the related questions
Should QA test from a strictly black-box perspective?
Rigor in capturing test cases for unit testing
My question is, am I missing the point of "glass box" testing if I use what I know to avoid this kind of work?
This really depends on the interface of your CsvReader; you need to consider what the user of the class expects.
For example, if one of the parameters is a file name and the file does not exist, what should happen? That should not depend on whether you use a StreamReader internally or not. The unit tests should exercise the observable external behaviour of your class and, in some cases, dig slightly deeper to additionally ensure certain implementation details are covered, e.g. that the file is closed when the reader has finished.
However, you don't want the unit tests to depend on every implementation detail, or to assume that something will happen just because of a particular implementation.
All of the examples you mention in your question involve observable behaviour (in this case exceptional circumstances) of your class and therefore should have unit tests related to them.
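For instance, a test for the missing-file case might look something like the sketch below (NUnit-style; the CsvReader constructor and Read method are assumed names, not taken from your code):

[Test]
public void MissingFileThrowsFileNotFound()
{
    // Hypothetical constructor that takes a path; the exception is expected to
    // propagate from the underlying StreamReader when Read() opens the file.
    var reader = new CsvReader("does-not-exist.csv");

    Assert.Throws<FileNotFoundException>(() => reader.Read());
}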
I don't think you should waste time testing things that are not your code. It's a design choice, not a testing choice, whether to handle the errors of the underlying framework or let them propagate up to the caller. FWIW, I think you're right to let them propagate up. Once you've made the design decision, though, your unit testing should cover your code (and cover it well) without testing the underlying framework. Using dependency injection and a mock Stream is probably a good idea, too.
[EDIT] Example of dependency injection (see link above for more info)
Not using dependency injection we have:
public class CsvReader
{
    private string filename;

    public CsvReader(string filename)
    {
        this.filename = filename;
    }

    public string Read()
    {
        StreamReader reader = new StreamReader(this.filename);
        string contents = reader.ReadToEnd();
        // ... do some stuff with contents ...
        return contents;
    }
}
With dependency injection (constructor injection) we do:
public class CsvReader
{
    private Stream stream;

    public CsvReader(Stream stream)
    {
        this.stream = stream;
    }

    public string Read()
    {
        StreamReader reader = new StreamReader(this.stream);
        string contents = reader.ReadToEnd();
        // ... do some stuff with contents ...
        return contents;
    }
}
This makes the CsvReader much easier to test. We pass the abstraction we depend on into the constructor, in this case a Stream. Because of this we can hand the reader any Stream (perhaps a mock or fake class) that doesn't do file I/O at all, and feed it whatever data we want without involving any of the underlying framework. In this case, I'd use a MemoryStream, since we're only reading from it. If we wanted to, though, we could use a mock class and give it a richer interface that lets our tests configure the responses it gives. This way we can test the code that we write and not involve the underlying framework code at all. The classic dependency injection pattern uses an interface you define yourself; arguably passing in a TextReader would be better still, since the code above is still dependent on the StreamReader implementation.
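As a rough illustration of the kind of test this enables (names follow the sketch above; the expected value simply mirrors the input because the "do some stuff" step is elided):

[Test]
public void ReadReturnsTheDataFedThroughTheStream()
{
    // No file system involved: the data comes from an in-memory stream.
    var data = Encoding.UTF8.GetBytes("a,\"b,c\",d");
    using (var stream = new MemoryStream(data))
    {
        var reader = new CsvReader(stream);
        Assert.AreEqual("a,\"b,c\",d", reader.Read());
    }
}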
Yes, but that would strictly be for the purposes of unit testing:
You could abstract your CSV reader implementation from any particular StreamReader by defining an abstract stream reader interface and testing your own implementation with a mock stream reader that implements that interface. Your mock reader would obviously be immune to errors like non-existent files, permission problems, OS differences etc. You would therefore entirely be testing your own code and can achieve 100% code coverage.
I tend to agree with tvanfosson: If you inherit from a StreamReader and extend it in some way, your unit tests should only exercise the functionality you've added or altered. Otherwise, you're going to waste a lot of time and cognitive energy writing, reading and maintaining tests that don't add any value.
Although markj is correct that tests should cover the "observable external behaviour" of a class, I think it's appropriate to consider where that behaviour comes from. If it's behaviour via inheritance from another (presumably unit tested) class, then I see no benefit in adding your own unit tests. OTOH, if it's behaviour via composition then it might justify some tests to ensure the pass-throughs are working properly.
My preference would be to unit test the specific functionality you alter, and then write integration tests that check for error conditions, but in the context of the business need you're ultimately supporting.
Just an FYI, if this is .NET, you should consider not reinventing the wheel.
For C#
Add a reference to Microsoft.VisualBasic
Use the fantastic Microsoft.VisualBasic.FileIO.TextFieldParser class to handle your CSV parsing needs.
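For example, something along these lines handles quoted commas out of the box (a brief sketch; the file path is made up):

using Microsoft.VisualBasic.FileIO; // needs a reference to Microsoft.VisualBasic

using (var parser = new TextFieldParser(@"C:\data\input.csv"))
{
    parser.TextFieldType = FieldType.Delimited;
    parser.SetDelimiters(",");
    parser.HasFieldsEnclosedInQuotes = true;

    while (!parser.EndOfData)
    {
        string[] fields = parser.ReadFields(); // commas inside quotes are not treated as separators
        // ... use fields ...
    }
}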
Microsoft already tested it, so you won't have to.
Enjoy.
You should always handle the errors that your framework throws; that way your application stays robust and doesn't crash on catastrophic errors.
It is often said that when writing unit tests one must test only a single class and mock all of its collaborators. I am trying to learn TDD to make my code design better, and now I am stuck in a situation where this rule should be broken. Or shouldn't it?
An example: the class under test has a method that takes a Person, creates an Employee based on the Person, and returns the Employee.
public class EmployeeManager {
    private DataMiner dataMiner;

    public Employee getCoolestEmployee() {
        Person dankestPerson = dataMiner.getDankestPerson();
        Employee employee = new Employee();
        employee.setName(dankestPerson.getName() + "bug in my code");
        return employee;
    }
    // ...
}
Should Employee be considered a collaborator? If not, why not? If yes, how do I properly test that 'Employee' is created correctly?
Here is the test I have in mind (using JUnit and Mockito):
@Test
public void coolestEmployeeShouldHaveDankestPersonsName() {
    when(dataMinerMock.getDankestPerson()).thenReturn(dankPersonMock);
    when(dankPersonMock.getName()).thenReturn("John Doe");

    Employee coolestEmployee = employeeManager.getCoolestEmployee();

    assertEquals("John Doe", coolestEmployee.getName());
}
As you can see, I have to use coolestEmployee.getName(), a method of the Employee class, which is not the class under test.
One possible solution that comes to mind is to extract the task of transforming Persons into Employees into a new method of the Employee class, something like
public Employee createFromPerson(Person person);
Am I overthinking the problem? What is the correct way?
The goal of a unit test is to quickly and reliably determine whether a single system is broken. That doesn't mean you need to simulate the entire world around it, just that you should ensure that collaborators you use are fast, deterministic, and well-tested.
Data objects—POJOs and generated value objects in particular—tend to be stable and well-tested, with very few dependencies. Like other heavily-stateful objects, they also tend to be very tedious to mock, because mocking frameworks don't tend to have powerful control over state (e.g. getX should return n after setX(n)). Assuming Employee is a data object, it is likely a good candidate for actual use in unit tests, provided that any logic it contains is well-tested.
Other collaborators not to mock in general:
JRE classes and interfaces. (Never mock a List, for instance. It'll be impossible to read, and your test won't be any better for it.)
Deterministic third-party classes. (If any classes or methods are changed to be final, your mock will break; besides, if you're using a stable version of the library, the real class isn't a source of spurious failures either.)
Stateful classes, just because mocks are much better at testing interactions than state. Consider a fake instead, or some other test double.
Fast and well-tested other classes that have few dependencies. If you have confidence in a system, and there's no hazard to your test's determinism or speed, there's no need to mock it.
What does that leave? Non-deterministic or slow service classes or wrappers that you've written, that are stateless or that change very little during your test, and that may have many collaborators of their own. In these cases, it would be hard to write a fast and deterministic test using the actual class, so it makes a lot of sense to use a test double—and it'd be very easy to create one using a mocking framework.
See also: Martin Fowler's article "Mocks Aren't Stubs", which talks about all sorts of test doubles along with their advantages and disadvantages.
Getters that only read a private field are usually not worth testing. By default, you can rely on them pretty safely in other tests. Therefore I wouldn't worry about using dankestPerson.getName() in a test for EmployeeManager.
There's nothing wrong with your test as far as testing goes. The design of the production code might be different - mocking dankestPerson probably means that it has an interface or abstract base class, which might be a sign of overengineering especially for a business entity. What I would do instead is just new up a Person, set its name to the expected value and set up dataMinerMock to return it.
Also, the use of "Manager" in a class name might indicate a lack of cohesion and too broad a range of responsibilities.
Consider you have the following method:
public Foo ParseMe(string filepath)
{
    // break up the file path into its file name
    var filename = Path.GetFileName(filepath);
    // validate filename & extension
    // retrieve info from the file if it's a certain type
    // some other general things you could do, etc.
    var myInfo = GetFooInfo(filename);
    // create and return a new object based on the data returned AND data in this method
    return new Foo(myInfo);
}
Currently I have unit tests for GetFooInfo, but I think I also need to build unit tests for ParseMe. In a situation like this, where you have two methods that return two different properties - and a change in either of them could break something - should unit tests be created for both to confirm the output is as expected?
I like to err on the side of caution and be more wary about things breaking and ensuring that maintenance later on down the road is easier, but I feel very skeptical about adding very similar tests in the test project. Would this be bad practice or is there any way to do this more efficiently?
I'm marking this as language agnostic, but just in case it matters, I am using C# and NUnit. Also, I saw a post similar to this in title only, but the question is different. Sorry if this has already been asked.
ParseMe looks sufficiently non-trivial to require a unit test. To answer your precise question, if "you have two methods that return two different properties - and a change in either of them could break something", you should absolutely unit test them.
Even if the bulk of the work is in GetFooInfo, at minimum you should test that it's actually called. I know nothing about NUnit, but I know in other frameworks (like RSpec) you can write tests like GetFooInfo.should be_called(:once).
It is not a bad practice to test a method that is calling another method. In fact, it is a good practice. If you have a method calling another method, it is probably performing additional functionality, which should be tested.
If you find yourself unit testing a method that calls a method that is also being unit tested, then you are probably experiencing code reuse, which is a good thing.
I agree with @tsm - absolutely test both methods (assuming both are public).
This may be a smell that the method or class is doing too much - violating the Single Responsibility Principle. Consider doing an Extract Class refactoring and decoupling the two classes (possibly with Dependency Injection). That way you could test both pieces of functionality independently. (That said, I'd only do that if the functionality was sufficiently complex to warrant it. It's a judgment call.)
Here's an example in C#:
public interface IFooFileInfoProvider
{
    FooInfo GetFooInfo(string filename);
}

public class Parser
{
    private readonly IFooFileInfoProvider _fooFileInfoProvider;

    public Parser(IFooFileInfoProvider fooFileInfoProvider)
    {
        // Guard against a null dependency
        _fooFileInfoProvider = fooFileInfoProvider
            ?? throw new ArgumentNullException(nameof(fooFileInfoProvider));
    }

    public Foo ParseMe(string filepath)
    {
        string filename = Path.GetFileName(filepath);
        var myInfo = _fooFileInfoProvider.GetFooInfo(filename);
        return new Foo(myInfo);
    }
}

public class FooFileInfoProvider : IFooFileInfoProvider
{
    public FooInfo GetFooInfo(string filename)
    {
        // Do I/O
        return new FooInfo(); // parameters...
    }
}
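With that split, a test for ParseMe can stub the provider and verify the delegation without any I/O. A rough sketch using NUnit and Moq (the file path and expected file name are illustrative):

[Test]
public void ParseMePassesTheFileNameToTheInfoProvider()
{
    var provider = new Mock<IFooFileInfoProvider>();
    provider.Setup(p => p.GetFooInfo(It.IsAny<string>())).Returns(new FooInfo());

    var parser = new Parser(provider.Object);
    Foo result = parser.ParseMe(@"C:\data\report.foo");

    Assert.IsNotNull(result);
    provider.Verify(p => p.GetFooInfo("report.foo"), Times.Once());
}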
Many developers, me included, take a programming-by-contract approach. That requires you to consider each method as a black box. Whether the method delegates to another method to accomplish its task does not matter when you are testing that method. But you should also test all large or complicated parts of your program as units. So whether you need to unit test GetFooInfo depends on how complicated that method is.
I have just started to read Professional Test Driven Development with C#: Developing Real World Applications with TDD
I have a hard time understanding stubs, fakes and mocks. From what I understand so far, they are fake objects used for the purpose of unit testing your projects, and a mock is a stub with conditional logic in it.
Another thing I think I have picked up is that mocks are somehow related to dependency injection, a concept which I only managed to understand yesterday.
What I do not get is why I would actually use them. I cannot seem to find any concrete examples online that explain them properly.
Can anyone please explain these concepts to me?
As I've read in the past, here's what I believe each term stands for
Stub
Here you are stubbing the result of a method to a known value, just to let the code run without issues. For example, let's say you had the following:
public int CalculateDiskSize(string networkShareName)
{
// This method does things on a network drive.
}
You don't care what the return value of this method is; it's not relevant. Plus, it could throw an exception when executed if the network drive is not available. So you stub the result in order to avoid potential execution issues with the method.
So you end up doing something like:
sut.WhenCalled(() => sut.CalculateDiskSize()).Returns(10);
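With a real mocking framework the same idea might look roughly like this (Moq syntax; IDiskSizeCalculator is an assumed interface wrapping the method above, not part of the original example):

var calculator = new Mock<IDiskSizeCalculator>();
// Stub the result to a known value so the test never touches a network drive.
calculator.Setup(c => c.CalculateDiskSize(It.IsAny<string>())).Returns(10);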
Fake
With a fake you are returning fake data, or creating a fake instance of an object. A classic example is a repository class. Take this method:
public int CalculateTotalSalary(IList<Employee> employees) { }
Normally the above method would be passed a collection of employees that were read from a database. However, in your unit tests you don't want to access a database. So you create a fake employees list:
IList<Employee> fakeEmployees = new List<Employee>();
You can then add items to fakeEmployees and assert the expected results, in this case the total salary.
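Putting it together, such a test might look something like this (NUnit-style; the Employee members, the payroll object exposing CalculateTotalSalary, and the expected total are all made up for illustration):

IList<Employee> fakeEmployees = new List<Employee>
{
    new Employee { Name = "Alice", Salary = 1000 },
    new Employee { Name = "Bob",   Salary = 2000 }
};

int total = payroll.CalculateTotalSalary(fakeEmployees);

Assert.AreEqual(3000, total); // no database involved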
Mocks
When using mock objects you intend to verify some behaviour, or data, on those mock objects. Example:
You want to verify that a specific method was executed during a test run. Here's a generic example using the Moq mocking framework:
public void Test()
{
    // Arrange.
    var mock = new Mock<ISomething>();
    mock.Setup(m => m.MethodToCheckIfCalled()).Verifiable();
    var sut = new ThingToTest();

    // Act.
    sut.DoSomething(mock.Object);

    // Assert.
    mock.Verify(m => m.MethodToCheckIfCalled());
}
Hopefully the above helps clarify things a bit.
EDIT:
Roy Osherove is a well-known advocate of Test Driven Development, and he has some excellent information on the topic. You may find it very useful:
http://artofunittesting.com/
They are all variations of the Test Double. Here is a very good reference that explains the differences between them: http://xunitpatterns.com/Test%20Double.html
Also, from Martin Fowler's post: http://martinfowler.com/articles/mocksArentStubs.html
Meszaros uses the term Test Double as the generic term for any kind of pretend object used in place of a real object for testing purposes. The name comes from the notion of a Stunt Double in movies. (One of his aims was to avoid using any name that was already widely used.) Meszaros then defined four particular kinds of double:

Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.

Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).

Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it 'sent', or maybe only how many messages it 'sent'.

Mocks are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.

Of these kinds of doubles, only mocks insist upon behavior verification. The other doubles can, and usually do, use state verification. Mocks actually do behave like other doubles during the exercise phase, as they need to make the SUT believe it's talking with its real collaborators.
The PHPUnit manual helped me a lot as an introduction:
"Sometimes it is just plain hard to test the system under test (SUT) because it depends on other components that cannot be used in the test environment. This could be because they aren't available, they will not return the results needed for the test or because executing them would have undesirable side effects. In other cases, our test strategy requires us to have more control or visibility of the internal behavior of the SUT." More: https://phpunit.de/manual/current/en/test-doubles.html
I also found better introductions by searching for "test doubles", which is the collective name for mocks, fakes, stubs, and the rest.
I'm reading "The Art of Unit Testing" and there is a specific paragraph I'm not sure about.
"One of the reasons you may want to avoid using a base class instead of an interface is that a base class from the production code may already have (and probably has) built-in production dependencies that you’ll have to know about and override. This makes implementing derived classes for testing harder than implementing an interface, which lets you know exactly what the underlying implementation is and gives you full control over it."
Can someone please give me an example of a built-in production dependency?
Thanks
My interpretation of this is basically anything where you have no control over the underlying implementation, but still rely on it. This could be in your own code or in third party libraries.
Something like:
class MyClass : BaseConfigurationProvider
{
}

abstract class BaseConfigurationProvider
{
    protected string connectionString;

    protected BaseConfigurationProvider()
    {
        // Built-in production dependency: runs on every construction, including in tests
        connectionString = GetFromConfiguration();
    }

    private string GetFromConfiguration()
    {
        // e.g. reads real configuration from app.config or a settings file
        return ConfigurationManager.AppSettings["connectionString"];
    }
}
This has a dependency on wherever the connection string comes from, perhaps a config file or perhaps a random text file; either way, that is external state which is difficult to handle in a unit test of MyClass.
Whereas the same given an interface:
class MyClass
{
    private string connectionString;

    public MyClass(IBaseConfigurationProvider provider)
    {
        connectionString = provider.GetConnectionString();
    }
}

interface IBaseConfigurationProvider
{
    string GetConnectionString();
}
You are in full control of the implementation at least, and the use of an interface means that test versions of implementations can be used during unit tests, or you can inject dependencies into consuming classes (as I have done above). In this scenario, the dependency is on the need to resolve a connection string. The tests can provide a different or empty string.
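For instance, a test could supply a trivial fake implementation of the interface (a sketch; the connection string value is arbitrary):

class FakeConfigurationProvider : IBaseConfigurationProvider
{
    public string GetConnectionString()
    {
        return "Server=test;Database=fake"; // no config file, no external state
    }
}

// In a unit test, MyClass can now be constructed without touching any real configuration:
var sut = new MyClass(new FakeConfigurationProvider());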
One example I can think of is the use of the Session variable in ASP.NET (I'm a .NET guy). Because you have no control over how ASP.NET populates the session, you cannot test it simply by writing a test case; you have to either override it somehow or use a mock object. This happens because the request context and cookies aren't present when you are testing.
So I have a factory class and I'm trying to work out what the unit tests should do. From this question I could verify that the interface returned is of a particular concrete type that I would expect.
What should I check for if the factory is returning concrete types (because there is no need - at the moment - for interfaces to be used)? Currently I'm doing something like the following:
[Test]
public void CreateSomeClassWithDependencies()
{
    // m_factory is instantiated in the SetUp method
    var someClass = m_factory.CreateSomeClassWithDependencies();

    Assert.IsNotNull(someClass);
}
The problem with this is that the Assert.IsNotNull seems somewhat redundant.
Also, my factory method might be setting up the dependencies of that particular class like so:
public SomeClass CreateSomeClassWithDependencies()
{
    return new SomeClass(CreateADependency(), CreateAnotherDependency(),
        CreateAThirdDependency());
}
And I want to make sure that my factory method sets up all these dependencies correctly. Is there no other way to do this than to make those dependencies public/internal properties which I then check for in the unit test? (I'm not a big fan of modifying the test subjects to suit the testing.)
Edit: In response to Robert Harvey's question, I'm using NUnit as my unit testing framework (but I wouldn't have thought that it would make too much of a difference)
Often, there's nothing wrong with creating public properties that can be used for state-based testing. Yes: It's code you created to enable a test scenario, but does it hurt your API? Is it conceivable that other clients would find the same property useful later on?
There's a fine line between test-specific code and Test-Driven Design. We shouldn't introduce code that has no other potential than to satisfy a testing requirement, but it's quite alright to introduce new code that follow generally accepted design principles. We let the testing drive our design - that's why we call it TDD :)
Adding one or more properties to a class to give the user a better possibility of inspecting that class is, in my opinion, often a reasonable thing to do, so I don't think you should dismiss introducing such properties.
Apart from that, I second nader's answer :)
If the factory is returning concrete types, and you're guaranteeing that your factory always returns a concrete type, and not null, then no, there isn't too much value in the test. It does allow you to make sure, over time, that this expectation isn't violated, and that things like exceptions aren't thrown.
This style of test simply makes sure that, as you make changes in the future, your factory behaviour won't change without you knowing.
If your language supports it, for your dependencies, you can use reflection. This isn't always the easiest to maintain, and couples your tests very tightly to your implementation. You have to decide if that's acceptable. This approach tends to be very brittle.
But you really seem to be trying to separate which classes are constructed, from how the constructors are called. You might just be better off with using a DI framework to get that kind of flexibility.
By new-ing up all your types as you need them, you don't give yourself many seams (a seam is a place where you can alter behaviour in your program without editing in that place) to work with.
With the example as you give it though, you could derive a class from the factory. Then override / mock CreateADependency(), CreateAnotherDependency() and CreateAThirdDependency(). Now when you call CreateSomeClassWithDependencies(), you are able to sense whether or not the correct dependencies were created.
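A rough sketch of that idea (the factory and dependency type names here are assumed, not taken from your code): derive a test-only subclass that records which dependencies were created.

// Assumes the Create* methods on the factory are virtual.
class SensingFactory : SomeClassFactory
{
    public bool CreatedDependency;

    protected override IDependency CreateADependency()
    {
        CreatedDependency = true;
        return base.CreateADependency();
    }

    // ... override CreateAnotherDependency() and CreateAThirdDependency() the same way ...
}

[Test]
public void FactorySetsUpAllDependencies()
{
    var factory = new SensingFactory();
    factory.CreateSomeClassWithDependencies();

    Assert.IsTrue(factory.CreatedDependency);
    // ... and likewise for the other dependency flags.
}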
Note: the definition of "seam" comes from Michael Feathers' book, "Working Effectively with Legacy Code". It contains examples of many techniques to add testability to untested code. You may find it very useful.
What we do is create the dependencies with factories, and we use a dependency injection framework to substitute mock factories for the real ones when the test is run. Then we set up the appropriate expectations on those mock factories.
You can always check stuff with reflection. There is no need to expose something just for unit tests. I find it quite rare that I need to reach in with reflection and it may be a sign of bad design.
Looking at your sample code, yes, the Assert.IsNotNull seems redundant. It depends on how you designed your factory: some factories return null objects rather than throwing an exception.
As I understand it you want to test that the dependencies are built correctly and passed to the new instance?
If I were not able to use a framework like Google Guice, I would probably do something like this (here using JMock and Hamcrest):
@Test
public void CreateSomeClassWithDependencies()
{
    dependencyFactory = context.mock(DependencyFactory.class);
    classAFactory = context.mock(ClassAFactory.class);

    myDependency0 = context.mock(MyDependency0.class);
    myDependency1 = context.mock(MyDependency1.class);
    myDependency2 = context.mock(MyDependency2.class);
    myClassA = context.mock(ClassA.class);

    context.checking(new Expectations() {{
        oneOf(dependencyFactory).createDependency0(); will(returnValue(myDependency0));
        oneOf(dependencyFactory).createDependency1(); will(returnValue(myDependency1));
        oneOf(dependencyFactory).createDependency2(); will(returnValue(myDependency2));
        oneOf(classAFactory).createClassA(myDependency0, myDependency1, myDependency2);
        will(returnValue(myClassA));
    }});

    builder = new ClassABuilder(dependencyFactory, classAFactory);

    assertThat(builder.make(), equalTo(myClassA));
}
(if you cannot mock ClassA you can assign a non-mock version to myClassA using new)