G'day,
I am working with a group of offshore developers who have been using the term unit testing quite loosely.
Their QA document talks about writing unit tests and then performing unit testing of the system.
This doesn't line up with my interpretation of what unit testing is at all.
I am used to unit testing being a test or suite of tests that are being used to exercise a single class, usually as a black box. The class under test may require other classes to be included by the implementation but generally it is a single class that is being exercised by the unit test(s).
Then you have system functional testing, integration testing, acceptance testing, etc.
I want to know: is this a bit pedantic on my part? Or is this what you think of when referring to unit tests and unit testing?
Edit: Rob Wells. I need to clarify that approaching such testing from a black box perspective is only one aspect. When using mock objects to verify internal behaviours, you are really testing from a white box perspective because you know what you want to happen inside the box.
Unit tests are generally used by developers to test isolated sections of code. They cover border cases, error cases, and normal cases. They are intended to demonstrate the correctness of a limited segment of code. If all of your unit tests pass, then you have demonstrated that your isolated segments of code do what they are supposed to do.
When you do integration testing, you are looking at end-to-end cases, to see if all the segments that have passed unit testing work together. Functional testing checks to see if the code meets the requirements as specified. Acceptance testing is done by end users to see if they approve of the final product.
I try to implement unit tests to test only a single method, and I make an effort to create "mock" classes for the dependent classes and methods used by the method I am testing...
... so that exercising the code in that method does not in fact call code in other methods the unit test is not supposed to be "testing" (there are other unit tests for those methods). This way, a failure of the unit test reliably indicates a failure of the method the unit test is testing...
Mock classes are designed to "simulate" the interface and behaviour of dependent classes so that the method I am testing can call them and they will behave in a standard, well-defined way according to system requirements. In order to make this approach work, calls to such dependent classes and to their methods must be made through a well-defined interface, so that the "tester" process can "inject" the mock version of the dependent class into the class being tested instead of the actual production version... This is kinda like a common design pattern referred to as "Dependency Injection", or "Inversion of Control" (IoC).
There are several third party tools on the market to help you implement this kind of pattern. One I have heard of is called "Rhino Mocks" or something like that...
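To make this concrete, here is a minimal sketch (in C# with NUnit, since those tools come up repeatedly in this thread) of the pattern described above: the class under test depends on an interface, and the test injects a hand-rolled mock instead of the production implementation. The ITaxService, PayrollCalculator and MockTaxService names are hypothetical, invented purely for illustration; a framework like the Rhino Mocks mentioned above can generate such mocks for you instead of writing them by hand.

```csharp
using NUnit.Framework;

// Hypothetical dependency, expressed as an interface so a mock can be injected.
public interface ITaxService
{
    decimal GetTaxRate(string region);
}

// Hypothetical class under test; it receives its dependency via the constructor
// (constructor injection), so tests can substitute a mock for the real service.
public class PayrollCalculator
{
    private readonly ITaxService _taxService;

    public PayrollCalculator(ITaxService taxService)
    {
        _taxService = taxService;
    }

    public decimal NetPay(decimal grossPay, string region)
    {
        return grossPay * (1 - _taxService.GetTaxRate(region));
    }
}

// Hand-rolled mock: behaves in a standard, well-defined way for the test.
public class MockTaxService : ITaxService
{
    public decimal GetTaxRate(string region) { return 0.25m; }
}

[TestFixture]
public class PayrollCalculatorTests
{
    [Test]
    public void NetPay_AppliesTaxRateFromTaxService()
    {
        var calculator = new PayrollCalculator(new MockTaxService());

        // Only PayrollCalculator's own logic is exercised; the real tax
        // service (which might call a web service or database) never runs.
        Assert.AreEqual(750m, calculator.NetPay(1000m, "AU"));
    }
}
```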
Edit: Rob Wells. #Charles. Thanks for this. I'd forgotten about using mock objects to completely replace the other classes, except for the one under test.
A couple of other things I've remembered after you mentioning mock objects is that:
they can be used to simulate errors being returned by the included classes.
they can be used to raise specific exceptions to check exception handling in the class under test.
they can be used to simulate items where setup costs are high, e.g. a large SQL DB back end.
they can be used to verify the contents of an incoming request.
For more information, have a look at Martin Fowler's paper called "Mocks Aren't Stubs" and the Pragmatic Programmers' article "Mock Objects".
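As a rough sketch of the first two points above (continuing with C#/NUnit and hypothetical names): a hand-rolled mock can simulate a back-end failure by throwing, and can record the incoming request so the test can verify it afterwards.

```csharp
using System;
using NUnit.Framework;

public interface IOrderRepository
{
    void Save(string orderId);   // hypothetical dependency of the class under test
}

// Mock that simulates a failing back end and records the incoming request.
public class FailingOrderRepository : IOrderRepository
{
    public string LastSavedOrderId;

    public void Save(string orderId)
    {
        LastSavedOrderId = orderId;          // capture the request for later verification
        throw new InvalidOperationException("database unavailable");
    }
}

public class OrderService
{
    private readonly IOrderRepository _repository;
    public OrderService(IOrderRepository repository) { _repository = repository; }

    // Returns false instead of propagating the repository failure.
    public bool TryPlaceOrder(string orderId)
    {
        try { _repository.Save(orderId); return true; }
        catch (InvalidOperationException) { return false; }
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void TryPlaceOrder_ReportsFailure_WhenRepositoryThrows()
    {
        var repository = new FailingOrderRepository();
        var service = new OrderService(repository);

        Assert.IsFalse(service.TryPlaceOrder("A-42"));        // exception handling exercised
        Assert.AreEqual("A-42", repository.LastSavedOrderId); // incoming request verified
    }
}
```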
There is no reason why unit tests can't span multiple classes, or even submodules, as long as the test is treating only one consistent business operation. Think about "calculateWage", a method of a BO that uses different strategies to calculate the salary of a person. That's one unit test in my opinion.
I have heard of techniques in which many of the unit tests are done first, and development is done around them. Someone has just commented saying that this is "Test Driven Development" - TDD (Thanks Elie).
But if this is an offshore operation that's possibly going to charge you more money because they're spending time doing these unit tests - then I'd be careful. Get a second opinion from someone, experienced with unit tests who will verify that they're actually doing as they say.
From my understanding, unit testing will add a bit more time to any development project, but of course may offer some quality control. Nonetheless, this is the type of quality control I would want with an in house project. This may just be something the offshore company throws out there to give you a warm fuzzy.
There is a difference between the process you use to test and the technology that is used to support it. The various frameworks used for unit testing are generally very flexible and can be used for testing small units of code, large units and even testing entire processes. This flexibility can lead to confusion.
My recommendation is that, whatever specific methodology or process you adopt, you segregate the various unit tests into distinct assemblies or modules. The exact arrangement depends on your code and your company's organization.
The cumulative effect of using a unit test framework is that much of the testing of the code is automated. Adopted correctly, it lets developers evaluate changes to the code without going through a full QA process. As for the QA process itself, it makes the testers' time more productive, as the quality of the code coming out of development should be higher.
Understand that it is not THE answer to all quality issues; it is just a useful tool, like the others you use.
Wikipedia would seem to suggest that unit testing is about testing the smallest amount of code which would be a method on a class in the case of OO programming.
Some may have a more general notion of what they mean by unit tests; some may think of certain integration tests as being unit tests, where the unit is a mixture of components.
There is a traditional view of software testing (http://en.wikipedia.org/wiki/Software_testing) as part of software engineering (http://en.wikipedia.org/wiki/Software_engineering), but I like the idea of an agile unit test: a test is an agile unit test if it is fast enough that the programmers always run it.
A unit test is the smallest and only piece of confidence you can get yourself on your way to being done. That is what matters, iteratively building a shield against regression and spec deviation, not how you actually integrate it to your Object-Oriented architecture.
This is almost a repeat of the "What is a 'Unit'?" question.
"Unit" can be defined flexibly. If their document doesn't have a definition of "unit", you'll need to clarify that.
It might be that they think of unit as a big assembly of code. Which is not the most desirable definition.
While I agree that you have several layers of testing (unit, module, package, application), I also think that much of this can be done with unit testing tools. Leading to "what is a unit?" questions coming up all the time.
The unit depends on context. For an individual developer, the unit is typically a class. Sometimes it will also mean a module or package.
For a team, however, their unit may be a package or a complete application.
What does it matter what we think? The issue here is your unhappiness with the terms they use in the document. Why don't you discuss it with them?
Ten years ago, before the current usage of "unit testing" as tests written in code, the same designation was applied to manual tests. I worked for a software development firm with a very formalized software development process. We had to write "unit tests" before writing any code. In that era, the unit tests were written in a text document (such as in Word). They described the exact steps that the user was to follow in using the app. For example, they described the exact input to type on the page to set up a new customer. Or, the user was to search for a particular product, and see that the displayed information matched the test document. So, the test was a script that the tester followed, where they also recorded the results. When the new incarnation of unit testing came along, it was confusing for a while trying to figure out if they meant the old, human tests or the new, coded tests.
I lead a group of offshore developers too. Supposedly we have a set of unit tests... but they don't mean much. :) So we rely much more on functional tests and testers for quality. The inherent problem with unit testing is that it assumes you have perfect knowledge of the functionality, and that you trust the developers. In the real world, that's hard to assume..
I am starting to buy into BDD. Basically, as I understand it, you write a scenario that describes the acceptance criteria for a certain story. You start with simple tests, from the outside in, using mocks in place of classes you have not implemented yet. As you progress, you should replace the mocks with real classes. From Introduction to BDD:
At first, the fragments are implemented using mocks to set an account to be in credit or a card to be valid. These form the starting points for implementing behaviour. As you implement the application, the givens and outcomes are changed to use the actual classes you have implemented, so that by the time the scenario is completed, they have become proper end-to-end functional tests.
My question is: When you finish implementing a scenario, should all classes you use be real, like in integration tests? For example, if you use DB, should your code write to a real (but lightweight in-memory) DB? In the end, should you have any mocks in your end-to-end tests?
Well, it depends :-) As I understand, the tests produced by BDD are still unit tests, so you should use mocks to eliminate dependency on external factors like DB.
In full fledged integration / functionality tests, however, you should obviously test against the whole production system, without any mocks.
Integration tests might contain stubs/mocks to fake the code/components outside the modules that you are integrating.
However, IMHO an end-to-end test should mean no stubs/mocks along the way but production code only. Or in other words, if mocks are present, it is not really an end-to-end test.
Yes, by the time a scenario runs, ideally all your classes will be real. A scenario exercises the behaviour from a user's point of view, so the system should be as a user would see it.
In the early days of BDD we used to start with mocks in the scenarios. I don't bother with this any more, because it's a lot of work to keep mocking as you go down the levels. Instead I will sometimes do things like hard-code data or behaviour if it lets me get feedback from the stakeholders more quickly.
We still keep mocks in the unit tests though.
For things like databases, sure, you can use an in-memory DB or whatever helps you get feedback faster. At some point you should probably run your scenarios on a system that's as close to production as possible. If this is too slow, you might do it overnight instead of as part of your regular build cycle.
As for what you "should" do, writing the right code is far more tricky than writing the code right. I worry about using my scenarios to get feedback from the stakeholders and users before I worry about how close my environment is to a production environment. When you get to the point where changes are deployed every couple of weeks, sure, then you probably want more certainty that you're not introducing any bugs.
Good luck!
I agree with Peter and ratkok. I think you keep the mocks forever, so you always have unit tests.
Separately, it is appropriate to additionally have integration tests (no mocks, use a DB, etc. etc.).
You may even find in-betweens helpful at times (mock one piece of depended-on code (DOC), but not another).
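A rough sketch of such an in-between test, with hypothetical names: the slow external dependency (the DOC) is stubbed, while the real domain class is exercised.

```csharp
using NUnit.Framework;

public interface IExchangeRateFeed            // slow external dependency (DOC) - stubbed
{
    decimal GetRate(string currency);
}

public class StubExchangeRateFeed : IExchangeRateFeed
{
    public decimal GetRate(string currency) { return 1.5m; }
}

public class Money                             // real domain class - deliberately NOT mocked
{
    public decimal Amount { get; private set; }
    public Money(decimal amount) { Amount = amount; }
    public Money Times(decimal factor) { return new Money(Amount * factor); }
}

public class CurrencyConverter                 // class under test
{
    private readonly IExchangeRateFeed _feed;
    public CurrencyConverter(IExchangeRateFeed feed) { _feed = feed; }

    public Money Convert(Money source, string targetCurrency)
    {
        return source.Times(_feed.GetRate(targetCurrency));
    }
}

[TestFixture]
public class CurrencyConverterTests
{
    [Test]
    public void ConvertsUsingTheFeedRate()
    {
        var converter = new CurrencyConverter(new StubExchangeRateFeed());

        var result = converter.Convert(new Money(10m), "EUR");

        Assert.AreEqual(15m, result.Amount);   // real Money arithmetic, stubbed rate
    }
}
```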
I've only recently been looking into BDD and in particular jBehave. I work in fairly large enterprises with a lot of waterfall, ceremony-orientated people. I'm looking at BDD as a way to take the business's use cases and turn them directly into tests which the developer can then turn into either unit tests or integration tests.
BDD seems to me to be not just a way to help drive the developers' understanding of what the business wants, but also a way to ensure as much as possible that those requirements are accurately represented.
My view is that if you are dealing with mocks then you are doing unit tests. You need both unit testing to test out the details of a class's operation, and integration tests to check that the class plays well with others. I find developers often get confused between the two, but it's best to be as clear as possible and keep them separate from each other.
Let me start from definition:
Unit Test is a software verification and validation method in which a programmer tests if individual units of source code are fit for use
Integration testing is the activity of software testing in which individual software modules are combined and tested as a group.
Although they serve different purposes, these terms are very often mixed up. Developers refer to automated integration tests as unit tests. Also, some argue about which one is better, which seems to me to be the wrong question entirely.
I would like to ask development community to share their opinions on why automated integration tests cannot replace classic unit tests.
Here are my own observations:
Integration tests cannot be used with a TDD approach
Integration tests are slow and cannot be executed very often
In most cases integration tests do not indicate the source of the problem
It's more difficult to create the test environment with integration tests
It's more difficult to ensure high coverage (e.g. simulating special cases, unexpected failures etc.)
Integration tests cannot be used with interaction-based testing
Integration tests push the moment of discovering a defect further away (from paxdiablo)
EDIT: Just to clarify once again: the question is not about whether to use integration or unit testing, and not about which one is more useful. Basically I want to collect arguments to present to development teams which write ONLY integration tests and consider them to be unit tests.
Any test which involves components from different layers is considered an integration test, in contrast to a unit test, where isolation is the main goal.
Thank you,
Andrey
Integration tests tell you whether it's working. Unit tests tell you what isn't working. So long as everything is working, you "don't need" the unit tests - but once something is wrong, it's very nice to have the unit test point you directly to the problem. As you say, they serve different purposes; it's good to have both.
To directly address your subject: integration tests aren't a problem in themselves; they aren't the problem. Using them instead of unit tests is.
There have been studies(a) that show that the cost of fixing a bug becomes higher as you move away from the point where the bug was introduced.
For example, it will generally cost you relatively little to fix a bug in software you haven't even pushed up to source control yet. It's your time and not much of it, I'd warrant (assuming you're any good at your job).
Contrast that with how much it costs to fix when the customer (or all your customers) find that problem. Many levels of people get involved and new software has to be built in a hurry and pushed out to the field.
That's the extreme comparison. But even the difference between unit and integration tests can be apparent. Code that fails unit testing mostly affects only the single developer (unless other developers/testers/etc are waiting on it, of course). However, once your code becomes involved in integration testing, a defect can begin holding up other people on your team.
We wouldn't dream of replacing our unit tests with integration tests since:
Our unit tests are automated as well so, other than initial set-up, the cost of running them is small.
They form the beginning of the integration tests. All unit tests are rerun in the integration phase to check that the integration itself hasn't broken anything, and then there are the extra tests that have been added by the integration team.
(a) See, for example, http://slideshare.net/Vamsipothuri/defect-prevention, slide #5, or search the net for "Defect prevention: Reducing costs and enhancing quality". The key graph is on slide 5 of that deck, in case the link ever becomes hard to find on the net.
I find integration tests markedly superior to unit tests. If I unit test my code, I'm only testing what it does versus my understanding of what it should do. That only catches implementation errors. But often a much bigger problem is errors of understanding. Integration tests catch both.
In addition, there is a dramatic cost difference; if you're making intensive use of unit tests, it's not uncommon for them to outweigh all the rest of your code put together. And they need to be maintained, just like the rest of the code does. Integration tests are vastly cheaper -- and in most cases, you already need them anyway.
There are rare cases where it might be necessary to use unit tests, e.g. for internal error handling paths that can't be triggered if the rest of the system is working correctly, but most of the time, integration tests alone give better results for far lower cost.
Integration tests are slow.
Integration tests may break for different reasons (they are not focused and isolated), therefore you need more debugging on failures.
The combinations of scenarios are too big for integration tests when the code is not unit tested.
Mostly I do unit tests, and ten times fewer integration tests (configuration, queries).
In many cases you need both. Your observations are right on track as far as I'm concerned with respect to using integration tests as unit tests, but they don't mean that integration tests are not valuable or needed, just that they serve a different purpose. One could equally argue that unit tests can't replace integration tests, precisely because they remove the dependencies between objects and they don't exercise the real environment. Both are correct.
It's all about reducing the iteration time.
With unit tests, you can write a line of code and verify it in a minute or so. With integration tests, it usually takes significantly longer (and the cost increases as the project grows).
Both are clearly useful, as both will detect issues that the other fails to detect.
OTOH, from a "pure" TDD approach, unit tests aren't tests, they're specifications of functionality. Integration tests, OTOH, really do "test" in the more traditional sense of the word.
Integration testing generally happens after unit testing. I'm not sure what value there is in testing interactions between units that have not themselves been tested.
There's no sense in testing how the gears of a machine turn together if the gears might be broken.
The two types of tests are different. Unit tests, in my opinion, are not an alternative to integration tests, mainly because integration tests are usually context specific. You may well have a scenario where a unit test fails and your integration test doesn't, and vice versa. If you implement incorrect business logic in a class that utilizes many other components, you would want your integration tests to highlight this; your unit tests are oblivious to it.
I understand that integration testing is quick and easy, but I would argue that you rely on your unit tests each time you make a change to your code base, and having a list of greens gives you more confidence that you have not broken any expected behaviour at the individual class level. Unit tests verify that a single class is doing what it was designed to do. Integration tests verify that a number of classes working together do what you expect them to do for that particular collaboration instance. That is the whole idea of OO development: individual classes that encapsulate particular logic, which allows for reuse.
I think coverage is the main issue.
A unit test of a specific small component such as a method or at most a class is supposed to test that component in every legal scenario (of course, one abstracts equivalence classes but every major one should be covered). As a result, a change that breaks the established specification should be caught at this point.
In most cases, an integration test uses only a subset of the possible scenarios for each subunit, so it is possible for malfunctioning units to still produce a program that initially integrates well.
It is typically difficult to achieve maximal coverage with integration testing, for all the reasons you specified. Without unit tests, it is more likely that a change to a unit that essentially operates it in a new scenario would not be caught and might be missed in the integration testing. Even if it is not missed, pinpointing the problem may be extremely difficult.
I am not sure that most developers refer to unit tests as integration tests. My impression is that most developers understand the differences, which does not mean they practice either.
A unit test is written to test a method on a class. If that class depends on any kind of external resource or behavior, you should mock them, to ensure you test just your single class. There should be no external resources in a unit test.
An integration test is a higher level of granularity, and as you stated, you should test multiple components to check if they work together as expected. You need both integration tests and unit tests for most projects. But it is important they are kept separate and the difference is understood.
Unit tests, in my opinion, are more difficult for people to grasp. It requires a good knowledge of OO principles (fundamentally based on one class, one responsibility). If you are able to test all your classes in isolation, chances are you have a well-designed solution which is maintainable, flexible and extendable.
When you check in, your build server should only run unit tests, and they should be done in a few seconds, not minutes or hours. Integration tests should be run overnight or manually as needed.
Unit tests focus on testing an individual component and do not rely on external dependencies. They are commonly used with mocks or stubs.
Integration tests involve multiple components and may rely on external dependencies.
I think both are valuable and neither one can replace the other in the job they do. I do see a lot of integration tests masquerading as unit tests though having dependencies and taking a long time to run. They should function separately and as part of a continuous integration system.
Integration tests do often find things that unit tests do not though...
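One way to keep the two suites separate in practice is sketched below with NUnit's Category attribute (the class names are hypothetical); most test runners can then include or exclude a category, so the check-in build runs only the fast tests and the integration suite runs overnight.

```csharp
using NUnit.Framework;

public static class PriceCalculator        // hypothetical class under test
{
    public static decimal Apply(decimal price, decimal discount)
    {
        return price * (1 - discount);
    }
}

[TestFixture]
public class PriceCalculatorTests          // fast, isolated: runs on every check-in
{
    [Test]
    public void AppliesDiscount()
    {
        Assert.AreEqual(90m, PriceCalculator.Apply(100m, 0.10m));
    }
}

[TestFixture]
[Category("Integration")]                  // tagged so the check-in build can exclude it;
public class OrderPersistenceTests         // run overnight against a real database instead
{
    [Test]
    public void SavedOrderCanBeReadBack()
    {
        // Would exercise real persistence here; left inconclusive in this sketch.
        Assert.Inconclusive("integration environment not configured in this sketch");
    }
}
```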
Integration tests let you check that whole use cases of your application work.
Unit tests check that low-level logic in your application is correct.
Integration tests are more useful for managers to feel safer about the state of the project (but useful for developers too!).
Unit tests are more useful for developers writing and changing application logic.
And of course, use them both to achieve best results.
It is a bad idea to "use integration tests instead of unit tests" because it means you aren't appreciating that they are testing different things, and of course passing and failing tests will give you different information. They make up sort of a ying and yang of testing as they approach it from either side.
Integration tests take an approach that simulates how a user would interact with the application. These will cut down on the need for as much manual testing, and passing tests can tell you that your app is good to go on multiple platforms. A failing test will tell you that something is broken but often doesn't give you a whole lot of information about what's wrong with the underlying code.
Unit tests should focus on making sure the inputs and outputs of your function are what you expect them to be in all cases. Passing unit tests can mean that your functions are working according to spec (assuming you have tests for all situations). However, all your functions working properly in isolation doesn't necessarily mean that everything will work perfectly when it's deployed. A failing unit test will give you detailed, specific information about why it's failing, which should in theory make it easier to debug.
In the end I believe a combination of both unit and integration tests will yield the quickest and most bug-free software. You could choose to use one and not the other, but I avoid using the phrase "instead of".
How I see integration testing & unit testing:
Unit Testing: Test small things in isolation with low level details including but not limited to 'method conditions', checks, loops, defaulting, calculations etc.
Integration testing: Test a wider scope which involves a number of components, which can impact the behaviour of other things when married together. Integration tests should cover end-to-end integration and behaviours. The purpose of integration tests should be to prove that systems/components work fine when integrated together.
(I think) What is referred here by OP as integration tests are leaning more to scenario level tests.
But where do we draw the line between unit -> integration -> scenario?
What I often see is developers writing a feature and then, when unit testing it, mocking away every other piece of code this feature uses/consumes and only testing their own feature code, because they think someone else tested that so it should be fine. This helps code coverage but can harm the application in general.
In theory the small isolation of unit tests should cover a lot, since everything is tested in its own scope. But such tests are flawed and do not see the complete picture.
A good unit test should try to mock as little as possible. Mocking the API and persistence would be an example. Even if the application itself does not use IoC (Inversion of Control), it should be easy to spin up some objects for a test without mocking; if every developer working on the project does this as well, it gets even easier. Then the tests are useful. These kinds of tests have an integration character to them and aren't as easy to write, but they help you find design flaws in your code. If it is not easy to test, then adapt your code to make it easy to test. (TDD)
Pros
Fast issue identification
Helps even before a PR merge
Simple to implement and maintain
Providing a lot of data for code quality checking (e.g. coverage etc.)
Allows TDD (Test Driven Development)
Cons
Misses scenario integration errors
Succumbs to developer blindness in their own code (happens to all of us)
A good integration test would be executed for complete end-to-end scenarios and even check persistence and APIs, which the unit tests could not cover, so you know where to look first when they fail.
Pros:
Test close to real world e2e scenario
Finds Issues that developers did not think about
Very helpful in microservices architectures
Cons:
Most of the time slow
Often needs a rather complex setup
Environment (persistence and API) pollution issues (needs cleanup steps)
Mostly not feasible to run on PRs (pull requests)
TL;DR: You need both; you can't replace one with the other! The question is how to design such tests to get the best from both, and not just have them to show good statistics to management.
Could anyone suggest books or materials for learning unit testing?
Some people consider code without unit tests to be legacy code. Nowadays, Test Driven Development is the approach for managing big software projects with ease. I like C++ a lot; I learnt it on my own without any formal education. I never looked into unit testing before, so I feel left out. I think unit tests are important and would be helpful in the long run. I would appreciate any help on this topic.
My main points of concern are:
What is a Unit Test? Is it a comprehensive list of test cases which should be analyzed? So let us say I have a class called "Complex Numbers" with some methods in it (let's say finding the conjugate, an overloaded assignment operator and an overloaded multiplication operator). What should be typical test cases for such a class? Is there any methodology for selecting test cases?
Are there any frameworks which can create unit tests for me, or do I have to write my own class for tests? I see an option of "Test" in Visual Studio 2008, but never got it working.
What are the criteria for unit tests? Should there be a unit test for each and every function in a class? Does it make sense to have unit tests for each and every class?
An important point (that I didn't realise in the beginning) is that Unit Testing is a testing technique that can be used by itself, without the need to apply the full Test Driven methodology.
For example, you have a legacy application that you want to improve by adding unit tests to problem areas, or you want to find bugs in an existing app. Now you write a unit test to expose the problem code and then fix it. These are semi test-driven, but can completely fit in with your current (non-TDD) development process.
Two books I've found useful are:
Test Driven Development in Microsoft .NET
A very hands-on look at Test Driven Development, following on from Kent Beck's original TDD book.
Pragmatic Unit Testing with C# and nUnit
It gets straight to the point about what unit testing is, and how to apply it.
In response to your points:
A unit test, in practical terms, is a single method in a class that contains just enough code to test one aspect / behaviour of your application. Therefore you will often have many very simple unit tests, each testing a small part of your application code. In nUnit, for example, you create a TestFixture class that contains any number of test methods. The key point is that the tests "test a unit" of your code, i.e. the smallest (sensible) unit possible. You don't test the underlying APIs you use, just the code you have written.
There are frameworks that can take some of the grunt work out of creating test classes, however I don't recommend them. To create useful unit tests that actually provide a safety net for refactoring, there is no alternative but for a developer to put thought into what and how to test their code. If you start becoming dependent on generating unit tests, it is all too easy to see them as just another task that has to be done. If you find yourself in this situation you're doing it completely wrong.
There are no simple rules as to how many unit tests per class, per method etc. You need to look at your application code and make an educated assessment of where the complexity exists and write more tests for these areas. Most people start by testing public methods only because these in turn usually exercise the remainder of the private methods. However this is not always the case and sometimes it is necessary to test private methods.
In short, even experienced unit testers start by writing obvious unit tests, then look for more subtle tests that become clearer once they have written the obvious tests. They don't expect to get every test up-front, but instead add them as they come to their mind.
While you've already accepted an answer to your question I'd like to recommend a few other books not yet mentioned:
Working Effectively with Legacy Code - Michael Feathers - As far as I know this is the only book to adequately tackle the topic of turning existing code that wasn't designed for testability into testable code. Written as more of a reference manual, it's broken down into three sections: an overview of the tools and techniques, a series of topical guides to common road blocks in legacy code, and a set of specific dependency-breaking techniques referenced throughout the rest of the book.
Agile Principles, Patterns, and Practices - Robert C. Martin - Examples in java, there is a sequel with examples in C#. Both are easy to adapt to C++
Clean Code: A Handbook of Agile Software Craftsmanship - Robert C. Martin - Martin describes this as a prequel to his APPP book and I would agree. This book makes a case for professionalism and self-discipline, two essential qualities in any serious software developer.
The two books by Robert (Uncle Bob) Martin cover much more material than just Unit testing but they drive home just how beneficial unit testing can be to code quality and productivity. I find myself referring to these three books on a regular basis.
In .NET I strongly recommend "The Art of Unit Testing" by Roy Osherove, it is very comprehensive and full of good advice.
Nowadays, Test Driven Development is the approach for managing big software projects with ease.
That is because TDD allows you to make sure after each change that everything that worked before the change still works, and if it doesn't it allows you to pinpoint what was broken, much easier. (see at the end)
What is a Unit Test? Is it a comprehensive list of test cases which should be analyzed?
A Unit Test is a piece of code that asks a "unit" of your code to perform an operation, then verifies that the operation was indeed performed and the result is as expected. If the result is not correct, it raises / logs an error.
So let us say I have a class called "Complex Numbers" with some methods in it (let's say finding the conjugate, an overloaded assignment operator and an overloaded multiplication operator). What should be typical test cases for such a class? Is there any methodology for selecting test cases?
Ideally, you would test all the code.
when you create an instance of the class, it is created with the correct default values
when you ask it to find the conjugates, it finds the correct ones (also test border cases, like the conjugate of zero)
when you assign a value, the value is assigned and displayed correctly
when you multiply a complex by a value, it is multiplied correctly
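Sticking with C#/NUnit for consistency with the rest of this thread (the same cases translate directly to the CppUnit and Boost test frameworks mentioned below), here is a sketch of those cases as tests. The Complex struct is a hypothetical minimal implementation, just enough to give the tests something to exercise.

```csharp
using NUnit.Framework;

// Hypothetical minimal complex-number type, just enough to illustrate the test cases.
public struct Complex
{
    public readonly double Re;
    public readonly double Im;

    public Complex(double re, double im) { Re = re; Im = im; }

    public Complex Conjugate() { return new Complex(Re, -Im); }

    public static Complex operator *(Complex a, Complex b)
    {
        return new Complex(a.Re * b.Re - a.Im * b.Im, a.Re * b.Im + a.Im * b.Re);
    }
}

[TestFixture]
public class ComplexTests
{
    [Test]
    public void DefaultConstructedComplexIsZero()          // correct default values
    {
        var c = new Complex();
        Assert.AreEqual(0.0, c.Re);
        Assert.AreEqual(0.0, c.Im);
    }

    [Test]
    public void ConjugateNegatesImaginaryPart()             // normal case
    {
        var c = new Complex(3, 4).Conjugate();
        Assert.AreEqual(3.0, c.Re);
        Assert.AreEqual(-4.0, c.Im);
    }

    [Test]
    public void ConjugateOfZeroIsZero()                     // border case
    {
        var c = new Complex(0, 0).Conjugate();
        Assert.AreEqual(0.0, c.Re);
        Assert.AreEqual(0.0, c.Im);
    }

    [Test]
    public void MultiplicationFollowsComplexArithmetic()    // (1+2i)(3+4i) = -5+10i
    {
        var product = new Complex(1, 2) * new Complex(3, 4);
        Assert.AreEqual(-5.0, product.Re);
        Assert.AreEqual(10.0, product.Im);
    }
}
```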
Are there any frameworks which can create unit tests for me, or do I have to write my own class for tests?
See CppUnit
I see an option of "Test" in Visual
Studio 2008, but never got it working.
Not sure on that. I haven't used VS 2008 but it may be available just for .NET.
What are the criteria for unit tests? Should there be a unit test for each and every function in a class? Does it make sense to have unit tests for each and every class?
Yes, it does. While that is an awful lot of code to write (and maintain with every change) the price is worth paying for large projects: It guarantees that your changes to the code base do what you want them to and nothing else.
Also, when you make a change, you need to update the unit-tests for that change (so that they pass again).
In TDD, you first decide what you want the code to do (say, your complex numbers class), then write the tests that verify those operations, then write the class so that the tests compile and execute correctly (and nothing more).
This ensures that you write the minimal code possible (and don't over-complicate the design of the complex class), and it also ensures that your code does what it is supposed to do. At the end of writing the code, you have a way to test its functionality and ensure its correctness.
You also have an example of using the code that you will be able to access at any point.
For further reading/documentation, look into "dependency injection" and method stubs as used in unit testing and TDD.
With test driven design, you normally want to write the tests first. They should cover the operations you're actually using/going to use. I.e. unless they're necessary for the client code to do its job, they shouldn't exist. Selecting test cases is something of an art. There are obvious things like testing boundary conditions, but in the end, nobody's found a really reliable, systematic way of assuring that tests (unit or otherwise) cover all the conditions that matter.
Yes, there are frameworks. A couple of the best known are:
Boost Unit Test Framework
CppUnit
CppUnit is a port of JUnit, so those who've used JUnit previously will probably find it comfortable. Otherwise, I'd tend to recommend Boost -- they also have a Test Library to help write the individual tests -- rather a handy addition.
Unit tests should be sufficient to ensure that the code works. If (for example) you have a private function that's used internally, you generally don't need to test it directly. Instead, you test whatever provides the public interface. As long as that works correctly, it's no business of the outside world how it does its job. Of course, in some cases it's easier to test little pieces, and when it is, that's perfectly legitimate -- but ultimately you care about the visible interface, not the internals. Certainly the whole external interface should be exercised, and test cases generally chosen to exercise the paths through the code. Again, there's nothing massively different about unit tests versus other kinds. It's mostly just a more systematic way of applying normal testing techniques.
Unit tests are simply a way to exercise a given body of code to ensure that a defined set of conditions leads to the expected set of outcomes. As Steven points out, these "exercises" should check across a range of criteria ("BICEP"). Yes, ideally you should test all of your classes and all of the methods in these classes, although there is always some room for judgement: testing shouldn't be an end in itself but rather should support the wider project goals.
Ok, so...theory is nice but to really understand Unit Testing, my recommendation would be to pull together the appropriate tools and just get started. Like most things in programming, if you have the right tools, it is easy to learn by doing.
First, pick up a copy of NUnit. It is free, easy to install and easy to work with. If you'd like some documentation, check out Pragmatic Unit Testing in C# with NUnit.
Next, go to http://www.testdriven.net/ and get a copy of TestDriven.net. It installs into Visual Studio 2008 and gives you right-click access to a full range of testing tools including the ability to run NUnit tests against a file, directory or project (typically, tests are written in a separate project). You can also run tests with debugging or, coolest of all, run all the tests against a copy of NCover. NCover will show you exactly what code is being exercised so you can figure out where you need to improve your test coverage. TestDriven.net costs $170 for a professional license but, if you are like me, it will very quickly become an integral tool in your toolbox. Anyway, I've found it to be an excellent professional investment.
Good luck!
I can't answer your question for Visual Studio 2008, but I know that Netbeans has a few integrated tools for you to use.
The code coverage tool allows you to see which paths have been checked, and how much of the code is actually covered by the unit tests.
It has the support for the unit tests built in.
As far as quality of tests I'm borrowing a bit from the "Pragmatic Unit Testing in Java with JUnit" by Andrew Hunt and David Thomas:
Unit testing should check for BICEP:
Boundary, Inverse relationships, Cross-checking, Error conditions, and Performance.
Also quality of the tests are determined by A-TRIP:
Automatic, Thorough, Repeatable, Independent, and Professional.
Here's something on when not to write unit tests (i.e. on when it's viable and even preferable to skip unit testing): Should one test internal implementation, or only test public behaviour?
The short answer is:
When you can automate integration tests (because it's important to have automated tests, but those tests don't have to be unit tests)
When it's cheap to run the integration test suite (no good if it takes two days to run, or if you can't afford to let every developer have access to an integration test equipment)
When it isn't necessary to find bugs before integration testing (which depends in part on whether components are developed separately or incrementally)
Buy the book "xUnit Test Patterns: Refactoring Test Code". Its very excellent. It does cover high-level strategy decisions as well as low level test patterns.
Nowadays, Test Driven Development is the approach for managing big software projects with ease.
TDD is built on unit tests, but they are different. You don't need to use TDD to make use of unit tests. My personal preference is to write tests first, but I don't feel I do the whole TDD thing.
What is a Unit Test?
A Unit Test is a bit of code that tests the behaviour of one unit. How one unit is defined differs between people. But in general they are:
Quick to run
Independent from each other
Test only a small part (a unit ;) of your code base.
Binary outcome - That is it passes or fails.
Should only test one outcome of the unit (for each outcome create a different unit test)
Repeatable
Are there any frameworks which can create unit tests
To write the tests - Yes but I've never seen anyone say anything nice about them.
To help you write & run tests, a whole bunch of them.
Should there be a unit test for each and every function in a class?
You have a few different camps in this - the 100%ers would say yes: every method must be tested and you should have 100% code coverage. The other extreme is that unit tests should only cover areas where you have encountered bugs or expect to find bugs. The middle ground (and the stand I take) is to unit test everything that is not "too simple to break": setters/getters and anything that just calls a single other method. I aim to have 80% code coverage and a low CRAP factor (so there is a low chance I've been naughty and decided not to test something because it was "too complex to test").
The book that helped me "get" unit tests JUnit in Action. Sorry I don't do much in the C++ world, so I can not suggest a C++ based alternative.
We have tried to introduce unit testing to our current project but it doesn't seem to be working. The extra code seems to have become a maintenance headache as when our internal Framework changes we have to go around and fix any unit tests that hang off it.
We have an abstract base class for unit testing our controllers that acts as a template calling into the child classes' abstract method implementations i.e. Framework calls Initialize so our controller classes all have their own Initialize method.
I used to be an advocate of unit testing but it doesn't seem to be working on our current project.
Can anyone help identify the problem and how we can make unit tests work for us rather than against us?
Tips:
Avoid writing procedural code
Tests can be a bear to maintain if they're written against procedural-style code that relies heavily on global state or lies deep in the body of an ugly method.
If you're writing code in an OO language, use OO constructs effectively to reduce this.
Avoid global state if at all possible.
Avoid statics as they tend to ripple through your codebase and eventually cause things to be static that shouldn't be. They also bloat your test context (see below).
Exploit polymorphism effectively to prevent excessive ifs and flags
Find what changes, encapsulate it and separate it from what stays the same.
There are choke points in code that change a lot more frequently than other pieces. Do this in your codebase and your tests will become more healthy.
Good encapsulation leads to good, loosely coupled designs.
Refactor and modularize.
Keep tests small and focused.
The larger the context surrounding a test, the more difficult it will be to maintain.
Do whatever you can to shrink tests and the surrounding context in which they are executed.
Use composed method refactoring to test smaller chunks of code.
Are you using a newer testing framework like TestNG or JUnit4?
They allow you to remove duplication in tests by providing you with more fine-grained hooks into the test lifecycle.
Investigate using test doubles (mocks, fakes, stubs) to reduce the size of the test context.
Investigate the Test Data Builder pattern.
Remove duplication from tests, but make sure they retain focus.
You probably won't be able to remove all duplication, but still try to remove it where it's causing pain. Make sure you don't remove so much duplication that someone can't come in and tell what the test does at a glance. (See Paul Wheaton's "Evil Unit Tests" article for an alternative explanation of the same concept.)
No one will want to fix a test if they can't figure out what it's doing.
Follow the Arrange, Act, Assert pattern (a short code sketch follows after these tips).
Use only one assertion per test.
Test at the right level to what you're trying to verify.
Think about the complexity involved in a record-and-playback Selenium test and what could change under you versus testing a single method.
Isolate dependencies from one another.
Use dependency injection/inversion of control.
Use test doubles to initialize an object for testing, and make sure you're testing single units of code in isolation.
Make sure you're writing relevant tests
"Spring the Trap" by introducing a bug on purpose and make sure it gets caught by a test.
See also: Integration Tests Are A Scam
Know when to use State Based vs Interaction Based Testing
True unit tests need true isolation. Unit tests don't hit a database or open sockets. Stop at mocking these interactions. Verify you talk to your collaborators correctly, not that the proper result from this method call was "42".
Demonstrate Test-Driving Code
It's up for debate whether or not a given team will take to test-driving all code, or writing "tests first" for every line of code. But should they write at least some tests first? Absolutely. There are scenarios in which test-first is undoubtedly the best way to approach a problem.
Try this exercise: TDD as if you meant it (Another Description)
See also: Test Driven Development and the Scientific Method
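A minimal sketch of the "Arrange, Act, Assert" tip above, using NUnit and a hypothetical ShoppingCart class: each test builds its context, performs a single action, and makes one assertion.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical class under test.
public class ShoppingCart
{
    private readonly List<decimal> _prices = new List<decimal>();

    public void Add(decimal price) { _prices.Add(price); }

    public decimal Total()
    {
        decimal total = 0m;
        foreach (var p in _prices) total += p;
        return total;
    }
}

[TestFixture]
public class ShoppingCartTests
{
    [Test]
    public void Total_SumsAllItemPrices()
    {
        // Arrange: build the context the test needs, and nothing more.
        var cart = new ShoppingCart();
        cart.Add(2.50m);
        cart.Add(7.50m);

        // Act: perform the single behaviour being verified.
        var total = cart.Total();

        // Assert: one assertion, checking only what this test is about.
        Assert.AreEqual(10.00m, total);
    }
}
```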
Resources:
Test Driven by Lasse Koskela
Growing OO Software, Guided by Tests by Steve Freeman and Nat Pryce
Working Effectively with Legacy Code by Michael Feathers
Specification By Example by Gojko Adzic
Blogs to check out: Jay Fields, Andy Glover, Nat Pryce
As mentioned in other answers already:
XUnit Patterns
Test Smells
Google Testing Blog
"OO Design for Testability" by Miskov Hevery
"Evil Unit Tests" by Paul Wheaton
"Integration Tests Are A Scam" by J.B. Rainsberger
"The Economics of Software Design" by J.B. Rainsberger
"Test Driven Development and the Scientific Method" by Rick Mugridge
"TDD as if you Meant it" exercise originally by Keith Braithwaite, also workshopped by Gojko Adzic
Are you testing small enough units of code? You shouldn't see too many changes unless you are fundamentally changing everything in your core code.
Once things are stable, you will appreciate the unit tests more, but even now your tests are highlighting the extent to which changes to your framework are propagated through.
It is worth it, stick with it as best you can.
Without more information it's hard to make a decent stab at why you're suffering these problems. Sometimes it's inevitable that changing interfaces etc. will break a lot of things, other times it's down to design problems.
It's a good idea to try and categorise the failures you're seeing. What sort of problems are you having? E.g. is it test maintenance (as in making them compile after refactoring!) due to API changes, or is it down to the behaviour of the API changing? If you can see a pattern, then you can try to change the design of the production code, or better insulate the tests from changing.
If changing a handful of things causes untold devastation to your test suite in many places, there are a few things you can do (most of these are just common unit testing tips):
Develop small units of code and test small units of code. Extract interfaces or base classes where it makes sense so that units of code have 'seams' in them. The more dependencies you have to pull in (or worse, instantiate inside the class using 'new'), the more exposed to change your code will be. If each unit of code has a handful of dependencies (sometimes a couple or none at all) then it is better insulated from change.
Only ever assert on what the test needs. Don't assert on intermediate, incidental or unrelated state. Design by contract and test by contract (e.g. if you're testing a stack pop method, don't test the count property after pushing -- that should be in a separate test). I see this problem quite a bit, especially if each test is a variant. If any of that incidental state changes, it breaks everything that asserts on it (whether the asserts are needed or not).
Just as with normal code, use factories and builders in your unit tests. I learned that one when about 40 tests needed a constructor call updated after an API change...
Just as importantly, use the front door first. Your tests should always use normal state if it's available. Only use interaction-based testing when you have to (i.e. no state to verify against).
Anyway the gist of this is that I'd try to find out why/where the tests are breaking and go from there. Do your best to insulate yourself from change.
One of the benefits of unit testing is that when you make changes like this you can prove that you're not breaking your code. You do have to keep your tests in sync with your framework, but this rather mundane work is a lot easier than trying to figure out what broke when you refactored.
I would insist you stick with TDD. Check your unit testing framework, do an RCA (Root Cause Analysis) with your team, and identify the problem area.
Fix the unit testing code at the suite level, and do not change your code frequently, especially function names or other modules.
I would appreciate it if you could share your case study in detail; then we can dig more into the problem area.
Good question!
Designing good unit tests is as hard as designing the software itself. This is rarely acknowledged by developers, so the result is often hastily-written unit tests that require maintenance whenever the system under test changes. So, part of the solution to your problem could be spending more time to improve the design of your unit tests.
I can recommend one great book that deserves its billing as The Design Patterns of Unit-Testing
HTH
If the problem is that your tests are getting out of date with the actual code, you could do one or both of:
Train all developers to not pass code reviews that don't update unit tests.
Set up an automatic test box that runs the full set of units tests after every check-in and emails those who break the build. (We used to think that that was just for the "big boys" but we used an open source package on a dedicated box.)
Well if the logic has changed in the code, and you have written tests for those pieces of code, I would assume the tests would need to be changed to check the new logic. Unit tests are supposed to be fairly simple code that tests the logic of your code.
Your unit tests are doing what they are supposed to do. Bring to the surface any breaks in behavior due to changes in the framework, immediate code or other external sources. What this is supposed to do is help you determine if the behavior did change and the unit tests need to be modified accordingly, or if a bug was introduced thus causing the unit test to fail and needs to be corrected.
Don't give up, while its frustrating now, the benefit will be realized.
I'm not sure about the specific issues that make it difficult to maintain tests for your code, but I can share some of my own experiences when I had similar issues with my tests breaking. I ultimately learned that the lack of testability was largely due to some design issues with the class under test:
Using concrete classes instead of interfaces
Using singletons
Calling lots of static methods for business logic and data access instead of interface methods
Because of this, I found that usually my tests were breaking - not because of a change in the class under test - but due to changes in other classes that the class under test was calling. In general, refactoring classes to ask for their data dependencies and testing with mock objects (EasyMock et al for Java) makes the testing much more focused and maintainable. I've really enjoyed some sites in particular on this topic:
Google testing blog
The guide to writing testable code
Why should you have to change your unit tests every time you make changes to your framework? Shouldn't this be the other way around?
If you're using TDD, then you should first decide that your tests are testing the wrong behavior, and that they should instead verify that the desired behavior exists. Now that you've fixed your tests, your tests fail, and you have to go squish the bugs in your framework until your tests pass again.
Everything comes with price of course. At this early stage of development it's normal that a lot of unit tests have to be changed.
You might want to review some bits of your code to do more encapsulation, create less dependencies etc.
When you near production date, you'll be happy you have those tests, trust me :)
Aren't your unit tests too white-box oriented? I mean... let me take an example: suppose you are unit testing some sort of container. Do you use the get() method of the container to verify a new item was actually stored, or do you manage to get a handle to the actual storage to retrieve the item directly where it is stored? The latter makes brittle tests: when you change the implementation, you're breaking the tests.
You should test against the interfaces, not the internal implementation.
And when you change the framework, you'd be better off trying to change the tests first, and then the framework.
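A rough sketch of the interface-level style this answer recommends, with a hypothetical SimpleContainer class: the test verifies storage only through the public interface, so changing the internal data structure does not break it.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical container; the backing field is an implementation detail.
public class SimpleContainer<T>
{
    private readonly List<T> _items = new List<T>();  // could be swapped for another structure freely

    public void Put(T item) { _items.Add(item); }
    public T Get(int index) { return _items[index]; }
    public int Count { get { return _items.Count; } }
}

[TestFixture]
public class SimpleContainerTests
{
    [Test]
    public void StoredItemCanBeRetrievedThroughThePublicInterface()
    {
        var container = new SimpleContainer<string>();

        container.Put("widget");

        // Verified via Count/Get() only - no peeking at the private storage,
        // so changing the internal implementation won't break this test.
        Assert.AreEqual(1, container.Count);
        Assert.AreEqual("widget", container.Get(0));
    }
}
```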
I would suggest investing in a test automation tool. If you are using continuous integration you can make it work in tandem. There are tools out there which will scan your codebase, generate tests for you, and then run them. The downside of this approach is that it is too generic, because in many cases a unit test's purpose is to break the system.
I have written numerous tests, and yes, I have to change them if the codebase changes.
There is a fine line: with an automation tool you would definitely have better code coverage.
However, with well-written developer-based tests you will test system integrity as well.
Hope this helps.
If your code is really hard to test and the test code breaks or requires much effort to keep in sync, then you have a bigger problem.
Consider using the extract-method refactoring to yank out small blocks of code that do one thing and only one thing, without dependencies, and write your tests against those small methods.
The extra code seems to have become a maintenance headache as when our internal Framework changes we have to go around and fix any unit tests that hang off it.
The alternative is that when your Framework changes, you don't test the changes. Or you don't test the Framework at all. Is that what you want?
You may try refactoring your Framework so that it is composed from smaller pieces that can be tested independently. Then when your Framework changes, you hope that either (a) fewer pieces change or (b) the changes are mostly in the ways in which the pieces are composed. Either way will get you better reuse of both code and tests. But real intellectual effort is involved; don't expect it to be easy.
I found that unless you use an IoC / DI methodology that encourages writing very small classes, and follow the Single Responsibility Principle religiously, the unit tests end up testing the interaction of multiple classes, which makes them very complex and therefore fragile.
My point is, many of the novel software development techniques only work when used together. Particularly MVC, ORM, IoC, unit-testing and Mocking. The DDD (in the modern primitive sense) and TDD/BDD are more independent so you may use them or not.
Sometimes designing the TDD tests prompts questions about the design of the application itself. Check whether your classes have been well designed and your methods are only doing one thing at a time... With good design, it should be simple to write code to test simple methods and classes.
I have been thinking about this topic myself. I'm very sold on the value of unit tests, but not on strict TDD. It seems to me that, up to a certain point, you may be doing exploratory programming where the way you have things divided up into classes/interfaces is going to need to change. If you've invested a lot of time in unit tests for the old class structure, that's increased inertia against refactoring, painful to discard that additional code, etc.
Unit testing is, roughly speaking, testing bits of your code in isolation with test code. The immediate advantages that come to mind are:
Running the tests becomes automate-able and repeatable
You can test at a much more granular level than point-and-click testing via a GUI
Rytmis
My question is, what are the current "best practices" in terms of tools as well as when and where to use unit testing as part of your daily coding?
Lets try to be somewhat language agnostic and cover all the bases.
OK, here are some best practices from someone who doesn't unit test as much as he should... cough.
Make sure your tests test one thing and one thing only.
Write unit tests as you go. Preferably before you write the code you are testing against.
Do not unit test the GUI.
Separate your concerns.
Minimise the dependencies of your tests.
Mock behaviour with mocks.
You might want to look at TDD on Three Index Cards and Three Index Cards to Easily Remember the Essence of Test-Driven Development:
Card #1. Uncle Bob’s Three Laws
Write no production code except to pass a failing test.
Write only enough of a test to demonstrate a failure.
Write only enough production code to pass the test.
Card #2: FIRST Principles
Fast: Mind-numbingly fast, as in hundreds or thousands per second.
Isolated: The test isolates a fault clearly.
Repeatable: I can run it repeatedly and it will pass or fail the same way each time.
Self-verifying: The Test is unambiguously pass-fail.
Timely: Produced in lockstep with tiny code changes.
Card #3: Core of TDD
Red: test fails
Green: test passes
Refactor: clean code and tests
The so-called xUnit framework is widely used. It was originally developed for Smalltalk as SUnit, evolved into JUnit for Java, and now has many other implementations such as NUnit for .Net. It's almost a de facto standard - if you say you're using unit tests, a majority of other developers will assume you mean xUnit or similar.
A great resource for 'best practices' is the Google Testing Blog, for example a recent post on Writing Testable Code is a fantastic resource. Specifically their 'Testing on the Toilet' series weekly posts are great for posting around your cube, or toilet, so you can always be thinking about testing.
The xUnit family are the mainstay of unit testing. They are integrated into the likes of Netbeans, Eclipse and many other IDEs. They offer a simple, structured solution to unit testing.
One thing I always try to do when writing a test is to minimise external code usage. By that I mean: I try to minimise the setup and teardown code for the test as much as possible, and try to avoid using other modules/code blocks as much as possible. Well-written modular code shouldn't require too much external code in its setup and teardown.
NUnit is a good tool for any of the .NET languages.
Unit tests can be used in a number of ways:
Test Logic
Increase separation of code units. If you can't fully test a function or section of code, then the parts that make it up are too interdependent.
Drive development: some people write tests before they write the code to be tested. This forces you to think about what you want the code to do, and then gives you a definite guideline on when you have achieved that.
Don't forget refactoring support. ReSharper on .NET provides automatic refactoring and quick fixes for missing code. That means if you write a call to something that does not exist, ReSharper will ask if you want to create the missing piece.