Do you separate your unit tests from your integration tests? [closed] - unit-testing

I was wondering if anyone else sees integration tests as just a special kind of unit test. I have, however, heard from other programmers that it is a good idea to separate unit tests from integration tests, and I was wondering if someone could explain why. What advantages are there to treating integration and unit tests as completely different things? For example, I have seen separate folders and packages for integration tests and unit tests. My own view is that a single test package containing both unit tests and integration tests would be sufficient, since they are basically the same concept.

I see them as different for the following reason.
Unit tests can be performed on a single class/module in the developer's environment.
Integration tests should be performed in an environment that resembles the actual production setup.
Unit tests are kept deliberately "lightweight" so that the developer can run them as often as needed with minimal cost.

Speed is the primary reason. You want your unit tests to be as fast as possible so that you can run them as often as possible. You should still run your integration tests, but running them once before a check-in should be enough IMO. The unit test suite should be run much more often - ideally with every refactoring.
I work in an environment where we have about 15k JUnit tests with unit and integration tests completely mixed. The full suite takes about half an hour to run. Developers avoid running it and find mistakes later than they should. Sometimes they check in after running only a subset of the tests and introduce a bug which breaks the continuous build.
Start separating your tests early. It's very hard to do once you have a large suite.
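One common way to keep both kinds of tests in one codebase but still separable is tagging. A minimal sketch, assuming JUnit 5 (the class names StringReversalTest and OrderRepositoryIT are made up for illustration):

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Fast, isolated unit test: no I/O, cheap enough to run on every build.
class StringReversalTest {
    @Test
    void reversesAString() {
        assertEquals("cba", new StringBuilder("abc").reverse().toString());
    }
}

// Slower test that would touch real infrastructure: tagged so the build
// can exclude it from the default run and execute it separately.
@Tag("integration")
class OrderRepositoryIT {
    @Test
    void persistsAndReloadsAnOrder() {
        // ...exercise a real repository against a test database here...
    }
}

Maven (Surefire/Failsafe) and Gradle can both include or exclude tests by tag, so the fast untagged suite runs on every build while the tagged tests run before check-in or on the CI server.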

Yep. Typically unit tests are scoped at the class level, so they live in an environment built around mock objects. Integration tests, on the other hand, work against your real assembly types rather than mocks.
I just don't see how one would organize both unit and integration tests in a single project.

If you limit the notion of 'unit test' to the class level, then yes, keep them separate.
However, if you define the smallest relevant testable unit as a feature, then some of your 'unit' tests will technically be 'integration' tests.
Rehashing the various definitions and interpretations of the terms is largely irrelevant, though; the partitioning of your test suites should be a function of the scope of the components being tested and the time required to run the tests.
For example, if all of your tests (unit, integration, regression, or whatever) apply against a single assembly and run in a few seconds, then keep them all together. But if some of your tests require six clean installation machines on a subnet while others do not, it makes sense to separate the first set from the latter.
Summary: the distinction between 'unit' and 'integration' tests is irrelevant; package test suites based on operational scope.

Related

Unit Test, Test Driven Development [closed]

I have a debate with my colleague regarding unit tests and test-driven development. The points in question are:
1) Writing a unit test before you write functional code does not constitute a Test Driven Development approach.
I think writing a unit test first does constitute Test Driven Development; it is part of TDD.
2) A suite of unit tests is simply a by-product of TDD.
I think a suite of unit tests is NOT a by-product of TDD.
What do you say?
1) Writing tests before you write functional code is necessary to do TDD, but does not by itself constitute TDD, at least according to the classic definition, where the important point is that making the tests pass is what drives the design (rather than some formal design document).
2) Again, the classic view says that the important point is that the design evolves from the tests, forcing it to be very modular. It is (or was) a novel concept that tests could (and should) influence the design, and it was perhaps rejected or overlooked often enough that TDD proponents started to feel it needed to be stressed. But saying that the tests themselves are "just a by-product" is, IMO, a counterproductive exaggeration.
Writing unit tests prior to writing functional code is the whole point of TDD. So the suite of unit tests is not a by-product; it is the central tenet.
From Wikipedia's article:
Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: first the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test, and finally refactors the new code to acceptable standards.
TDD is all about Red - Green - Refactor.
When you have code before the test, there is no Red, and that is bad: you may have errors in your tests, or be testing something other than what you think you are testing, and get Green from the start. You should see Red first, and go to Green only after you add the code that the test exercises.
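A tiny sketch of the cycle, assuming JUnit (Counter and CounterTest are invented for illustration): the test comes first and fails, then the smallest implementation makes it pass, and refactoring happens only on a green bar.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Red: this test is written first, against a class that does not exist yet,
// so it fails (it does not even compile, which also counts as Red).
class CounterTest {
    @Test
    void newCounterStartsAtZero() {
        assertEquals(0, new Counter().value());
    }
}

// Green: the smallest implementation that makes the test pass.
// Refactoring happens only once the bar is green, re-running the test each time.
class Counter {
    int value() {
        return 0;
    }
}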
It really depends on what your tests do.
As far as I'm concerned, TDD means that classes, properties and methods are created because the tests were written for them first. In fact, some development tools allow you to create code stubs directly from the test screens.
Writing a unit test for a method in a class that you've already created isn't TDD. Writing code so that your test cases pass is.
TDD will give you far greater test coverage than standard unit testing. It will also focus your thoughts on what is wanted of the code, and although it may appear to take longer to produce a product by definition, it's more or less fully tested when built.
You will always end up with a suite of unit tests at the end.
However, a pragmatic approach must be taken as to how far you go with this, as some areas are notoriously difficult to produce in a TDD style, e.g. WPF MVVM-style views or web pages with JavaScript.

How do you build a unit testing philosophy? [closed]

Hi Stack Overflow family.
There is no doubt that unit testing is of great importance in software development. But I think test-first is as much a practice and a philosophy as a technique. The majority of developers want to follow this philosophy but can't apply it on their projects because they aren't used to Test Driven Development. My question is for those who do follow it: what are the properties of a good test, in your experience? And how did you make it part of your daily work?
Good day.
The Way of Testivus brings enlightenment on unit testing.
If you write code, write tests.
Don’t get stuck on unit testing dogma.
Embrace unit testing karma.
Think of code and test as one.
The test is more important than the unit.
The best time to test is when the code is fresh.
Tests not run waste away.
An imperfect test today is better than a perfect test someday.
An ugly test is better than no test.
Sometimes, the test justifies the means.
Only fools use no tools.
Good tests fail.
Some characteristics of a good test:
its execution doesn't depend on context (or state) - i.e. whether it's run in isolation or together with other tests;
it tests exactly one functional unit;
it covers all possible scenarios of the tested functional unit.
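A minimal sketch of those three properties, assuming JUnit (the Discount class is invented for illustration): each test builds its own fixture, checks a single behaviour, and together they cover the normal, boundary and failure cases.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class DiscountTest {
    // No shared mutable state: each test creates what it needs,
    // so the result is the same alone or alongside other tests.
    @Test
    void tenPercentOffAHundred() {
        assertEquals(90.0, new Discount(10).apply(100.0), 0.001);
    }

    @Test
    void zeroPercentLeavesPriceUnchanged() {
        assertEquals(100.0, new Discount(0).apply(100.0), 0.001);
    }

    @Test
    void negativePercentageIsRejected() {
        assertThrows(IllegalArgumentException.class, () -> new Discount(-5));
    }
}

// Minimal unit under test so the sketch is self-contained.
class Discount {
    private final int percent;
    Discount(int percent) {
        if (percent < 0) throw new IllegalArgumentException("percent must be >= 0");
        this.percent = percent;
    }
    double apply(double price) {
        return price * (100 - percent) / 100.0;
    }
}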
The discussion in these threads cannot be better phrased:
http://discuss.joelonsoftware.com/default.asp?joel.3.732806.3
http://discuss.joelonsoftware.com/default.asp?joel.3.39296.27
As for what makes a good test, it is one which catches a defect :). But TDD is about more than catching defects; it is about development and continuity.
I always think the rules and philosophy of TDD are best summed up in this article by Robert C. Martin:
The Three Rules of TDD
In it, he summarises TDD with the following three rules:
You are not allowed to write any production code unless it is to make a failing unit test pass.
You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
There is an implied fourth rule:
You should refactor your code while the tests are passing.
While there are many more detailed examples, articles and books, I think these rules sum up TDD nicely.

The value of high level unit tests and mock objects [closed]

I am beginning to believe that unit testing high level, well-written code, which requires extensive use of mock objects, has little to no value. I am wondering if this assertion is correct, or am I missing something?
What do I mean by high level? These are the classes and functions near the top of the food chain. Their input and output tends to be user input, and user interface. Most of their work consists of taking user input and making a series of calls to lower-level entities. They often have little or no meaningful return values.
What do I mean by well-written? In this case, I am referring to code that is decoupled from its dependencies (using interfaces and dependency injection), and line by line is at a consistent level of abstraction. There's no tricky algorithms, and few conditionals.
I hate writing unit tests for this kind of code. The unit tests consist almost entirely of mock object setup. Line by line, the unit tests read almost like a mirror image of the implementation. In fact, I write the unit tests by looking at the implementation. "First I assert this mock method is called, then I assert this mock method is called...", etc. I should be testing the behavior of the method, not that it's calling the right sequence of methods. Another thing: I have found that these tests are extremely fragile under refactoring. If a test is so brittle that it utterly shatters and must be rewritten when the code under test is refactored, then hasn't one of the major benefits of unit testing been lost?
I don't want this post to be flagged as argumentative, or not a question. So I'll state my question directly: What is the correct way to unit test the kind of code I have described, or is it understood that not everything needs a unit test?
In my experience, the lower level your code is (short of being trivial), the more value unit tests are, relative to the effort required to write them. As you get higher up the food chain, tests become increasingly elaborate and more expensive.
Unit tests are critical because they tell you when you break something during refactoring.
Higher level tests have their own value, but then they are no longer called unit tests; they are called integration tests and acceptance tests. Integration tests are needed because they tell you how well the different software components work together.
Acceptance tests are what the customer signs off. Acceptance tests are typically written by other people (not the programmer) in order to provide a different perspective; programmers tend to write tests for what works, testers try to break it by testing what doesn't work.
Mocking is only useful for unit tests. For integration and acceptance tests, mocking is useless because it doesn't exercise the actual system components, such as the database and the communication infrastructure.
An aside
Just to touch on your bolded statement:
"I should be testing the behavior of
the method, not that it's calling the
right sequence of methods"
The behaviour of the object-under-test is the sequence of actions it takes. This is actually "behaviour" testing, whereas when you say "behaviour of the method", I think you mean stateful testing, as in, give it an input and verify the correct output.
I make this distinction because some BDD purists go so far as to argue that it is much more meaningful to test what your class should be calling on, rather than what the inputs and outputs are, because if you know fully how your system is behaving, then your inputs and outputs will be correct.
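The distinction can be shown side by side; a sketch assuming JUnit and Mockito, with OrderService, AuditLog and their methods invented for illustration. The first test asserts on the returned value (state), the second asserts on what the collaborator was asked to do (interaction).

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

class OrderServiceTest {
    // State-based test: give an input, check the returned value.
    @Test
    void totalsLineItems() {
        OrderService service = new OrderService(mock(AuditLog.class));
        assertEquals(30, service.total(new int[] {10, 20}));
    }

    // Interaction-based ("behaviour") test: check what the object asked
    // its collaborator to do, not what it returned.
    @Test
    void recordsEveryTotalInTheAuditLog() {
        AuditLog log = mock(AuditLog.class);
        OrderService service = new OrderService(log);
        service.total(new int[] {10, 20});
        verify(log).recorded(30);
    }
}

// Minimal collaborators so the sketch compiles.
interface AuditLog {
    void recorded(int total);
}

class OrderService {
    private final AuditLog log;
    OrderService(AuditLog log) { this.log = log; }
    int total(int[] amounts) {
        int sum = 0;
        for (int a : amounts) sum += a;
        log.recorded(sum);
        return sum;
    }
}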
A response
That aside, I personally never write comprehensive tests for the UI layer. If you are using an MVVM, MVP or MVC pattern for your application, then at a "1-developer team" level, it's mind-numbing and counter-productive for me to do so. I can see the bugs in the UI, and yes, mocking behaviours at this level tends to be brittle. I'm much more concerned with making sure that my underlying domain and DAL layers are performing properly.
What is of value at the top level is an integration test. Got a web app? Instead of asserting that your controller methods are returning an ActionResult (test of little value), write an integration test that requests all the pages in your app and makes sure there are no 404's or 403's. Run it once on every deployment.
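A sketch of that kind of smoke test, assuming Java 11's built-in HttpClient; the base address and page list are placeholders you would replace with your own routes.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class DeploymentSmokeTest {
    private static final String BASE = "http://localhost:8080"; // placeholder base address

    @Test
    void everyKnownPageResponds() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String[] pages = {"/", "/orders", "/customers"}; // placeholder routes
        for (String page : pages) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(BASE + page)).GET().build();
            HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());
            // Any 4xx/5xx (404, 403, 500, ...) fails the deployment check.
            assertTrue(response.statusCode() < 400,
                page + " returned " + response.statusCode());
        }
    }
}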
Final answer
I always follow the 80/20 rule with unit testing. Getting that last 20% of coverage at the high level you are talking about is going to take 80% of your effort. For my personal projects and most of my work projects, this doesn't pay off.
In short, I agree. I would write integration tests, and ignore unit tests for the code you describe.
I think it is highly dependent on environment. If you are on a relatively small team and can maintain test integrity, then the more complex parts of your application should have unit tests. In my experience, maintaining test integrity on large teams is quite difficult: the tests are initially fine until they inevitably break, at which point they are either a) "fixed" in a way which completely negates their usefulness, or b) promptly commented out.
The main point of mock-heavy testing often seems to be so that managers can claim that the code-coverage metric is at Foo%... so everything must be working! The one exceptional case where mocks are genuinely useful is when you need to test a class which is a huge pain to recreate authentically (testing an Action class in Struts, for example).
I am a big believer in writing raw tests. Real code, with real objects. The code inside of methods will change over time, but the purpose and therefore the overall behaviour usually does not.
If doing TDD you should not write tests after implementation but rather the other way around. This way you'll also avoid the problem of making a test conform to the written code. You probably do have to test certain method calls within those units, but not their sequence (if it's not imperative to the domain problem - business process).
And sometimes it's perfectly feasible not to write a test for a certain method.
In general, I consider testing this type of method/command to be ripe for the integration testing level. Specifically, I "unit test" for smaller, low level commands that (generally) don't have side effects. If I really want to unit test something that doesn't fit that model, the first thing I do is see if I can refactor/redesign to make it fit.
At the higher, integration (and/or system) testing level, I get into the testing of things that have side effects. I try to mock as little as possible (possibly only external resources) at this point. An example would be mocking the database layer to:
Record how it was called to get data
Return canned data
Record how it was called to insert manipulated data
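A sketch of that style, assuming Mockito, with CustomerDao and CustomerService invented for illustration: the stub returns canned data, and the verifications afterwards record how the insert was called.

import java.util.List;
import org.junit.jupiter.api.Test;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

class CustomerServiceTest {
    interface CustomerDao {
        List<String> findNames();
        void insert(String name);
    }

    static class CustomerService {
        private final CustomerDao dao;
        CustomerService(CustomerDao dao) { this.dao = dao; }
        // Reads names, normalises them, writes them back.
        void copyNamesUppercased() {
            for (String name : dao.findNames()) {
                dao.insert(name.toUpperCase());
            }
        }
    }

    @Test
    void readsCannedDataAndRecordsTheInsertCalls() {
        CustomerDao dao = mock(CustomerDao.class);
        when(dao.findNames()).thenReturn(List.of("ada", "grace")); // canned data
        new CustomerService(dao).copyNamesUppercased();
        verify(dao).insert("ADA");      // records how insert was called
        verify(dao).insert("GRACE");
        verify(dao, times(1)).findNames();
    }
}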

Least 'worth it' unit test you've ever written? [closed]

On the SO blog and podcast, Joel and Jeff have been discussing the often-ignored times when unit testing a particular feature simply isn't worth the effort: times when unit testing a simple feature is so complicated, unpredictable, or impractical that the cost of the test doesn't reflect the value of the feature. In Joel's case, the example called for a complicated image comparison simply to determine compression quality, if they had decided to write the test.
What are some cases you've run into where this was the case? Common areas I can think of are GUIs, page layout, audio testing (testing to make sure an audible warning sounded, for example), etc.
I'm looking for horror stories and actual real-world examples, not guesses (like I just did). Bonus points if you, or whoever had to write said 'impossible' test, went ahead and wrote it anyways.
@Test
public void testSetName() {
    UnderTest u = new UnderTest();
    u.setName("Hans");
    assertEquals("Hans", u.getName());
}
Testing set/get methods is just stupid; you don't need that. If you're forced to do this, your architecture has some serious flaws.
Foo foo = new Foo();
Assert.IsNotNull(foo);
My company writes unit tests and integration tests separately. If we write an Integration test for, say, a Data Access class, it gets fully tested.
They see Unit Tests as the same thing as an Integration test, except it can't go off-box (i.e. make calls to databases or webservices). Yet we also have Unit Tests as well as Integration Tests for the Data Access classes.
What good is a test against a data access class that can't connect to the data?
It sounds to me like the writing of a useless unit test is not the fault of unit tests, but of the programmer who decided to write the test.
As mentioned in the podcast (I believe, or somewhere else), if a unit test is obscenely hard to create then it's possible that the code could stand to be refactored, even if it currently "works".
Even the "stupid" unit tests are necessary sometimes, even in the case of "get/set Name". When dealing which clients with complicated business rules, some of the most straightforward properties can have ridiculous caveats attached, and you mind find that some incredibly basic functions might break.
Taking the time to write a complicated unit test means that you've taken the time to fine-tune your understanding of the code, and you might fix bugs in doing so, even if you never complete the unit test itself.
Once I wrote a unit test to expose a concurrency bug, in response to a challenge on C2 Wiki.
It turned out to be unreasonably hard, and hinted that guaranteeing correctness of concurrent code is better handled at a more fundamental level.
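For context, the kind of test meant here looks roughly like this (a sketch; the non-thread-safe Counter is deliberately contrived): hammer the object from several threads and assert on the combined result. It fails only intermittently, depending on scheduling, which is exactly what made it so hard to turn into a reliable unit test.

import java.util.concurrent.CountDownLatch;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class RaceConditionTest {
    // Deliberately not thread-safe: ++ is a read-modify-write, not atomic.
    static class Counter {
        int value = 0;
        void increment() { value++; }
    }

    @Test
    void concurrentIncrementsShouldNotBeLost() throws InterruptedException {
        Counter counter = new Counter();
        int threads = 8;
        int perThread = 10_000;
        CountDownLatch done = new CountDownLatch(threads);
        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                for (int j = 0; j < perThread; j++) counter.increment();
                done.countDown();
            }).start();
        }
        done.await();
        // Often fails, but not always: lost updates depend on thread scheduling.
        assertEquals(threads * perThread, counter.value);
    }
}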

Tricks for writing better unit tests [closed]

What are some of the tricks or tools or policies (besides having a unit testing standard) that you guys are using to write better unit tests? By better I mean 'covers as much of your code in as few tests as possible'. I'm talking about stuff that you have used and saw your unit tests improve by leaps and bounds.
As an example I was trying out Pex the other day and I thought it was really really good. There were tests I was missing out and Pex easily showed me where. Unfortunately it has a rather restrictive license.
So what are some of the other great stuff you guys are using/doing?
EDIT: Lots of good answers. I'll be marking as correct the answer that I'm currently not practicing but will definitely try and that hopefully gives the best gains. Thanks to all.
1. Write many tests per method.
2. Test the smallest thing possible. Then test the next smallest thing.
3. Test all reasonable input and output ranges. In other words: if your method returns a boolean, make sure to test both the false and the true return. For an int? -1, 0, 1, n, n+1 (proof by mathematical induction). Don't forget to check for all exceptions (assuming Java). See the sketch after this list.
4a. Write an abstract interface first.
4b. Write your tests second.
4c. Write your implementation last.
5. Use Dependency Injection (for Java: Guice - supposedly better, or Spring - probably good enough).
6. Mock your unit's collaborators with a good toolkit like Mockito (assuming Java, again).
7. Google much.
8. Keep banging away at it. (It took me two years - without much help except Google - to start "getting it".)
9. Read a good book about the topic.
10. Rinse, repeat...
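A sketch of item 3, assuming JUnit 5's parameterized tests (the clamp function is invented for illustration), checking the values just below, at and just above both boundaries:

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ClampTest {
    // Unit under test: clamps a value into the range [0, 100].
    static int clamp(int value) {
        return Math.max(0, Math.min(100, value));
    }

    // Boundary values around both edges: -1, 0, 1 and 99, 100, 101.
    @ParameterizedTest
    @CsvSource({
        "-1, 0",
        "0, 0",
        "1, 1",
        "99, 99",
        "100, 100",
        "101, 100"
    })
    void clampsToTheValidRange(int input, int expected) {
        assertEquals(expected, clamp(input));
    }
}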
Write tests before you write the code (ie: Test Driven Development). If for some reason you are unable to write tests before, write them as you write the code. Make sure that all the tests fail initially. Then, go down the list and fix each broken one in sequence. This approach will lead to better code and better tests.
If you have time on your side, then you may even consider writing the tests, forgetting about it for a week, and then writing the actual code. This way you have taken a step away from the problem and can see the problem more clearly now. Our brains process tasks differently if they come from external or internal sources and this break makes it an external source.
And after that, don't worry about it too much. Unit tests offer you a sanity check and stable ground to stand on -- that's all.
On my current project we use a little generation tool to produce skeleton unit tests for various entities and accessors. It provides a fairly consistent approach for each modular unit of work which needs to be tested, and creates a great place for developers to test out their implementations (i.e. the unit test class is added by default when the rest of the entities and other dependencies are added).
The structure of the (templated) tests follows a fairly predictable syntax, and the template allows for implementation of module/object-specific buildup/tear down (we also use a base class for all the tests to encapsule some logic).
We also create instances of entities (and assign test data values) in static functions so that objects can be created programmatically and used within different test scenarios and across test classes, which is proving to be very helpful.
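That static-factory idea looks roughly like this; a sketch with Customer and the factory names invented for illustration: one place knows how to build a valid test object, and individual tests tweak only what they care about.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class CustomerTestData {
    // Single place that knows how to build a valid Customer for tests.
    static Customer aDefaultCustomer() {
        return new Customer("Test", "User", "test.user@example.com");
    }

    static Customer aCustomerWithEmail(String email) {
        return new Customer("Test", "User", email);
    }
}

// Minimal entity so the sketch is self-contained.
class Customer {
    final String firstName;
    final String lastName;
    final String email;
    Customer(String firstName, String lastName, String email) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.email = email;
    }
    boolean hasValidEmail() { return email.contains("@"); }
}

class CustomerValidationTest {
    @Test
    void defaultTestCustomerIsValid() {
        assertTrue(CustomerTestData.aDefaultCustomer().hasValidEmail());
    }
}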
Reading a book like The Art of Unit Testing will definitely help.
As far as policy goes, read Kent Beck's answer on SO, particularly:
to test as little as possible to reach a given level of confidence
Write pragmatic unit tests for tricky parts of your code and don't lose sight of the fact that it's the program you are testing that's important, not the unit tests.
I have a Ruby script that generates test stubs for "brown" code that wasn't built with TDD. It writes my build script, sets up includes/usings and writes a setup/teardown to instantiate the test class in the stub. It helps me start from a consistent starting point without all the typing tedium when I hack at code written in the Dark Times.
One practice I've found very helpful is the idea of making your test suite isomorphic to the code being tested. That means that the tests are arranged in the same order as the lines of code they are testing. This makes it very easy to take a piece of code and the test suite for that code, look at them side-by-side and step through each line of code to verify there is an appropriate test. I have also found that the mere act of enforcing isomorphism like this forces me to think carefully about the code being tested, such as ensuring that all the possible branches in the code are exercised by tests, or that all the loop conditions are tested.
For example, given code like this:
void MyClass::UpdateCacheInfo(CacheInfo *info)
{
    // Nothing to do if the cache already points at this info.
    if (mCacheInfo == info) {
        return;
    }
    // Take a reference on the new info before releasing the old one.
    info->incrRefCount();
    mCacheInfo->decrRefCount();
    mCacheInfo = info;
}
The test suite for this function would have the following tests, in order:
test UpdateCacheInfo_identical_info
test UpdateCacheInfo_increment_new_info_ref_count
test UpdateCacheInfo_decrement_old_info_ref_count
test UpdateCacheInfo_update_mCacheInfo