Hi stackoverflow family.
It is doubtless that unit testing is of great importance in software development. But I think test-first is as much a practice and a philosophy as a technique, and while the majority of developers would like to adopt it, they fail to apply it on their projects because they aren't used to Test-Driven Development. My question is for those who follow this philosophy: in your experience, what are the properties of a good test, and how did you make writing them part of your life?
Good days.
The Way of Testivus brings enlightenment on unit testing.
If you write code, write tests.
Don’t get stuck on unit testing dogma.
Embrace unit testing karma.
Think of code and test as one.
The test is more important than the unit.
The best time to test is when the code is fresh.
Tests not run waste away.
An imperfect test today is better than a perfect test someday.
An ugly test is better than no test.
Sometimes, the test justifies the means.
Only fools use no tools.
Good tests fail.
Some characteristics of a good test:
its execution doesn't depend on context (or state) - i.e. whether it's run in isolation or together with other tests;
it tests exactly one functional unit;
it covers all possible scenarios of the tested functional unit.
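To make the first two properties concrete, here is a minimal JUnit sketch, using java.util.Stack as a stand-in for the unit under test: each test builds its own fixture, so no test depends on state left behind by another, and each scenario (normal pop, empty pop) gets exactly one test.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;
import java.util.Stack;
import org.junit.Test;

public class StackTest {

    // Each test creates its own instance, so execution order never matters.
    @Test
    public void pushThenPopReturnsLastElement() {
        Stack<String> stack = new Stack<>();
        stack.push("a");
        stack.push("b");
        assertEquals("b", stack.pop());
    }

    // The empty-stack scenario gets its own test rather than being
    // bolted onto the one above.
    @Test
    public void popOnEmptyStackThrows() {
        Stack<String> stack = new Stack<>();
        try {
            stack.pop();
            fail("expected EmptyStackException");
        } catch (java.util.EmptyStackException expected) {
            // exactly the behaviour this one test covers
        }
    }
}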
The discussion could not be phrased better than in these threads:
http://discuss.joelonsoftware.com/default.asp?joel.3.732806.3
http://discuss.joelonsoftware.com/default.asp?joel.3.39296.27
As for what makes a good test: a good test is one that catches a defect :). But TDD is about more than catching defects; it is about development and continuity.
I always think the rules and philosophy of TDD are best summed up in this article by Robert C. Martin:
The Three Rules of TDD
In it, he summarises TDD with the following three rules:
1. You are not allowed to write any production code unless it is to make a failing unit test pass.
2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
There is an implied fourth rule:
You should refactor your code while the tests are passing.
While there are many more detailed examples, articles and books, I think these rules sum up TDD nicely.
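A minimal sketch of one turn of that cycle in JUnit (the Price class and its withTax method are invented for illustration):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceTest {
    // Rule 2: no more test than is sufficient to fail. Until Price and
    // withTax() exist, this does not even compile, and compilation
    // failures count as failures (Red).
    @Test
    public void totalIncludesTax() {
        assertEquals(120, new Price(100).withTax(20));
    }
}

// Rule 3: no more production code than is sufficient to pass (Green).
class Price {
    private final int amount;
    Price(int amount) { this.amount = amount; }
    int withTax(int percent) { return amount + amount * percent / 100; }
}

The implied fourth rule then kicks in: with the test passing, you are free to refactor both pieces.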
I am having a debate with my colleague about unit tests and test-driven development. The points under debate are below:
1) Writing a unit test before you write functional code does not constitute a Test Driven Development approach
I think writing a unit test first does constitute test-driven development; it is part of TDD.
2) A suite of unit tests is simply a by-product of TDD.
I say a suite of unit tests is NOT a by-product of TDD.
What do you say?
1) Writing tests before you write functional code is necessary to do TDD, but does not by itself constitute TDD, at least according to the classic definition, where the important point is that making the tests pass is what drives the design (rather than some formal design document).
2) Again, the classic view says that the important point is that the design evolves from the tests, forcing it to be very modular. It is (or was) a novel concept that tests could (and should) influence the design, and one perhaps rejected or overlooked often enough that TDD proponents started to feel it needed to be stressed. But saying that the tests themselves are "just a byproduct" is IMO a counterproductive exaggeration.
Writing unit tests prior to writing functional code is the whole point of TDD. So the suite of unit tests is not the by-product; it is the central tenet.
Wikipedia's article puts it this way:
Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: first the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test, and finally refactors the new code to acceptable standards.
TDD is all about Red - Green - Refactor.
When you write the code before the test, there is no Red, and that is bad: you may have errors in your tests, or be testing something other than what you think you are testing, and get Green from the start. You should see Red first, and only go Green after you add the code that the test exercises.
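A sketch of how that goes wrong (the Discount class is hypothetical): the test below is Green from the start because the assertion never touches the production code, and only watching it fail first would have exposed that.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountTest {
    @Test
    public void tenPercentDiscountIsApplied() {
        Discount discount = new Discount(10);  // hypothetical class
        int expected = 90;
        // Bug in the test itself: it never calls discount.apply(100),
        // so it passes regardless of what the production code does.
        assertEquals(expected, 90);
    }
}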
It really depends on what your tests do.
As far as I'm concerned, TDD means that code classes, properties and methods are created because the tests were written for them first. In fact, some development tools allow you to create code stubs directly from the test screens.
Writing a unit test for a method in a class that you've already created isn't TDD. Writing code so that your test cases pass is.
TDD will give you far greater test coverage than standard unit testing. It will also focus your thoughts on what is wanted of the code, and although it may appear to take longer to produce a product, the product is by definition more or less fully tested when built.
You will always end up with a suite of unit tests at the end.
However, a pragmatic approach must be taken as to how far you go with this, as some areas are notoriously difficult to produce in a TDD environment, e.g. WPF MVVM-style views or web pages with JavaScript.
I wonder whether individual programmers spend time on unit tests, functional tests, or test-driven development (TDD) when they code alone, or whether they just bother with getting it to work.
I ask because writing those tests does prolong the entire project.
Thanks.
I've found that if I don't do unit tests, that prolongs the project a thousand times more. I always want to just get the feature working, then the next feature, then the next. It takes work for me to have the discipline to do unit tests, even though I've proved to myself over and over again that those tests form the golden road to timely completion.
The benefits of testing are the same whether you are a team or a sole developer.
Therefore the magic answer is this: it is totally up to you.
The only real difference between the two scenarios is when you are developing by yourself you do not have to convince anyone else to write or not write tests, you can simply have that argument with yourself.
I prefer doing TDD because the APIs of some modules are not defined at the beginning, and writing the UI while also writing the API for the data module is a little confusing at times.
So I tend to create the data module's APIs by writing the test cases for them, and I also use the tests to measure progress. Once that is done, the UI gets completed fairly quickly, and debugging the UI is a lot faster because the data part is already tested.
While the upfront cost of writing the test cases is higher, it saves a good amount of debugging time and gives a comfortable development flow.
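A sketch of what that looks like (all names here are invented): the test is written against a data-module API that does not exist yet, which forces decisions about method names, parameters, and return types before any UI code depends on them.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CustomerRepositoryTest {
    // Writing this test first pins down the data module's API before
    // the UI is written: save() accepts a Customer, findByName()
    // retrieves one.
    @Test
    public void savedCustomerCanBeFoundByName() {
        CustomerRepository repo = new InMemoryCustomerRepository();
        repo.save(new Customer("Ada"));
        assertEquals("Ada", repo.findByName("Ada").getName());
    }
}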
It depends :)
If it is a prototype, proof of concept, or tech that I am trying to learn, I'd usually not choose TDD, because it's a throw-away. (Learning tests for third-party libs that I am trying to integrate into my app are an exception.)
If it is something that I need to sustain for a longer duration of time (more than a month), I'd pick TDD. If multiple people are going to work on it in the future, I'd definitely pick TDD.
That said, TDD does not automatically imply good apps/design/insert-good-metric-here. Good, experienced programmers write good software. With TDD, they're more likely to be better/faster.
On the SO blog and podcast, Joel and Jeff have been discussing the often-ignored times when unit testing a particular feature simply isn't worth the effort: times when unit testing a simple feature is so complicated, unpredictable, or impractical that the cost of the test doesn't reflect the value of the feature. In Joel's case, the example would have called for a complicated image comparison simply to determine compression quality, had they decided to write the test.
What are some cases you've run into where this was the case? Common areas I can think of are GUIs, page layout, audio testing (testing to make sure an audible warning sounded, for example), etc.
I'm looking for horror stories and actual real-world examples, not guesses (like I just did). Bonus points if you, or whoever had to write said 'impossible' test, went ahead and wrote it anyway.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class UnderTestTest {
    @Test
    public void testSetName() {
        UnderTest u = new UnderTest();
        u.setName("Hans");
        assertEquals("Hans", u.getName());
    }
}
Testing set/get methods is just stupid, you don't need that. If you're forced to do this, your architecture has some serious flaws.
// Another pointless test: it can only fail if the constructor throws.
Foo foo = new Foo();
Assert.IsNotNull(foo);
My company writes unit tests and integration tests separately. If we write an integration test for, say, a data access class, it gets fully tested.
They see unit tests as the same thing as integration tests, except that a unit test can't go off-box (i.e. make calls to databases or web services). Yet we also have unit tests as well as integration tests for the data access classes.
What good is a test against a data access class that can't connect to the data?
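One common answer, sketched below with invented names: hide the off-box call behind an interface, so the unit test covers the class's decision logic against a hand-rolled fake, while the integration test keeps covering the real connection.

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class OrderServiceTest {
    // The seam: production code implements this against the real
    // database, the unit test substitutes a fake.
    interface OrderGateway {
        int countOrdersFor(String customer);
    }

    @Test
    public void frequentCustomerQualifiesForDiscount() {
        OrderGateway fake = customer -> 12;             // hand-rolled fake
        OrderService service = new OrderService(fake);  // hypothetical class
        assertTrue(service.qualifiesForDiscount("alice"));
    }
}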
It sounds to me like the writing of a useless unit test is not the fault of unit tests, but of the programmer who decided to write the test.
As mentioned in the podcast (I believe, or somewhere else), if a unit test is obscenely hard to create, then it's possible that the code could stand to be refactored, even if it currently "works".
Even the "stupid" unit tests are necessary sometimes, even in the case of "get/set Name". When dealing which clients with complicated business rules, some of the most straightforward properties can have ridiculous caveats attached, and you mind find that some incredibly basic functions might break.
Taking the time to write a complicated unit test means that you've taken the time to fine-tune your understanding of the code, and you might fix bugs in doing so, even if you never complete the unit test itself.
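A sketch of such a caveat (the formatting rule here is invented): the moment setName does more than store the value, the "stupid" getter/setter test earns its keep.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CustomerNameTest {
    // The client's rule: names are stored trimmed and capitalised.
    @Test
    public void setNameAppliesTheClientsFormattingRules() {
        Customer c = new Customer();        // hypothetical class
        c.setName("  hans  ");
        assertEquals("Hans", c.getName());  // fails if setName stores the raw value
    }
}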
Once I wrote a unit test to expose a concurrency bug, in response to a challenge on C2 Wiki.
It turned out to be unreasonably hard, and hinted that guaranteeing correctness of concurrent code is better handled at a more fundamental level.
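For flavour, here is the general shape such a test takes (a plain unsynchronised counter stands in for the code in question): it can expose a lost-update race, but only probabilistically, which is exactly why a reliable test is so hard to write.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CounterRaceTest {
    private int count = 0;  // deliberately not thread-safe

    @Test
    public void concurrentIncrementsMayBeLost() throws InterruptedException {
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) count++; };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        // count++ is not atomic, so updates can be lost; whether this
        // assertion fails depends entirely on thread scheduling.
        assertEquals(200_000, count);
    }
}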
I have been building what is, IMO, a really cool RIA. It's now close to completion, and I need to test it to see whether there are any bugs, counter-intuitive parts, or anything like that. But how? Any time I ask someone to try to break it, they look at it for about 3 minutes and say "it's solid". How do you test things? I have never used a unit test before; in fact, about 3 months ago I had never even heard of one, and I still don't really understand what it is. Would I have to build a whole new application to run every function? That would take forever, and some functions may only produce errors in certain situations, which is why I don't understand unit tests.
The question is pretty open-ended so this post won't answer all your question. If you can refine what you are looking for, that would help.
There are two major pieces of testing you likely want to do. The first is unit testing and the second is what might be called acceptance testing.
Unit testing is trying each of the classes/methods in relative isolation and making sure they work. You can use something like jUnit, nUnit, etc. as a framework to hold your tests. Take a method, look at the different inputs it might expect and what its output should be for each, and then write a test case for each of these input/output pairs. This will tell you that most of the parts work as intended.
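For example, if your application had a shipping-cost method (invented here purely for illustration), each expected input/output pair becomes one small jUnit test:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ShippingCostTest {
    // One test per input/output pair the method is expected to handle.
    @Test
    public void lightParcelCostsTheBaseRate() {
        assertEquals(5, ShippingCost.forWeightKg(1));  // hypothetical method
    }

    @Test
    public void heavyParcelIncursASurcharge() {
        assertEquals(12, ShippingCost.forWeightKg(10));
    }
}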
Acceptance testing (or end-to-end testing as it is sometimes called) is running the whole system and making sure it works. Come up with a list of scenarios you expect users to do. Now systematically try them all. Try variations of them. Do they work? If so, you are likely ready to roll it out to at least a limited audience.
Also, check out How to Break Software by James Whittaker. It's one of the better testing books and is a short read.
First, systematically make sure everything works in the manner you expect it to. Then try it against every realistic combination of hardware and installed software that is feasible and appropriate. Then take every point of human interaction and try putting in too much data, no data, and special data that may cause exceptions. Then try doing things in an order or workflow you did not expect; certain actions sometimes depend on others. You and your friends will naturally do those steps in order, but what happens when someone doesn't? Also, having complete novices use it is a good way to see the odd things users might try.
Release it in beta?
It's based on Xcode and Cocoa development, but this video is still a great introduction to unit testing. Unit testing is really something that should be done alongside development, so if your application is almost finished it's going to take a while to implement.
Firebug has a good profiler for web apps. As for testing JS files, I use Scriptaculous. Whatever backend you are using needs to be fully tested too.
But before you do that, you need to understand what unit testing is. Unit testing is verifying that all of the individual units of source code function as they are intended. This means that you verify the output of all of your functions/methods. Basically, read this. There are different testing strategies beyond unit testing such as integration testing, which is testing that different modules integrate with one another. What you are asking people to do is Acceptance testing, which is verifying that it looks and behaves according to the original plan. Here is more on various testing strategies.
PS: always test boundary conditions
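To illustrate (the validity rule below is invented): boundary testing means probing the values at, and immediately on either side of, each edge, since off-by-one errors live exactly there.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class AgeValidatorTest {
    // Suppose valid ages are 0..120 inclusive (hypothetical rule).
    @Test
    public void valuesAtAndAroundEachBoundary() {
        assertFalse(AgeValidator.isValid(-1));   // just below the lower bound
        assertTrue(AgeValidator.isValid(0));     // the lower bound itself
        assertTrue(AgeValidator.isValid(120));   // the upper bound itself
        assertFalse(AgeValidator.isValid(121));  // just above the upper bound
    }
}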
I was wondering if anyone else sees integration tests as just a special kind of unit test. I have, however, heard from other programmers that it is a good idea to separate your unit tests and integration tests, and I was wondering if someone could explain why. What advantages are there to treating integration and unit tests as completely different? For example, I have seen separate folders and packages for integration tests and unit tests. My opinion has been that a single test package containing both would be sufficient, as they are basically the same concept.
I see them as different for the following reason.
Unit tests can be performed on a single class/module in the developer's environment.
Integration tests should be performed in an environment that resembles the actual production setup.
Unit tests are kept deliberately "lightweight" so that the developer can run them as often as needed with minimal cost.
Speed is the primary reason. You want your unit tests to be as fast as possible so that you can run them as often as possible. You should still run your integration tests, but running them once before a check-in should be enough IMO. The unit test suite should be run much more often - ideally with every refactoring.
I work in an environment where we have about 15k JUnit tests, with unit and integration tests completely mixed. The full suite takes about half an hour to run. Developers avoid running it and find mistakes later than they should. Sometimes they check in after running only a subset of the tests and introduce a bug that breaks the continuous build.
Start separating your tests early. It's very hard to do once you have a large suite.
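One lightweight way to start, sketched with JUnit 4's categories (the class names are invented): tag the slow tests, then keep that category out of the suite developers run constantly.

import org.junit.Test;
import org.junit.experimental.categories.Category;

public class OrderRepositoryIT {
    // Marker interface used purely as a category tag.
    public interface Slow {}

    @Test
    @Category(Slow.class)
    public void savesAndReloadsAnOrderAgainstARealDatabase() {
        // talks to a real database, so it stays out of the fast suite
    }
}

A suite annotated with @RunWith(Categories.class) and @Categories.ExcludeCategory(OrderRepositoryIT.Slow.class), or the build tool's test filters, can then exclude the tagged tests from the default run.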
Yep. Typically unit tests are scoped at the class level, so they live in an environment together with mock objects. Integration tests, on the other hand, exercise the real thing, keeping references to your real assembly types.
I just don't see how one would organize both unit and integration into a single project.
If you limit the notion of 'unit test' to the class level, then yes, keep them separate.
However, if you define the smallest relevant testable unit as a feature, then some of your 'unit' tests will technically be 'integration' tests.
Rehashing the various definitions and interpretations of the terms is largely irrelevant, though; the partitioning of your test suites should be a function of the scope of the components being tested and the time required to perform the tests.
For example, if all of your tests (unit, integration, regression, or whatever) apply against a single assembly and run in a few seconds, then keep them all together. But if some of your tests require six clean installation machines on a subnet while others do not, it makes sense to separate the first set of tests from the latter.
In summary: the distinction between 'unit' and 'integration' tests is largely irrelevant; package your test suites based on operational scope.