I am 4 months into professional software development. TDD is non-negotiable at my company GO-JEK.
Here is my observation: people tend to write code first and then write tests for it. Apparently that feels more convenient to developers with 4-5 years of software development experience who weren't following TDD before.
So, why do people write a failing test first and then write code to make it pass? Why not write the code first and then add a test for it?
We can refactor either way.
This is a good question. Since we ultimately want our tests to pass, why not write them so they pass in the first place?
The answer is that we really want our tests to be driving development. We want the tests to come first. Because when we write a test which needs some functionality, that's a concrete expression of what is needed, and that new bit of functionality is well defined. Initially that functionality does not exist (so the test is red); when we have successfully added the functionality, the test is green. It's a clean determination: either the functionality is present and the test is passing - or it isn't, and the test is failing.
If instead we write the test green (with the functionality already present) we may have written more functionality than we actually need. Or we may have written bad code - the functionality is present but wrong - and a correspondingly bad test. When we write the test first, we witness the code base transitioning from the state of lacking the necessary functionality, to having it - and we know with a fair degree of confidence we've gotten it right.
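As a concrete illustration of that red-to-green transition, here is a minimal sketch using GoogleTest; PriceCalculator and ApplyDiscount are hypothetical names invented for this example, not anything from the question:
#include <gtest/gtest.h>

// Hypothetical class for the sketch; written *after* the test below was red.
class PriceCalculator {
public:
    // The smallest implementation that turns the test green.
    double ApplyDiscount(double price, double percent) const {
        return price - price * percent / 100.0;
    }
};

// Written first, as a concrete expression of the needed functionality.
// It is red (it does not even compile) until ApplyDiscount exists.
TEST(PriceCalculatorTest, AppliesPercentageDiscount) {
    PriceCalculator calc;
    EXPECT_DOUBLE_EQ(90.0, calc.ApplyDiscount(100.0, 10.0));
}
The point is the witnessed transition: the test is seen failing, then seen passing, so the new functionality is well defined and demonstrably present.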
By writing a failing test first, you know your test can fail, which is the point of a unit test: to fail when something isn't working. If you only ever see a test pass, it may have been written in a way that can never fail, e.g. a forgotten assert or an otherwise poorly written test.
Also, by incrementally turning failing tests green, you know you are adding value with each code change.
My preferred method is to write a test and then write code to make it pass (TDD), but even when you wind up writing tests for existing code (for instance working with legacy code) you still want to do the RED - GREEN - Refactor process.
Writing a test that you believe will fail for the existing code (say by reversing your assert) and then verifying that it does, in fact, fail will give you confidence that your test is working correctly and when you set the assert back to the correct sense it will pass convincingly. Otherwise, how do you know that you are not getting a false positive from the test - or that the test is actually being run by the test runner (with some unit test frameworks)?
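As an illustration, a characterization test for existing code might look like the sketch below (GoogleTest; TrimTrailingSpaces is a made-up stand-in for whatever legacy function you are covering). Run it once with the assertion inverted to prove the test can fail, then restore the real expectation:
#include <gtest/gtest.h>
#include <string>

// Stand-in for an existing legacy function we want to put under test.
std::string TrimTrailingSpaces(const std::string& s) {
    return s.substr(0, s.find_last_not_of(' ') + 1);
}

TEST(TrimTrailingSpacesTest, RemovesTrailingSpaces) {
    // First run with the sense reversed, e.g.
    //   EXPECT_NE("abc", TrimTrailingSpaces("abc   "));
    // and confirm the runner actually reports a failure. Then switch it
    // back and watch it pass convincingly:
    EXPECT_EQ("abc", TrimTrailingSpaces("abc   "));
}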
I recently started a new project where I decided to adopt writing unit tests for most functions. Prior to this my testing was limited to sporadically writing test "functions" to ensure something worked as expected, and then never bothering to update the test functions, clearly not good.
Now that I've written a fair amount of code, and tests, I'm noticing that I'm writing a lot of tests for my code. My code is generally quite modular, in the sense that I try to code small functions that do something simple, and then chain them together in a larger function as required, again, accepted best practice.
But I now end up writing tests both for the individual "building block" functions (quite small tests) and for the function that chains them together, testing the result there as well. Obviously the results differ, but since the inputs are similar I'm duplicating a lot of test code (the setting-up-the-inputs portions, which are slightly different in each test but not by much; since they're not identical I can't just use a single test fixture).
Another concern: I try to adhere quite strictly to testing one thing per test, so I write a separate test for every distinct feature of a function. For instance, if there's some optional extra input that can be passed to the function, I write one version that adds the input and one that doesn't, and test them separately. The setup here is again mostly identical except for the added input; since it's not exactly the same, using a fixture doesn't feel "right".
Since this is my first project with everything fully unit tested, I just wanted to make sure I'm doing things correctly and that the code duplication in tests is to be expected. So, my question is: am I doing things correctly? If not, what should I change?
I code in C and C++.
On a side note, I love the testing itself, I'm far more confident of my code now.
Thanks!
Your question tries to address many things, and I can try to answer only some of them.
Try to get as high coverage as possible (ideally 100%)
Do not use real resources for your unit test, or at least try to avoid it. You can use mocks and stubs for that.
Do not unit test 3rd party libraries.
You can break dependencies using dependency injection or functors; that way the size of your tests can shrink (see the sketch below).
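As a sketch of that last point (all names below are hypothetical, invented for the example): instead of having the code under test construct its own database access, pass the dependency in, so a test can hand over a cheap stub and never touch a real resource.
#include <gtest/gtest.h>
#include <vector>

// Hypothetical interface standing in for a real resource such as a database.
struct BookSource {
    virtual ~BookSource() = default;
    virtual std::vector<int> FetchBookIds() const = 0;
};

// The code under test receives its dependency instead of creating it itself.
int CountBooks(const BookSource& source) {
    return static_cast<int>(source.FetchBookIds().size());
}

// A tiny stub used only by the tests -- no real database is involved.
struct StubBookSource : BookSource {
    std::vector<int> ids;
    std::vector<int> FetchBookIds() const override { return ids; }
};

TEST(CountBooksTest, CountsWhateverTheSourceReturns) {
    StubBookSource stub;
    stub.ids = {1, 2, 3};
    EXPECT_EQ(3, CountBooks(stub));
}
Because the stub is cheap to configure, the duplicated setup the question describes often collapses into a couple of lines per test.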
I always learned that maximizing code coverage with unit tests is good. I also hear developers from big companies such as Microsoft saying that they write more lines of testing code than of the executable code itself.
Now, is it really that great? Doesn't it sometimes seem like a complete waste of time whose only effect is to make maintenance more difficult?
For example, let's say I have a method DisplayBooks() which populates a list of books from a database. The product requirements say that if there are more than one hundred books in the store, only one hundred must be displayed.
So, with TDD,
I will start by writing a unit test, BooksLimit(), which saves two hundred books in the database, calls DisplayBooks(), and does an Assert.AreEqual(100, DisplayedBooks.Count).
Then I will run it to check that it fails,
Then I'll change DisplayBooks() by setting the limit of results to 100, and
Finally I will rerun the test to see if it succeeds.
Well, isn't it much easier to go directly to the third step and never write the BooksLimit() unit test at all? And isn't it more Agile, when the requirement changes from a 100-book to a 200-book limit, to change only one character, instead of changing the test, running it to check that it fails, changing the code, and running the test again to check that it succeeds?
Note: let's assume that the code is fully documented. Otherwise, some may say, and they would be right, that full unit tests help in understanding code which lacks documentation. In fact, having a BooksLimit() unit test shows very clearly that there is a maximum number of books to display, and that this maximum is 100. Stepping into code without unit tests would be much more difficult, since such a limit may be implemented as for (int bookIndex = 0; bookIndex < 100; ... or foreach ... if (count >= 100) break;.
Well, isn't it much easier to go directly to the third step and never write the BooksLimit() unit test at all?
Yes... If you don't spend any time writing tests, you'll spend less time writing tests. Your project might take longer overall, because you'll spend a lot of time debugging, but maybe that's easier to explain to your manager? If that's the case... get a new job! Testing is crucial to improving your confidence in your software.
Unittesting gives the most value when you have a lot of code. It's easy to debug a simple homework assignment using a few classes without unittesting. Once you get out in the world, and you're working in codebases of millions of lines - you're gonna need it. You simply can't single step your debugger through everything. You simply can't understand everything. You need to know that the classes you're depending on work. You need to know if someone says "I'm just gonna make this change to the behavior... because I need it", but they've forgotten that there's two hundred other uses that depend on that behavior. Unittesting helps prevent that.
With regard to making maintenance harder: NO WAY! I can't capitalize that enough.
If you're the only person that ever worked on your project, then yes, you might think that. But that's crazy talk! Try to get up to speed on a 30k line project without unittests. Try to add features that require significant changes to code without unittests. There's no confidence that you're not breaking implicit assumptions made by the other engineers. For a maintainer (or new developer on an existing project) unittests are key. I've leaned on unittests for documentation, for behavior, for assumptions, for telling me when I've broken something (that I thought was unrelated). Sometimes a poorly written API has poorly written tests and can be a nightmare to change, because the tests suck up all your time. Eventually you're going to want to refactor this code and fix that, but your users will thank you for that too - your API will be far easier to use because of it.
A note on coverage:
To me, it's not about 100% test coverage. 100% coverage doesn't find all the bugs; consider a function with two if statements:
// Will return a number less than or equal to 3
int Bar(bool cond1, bool cond2) {
  int b = 0;
  if (cond1) {
    b++;
  } else {
    b += 2;
  }
  if (cond2) {
    b += 2;
  } else {
    b++;
  }
  return b;
}
Now consider I write a test that tests:
EXPECT_EQ(3, Bar(true, true));
EXPECT_EQ(3, Bar(false, false));
That's 100% coverage. It's also a function that doesn't meet its contract: Bar(false, true) fails, because it returns 4. So "complete coverage" is not the end goal.
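Coverage measures which lines ran, not which inputs were tried. A third check like the one below (same hypothetical Bar as above) adds no new coverage at all, yet it is the one that exposes the bug:
// Exercises only branches that are already "covered", but catches the
// contract violation: Bar(false, true) returns 4, which is greater than 3.
EXPECT_LE(Bar(false, true), 3);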
Honestly, I would skip tests for BooksLimit(). It returns a constant, so it probably isn't worth the time to write them (and it should be tested when writing DisplayBooks()). I might be sad when someone decides to (incorrectly) calculate that limit from the shelf size, and it no longer satisfies our requirements. I've been burned by "not worth testing" before. Last year I wrote some code that I said to my coworker: "This class is mostly data, it doesn't need to be tested". It had a method. It had a bug. It went to production. It paged us in the middle of the night. I felt stupid. So I wrote the tests. And then I pondered long and hard about what code constitutes "not worth testing". There isn't much.
So, yes, you can skip some tests. 100% test coverage is great, but it doesn't magically mean your software is perfect. It all comes down to confidence in the face of change.
If I put class A, class B and class C together, and I find something that doesn't work, do I want to spend time debugging all three? No. I want to know that A and B already met their contracts (via unittests) and my new code in class C is probably broken. So I unittest it. How do I even know it's broken, if I don't unittest? By clicking some buttons and trying the new code? That's good, but not sufficient. Once your program scales up, it'll be impossible to rerun all your manual tests to check that everything works right. That's why people who unittest usually automate running their tests too. Tell me "Pass" or "Fail", don't tell me "the output is ...".
OK, gonna go write some more tests...
100% unit test coverage is generally a code smell, a sign that someone has come over all OCD over the green bar in the coverage tool, instead of doing something more useful.
Somewhere around 85% is the sweet spot, where a test failing more often than not indicates an actual or potential problem, rather than simply being an inevitable consequence of any textual change not inside comment markers. You are not documenting any useful assumptions about the code if your assumptions are 'the code is what it is, and if it was in any way different it would be something else'. That's a problem solved by a comment-aware checksum tool, not a unit test.
I wish there was some tool that would let you specify the target coverage. And then if you accidentally go over it, show things in yellow/orange/red to push you towards deleting some of the spurious extra tests.
When looking at an isolated problem, you're completely right. But unit tests are about covering all the intentions you have for a certain piece of code.
Basically, the unit tests formulate your intentions. With a growing number of intentions, the behavior of the code under test can always be checked against all intentions made so far. Whenever a change is made, you can show that there is no side effect which breaks an existing intention. A newly found bug is nothing but an (implicit) intention that the code does not yet satisfy, so you formulate that intention as a new test (which fails at first) and then fix the code.
For one-off code, unit tests are indeed not worth the effort, because no major changes are expected. However, for any block of code which is to be maintained, or which serves as a component for other code, guaranteeing that all intentions still hold in every new version is worth a lot (in terms of less effort spent manually checking for side effects).
The tipping point where unit tests actually save you time, and therefore money, depends on the complexity of the code, but there always is a tipping point, and it is usually reached after only a few iterations of changes. Last but not least, it allows you to ship fixes and changes much faster without compromising the quality of your product.
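As a sketch of turning a newly found bug into such an intention (GoogleTest; ParseQuantity is a hypothetical helper invented for this example): suppose it was reported to throw on empty input. You first pin the intention down as a test that fails against the old code, then fix the code so the test passes, and the intention stays protected from then on.
#include <gtest/gtest.h>
#include <string>

// Hypothetical helper. The old version called std::stoi unconditionally and
// threw on empty input; the guard below is the fix for the reported bug.
int ParseQuantity(const std::string& text) {
    if (text.empty()) {
        return 0;  // fix: treat empty input as zero instead of throwing
    }
    return std::stoi(text);
}

// The regression test written when the bug was found: it failed against the
// old code, passes after the fix, and documents the intention permanently.
TEST(ParseQuantityTest, EmptyInputMeansZero) {
    EXPECT_EQ(0, ParseQuantity(""));
}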
There is no direct relation between code coverage and good software. You can easily imagine a piece of code that has 100% (or close) code coverage and still contains a lot of bugs. (Which does not mean that tests are bad!)
Your question about the agility of the 'no tests at all' approach is a good one only in the short term (which means it is most likely not good if you plan to keep building your program for a long time). I know from experience that such simple tests are very useful when your project gets bigger and bigger and at some stage you need to make significant changes. That can be the moment when you say to yourself, 'It was a good decision to spend some extra minutes writing that tiny test that spotted the bug I just introduced!'
I was a big fan of code coverage until recently, but it has (luckily) turned into something more like a 'problem coverage' approach. It means that your tests should cover all the problems and bugs that were spotted, not just 'lines of code'. There is no need to run a 'code coverage race'.
I understand the word 'Agile', in terms of the number of tests, as 'the number of tests that helps me build good software and not waste time writing unnecessary code', rather than '100% coverage' or 'no tests at all'. It's very subjective and based on your experience, team, technology and many other factors.
The psychological side effect of '100% code coverage' is that you may think your code has no bugs, which is never true :)
Sorry for my english.
100% code coverage is a managerial psychological disorder for impressing stakeholders artificially. We do testing because there is some complex code out there which can lead to defects. So we need to make sure that the complex code has test cases, is tested, and that the defects are fixed before it goes live.
We should aim at testing what is complex, not simply everything. That complexity needs to be expressed as some metric, which can be cyclomatic complexity, lines of code, aggregation, coupling, etc., or probably a combination of all of these. Where that metric is high, we need to ensure that part of the code is covered. Below is my article, which covers what the best percentage for code coverage is.
Is 100% code coverage really needed?
I agree with #soru, 100% test coverage is not a rational goal.
I do not believe that any tool or metric can exist that can tell you the "right" amount of coverage. When I was in grad school, my thesis advisor's work was on designing test coverage for "mutated" code. He'd take a suite of tests, and then run an automated program to introduce errors into the source code under test. The idea was that the mutated code contained errors that would be found in the real world, and thus the test suite that found the highest percentage of broken code was the winner.
While his thesis was accepted, and he is now a Professor at a major engineering school, he did not find either:
1) a magic number of test coverage that is optimal
2) any suite that could find 100% of the errors.
Note, the goal is to find 100% of the errors, not to find 100% coverage.
Whether #soru's 85% is right or not is a subject for discussion. I have no means to assess if a better number would be 80% or 90% or anything else. But as a working assessment, 85% feels about right to me.
First, 100% is hard to get, especially on big projects! And even if you do, having a block of code covered doesn't mean it is doing what it is supposed to, unless your tests assert every possible input and output (which is almost impossible).
So I wouldn't consider a piece of software good simply because it has 100% code coverage, but code coverage is still a good thing to have.
Well, isn't it much easier to go directly to the third step and never write the BooksLimit() unit test at all?
Well, having that test there makes you pretty confident that if someone changes the code and the test fails, you will notice that something is wrong with the new code; therefore you avoid a potential bug in your application.
When the client decides to change the limit to 200, good luck finding the bugs related to that seemingly trivial test - especially when you have 100 other variables in your code, and there are five other developers working on code that relies on that tiny piece of information.
My point: if it matters to the business value (or, if you dislike the name, to the very important core of the project), test it. Only skip it when there is no possible (or cheap) way of testing it, like UI or user interaction, or when you are sure the impact of not writing that test is minimal. This holds truer for projects with vague or quickly changing requirements [as I painfully discovered].
For the other example you present, the recommended approach is to test boundary values. So you can limit your tests to only four values: 0, some magical number between 0 and BooksLimit, BooksLimit itself, and some number higher.
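A sketch of those four boundary cases, assuming a hypothetical DisplayedBooks(count) helper that returns how many books would be shown for a store holding the given number:
// Boundary-value checks around the 100-book limit; DisplayedBooks is a
// hypothetical helper standing in for the real display logic.
EXPECT_EQ(0,   DisplayedBooks(0));    // empty store
EXPECT_EQ(42,  DisplayedBooks(42));   // somewhere below the limit
EXPECT_EQ(100, DisplayedBooks(100));  // exactly at the limit
EXPECT_EQ(100, DisplayedBooks(250));  // above the limit: capped at 100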
And, as other people said, write tests, but stay aware that something else can always still fail.
I'm fairly new to the unit testing world, and I just decided to add test coverage for my existing app this week.
This is a huge task, mostly because of the number of classes to test but also because writing tests is all new to me.
I've already written tests for a bunch of classes, but now I'm wondering if I'm doing it right.
When I'm writing tests for a method, I have the feeling of rewriting a second time what I already wrote in the method itself.
My tests just seem so tightly bound to the method (testing all codepaths, expecting some inner methods to be called a number of times, with certain arguments), that it seems that if I ever refactor the method, the tests will fail even if the final behavior of the method did not change.
This is just a feeling, and as said earlier, I have no experience of testing. If some more experienced testers out there could give me advice on how to write great tests for an existing app, that would be greatly appreciated.
Edit: I would love to thank Stack Overflow; I got great input in less than 15 minutes, which answered more than the hours of online reading I just did.
My tests just seem so tightly bound to the method (testing all codepaths, expecting some inner methods to be called a number of times, with certain arguments), that it seems that if I ever refactor the method, the tests will fail even if the final behavior of the method did not change.
I think you are doing it wrong.
A unit test should:
test one method
provide some specific arguments to that method
test that the result is as expected
It should not look inside the method to see what it is doing, so changing the internals should not cause the test to fail. You should not directly test that private methods are being called. If you are interested in finding out whether your private code is being tested then use a code coverage tool. But don't get obsessed by this: 100% coverage is not a requirement.
If your method calls public methods in other classes, and these calls are guaranteed by your interface, then you can test that these calls are being made by using a mocking framework.
You should not use the method itself (or any of the internal code it uses) to generate the expected result dynamically. The expected result should be hard-coded into your test case so that it does not change when the implementation changes. Here's a simplified example of what a unit test should do:
testAdd()
{
    int x = 5;
    int y = -2;
    int expectedResult = 3;
    Calculator calculator = new Calculator();
    int actualResult = calculator.Add(x, y);
    Assert.AreEqual(expectedResult, actualResult);
}
Note that how the result is calculated is not checked - only that the result is correct. Keep adding more and more simple test cases like the above until you have covered as many scenarios as possible. Use your code coverage tool to see if you have missed any interesting paths.
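For the earlier point about calls that are guaranteed by your interface, a mocking framework lets you verify the interaction without inspecting private internals. A minimal sketch using GoogleMock in C++ (the Notifier interface, ShipOrder and the method names are assumptions invented for the example):
#include <gmock/gmock.h>
#include <gtest/gtest.h>

// The interface the code under test talks to (part of its public contract).
struct Notifier {
    virtual ~Notifier() = default;
    virtual void NotifyShipped(int orderId) = 0;
};

struct MockNotifier : Notifier {
    MOCK_METHOD(void, NotifyShipped, (int orderId), (override));
};

// Code under test: promises to notify exactly once per shipped order.
void ShipOrder(int orderId, Notifier& notifier) {
    notifier.NotifyShipped(orderId);
}

TEST(ShipOrderTest, NotifiesOncePerShippedOrder) {
    MockNotifier notifier;
    EXPECT_CALL(notifier, NotifyShipped(42)).Times(1);
    ShipOrder(42, notifier);
}
The expectation here is part of the public contract (a Notifier must be told about each shipped order), so this is still behavior being tested, not inner mechanics.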
For unit testing, I found both Test Driven (tests first, code second) and code first, test second to be extremely useful.
Instead of just writing the code and then writing a test for it, write the code, then look at what you THINK the code should be doing. Think about all its intended uses and then write a test for each. I find writing tests to be faster, but more involved, than the coding itself. The tests should test the intention. Thinking about the intentions also means you wind up finding corner cases during the test-writing phase. And of course, while writing tests you might find that one of the few uses causes a bug (something I often find, and I am very glad the bug did not corrupt data and go unnoticed).
Yet testing is almost like coding twice. In fact I had applications where there was more test code (quantity) than application code. One example was a very complex state machine. I had to make sure that after adding more logic to it, the entire thing always worked on all previous use cases. And since those cases were quite hard to follow by looking at the code, I wound up having such a good test suite for this machine that I was confident it would not break even after making changes, and the tests saved my ass a few times. And as users or testers found bugs with the flow or corner cases unaccounted for, guess what: they were added to the tests and never happened again. This really gave users confidence in my work, in addition to making the whole thing super stable. And when it had to be rewritten for performance reasons, guess what: it worked as expected on all inputs, thanks to the tests.
All the simple examples like function square(number) are great and all, and are probably bad candidates to spend lots of time testing. The ones that do important business logic - that's where the testing is important. Test the requirements. Don't just test the plumbing. If the requirements change then, guess what, the tests must too.
Testing should not be literally testing that function foo invoked function bar 3 times. That is wrong. Check if the result and side-effects are correct, not the inner mechanics.
It's worth noting that retrofitting unit tests into existing code is far more difficult than driving the creation of that code with tests in the first place. That's one of the big questions in dealing with legacy applications... how to unit test? This has been asked many times before (so your question may be closed as a dupe), and people usually end up here:
Moving existing code to Test Driven Development
I second the accepted answer's book recommendation, but beyond that there's more information linked in the answers there.
Don't write tests to get full coverage of your code. Write tests that guarantee your requirements. You may discover codepaths that are unnecessary. Conversely, if they are necessary, they are there to fulfill some kind of requirement; find out what it is and test the requirement (not the path).
Keep your tests small: one test per requirement.
Later, when you need to make a change (or write new code), try writing one test first. Just one. Then you'll have taken the first step in test-driven development.
Unit testing is about the output you get from a function/method/application.
It does not matter at all how the result is produced, it just matters that it is correct.
Therefore, your approach of counting calls to inner methods and such is wrong.
What I tend to do is sit down and write what a method should return given certain input values or a certain environment, then write a test which compares the actual value returned with what I came up with.
Try writing a Unit Test before writing the method it is going to test.
That will definitely force you to think a little differently about how things are being done. You'll have no idea how the method is going to work, just what it is supposed to do.
You should always be testing the results of the method, not how the method gets those results.
Tests are supposed to improve maintainability. If you change a method and a test breaks, that can be a good thing. On the other hand, if you look at your method as a black box, then it shouldn't matter what is inside the method. The fact is you need to mock things for some tests, and in those cases you really can't treat the method as a black box. The only thing you can do is write an integration test - you load up a fully instantiated instance of the service under test and have it do its thing like it would when running in your app. Then you can treat it as a black box.
When I'm writing tests for a method, I have the feeling of rewriting a second time what I already wrote in the method itself.
My tests just seem so tightly bound to the method (testing all codepaths, expecting some inner methods to be called a number of times, with certain arguments), that it seems that if I ever refactor the method, the tests will fail even if the final behavior of the method did not change.
This is because you are writing your tests after you wrote your code. If you did it the other way around (wrote the tests first) it wouldn't feel this way.
When I get excited about a new feature I'm just about to implement, or about a bug that I've just "understood", there is the urge to jump straight into the code and get hacking. It takes some effort to stop myself from doing that and to write the corresponding test first. Later the test often turns out to be a trivial 4-liner, but before writing it there's still the thought in the back of my head, "maybe I can skip this one, just this one time?" Ideally I'd like to get an urge to write the test, and only then, perhaps, the code :)
What method (or way of thinking or mind trick or self-reward policy or whatever) do you use to help maintain the discipline? Or do you just practice it until it feels natural?
I like the instant feedback from the test, that's reward enough for me. If I can reproduce a bug in a test that's a good feeling, I know I'm headed in the right direction as opposed to guessing and possibly wasting my time.
I like working Test-First because I feel like it keeps me more in tune with what the code is actually doing as opposed to guessing based on a possibly inaccurate mental model. Being able to confirm my assumptions iteratively is a big payoff for me.
I find that writing tests helps me to sketch out my approach to the problem at hand. Often, if you can't write a good test, it means you haven't necessarily thought enough about what it is that you're supposed to be doing. The satisfaction of being confident that I know how to tackle the problem once the tests are written is rather useful.
I'll let you know when I find a method that works. :-)
But seriously, I think your "practice until it feels natural" comment pretty much hits the nail on the head. A 4 line test may appear trivial, but as long as what you are testing represents a real failure point then it is worth doing.
One thing I have found to be helpful is to include code coverage validation as part of the build process. If I fail to write tests, the build will complain at me. If I continue failing to write tests, the continuous integration build will "error out" and everyone nearby will hear the sound I have wired to the "broken build" notification. After a few weeks of "Good grief... You broke it again?", and similar comments, I soon started writing more tests to avoid embarrassment.
One other thing (which only occurred to me after I had submitted the answer the first time) is that once I really got into the habit of writing tests first, I got great positive reinforcement from the fact that I could deliver bug-fixes and additional features with much greater confidence than I could in my pre-automated-test days.
Easiest way I've found is to just use TDD a lot. At some point, writing code without unit tests becomes a very, very nervous activity.
Also, try to focus on interaction or behavioral testing rather than state-based testing.
wear a green wristband
1) You pair with somebody else in your team. One person writes the test, the other implements.
It's called "ping-pong" pairing.
Doing this will force you to discuss design and work out what to do.
Having this discussion also makes it easier to see what tests you're going to need.
2) When I'm working on my own, I like to try out chunks of code interactively. I just type them in at the ruby prompt. When I'm experimenting like this I often need to set up some data for experimenting with, and some printout statements to see what the result is.
These little, self-contained throwaway experiments are usually:
a quick way to establish the feasibility of an implementation, and
a good place to start formalising into a test.
I think the important part of keeping yourself in check as far as TDD is concerned is to have the test project set up properly. That way adding a trivial test case is indeed trivial.
If, to add a test, you need to first create a test project, then work out how to isolate components, when to mock things, etc., etc., it goes into the too-hard basket.
So I guess it comes back to having unit tests fully integrated into your development process.
When I first started doing TDD around 2000, it felt very unnatural. Then came the first version of .NET and NUnit, the JUnit port, and I started practicing TDD at the Shu level (of Shu-Ha-Ri), which meant test (first) everything, with the same questions as yours.
A few years later, at another workplace, together with a very dedicated, competent senior developer, we took the steps necessary to reach the Ha level. This meant, for example, not blindly staring at the coverage report, but asking "is this kind of test really useful, and does it add more value than it costs?".
Now, at another workplace, together with yet another great colleague, I feel that we're taking our first steps towards the Ri level. For us that currently means a great focus on BDD/executable stories. With those in place verifying the requirements at a higher level, I feel more productive, since I don't need to (re-)write a bunch of unit tests each time a class' public interface needs to change, replace a static call with an extension method, and so on.
Don't get me wrong, the usual TDD class tests are still used and provide great value for us. It's hard to put into words, but we're just so much better at "feeling" and "sensing" which tests make sense, and how to design our software, than I was capable of ten years ago.