Which methods absolutely need unit testing? [closed]

My team is working on an application that has been in development for several years, but no unit test has ever been written. Now that we wish to start writing them, we realise we cannot possibly go over all the existing methods to test them, because that would take years of work.
The question is: how can one decide which methods absolutely need unit testing, and which don't?
Would you rather unit test a method that is often called or a method that is often modified?
I read that unit testing is rather inefficient on DAO classes. Should I restrict the tests to methods containing logic?
Most important: will the tests we put in place be of any use if only part of the application is unit tested?

how can one decide which methods absolutely need unit testing, and which don't?
This is a difficult question to answer without knowing your code base and what its history and future are. But in general, write tests for the parts of the code that are hard to understand, will be modified in the near future, or are known to have bugs. When testing legacy applications, the best bang for your buck comes from tests that make the program easier to maintain going forward, easier to fix bugs in, and that keep old bugs from coming back.
Would you rather unit test a method that is often called or a method that is often modified?
As stated above, it depends. Is the method that is called often trivial? Easy to understand? I would probably lean towards "often modified" just to make future development easier. But ideally both should get tested.
I read that unit testing is rather inefficient on DAO classes.
I don't know where you read that. Unit testing can be very efficient with DAOs if you use mock objects.
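For instance, here's a minimal sketch using Python's standard unittest.mock (the OrderService and OrderDao names are hypothetical): the DAO is replaced with a mock, so the logic around it can be tested without a database.

    import unittest
    from unittest.mock import Mock

    # Hypothetical production code, shown inline for a self-contained example.
    class OrderService:
        def __init__(self, order_dao):
            self.order_dao = order_dao  # the DAO is injected, so it can be mocked

        def total_for_customer(self, customer_id):
            orders = self.order_dao.find_by_customer(customer_id)
            return sum(order["amount"] for order in orders)

    class OrderServiceTest(unittest.TestCase):
        def test_total_sums_all_orders(self):
            dao = Mock()  # stands in for the real, database-backed DAO
            dao.find_by_customer.return_value = [{"amount": 10}, {"amount": 25}]

            service = OrderService(dao)

            self.assertEqual(35, service.total_for_customer(42))
            dao.find_by_customer.assert_called_once_with(42)

    if __name__ == "__main__":
        unittest.main()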
Will the tests we put in place be of any use if only part of the application is unit tested?
Any tests are useful. A program that is only 10% covered by tests is better than a program with 0% coverage. Especially if that 10% is the most important or trickiest part of the program.
If you haven't read it yet, I highly recommend Michael Feathers' Working Effectively with Legacy Code, where "legacy code" means code that doesn't have tests.

Some people create unit tests for getters and setters and insist on 100% code coverage.
Practical people will test those methods that need testing. What this means depends on your own discernment of what constitutes a method that needs testing.
Some people, however, consider that the minimum size of a unit is the class, and that tests should be created to test a class (and sometimes its associated classes).
In short, forget any kind of dogmatic principle about unit testing; what matters is the quality of your code. As with agile development, it's whatever helps you achieve that goal that is important. So if you feel your DAO objects will not benefit from testing, then don't bother; spend that time doing something more productive instead.


How do I refactor unit tests? [closed]

This has been driving me nuts lately...
What is refactoring?
Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior.
And how do we make sure we don't break anything during refactoring?
Before refactoring a section of code, a solid set of automatic unit tests is needed. The tests are used to demonstrate that the behavior of the module is correct before the refactoring.
Okay fine. But how do I proceed if I find a code smell in the unit tests themselves? Say, a test method that does too much? How do I make sure I don't break anything while refactoring the unit tests?
Do I need some kind of meta-tests? Is it unit tests all the way down?
Or do unit tests simply not obey the normal rules of refactoring?
In my experience, there are two reasons to trust tests:
Review
You've seen it fail
Both of these are activities that happen when a test is written. If you keep tests immutable, you can keep trusting them.
Every time you modify a test, it becomes less trustworthy.
You can somewhat alleviate that problem by repeating the above process: review the changes to the tests, and temporarily change the System Under Test (SUT) so that you can see the tests fail as expected.
When modifying tests, keep the SUT unchanged. Tests and production code keep each other in check, so varying one while keeping the other locked is safest.
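As a concrete sketch of the "see it fail" step (all names here are hypothetical): after modifying a test, deliberately sabotage the SUT, confirm the test goes red, then revert the sabotage.

    import unittest

    # Hypothetical SUT.
    def apply_discount(price, percent):
        # While reviewing a modified test, temporarily replace the line
        # below with something wrong (e.g. `return price`), run the suite,
        # and confirm the test fails as expected. Then revert.
        # A test that cannot fail proves nothing.
        return price * (1 - percent / 100.0)

    class DiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertAlmostEqual(90.0, apply_discount(100.0, 10))

    if __name__ == "__main__":
        unittest.main()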
I realize this is an older post, but it was referenced in a comment on my post about TDD in practice, so upon review I'd like to throw in my two cents.
Mainly because I feel the accepted answer makes the slippery statement:
Every time you modify a test, it becomes less trustworthy.
I take issue with the word modify. In the context of refactoring, words like change and modify are often avoided because they carry implications counter to refactoring.
If you modify a test in the traditional sense there is risk you introduced a change that made the test less trustworthy.
However, if you modify a test in the refactor sense then the test should be no less trustworthy.
This brings me back to the original question:
How do I refactor unit tests?
Quite simply, the same way you would refactor any other code - in isolation.
So, if you want to refactor your tests, don't change the production code; just change your tests.
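For example, here is a hypothetical sketch of refactoring a test that "does too much" by extracting a setup helper, while the production class (Account here) stays locked:

    import unittest

    # Hypothetical production code, deliberately untouched by the refactoring.
    class Account:
        def __init__(self):
            self.balance = 0

        def deposit(self, amount):
            self.balance += amount

    class AccountTest(unittest.TestCase):
        # Before the refactoring, each test repeated this setup inline;
        # extracting it changes the test's structure, not what it verifies.
        def make_account_with_balance(self, balance):
            account = Account()
            account.deposit(balance)
            return account

        def test_deposit_increases_balance(self):
            account = self.make_account_with_balance(100)
            account.deposit(50)
            self.assertEqual(150, account.balance)

    if __name__ == "__main__":
        unittest.main()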
Do I need tests for my tests?
No. In fact, Kent Beck addresses this exact question in his Full Stack Radio interview, saying:
Your code is the test for your tests
Mark Seemann also notes this in his answer:
Tests and production code keep each other in check, so varying one while keeping the other locked is safest.
In the end, this is not so much about how to refactor tests as much as it is refactoring in general. The same principles apply, namely refactoring restructures code without changing its external behavior. If you don't change the external behavior, then no trust is lost.
How do I make sure I don't break anything while refactoring the unit tests?
Keep the old tests as a reference.
To elaborate: unit tests with good coverage are worth their weight in gold. You don't keep them for their amazing program structure or lack of duplication; they're essentially a dataset of useful input/output pairs.
So when "refactoring" tests, what really matters is that the program tested with the new suite shows the same behaviour. Every difference should be carefully and manually inspected, because it may mean a new program bug has been found.
You might also accidentally reduce the coverage when refactoring. That's harder to find, and requires specialized coverage analysis tools.
You don't know you won't break anything. To avoid the problem of "who will test our tests?", you should keep tests as simple as possible to reduce the possibility of making an error.
When you refactor tests, you can always use automated refactorings or other "trusted" methods such as method extraction.
You also often use existing testing frameworks, which are tested by their creators. So when you start to build your own framework (even a simple one), complex helper methods, etc., you can always test those yourself.
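For example, if a custom assertion helper grows complex enough, nothing stops you from unit testing the helper itself (a hypothetical sketch):

    import unittest

    # A hypothetical custom helper, complex enough to deserve its own tests.
    def assert_sorted_ascending(test_case, items):
        for a, b in zip(items, items[1:]):
            test_case.assertLessEqual(a, b, f"{a} > {b}: list is not sorted")

    class HelperTest(unittest.TestCase):
        def test_accepts_sorted_list(self):
            assert_sorted_ascending(self, [1, 2, 3])

        def test_rejects_unsorted_list(self):
            with self.assertRaises(AssertionError):
                assert_sorted_ascending(self, [3, 1, 2])

    if __name__ == "__main__":
        unittest.main()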

Can I do unit testing without TDD? [closed]

I'm on a project where we don't do TDD, because our bosses and the client are very "old-school" people. Since I can't do design through TDD, but I do fear changes, I would like to write unit tests for my own safety. But what would those unit tests look like? Do I have to write a test for each method's specification, to verify that it does what it's supposed to do? Do I have to write a test for each new functionality, like TDD but without the design part? I have a mess in my mind.
Thanks in advance.
You probably can't hurt anything by doing unit tests - regardless of how well they're done - except for one possible side-effect, and that is false confidence.
We tend not to do hardcore TDD; nevertheless, our unit test coverage ranges from non-existent to moderate depending on the project, and it is becoming increasingly valuable as the idea settles in.
For general pointers, I'd say the following are key priorities for you right now:
Test what you know to be important
Test what you know to be fragile
Write tests to expose any new bugs, then fix each bug by making the modification that turns its test green (a sketch of this bug-first flow follows this answer)
Apply TDD where possible to any new features
Acknowledge that you can't TDD an existing project. By its nature, TDD only applies to new ground, whether that's a new product, or a new feature for a legacy product. Don't let this fact dishearten you.
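A minimal sketch of the bug-first flow from the third point above (the bug and the names are hypothetical): the failing test pins the bug down first, and afterwards guards against regression.

    import unittest

    # Hypothetical production code. Originally the guard clause was missing,
    # so average([]) raised ZeroDivisionError.
    def average(values):
        if not values:  # the fix, written only after the test below failed
            return 0.0
        return sum(values) / len(values)

    class AverageRegressionTest(unittest.TestCase):
        def test_empty_list_returns_zero(self):
            # Written first, while the bug still existed; it failed then,
            # and now keeps the bug from coming back.
            self.assertEqual(0.0, average([]))

    if __name__ == "__main__":
        unittest.main()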
Yes. TDD is just another software development technique, one which happens to use unit testing heavily. Unit testing as a practice is fine on its own without TDD behind it. Not to mention that sometimes it's not even possible to do TDD, yet you still write tests (think of legacy systems and existing untested code).
But as for what you should test: depending on how deep you want (or can) go with testing, you can start with end-user-oriented functionality, move through system component testing (i.e. class contracts), and go all the way down to simply assuring your code does what you claim it does - that's the final, most fine-grained kind of unit test, of which you'll most likely have a lot.
In general, what to test is not an easy question; I've answered several variants of it already. To give you a few tips:
test what your code does, not what it does not
if you got certain requirement, have a test covering it
test a single feature at a time
focus on public contract, skip private bits
Also, reading a few of the top-voted unit testing questions might give you some ideas on why you will benefit from testing, regardless of whether you use TDD or not.
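To illustrate the "public contract" tip with a hypothetical example: the test below only exercises the public method, so the private helper stays free to change.

    import unittest

    # Hypothetical class: one public method, one private helper.
    class PriceFormatter:
        def format(self, cents):
            return self._with_symbol(cents / 100.0)

        def _with_symbol(self, amount):  # private detail, not tested directly
            return f"${amount:.2f}"

    class PriceFormatterTest(unittest.TestCase):
        def test_formats_cents_as_dollars(self):
            # Only the public contract is asserted; _with_symbol can be
            # renamed or inlined without breaking this test.
            self.assertEqual("$12.50", PriceFormatter().format(1250))

    if __name__ == "__main__":
        unittest.main()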
When you say "we don't do TDD", do you mean that others don't practise TDD or that your bosses forbid you from practising TDD? If it's the first one, then you can practise TDD as much as you want, as long as you don't try to force other people to do it. If it's the other one, then tell your bosses that they pay you to write code the best way you know how, and TDD is part of how you do that.
You can certainly write tests without practising TDD. People do it all the time. Use the old saying, "Test until fear turns to boredom". Write tests for whatever you fear might not work correctly.

Can unit tests be implemented effectively with agile development? [closed]

Soon I will be involved in a project that will use the agile project management/development approach, with five (or so) two-week sprints. The project will use a DDD approach, which I have found in the past works great with unit testing, hence my enthusiasm to use it for this project as well. The only problem is that, given the following factors, I am unsure whether unit testing can be successfully implemented with agile development:
Potential for constantly changing requirements (requirements change, tests break, tests need to be updated too).
Time factor (unit tests can make development take a fair bit longer, and if requirements change towards the end of a sprint there may be too little time to update both tests and production code at the best quality).
I have a feeling that if and when requirements change (especially towards the end of a sprint), given the tight deadlines, unit tests will become a burden. Does anyone have any good advice on the matter?
I think it cuts both ways. On one hand, yes, unit tests are extra code which requires extra maintenance, and will slow you down a bit. On the other hand, if requirements start evolving, having unit tests in place will be a life saver in making sure what you are changing still works.
Unless you have unit tests with high coverage, the cost of change will grow exponentially as the project moves forward. So basically, the more change you anticipate, the MORE you will actually need your unit tests.
Secondly, good unit tests depend on the production code consisting of small, focused feature pieces. When this is true, only a few tests will be impacted when a feature changes. Basically, each test tests just one thing and one small piece of production code. The key to writing unit tests that follow this principle is to decouple your code and test in isolation.
Thirdly, you need a good understanding of the concept of DONE and why its definition is so important for sustainable development. Basically, your team can't go fast over time in a sustainable fashion if it compromises the concept of DONE in the short term.
Considering 10+ weeks' worth of code with no test coverage makes me cringe. When will you have time to manually test all that code? And under evolving requirements, you will spend a lot more time tracking down the impacts the changes will have throughout your code base.
I cannot advise strongly enough that you use unit testing. Even when doing DDD, let unit tests drive the implementation. Coupled with good patterns like DI/IoC and SRP, you should find both your code base and your tests to be more resilient to change, which will save you a lot of time throughout those sprints.
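As a hypothetical sketch of the DI idea: the repository collaborator is passed in through the constructor, so the test can hand in a small fake and exercise the service in isolation.

    import unittest

    # Hypothetical domain service; its dependency is injected (DI), which
    # keeps the class focused (SRP) and easy to test in isolation.
    class InvoiceService:
        def __init__(self, repository):
            self.repository = repository

        def overdue_count(self):
            return len(self.repository.find_overdue())

    class FakeRepository:
        # A hand-rolled fake standing in for the real data access layer.
        def find_overdue(self):
            return ["inv-1", "inv-2"]

    class InvoiceServiceTest(unittest.TestCase):
        def test_counts_overdue_invoices(self):
            service = InvoiceService(FakeRepository())
            self.assertEqual(2, service.overdue_count())

    if __name__ == "__main__":
        unittest.main()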

In TDD, should tests be written by the person who implemented the feature under test? [closed]

We are running a project that we want to build using test-driven development. I thought about some questions that came up when initiating the project. One question was: who should write the unit test for a feature? Should the unit test be written by the feature-implementing programmer? Or should it be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests pass?
If I understand the concept of TDD correctly, the feature-implementing programmer has to write the test himself, because TDD is a procedure of mini-iterations. So would it be too complex to have the tests written by another programmer?
What would you say? Should the tests in TDD be written by the implementing programmer himself, or should another programmer write the tests that describe what a method can do?
In TDD the developer first writes a unit test that fails, and then fixes the production code to make the test pass. The idea is that changes are made in really small steps: you write a test that calls a method that doesn't exist, then you fix the test by adding an empty method, then you add an assertion to the test so it fails again, then you implement the first cut of the method, and so on. Because these steps are so small, it is not practical to have a separate person write the tests. On the other hand, I would recommend pairing, so that you gain some additional eyeballs making sure the code makes sense.
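Those micro-steps look roughly like this (a hypothetical sketch; the numbered comments mark the red/green iterations):

    import unittest

    # Steps 2 and 4: production code, grown only in response to failing tests.
    class Stack:
        def __init__(self):
            self._items = []

        def push(self, item):
            self._items.append(item)

        def pop(self):
            return self._items.pop()

    class StackTest(unittest.TestCase):
        def test_pop_returns_last_pushed_item(self):
            # Step 1: this test first failed because Stack didn't exist.
            # Step 3: after an empty Stack class was added, it failed on
            # the assertion, prompting the real implementation.
            stack = Stack()
            stack.push("a")
            self.assertEqual("a", stack.pop())

    if __name__ == "__main__":
        unittest.main()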
I think it would be possible to have another person, team, or even the client (when you use tools like FitNesse) write acceptance tests, which test the whole functionality at a higher level.
One of the benefits of TDD is the fast feedback cycle. Having another developer write the tests would slow the process down too much. The same developer should write both.
It could be done both ways: you could write the unit tests yourself, or go for the ping-pong approach, where you take turns with another developer writing unit tests and writing the implementation if you are pairing. The right solution is the one that works for you and your team. I prefer to write the tests myself, but I know others who have had luck with the ping-pong approach as well.
Unit Tests and Acceptance Tests are two different things, both of which can (and should) be done in TDD. Unit Tests are written from the standpoint of the developer, to make sure that the code is doing what she expects it to. Acceptance Tests are written from the standpoint of the customer, to make sure the code fulfills the appropriate need. It can make a lot of sense for the Acceptance Tests to be written by someone else (usually because it requires a slightly different mindset and domain knowledge, and because they can be done in parallel) but Unit Tests should be written by the developer.
TDD also says that you shouldn't write any code except in response to a failing test, so having to wait for someone else to write the Unit Tests seems pretty inefficient.
The unit test should be written prior to coding and should test that a unit meets its requirements, so it is fine for the developer implementing the code to also write the unit test.
I think you need to separate Automated Unit Testing from Test Driven Development.
(IMHO it's not just you who needs to make this vital distinction.)
AUT strongly recommends that tests be written first; TDD requires it.
TDD furthermore makes the test an essential part of the code-writing process. TDD is not so much a method of quality assurance as a way to think about code - so separate responsibilities would be against the philosophy of TDD. They'd also be impractical - the new-test/new-code cycles are very small, usually a matter of minutes. In my understanding, "Test Driven Design" would be a better description.
AUT can be fitted onto an existing code base (although often badly, depending on the size and structure of the code base). Separate responsibilities might have some advantages here. Still, AUT puts some pressure on design - so the separation would only be at the level of who types the code.
Disclaimer: I freely admit that I don't like the idea of TDD. It might work well for a certain type of coder, for certain applications, in certain markets - but all examples, demos and walkthroughs I've seen up to now make me shudder. OTOH, I consider AUT a valuable tool for quality assurance. One valuable tool.
I'm a little confused here.
You say that you want to use TDD and you do seem to understand it correctly that a programmer writes a test, then the same programmer writes the implementation and does it in the next few seconds/minutes after writing the test. That is part of the definition of TDD. (btw 'the same programmer' also means 'the other programmer in the pair' when practising pair programming).
If you want to do something different, then go for it and write up your experiences in a blog or article.
What you shouldn't do is to say that what you do different is TDD.
The reason for 'the same programmer' writing the implementation, and writing it very soon after the test, is rapid feedback: to discover how to write good tests, how to design software well, and how to write good implementations.
Please see The Three Rules of TDD.
Per Justin's response, not only is it fine for the implementing developer to write the test, it's the de facto standard. It is, theoretically, also acceptable for another programmer to write the test. I have toyed with the idea of a "test" programmer supporting a "feature" developer, but I have not encountered examples.
If I write a test for an object, in addition to the inputs and outputs I expect, I have to know the interface it exposes. In other words, the classes and methods under test must be decided upon before development begins. In twelve years I have only once worked in a shop that achieved that granularity of design before development began. I am not sure what your experiences have been, but it doesn't seem very Agile to me.

Guidelines for better unit tests [closed]

Jimmy Bogard wrote an article, Getting value out of your unit tests, where he gives four rules:
Test names should describe the what and the why, from the user’s perspective
Tests are code too, give them some love
Don’t settle on one fixture pattern/organizational style
One Setup, Execute and Verify per Test
In your opinion, are these guidelines complete? What are your guidelines for unit tests?
Please avoid language-specific idioms; try to keep answers language-agnostic.
There's an entire 850-page book called xUnit Test Patterns that deals with this topic, so it's not something that can easily be boiled down to a few hard rules (although the rules you mention are good).
A more digestible book that also covers this subject is The Art of Unit Testing.
If I may add the rules I find most important, they would be:
Use Test-Driven Development. It's by far the most effective road towards good unit tests. Trying to retrofit unit tests onto existing code tends to be difficult at best.
Keep it simple: Ideally, a unit test should be less than 10 lines of code. If it grows to much more than 20 lines of code, you should seriously consider refactoring either the test code, or the API you are testing.
Keep it fast. Unit test suites are meant to be executed very frequently, so aim at keeping the entire suite under 10 s. That can easily mean keeping each test under 10 ms.
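For instance, here is a hypothetical test that stays within those limits - a handful of lines, no I/O, milliseconds to run - with one setup, execute and verify step each:

    import unittest

    # Hypothetical function under test.
    def slugify(title):
        return title.strip().lower().replace(" ", "-")

    class SlugifyTest(unittest.TestCase):
        def test_lowercases_and_replaces_spaces(self):
            title = "  Hello World "                    # setup
            slug = slugify(title)                       # execute
            self.assertEqual("hello-world", slug)       # verify

    if __name__ == "__main__":
        unittest.main()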
Writing unit tests is simple; it is writing unit-testable code that is difficult.
The Way of Testivus
If you write code, write tests.
Don’t get stuck on unit testing dogma.
Embrace unit testing karma.
Think of code and test as one.
The test is more important than the unit.
The best time to test is when the code is fresh.
Tests not run waste away.
An imperfect test today is better than a perfect test someday.
An ugly test is better than no test.
Sometimes, the test justifies the means.
Only fools use no tools.
Good tests fail.
Break the code under test regularly to see how effective your unit tests are.
Take a look at the code coverage of your tests, and try to make it reasonably complete (for error cases, I'd use some discretion about whether to test them or not).
For many more good ideas to write unit tests, search stackoverflow.com.