Why should I unit test? [closed] - unit-testing

I have a problem understanding the nature of unit tests - why should I write them at all? Let's consider the following bit of code:
public enum ObjectType
{
    TypeA = 0,
    TypeB = 1
}

public ObjectType Type;

public bool IsTypeB(ObjectType Type)
{
    bool result = false;
    If(Type = ObjectType.TypeB)
    {
        result = true;
    }
    return result;
}
Now my question: should I unit-test IsTypeB? If yes, then why? The outcome of that method is obvious, so why test it? I know my question may look dumb, but to me it seems that writing unit tests doubles the work with no tangible effect. I have trouble understanding what the real benefits of unit tests are, when to use them, and for what kinds of methods/functions.
Thanks in advance for the answers, and sorry for my ignorance.

Normally, one of the major goals of testing is to find bugs. There are, however, other ways to find bugs as well, such as static code analysis. Due to time and cost limits, in practice you have to find a balance between the different approaches. Testing trivial code means there may be less time for testing the more complicated functions. Therefore it can be a valid decision to focus testing on the more complicated parts of the software.
Often, even if not targeted explicitly by dedicated tests, the trivial functions (like getters and setters) are tested nevertheless: They are often necessary parts of the test code for testing the more complicated stuff (for example for the setup part or the evaluation part of the test).
Looking at your example: it appears to be pseudocode that you have never compiled (If starts with a capital letter), and you have not specified the programming language, so it is questionable whether it makes sense to analyse it. But nevertheless:
The function you have created looks trivial at first glance. Nevertheless, it is probably buggy: most likely the condition of the if should be Type == ObjectType.TypeB.
Knowing this, it's up to you to judge whether it is worth adding tests for the function. Would the bug have caused a compile error (in some languages the code would compile and have a defined meaning)? A compiler warning that someone actually looks at? A static-analysis warning that someone actually looks at? All of this can contribute to an informed decision.
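To make this concrete, here is a minimal sketch of what tests for such a function could look like. This is only an illustration, written as a Java/JUnit port of the corrected method; none of these names come from the question.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class ObjectTypeTest {

    enum ObjectType { TypeA, TypeB }

    // corrected port of the method from the question
    static boolean isTypeB(ObjectType type) {
        return type == ObjectType.TypeB;
    }

    @Test
    public void typeBIsRecognisedAsTypeB() {
        assertTrue(isTypeB(ObjectType.TypeB));
    }

    @Test
    public void typeAIsNotTypeB() {
        // In a language where the accidental assignment (= instead of ==)
        // compiles, this is the case that would fail and expose the bug.
        assertFalse(isTypeB(ObjectType.TypeA));
    }
}

Even a test this small pins down the intended behaviour, so a later change (or a language where the assignment bug compiles) cannot silently alter it.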

Related

Are there things that you can't test when applying TDD [closed]

I've been applying TDD to some new projects to get the hang of things, and I understand the basic flow: Write tests that fail, write code to pass tests, refactor if needed, repeat.
To keep the tests fast and decoupled, I abstract out things like network requests and file I/O. These are usually abstracted into interfaces which get passed through using dependency injection.
Usually development is very smooth until the end, when I realize I need to implement these abstracted interfaces. The whole point of abstracting them was to make the code easily testable, but following TDD I would need to write a test before writing the implementation code, correct?
For example, I was looking at the tdd-tetris-tutorial https://github.com/luontola/tdd-tetris-tutorial/tree/tutorial/src/test/java/tetris. If I wanted to add the ability to play with a keyboard, I would abstract away basic controls into methods inside a Controller class, like "rotateBlock", "moveLeft", etc. that could be tested.
But at the end I would need to add some logic to detect keystrokes from the keyboard when implementing a controller class. How would one write a test to implement that?
Perhaps some things just can't be tested, and reaching 100% code coverage is impossible for some cases?
Perhaps some things just can't be tested, and reaching 100% code coverage is impossible for some cases?
I use a slightly different spelling: not all things can be tested at the same level of cost effectiveness.
The "trick", so to speak, is to divide your code into two categoies: code that is easy to test, and code that is so obvious that you don't need to test it -- or not as often.
The nice thing about simple adapters is that (once you've got them working at all) they don't generally need to change very much. All of the logic lives somewhere else and that somewhere else is easy to test.
Consider, for example, reading bytes from a file. That kind of interface looks sort of like a function that accepts a filename as an argument and either returns an array of bytes or some sort of exception. Implementing that is a straightforward exercise in most languages, and the code is so textbook-familiar that it falls clearly into the category of "so simple there are obviously no deficiencies".
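As a rough sketch of what that seam and its thin adapter might look like (Java here, with made-up names; not part of the original answer):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// the narrow seam that the rest of the code depends on
interface FileSource {
    byte[] read(String fileName) throws IOException;
}

// thin adapter: "so simple there are obviously no deficiencies"
class DiskFileSource implements FileSource {
    @Override
    public byte[] read(String fileName) throws IOException {
        return Files.readAllBytes(Paths.get(fileName));
    }
}

// in tests, a trivial in-memory fake stands in for the adapter,
// so the logic that consumes the bytes can be tested quickly
class InMemoryFileSource implements FileSource {
    private final byte[] contents;
    InMemoryFileSource(byte[] contents) { this.contents = contents; }
    @Override
    public byte[] read(String fileName) { return contents; }
}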
Because the code is simple and stable, you don't need to test it at anywhere near the frequency that you test code you regularly refactor. Thus, the cost benefit analysis supports the conclusion that you can delegate your occasional tests of this code to more expensive techniques.
100% statement coverage was never the goal of TDD (although it is really easy to understand how you -- and a small zillion other people -- reached that conclusion). It was primarily about deferring detailed design. So to some degree code that starts simple and changes infrequently was "out of bounds" from the get go.
You can't test everything with TDD unit tests. But if you also have integration tests, you can test those I/O interfaces. You can produce integration tests using TDD.
In practice, some things are impractical to test automatically. Error handling for rare error conditions, and race hazards, are among the hardest.

Confused about Classical TDD and Mockist [closed]

Here is an article: https://martinfowler.com/articles/mocksArentStubs.html#ClassicalAndMockistTesting
It's about classical TDD versus mockist TDD. My understanding was that classes should be tested in isolation, and therefore ALL dependencies should be stubbed/mocked. However, according to the article, it seems there is a large group of people - the classical TDDers - who use real objects. There are various articles on the internet that emphasize that unit tests should not use real classes other than the SUT, of course. For example, take a look at this from Microsoft's website on stubs: https://learn.microsoft.com/en-us/visualstudio/test/using-stubs-to-isolate-parts-of-your-application-from-each-other-for-unit-testing
public int GetContosoPrice()
{
    var stockFeed = new StockFeed(); // NOT RECOMMENDED
    return stockFeed.GetSharePrice("COOO");
}
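For contrast, the shape that page recommends is to pass the feed in, so a test can substitute a stub. A rough sketch of the idea (written here in Java rather than the page's C#, with hypothetical names) would be:

// the seam a test can replace
interface StockFeed {
    int getSharePrice(String symbol);
}

class ContosoPricer {
    private final StockFeed stockFeed;

    // the dependency is injected instead of being new'd up inside the method
    ContosoPricer(StockFeed stockFeed) {
        this.stockFeed = stockFeed;
    }

    int getContosoPrice() {
        return stockFeed.getSharePrice("COOO");
    }
}

// a hand-written stub for use in a unit test
class StubStockFeed implements StockFeed {
    @Override
    public int getSharePrice(String symbol) {
        return 1234; // canned value, no network call
    }
}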
Can someone clear up my confusion?
Can someone clear up my confusion?
You don't seem to be confused at all - there are two different schools of thought on what a "unit test" is, and therefore how it should be used.
For instance, Kent Beck, in Test Driven Development By Example, writes
The problem with driving development with small-scale tests (I call them "unit tests" but they don't match the accepted definition of unit tests very well)....
Emphasis added.
It may help to keep in mind that 20 years ago, the most common testing pattern was the "throw it over the wall to QA" test. Even in cases where automated tests were present, the disciplines required to make those tests effective were not common knowledge.
So it was important to communicate the idea that tests should be isolated from other tests. If developers were going to be running tests as often as the extreme programmers were insisting that they should, then those tests needed to be reliable and fast in wall clock time. Tests that don't share any mutable state (either themselves, or indirectly via the system under test) can be run effectively in parallel, reducing the wall clock time, and therefore reducing the developer interruption that they introduce.
There is a separate discipline that says, in addition to the isolation described above, we should also be striving for tests that check the system in isolation from other parts of the system.
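A small sketch of that difference, loosely modelled on the Warehouse/Order example in Fowler's article (Java with JUnit and Mockito; the names are only illustrative): the classical test uses the real collaborator and checks the resulting state, while the mockist test replaces the collaborator and checks the interaction.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

class Warehouse {
    private int stock;
    Warehouse(int stock) { this.stock = stock; }
    void remove(int amount) { stock -= amount; }
    int stockLevel() { return stock; }
}

class Order {
    private final int quantity;
    Order(int quantity) { this.quantity = quantity; }
    void fillFrom(Warehouse warehouse) { warehouse.remove(quantity); }
}

public class OrderTest {

    // classical style: real Warehouse, assert on the resulting state
    @Test
    public void classicalStyleChecksState() {
        Warehouse warehouse = new Warehouse(50);
        new Order(20).fillFrom(warehouse);
        assertEquals(30, warehouse.stockLevel());
    }

    // mockist style: mocked Warehouse, assert on the interaction
    @Test
    public void mockistStyleChecksInteraction() {
        Warehouse warehouse = mock(Warehouse.class);
        new Order(20).fillFrom(warehouse);
        verify(warehouse).remove(20);
    }
}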
If you want to get a real sense for the history of people with these different ideas talking past each other -- including the history of recognizing that they are talking past each other and trying to invent new labels -- a good starting point is the C2 wiki:
http://wiki.c2.com/?UnitTest
http://wiki.c2.com/?ShouldUnitTestsTestInteroperations
http://wiki.c2.com/?DeveloperTest
http://wiki.c2.com/?ProgrammerTest
For a modern perspective, you might start with Ham Vocke's Practical Test Pyramid

How to ensure that the developer knows what is not important to be tested? [closed]

Before making this question, I read these questions and its answers:
What is a reasonable code coverage % for unit tests (and why)?
Is 100% code coverage a really good thing when doing unit tests?
Unit testing code coverage - do you have 100% coverage?
and read other blogs too, like Martin Fowler - TestCoverage.
My conclusion - a summary, of course - is that the community says to:
not waste time (and time is money) creating tests just to get 100% code coverage.
Maybe a magic number like 80% or 90% coverage can cover 99.99999% of the functionality. So why would you waste time to cover the remaining 0.00001%?!
I agree with that. But I am worried about giving the developer the opportunity to skip a test because he believes it is not important. I know we can avoid these mistakes by having another person review the code before publishing it.
The Question
Thinking of a way to track what the developer has decided should not be tested: would it be good practice to create a kind of //special comment in the code with which the developer explicitly marks the places he knows are not worth testing? Or would that be irrelevant information that just clutters the code? Can someone suggest another way to accomplish this?
Before reading any answers to this question, my opinion is that it is good practice, since a third person could then check, and agree or disagree with, why that code was not covered by the developer.
java example:
public String encodeToUTF8(String value){
    String encodedValue = null;
    try {
        encodedValue = URLEncoder.encode(value, "UTF-8");
    }
    catch (UnsupportedEncodingException ignore) {
        // [coverage-not-need] this exception will never occur because UTF-8 is a supported encoding type
    }
    return encodedValue;
}
Terminology: 100% code coverage means covering all branches, not only all lines.
Most coverage tools have exactly that: a special comment with which you can declare that a piece of code will not have coverage. For example, Perl's Devel::Cover uses # uncoverable and Ruby's simplecov has # :nocov:
However, I would caution against the developer prematurely declaring things uncoverable, or relying on it too heavily. The developer who wrote the code can be blind to testing opportunities. And like any comment, it can fall out of date if the surrounding code changes. Used too much, it gives a false sense of confidence in your test coverage.
Use it as a last resort, after you've done your testing, run coverage, and determined that the statement really is all but impossible to test. Again, I caution against using it as an excuse to paper over things which are simply too hard to test. Often that's indicative of a needed redesign rather than truly untestable code.
Your example code is a perfect case of a misuse of "uncoverable" code. If the exception can never happen, I have to wonder why there's a catch block there at all. As written, if it does happen, it will be silenced and the user will be left wondering why they're getting a NullPointerException somewhere later in their code. Instead, there should be no try/catch block: the exception should be allowed to propagate in the exceptional case where encoding fails.
public String encodeToUTF8(String value) throws UnsupportedEncodingException {
    return URLEncoder.encode(value, "UTF-8");
}
I'm not a Java programmer, but I know it is strict about checked exceptions. If Java requires you to catch (or declare) them all (ugh), the catch should assert or rethrow; something that doesn't silence the exception.
And that's why you want to use an "uncoverable" marker very, very sparingly and only after much scrutiny. Examining uncovered code often leads to finding hidden bugs or poorly designed code.
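As an aside on the example itself: if I recall correctly, since Java 10 URLEncoder has an overload that takes a Charset and declares no checked exception, so the impossible catch block disappears entirely:

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public String encodeToUTF8(String value) {
    // Charset overload (Java 10+): no UnsupportedEncodingException to catch or declare
    return URLEncoder.encode(value, StandardCharsets.UTF_8);
}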

How verbose/granular should your tests be? [closed]

I recently started a new project where I decided to adopt writing unit tests for most functions. Prior to this my testing was limited to sporadically writing test "functions" to ensure something worked as expected, and then never bothering to update the test functions, clearly not good.
Now that I've written a fair amount of code, and tests, I'm noticing that I'm writing a lot of tests for my code. My code is generally quite modular, in the sense that I try to code small functions that do something simple, and then chain them together in a larger function as required, again, accepted best practice.
But I now end up writing tests both for the individual "building block" functions (quite small tests) and for the function that chains them together, testing the result there as well. Obviously the result will be different, but since the inputs are similar, I'm duplicating a lot of test code (the parts that set up the input, which are slightly different in each test but not by much; since they're not identical I can't just use a test fixture).
Another concern is that I try to adhere quite strictly to testing one thing per test, so I write a single test for every different feature within the function. For instance, if there's some extra optional input that can be passed to the function, I write one version that adds the input and one that doesn't, and test them separately. The setup here is again mostly identical except for the added input - again not exactly the same, so using a fixture doesn't feel "right".
Since this is my first project with everything being fully unit tested, I just wanted to make sure I was doing stuff correctly and that the code duplication in tests is to be expected. So, my question is: am I doing things correctly? If not, what should I change?
I code in C and C++.
On a side note, I love the testing itself, I'm far more confident of my code now.
Thanks!
Your question tries to address many things, and I can try to answer only some of them.
Try to get as high coverage as possible (ideally 100%)
Do not use real resources in your unit tests, or at least try to avoid it. You can use mocks and stubs for that.
Do not unit test 3rd party libraries.
You can break dependencies using dependency injection or functors. That way the size of your tests can decrease.
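As a rough illustration of that last point (sketched in Java with made-up names; the same idea carries over to C++ with function objects or std::function): pass the dependency in, and the test only needs a one-line stand-in instead of a real resource.

import java.util.function.IntSupplier;

// Instead of reading a sensor, file or socket directly,
// the class receives the dependency as a functor-like object.
class Thermostat {
    private final IntSupplier temperatureSource;

    Thermostat(IntSupplier temperatureSource) {
        this.temperatureSource = temperatureSource;
    }

    boolean heatingNeeded() {
        return temperatureSource.getAsInt() < 18;
    }
}

// A test needs no real hardware or I/O; a lambda is enough:
//   assertTrue(new Thermostat(() -> 5).heatingNeeded());
//   assertFalse(new Thermostat(() -> 25).heatingNeeded());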

What are the benefits of using a code coverage tool? [closed]

Given that there appears to be no hard and fast rule as to what a good level of code coverage for unit tests is, what then are the benefits of using a code coverage tool such as NCover?
It is fallacious to measure software quality solely according to a code coverage percentage, as you've pointed out. But NCover allows you to examine exactly which parts of your codebase have been neglected by unit-testing. Since you should know which parts of the code are executed most frequently and which parts are most error-prone, NCover is useful for ensuring they're at least being tested.
Code coverage, as a metric, gives you two important pieces of intel:
First, it tells you what is covered by a unit test and what isn't. If you use this along with static analysis of the code, you can easily find complex code that is used often and isn't tested. Complex, frequently used code that isn't currently tested is code that you will want to add tests for.
Second, if you follow the trend of the code coverage, you can detect whether you are getting "better" at testing your code or are introducing legacy code (i.e. untested code). You may wish to have your automated build run coverage analysis to let you know if the coverage percentage is decreasing (indicating that someone is checking in untested code).
Even if you have an agreed level of coverage, code coverage only tells you whether you meet that level, not whether the tests are any good. But even with its limitations, coverage (and the stats you can derive from it, such as CRAP, i.e. coverage over complexity; Clover can display the same data as a tag cloud, which is very neat) is still useful for getting a rough idea of how well the code is tested and where potential bugs will be hiding.
No silver bullet exists, but that does not mean you should not use every regular bullet you can find. Bind them together (code coverage with CI, trends, coverage over complexity, and maybe some mutation testing) and you end up with a powerful method for quickly being informed of potential issues.
The main advantage of running a coverage tool on your test suite is to find areas of your code that are poorly tested. I often look at my coverage numbers by assembly, namespace, and class to find code that hasn't been tested, but really should be.