What are the benefits of using a code coverage tool? [closed] - unit-testing

Given that there appears to be no hard and fast rule as to what a good level of code coverage for unit tests is, what then are the benefits of using a code coverage tool such as NCover?

It is fallacious to measure software quality solely according to a code coverage percentage, as you've pointed out. But NCover allows you to examine exactly which parts of your codebase have been neglected by unit-testing. Since you should know which parts of the code are executed most frequently and which parts are most error-prone, NCover is useful for ensuring they're at least being tested.

Code coverage, as a metric, gives you two important pieces of intel:
First, it tells you what is covered by a unit test and what isn't. If you use this along with static analysis of the code, you can easily find complex code that is used often and isn't tested. Complex, frequently used code that isn't currently tested is code that you will want to add tests for.
Second, if you follow the trend of the code coverage, you can detect whether you are getting "better" at testing your code or are introducing legacy code (i.e. untested code). You may wish to have your automated build run coverage analysis and warn you if the percentage is decreasing (indicating that someone is checking in untested code).
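A minimal sketch of such a build gate, assuming a hypothetical setup in which each build writes its overall coverage percentage to a plain text file (the file names, format, and class name are made up for illustration; Java shown):

import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical CI step: fail the build if coverage drops compared to the last build.
public class CoverageGate {
    public static void main(String[] args) throws Exception {
        double previous = readPercentage(Path.of("coverage/previous.txt"));
        double current = readPercentage(Path.of("coverage/current.txt"));

        if (current < previous) {
            System.err.printf("Coverage dropped from %.1f%% to %.1f%% -- untested code was likely checked in.%n",
                    previous, current);
            System.exit(1); // a non-zero exit code fails the automated build
        }
        System.out.printf("Coverage OK: %.1f%% (previously %.1f%%)%n", current, previous);
    }

    private static double readPercentage(Path file) throws Exception {
        return Double.parseDouble(Files.readString(file).trim());
    }
}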

Even if you have an agreed level of coverage, code coverage would only tell you whether you meet that level, not whether the tests are any good. But even with its limitations, coverage (and the stats you can derive from it, such as CRAP, i.e. coverage weighed against complexity; Clover can display the same data as a tag cloud, which is very neat) is still useful for getting a rough idea of how well the code is tested and where potential bugs may be hiding.
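For reference, the CRAP score mentioned above combines a method's cyclomatic complexity with its coverage roughly as follows (a small sketch of the commonly cited formula; coverage is expressed as a fraction between 0 and 1):

// C.R.A.P. (Change Risk Anti-Patterns) score for a single method:
// high complexity combined with low coverage yields a large (bad) score.
static double crapScore(int complexity, double coverage) {
    return Math.pow(complexity, 2) * Math.pow(1.0 - coverage, 3) + complexity;
}

// e.g. crapScore(10, 0.0) == 110.0 (risky), crapScore(10, 1.0) == 10.0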
No silver bullet exists, but that does not mean you should not use every regular bullet you can find. Bind them together (code coverage with CI, trends, coverage over complexity, and maybe some code mutation) and you end up with a powerful method for quickly informing you of potential issues.

The main advantage of running a coverage tool on your test suite is to find areas of your code that are poorly tested. I often look at my coverage numbers by assembly, namespace, and class to find code that hasn't been tested, but really should be.

Related

Are there things that you can't test when applying TDD [closed]

I've been applying TDD to some new projects to get the hang of things, and I understand the basic flow: Write tests that fail, write code to pass tests, refactor if needed, repeat.
To keep the tests fast and decoupled, I abstract out things like network requests and file I/O. These are usually abstracted behind interfaces that get passed in using dependency injection.
Usually development is very smooth until the end, where I realize I need to implement these abstract interfaces. The whole point of abstracting them was to make the code easily testable, but following TDD I would need to write a test before writing the implementation code, correct?
For example, I was looking at the tdd-tetris-tutorial https://github.com/luontola/tdd-tetris-tutorial/tree/tutorial/src/test/java/tetris. If I wanted to add the ability to play with a keyboard, I would abstract away basic controls into methods inside a Controller class, like "rotateBlock", "moveLeft", etc. that could be tested.
But at the end I would need to add some logic to detect keystrokes from the keyboard when implementing a controller class. How would one write a test to implement that?
Perhaps some things just can't be tested, and reaching 100% code coverage is impossible for some cases?
Perhaps some things just can't be tested, and reaching 100% code coverage is impossible for some cases?
I use a slightly different spelling: not all things can be tested at the same level of cost effectiveness.
The "trick", so to speak, is to divide your code into two categoies: code that is easy to test, and code that is so obvious that you don't need to test it -- or not as often.
The nice thing about simple adapters is that (once you've got them working at all) they don't generally need to change very much. All of the logic lives somewhere else and that somewhere else is easy to test.
Consider, for example, reading bytes from a file. That kind of interface looks sort of like a function that accepts a filename as an argument and either returns an array of bytes or some sort of exception. Implementing that is a straightforward exercise in most languages, and the code is so textbook familiar that it falls clearly into the category of "so simple there are obviously no deficiencies".
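As an illustration (a hedged sketch in Java, not code from any particular project), such an adapter can be thin enough that there is nothing interesting left to unit test:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// The seam that the rest of the code (and the fast unit tests) depends on.
interface ByteSource {
    byte[] read(String filename) throws IOException;
}

// The "so simple there are obviously no deficiencies" adapter:
// a one-line delegation to the standard library, tested rarely if at all.
class FileByteSource implements ByteSource {
    @Override
    public byte[] read(String filename) throws IOException {
        return Files.readAllBytes(Path.of(filename));
    }
}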
Because the code is simple and stable, you don't need to test it at anywhere near the frequency that you test code you regularly refactor. Thus, the cost benefit analysis supports the conclusion that you can delegate your occasional tests of this code to more expensive techniques.
100% statement coverage was never the goal of TDD (although it is really easy to understand how you -- and a small zillion other people -- reached that conclusion). It was primarily about deferring detailed design. So to some degree code that starts simple and changes infrequently was "out of bounds" from the get go.
You can't test everything with TDD unit tests. But if you also have integration tests, you can test those I/O interfaces. You can produce integration tests using TDD.
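For instance, a thin file-reading adapter like the one sketched in the answer above could be driven by an integration test along these lines (JUnit 5 shown; all names are illustrative):

import static org.junit.jupiter.api.Assertions.assertArrayEquals;

import java.nio.file.Files;
import java.nio.file.Path;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;

class FileByteSourceIntegrationTest {

    @TempDir
    Path tempDir; // JUnit 5 creates and cleans up a temporary directory

    @Test
    void readsBackExactlyWhatWasWritten() throws Exception {
        Path file = tempDir.resolve("data.bin");
        byte[] expected = {1, 2, 3};
        Files.write(file, expected);

        byte[] actual = new FileByteSource().read(file.toString());

        assertArrayEquals(expected, actual);
    }
}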
In practice, some things are impractical to test automatically. Error handling for rare error conditions and race hazards are among the hardest.

How do I measure the benefits of unit testing? [closed]

I'm in the process of pushing my company towards having unit tests as a major part of the development cycle. I've gotten the testing framework working with our MVC framework, and multiple members of the team are now writing unit tests. I'm at the point, though, where there's work that needs to be done to improve our hourly build, the ease of figuring out what fixtures you need to use, adding functionality to the mock object generator, etc., etc., and I want to be able to make the case for this work to management. In addition, I'd like us to allocate time to write unit tests for the most critical pieces of existing code, and I just don't see that happening without a more specific case than "everyone knows unit tests are good".
How do you quantify the positive impact of (comprehensive and reliable) unit tests on your projects? I can certainly look at the number and severity of bugs filed and correlate it with our increases in code coverage, but that's a rather weak metric.
Sonar is a company that makes a very interesting code inspection tool; they actually try to measure technical debt programmatically, correlating untested code with a developer cost per hour.
Quantification of test-quality is very difficult.
I see code coverage only as guidance, not as a test-quality metric. You can literally write tests with 100% code coverage without testing anything (e.g. if no asserts are used at all). Also have a look at my blog post where I warn against metric pitfalls.
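A contrived illustration of that pitfall (all names hypothetical): the test below executes every line and branch of the method, so a coverage tool reports 100%, yet it verifies nothing because it contains no assertion:

import org.junit.jupiter.api.Test;

// Hypothetical production code under "test".
class PriceCalculator {
    double priceWithDiscount(double price, boolean discounted) {
        return discounted ? price * 0.9 : price;
    }
}

class PriceCalculatorTest {
    @Test
    void coversEverythingVerifiesNothing() {
        PriceCalculator calculator = new PriceCalculator();
        // Both branches are executed, so line and branch coverage read 100%...
        calculator.priceWithDiscount(100.0, true);
        calculator.priceWithDiscount(100.0, false);
        // ...but there is no assert, so any wrong result would still pass.
    }
}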
The only sensible quantitative metric I know of that counts for the business is reduced effort spent on bug fixes in production code, along with reduced bug severity. Still, it is very difficult to isolate unit tests as the only source of this success (it could also be an improvement in process or communication).
Generally I would focus on the qualitative approach:
Do developers feel more comfortable changing code (because tests are a trustworthy safety net)?
When bugs occur in production, does analysis show that the code at fault was untested (and, conversely, that the bug would likely not have occurred if there had been a unit test)?

Can unit tests be implemented effectively with agile development? [closed]

Soon I will be involved in a project that will be using the agile project management/development approach, with 5 (or so) 2-week sprints. The project will be using a DDD design pattern, which I have found in the past works great with unit testing, hence my enthusiasm to use it for this project as well. The only problem is that, given the following factors, I am unsure whether unit testing can be successfully implemented with agile development:
Potential for constantly changing requirements (requirements change, tests break, tests need to be updated too).
Time factor (unit tests can make development take a fair bit longer, and if requirements change towards the end of a sprint there may be too little time to update both tests and production code at the best quality).
I have a feeling that if/when requirements change (especially if towards the end of a sprint) and given the tight deadlines unit tests will become a burden. Anyone have any good advice on the matter?
I think it cuts both ways. On one hand, yes, unit tests are extra code which requires extra maintenance, and will slow you down a bit. On the other hand, if requirements start evolving, having unit tests in place will be a life saver in making sure what you are changing still works.
Unless you have unit tests with high coverage, the cost of change will grow exponentially as the project moves forward. So basically, the more change you anticipate, the MORE you will actually need your unit tests.
Secondly, good unit tests each depend on only a few small pieces of functionality in your production code. When this is true, only a few tests will be impacted when a feature changes. Basically, each test tests just one thing and one small piece of production code. The key to writing unit tests that follow this principle is to decouple your code and test in isolation.
Thirdly, you need to get a better understanding of the concept of DONE and why its definition is so important in terms of sustainable development. Basically, you can't go fast over time in a sustainable fashion if your team compromises the concept of DONE in the short term.
Considering 10+ weeks' worth of code with no test coverage makes me cringe. When will you have time to manually test all that code? And under evolving requirements, you will spend a lot more time tracking down the impact the changes will have throughout your code base.
I cannot advise strongly enough that you use unit testing. Even when doing DDD, let unit tests drive implementation. Coupled with good patterns like DI/IoC and SRP, you should find both your code base and your tests to be more resilient to change, which will save you a lot of time throughout those sprints.
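A minimal sketch of what that combination looks like (hypothetical names, Java with JUnit 5): the domain logic depends on a narrow interface injected through the constructor, so a requirement change is absorbed by adjusting one small class and its one test rather than a pile of them:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Narrow dependency, injected rather than created inside the domain class (DI/IoC).
interface ExchangeRateProvider {
    double rateFor(String currency);
}

// Single responsibility: converting amounts; it knows nothing about HTTP, files, etc.
class PriceConverter {
    private final ExchangeRateProvider rates;

    PriceConverter(ExchangeRateProvider rates) {
        this.rates = rates;
    }

    double convert(double amount, String currency) {
        return amount * rates.rateFor(currency);
    }
}

// The test exercises the logic in isolation with a trivial stub:
// no network, no slow setup, and only this test changes if the conversion rule changes.
class PriceConverterTest {
    @Test
    void convertsUsingProvidedRate() {
        PriceConverter converter = new PriceConverter(currency -> 1.5);
        assertEquals(15.0, converter.convert(10.0, "EUR"), 1e-9);
    }
}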

Finding "dead code" in a large C++ legacy application [closed]

I'm currently working on a large and old C++ application that has had many developers before me. There is a lot of "dead code" in the project, classes and functions that aren't used by anyone anymore.
What tools are available for C++ to analyze a large code base and detect dead code that can be refactored away? Note: I'm not talking about test coverage tools like gcov.
How do you find dead code in your project?
You'll want to use a static analysis tool:
StackOverflow: What open source C++ static analysis tools are available?
Wikipedia: List of tools for static code analysis
The main gotcha I've run into is that you have to be careful that none of your libraries are used from somewhere you don't control or don't have. If you delete a function from a class that is used via a library reference in your project, you can break something that you didn't know used the code.
You can use Cppcheck for this purpose:
$ cppcheck --enable=unusedFunction .
Checking 2380153.c...
1/2 files checked 0% done
Checking main.c...
2/2 files checked 0% done
[2380153.c:1]: (style) The function '2380153' is never used.
Caolán McNamara's callcatcher is very effectively used within the LibreOffice project (~6 MLOC) to find dead code.
I think your best bet would probably be a coverage tool. There are plenty for both *nix and Windows. If you have unit tests, it's easy: if you have low test coverage, then the uncovered code is either dead or not tested yet (you want both pieces of this info anyway). If you don't have unit tests, build your app with the instrumentation provided by one of those tools, run it through some (ideally all) execution paths, and look at the report. You get the same info as with unit tests; it will just require a lot more work.
Since you're using Visual Studio, here are a couple of links you could consider:
Coverage meter
BullseyeCoverage
Neither of them is free, not even cheap, but the outcome is usually worth it.
On *nix-like platforms gcov coupled with tools like zcov or lcov is a really great choice.
Nothing beats familiarity with the code. Except perhaps rigorous pruning as one goes along.
Sometimes what looks like dead wood is used as scaffolding for unit tests etc., or it appears to be alive simply because legacy unit tests exercise it, but it is never exercised outside of the tests. A short while ago I removed over 1000 LOC that supported external CAD model translators; we had tests invoking those translators, but they had been unsupported for 8+ years and there was no way that a user of the application, even if they wanted to, could ever invoke them.
Unless one is rigorous in getting rid of the dead wood, you will find your team maintaining the stuff for years.
One approach is to use the "Find All References" context menu item on class and function names. If a class or function is only referenced within itself, it is almost certainly dead code.
Another approach, based on the same idea, is to remove (comment out) files or functions from the project and see what error messages you get.
See our SD C++ Test Coverage.
You need to do a lot of dynamic testing to exercise the code, to make sure you hit the maximum amount of coverage. Code "not covered" may or may not be dead; perhaps you simply didn't have a test case to exercise it.
Although not specifically for dead code, I found the Source Navigator
http://sourcenav.berlios.de/
quite useful, although cumbersome to set up and a bit buggy. That was a year ago on Linux (Fedora).

Guidelines for better unit tests [closed]

Jimmy Bogard, wrote an article: Getting value out of your unit tests, where he gives four rules:
Test names should describe the what and the why, from the user’s perspective
Tests are code too, give them some love
Don’t settle on one fixture pattern/organizational style
One Setup, Execute and Verify per Test
In your opinion, are these guidelines complete? What are your guidelines for unit tests?
Please avoid language-specific idioms and try to keep answers language-agnostic.
There's an entire 850-page book called xUnit Test Patterns that deals with this topic, so it's not something that can easily be boiled down to a few hard rules (although the rules you mention are good).
A more digestible book that also covers this subject is The Art of Unit Testing.
If I may add the rules I find most important, they would be:
Use Test-Driven Development. It's by far the most effective road towards good unit tests. Trying to retrofit unit tests onto existing code tends to be difficult at best.
Keep it simple: Ideally, a unit test should be less than 10 lines of code. If it grows to much more than 20 lines of code, you should seriously consider refactoring either the test code, or the API you are testing.
Keep it fast. Unit test suites are meant to be executed very frequently, so aim at keeping the entire suite under 10 s. That can easily mean keeping each test under 10 ms.
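To make the first two rules concrete, here is a sketch of the size and shape to aim for, using java.util.ArrayDeque as the code under test purely for illustration (well under 10 lines, no I/O, and it runs in well under 10 ms):

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.ArrayDeque;
import org.junit.jupiter.api.Test;

class StackTest {
    @Test
    void popReturnsTheMostRecentlyPushedElement() {
        ArrayDeque<String> stack = new ArrayDeque<>();
        stack.push("first");
        stack.push("second");
        assertEquals("second", stack.pop());
    }
}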
Writing unit tests is simple; it is writing unit-testable code that is difficult.
The Way of Testivus
If you write code, write tests.
Don’t get stuck on unit testing dogma.
Embrace unit testing karma.
Think of code and test as one.
The test is more important than the unit.
The best time to test is when the code is fresh.
Tests not run waste away.
An imperfect test today is better than a perfect test someday.
An ugly test is better than no test.
Sometimes, the test justifies the means.
Only fools use no tools.
Good tests fail.
Break the code under test regularly to see how effective the unit tests are.
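An illustrative sketch of what "breaking the code" can mean in practice (hypothetical class; mutation-testing tools automate this kind of check): deliberately flip an operator, re-run the suite, and expect at least one test to fail.

class EligibilityRules {
    // Original production code:
    boolean isEligible(int age) {
        return age >= 18;
    }

    // The same method with a deliberate, temporary mutation (boundary flipped).
    // If the whole test suite still passes with this version in place,
    // no test exercises the 18-year-old boundary case.
    boolean isEligibleTemporarilyBroken(int age) {
        return age > 18;
    }
}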
Take a look at the code coverage of your tests, and try to make it reasonably complete (for error cases I'd use some discretion about whether to test them or not).
For many more good ideas on writing unit tests, search stackoverflow.com.