I am having a hard time running code coverage, since most of the tools (including the Visual Studio one) require unit tests.
I don't understand why I need to create unit tests. Why can't I simply run code coverage against my own console exe application?
Just press F5 and get the report, without putting effort into creating unit tests or whatever.
Thanks
In general, with a good test coverage tool, coverage data is collected for whatever causes the code to execute. You should be able to exercise the code by hand and get coverage data, and/or execute the code via a test case and get coverage data.
There's a question of how you track the coverage information back to a test. Most coverage tools just collect coverage data; some other mechanism is needed to associate that coverage data with a specific test or set of tests. [Note that none of this discussion cares whether your tests pass; it's up to you whether you want to track coverage data for failed tests or not. This information is often useful when trying to find out why a test failed.]
You can associate coverage data with tests by convention; Semantic Designs' test coverage tools (from my company) do this. After setting up the coverage collection, you run any set of tests, by any means you desire. Each program execution produces a test coverage vector with a date stamp. You can combine the coverage vectors for such a set into a single set (the UI helps you do this, or you can do it in a batch script by invoking the UI as a command-line tool), and then you associate the set of tests you ran with the combined coverage data. Some people associate individual tests with the coverage data collected by that individual test execution. (Doing this allows you to later discover whether one test covers everything another test covers, implying you don't need the second one.)
You can be more organized: you can configure your automated tests to each exercise the code and automatically store the generated vector away in a place unique to the test. This is just a matter of adding a bit of scripting to each automated test. Some test coverage tools come with unit test mechanisms that have a way to do this. (Our tools don't insist on a specific test framework, so they work with any framework.)
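As a rough sketch of what that scripting might look like (the run_one_test command and the coverage.vec file name below are hypothetical placeholders, not the actual interface of any particular coverage tool):

#!/bin/sh
# Hypothetical wrapper: run one automated test, then file its coverage vector
# away in a directory named after the test, stamped with the run time.
TEST_NAME="$1"
./run_one_test "$TEST_NAME"          # placeholder test runner
mkdir -p "coverage-vectors/$TEST_NAME"
mv coverage.vec "coverage-vectors/$TEST_NAME/$(date +%Y%m%d-%H%M%S).vec"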
Is there a way to execute only those tests which are affected by recent changes in Go? We have a large unit test suite, and it is starting to take a while to finish. We are thinking of running, in a first pass, only those tests which are affected by the code changes.
Python has something like this: https://github.com/tarpas/pytest-testmon
Is there a way to do this in Go?
No, there is no way to do it in Go. All you can do is split your code into packages and test one package at a time:
go test some/thing
instead of all of them:
go test ./...
go test in Go 1.10 and newer does this automatically at the package level; any packages with no changes will return cached test results, while packages with changes will be re-tested.
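For example (the exact output may vary by Go version, but the -count flag is standard):

go test ./...            # packages with no changes report a cached result instead of re-running
go test -count=1 ./...   # force every package's tests to re-run, bypassing the cache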
If a single package's tests are still taking too long, that points to a problem with your tests; good tests in Go generally execute extremely quickly, which means you probably need to review the tests themselves, and do some combination of the following:
Isolate integration tests using build tags. Tests that hit external resources tend to be slower, so making them optional will help speed up runs where you just want unit test results.
Make use of short tests so that you have the option of a quick pass you can do more frequently (see the example commands after this list).
Review your unit tests - do you have unnecessary tests or test cases? Are your tests unnecessarily complex? Are you reading golden files that could be kept in constants instead? Are you deserializing static JSON into objects when you could create the object programmatically?
Optimize your unit tests. Tests are still code and poor-performing code can be optimized for performance. There are many cases in unit tests where we're happy to opt for convenience over performance in ways we wouldn't with production code, but if performance is a problem, that choice must be reconsidered.
Review your test execution - are you using uncacheable parameters to go test that are preventing it from caching results? Are you engaging the race detector, profiler, or code coverage reporting out of habit in cases where it's unnecessary?
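As a sketch of how the first two suggestions combine (this assumes you guard slow test files with a build tag named integration, which is a project convention rather than anything built into the toolchain):

go test -short ./...              # quick pass: tests can check testing.Short() and skip slow cases
go test -tags=integration ./...   # full pass: also builds files guarded by a //go:build integration constraint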
Nabaz may be what you are looking for.
The example from their README.md is:
export CMDLINE="go test"
export PKGS="./..." # IMPORTANT: make sure packages are written SEPARATELY
nabaz test --cmdline $CMDLINE --pkgs $PKGS .
You cannot rerun tests only for the most recently edited files, but there are a few ways of optimizing test runs.
Firstly, split your project into logically separated packages. In most cases this means one change will only require rerunning the tests in that package.
Secondly, you can run the tests only for the package you're changing by typing
go test mypkg
or... you can use build tags. The last way of optimizing is to use the short test functionality.
For Java, I know it is possible to merge test coverage results at build level by specifying the same path for the JaCoCo reports (see SonarQube: Multiple unit test and code coverage result files), which can then be transported to SonarQube.
But is it possible to do this at the SonarQube level?
I mean: different build servers or different jobs build and test the software, and the coverage results are combined on the SonarQube side (perhaps by marking the software version or some kind of label)?
For me it would be useful to combine integration and unit tests.
You can combine the results of multiple jobs. You can create two coverage folders, e.g.
- coverage-unit
- coverage-integration
and use the resulting lcov files, e.g.
sonar.javascript.lcov.reportPaths=coverage-unit/lcov.info,coverage-integration/lcov.info
Currently it is not possible to "amend" coverage to an existing analysis. You have to orchestrate your build pipeline so that all kinds of coverage reports are produced before you actually start the SonarQube analysis.
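For example, a single pipeline stage could be orchestrated roughly like this (the two test commands are placeholders for whatever actually produces your lcov files; only the report paths and the final sonar-scanner invocation matter here):

npm run test:unit           # placeholder - assumed to write coverage-unit/lcov.info
npm run test:integration    # placeholder - assumed to write coverage-integration/lcov.info
sonar-scanner -Dsonar.javascript.lcov.reportPaths=coverage-unit/lcov.info,coverage-integration/lcov.info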
I have started using the Google Test unit testing tools, which I am building into a CI pipeline. Is there a code coverage tool that runs in the shell, lets me set thresholds, and can be added as a job in the pipeline?
For reference I come from a NodeJS background and use a pipeline as follows:
linter (eslint)
unit tests (jasmine)
code coverage (istanbul coverage && istanbul check-coverage)
The bit I'm struggling with is the third step. In NodeJS I can set the acceptable thresholds, and the job fails if these are not met.
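(For context, the threshold check in that Node pipeline is essentially a single command; the numbers here are just example values:)

istanbul check-coverage --statements 90 --branches 80 --functions 90 --lines 90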
I was hoping to replicate this for my C++ code. Is this even possible?
Code coverage is not linked to the test framework you use.
With C++ on Linux, you have to compile your software with special flags to enable code coverage, e.g. with g++ you have to pass the --coverage argument (disabling all optimisations is also recommended).
When you then run the test programs, you will get a lot of files containing the coverage data. These can then be collected and evaluated by, for example, lcov.
lcov can create HTML pages with the result, but it also prints the totals of the coverage analysis to stdout. So you would have to build a script that runs lcov, filters the output, and reports success or failure depending on the percentage measured.
Btw, you can set limits for lcov to define when the coverage is sufficient or not, but this is only used for the background color in the HTML output.
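A minimal sketch of such a script, assuming the tests were built with --coverage and already executed, and assuming lcov's --summary output contains a line of the form "lines......: 87.5% (...)" (the 80% threshold is just an example):

#!/bin/sh
set -e
# Collect the .gcda/.gcno data produced by the instrumented test run.
lcov --capture --directory . --output-file coverage.info
# Parse the line-coverage percentage out of the summary (output format assumed as noted above).
PCT=$(lcov --summary coverage.info 2>&1 | awk '/lines\.+:/ { sub(/%.*/, "", $2); print $2 }')
echo "line coverage: ${PCT}%"
# Fail the job if coverage is below the threshold.
awk -v p="$PCT" -v t=80 'BEGIN { exit (p + 0 >= t + 0) ? 0 : 1 }' || {
  echo "line coverage ${PCT}% is below the 80% threshold" >&2
  exit 1
}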
On each of these topics you'll find multiple entries here on Stack Overflow describing how these tasks can be accomplished.
The other day we had a hard discussion between different developers and project leads, about code coverage tools and the use of the corresponding reports.
Do you use code coverage in your projects, and if not, why not?
Is code coverage a fixed part of your builds or continuous integration, or do you just use it from time to time?
How do you deal with the numbers derived from the reports?
We use code coverage to verify that we aren't missing big parts in our testing efforts. Once a milestone or so we run a full coverage report and spend a few days analyzing the results, adding test coverage for areas we missed.
We don't run it every build because I don't know that we would analyze it on a regular enough basis to justify that.
We analyze the reports for large blocks of unhit code; we've found this to be the most efficient use. In the past we would try to hit a particular code coverage target, but beyond a certain point the returns diminish sharply. Instead, it's better to use code coverage as a tool to make sure you didn't forget anything.
1) Yes we do use code coverage
2) Yes it is part of the CI build (why wouldn't it be?)
3) The important part: we don't look for 100% coverage. What we do look for is buggy/complex code; that's easy to identify from your unit tests, and the devs/leads will know the delicate parts of the system. We make sure the coverage of such code areas is good and increases over time, rather than decreasing as people hack in more fixes without the requisite tests.
Code coverage tells you how big your "bug catching" net is, but it doesn't tell you how big the holes are in your net.
Use it as an indicator to gauge your testing efforts but not as an absolute metric.
It is possible to write code that will give you 100% coverage and does not test anything at all.
The way to look at code coverage is to see how much is NOT covered and find out why it is not covered. Code coverage simply tells us which lines of code are being hit when the unit tests run; it does not tell us whether the code works correctly. 100% code coverage is a good number, but in medium/large projects it is very hard to achieve.
I like to measure code coverage on any non-trivial project. As has been mentioned, try not to get too caught up in achieving an arbitrary/magical percentage. There are better metrics, such as riskiness based on complexity, coverage by package/namespace, etc.
Take a look at this sample Clover dashboard for similar ideas.
We run it in a build and check that it does not drop below some value, like 85%.
I also automatically generate a "Top 10 largest uncovered methods" list, to know where to start covering.
Many teams switching to Agile/XP use code coverage as an indirect way of gauging the ROI of their test automation efforts.
I think of it as an experiment - there's an hypothesis that "if we start writing unit tests, our code coverage will improve" - and it makes sense to collect the corresponding observation automatically, via CI, report it in a graph etc.
You use the results to detect rough spots: if the trend toward more coverage levels off at some point, for instance, you might stop to ask what's going on. Perhaps the team has trouble writing tests that are relevant.
We use code coverage to assure that we have no major holes in our tests, and it's run nightly in our CI.
Since we also have a full set of Selenium web tests that run all the way through the stack, we do an additional coverage trick:
We set up the web-application with coverage running. Then we run the full automated test battery of selenium tests. Some of these are smoke tests only.
When the full suite of tests has been run, we can identify suspected dead code simply by looking at the coverage and inspecting code. This is really nice when working on large projects, because you can have big branches of dead code after some time.
We don't really have any fixed metrics on how often we do this, but it's all set up to run with a keypress or two.
We do use code coverage; it is integrated into our nightly build. There are several tools to analyze the coverage data; commonly they report:
statement coverage
branch coverage
MC/DC coverage
We expect to reach 90%+ statement and branch coverage. MC/DC coverage, on the other hand, gives the test team a broader picture. For the uncovered code, we expect justification records, by the way.
I find it depends on the code itself. I won't repeat Joel's statements from SO podcast #38, but the upshot is 'try to be pragmatic'.
Code coverage is great in core elements of the app.
I look at the code as a dependency tree. If the leaves work (e.g. basic UI, or code calling a unit-tested DAL) and I've tested them when I developed or updated them, there is a large chance they will work; and if there's a bug, it won't be difficult to find or fix, so the time taken to mock up some tests would probably be time wasted. Yes, there is a risk that updates to the code they depend on may affect them, but again, it's a case-by-case thing, and the unit tests for the code they depend on should cover it.
When it comes to the trunk or branches of the code, yes, code coverage of functionality (as opposed to each function) is very important.
For example, I recently was on a team that built an app that required a bundle of calculations to calculate carbon emissions. I wrote a suite of tests that tested each and every calculation, and in doing so was happy to see that the dependency injection pattern was working fine.
Inevitably, due to a government act change, we had to add a parameter to the equations, and all 100+ tests broke.
I realised that to update them, over and above testing for a typo (which I could test once), I would be unit/regression testing mathematics, and I ended up spending the time building another area of the app instead.
1) Yes, we do measure simple node coverage, because:
it is easy to do with our current project* (Rails web app)
it encourages our developers to write tests (some come from backgrounds where testing was ad-hoc)
2) Code coverage is part of our continuous integration process.
3) The numbers from the reports are used to:
enforce a minimum level of coverage (95% otherwise the build fails)
find sections of code which should be tested
There are parts of the system where testing is not all that helpful (usually where you need to make use of mock-objects to deal with external systems). But generally having good coverage makes it easier to maintain a project. One knows that fixes or new features do not break existing functionality.
*Details for setting up required coverage for Rails: Min Limit 95 Ahead