How to prevent coverage.py in Django from resetting coverage between runs? - django

Searched the docs, but couldn't find a way to do this. I've been running my test suite with the following command:
coverage run manage.py test tests
This will run all tests in the 'tests' folder. Following this, to measure coverage I use the report command:
coverage report -m
The issue is that this measurement is completely reset between runs. So let's say I run all of my tests in the suite and achieve 85% coverage. If I then run/re-run an individual test case/test method, the coverage measurement is reset, so the report will only show coverage for the particular test case/test method that was last run.
Per my usage, the only way to get an up-to-date coverage measurement is to re-run all test cases (this takes a long time). Is there a way to have the coverage measurement store previous results, and only modify coverage for results of subsequently run tests?

From the docs:
By default, each run of your program starts with an empty data set. If you need to run your program multiple times to get complete data (for example, because you need to supply disjoint options), you can accumulate data across runs with the -a flag on the run command.
-a can also be --append.
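In your case that would look something like this (tests.SomeTestCase is just a placeholder for whichever individual test you re-run):
coverage run manage.py test tests                         # full run, starts a fresh data set
coverage run --append manage.py test tests.SomeTestCase   # re-run one test, data is added to the existing set
coverage report -m                                        # the report now reflects both runs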

Related

What is the difference between test cases coverage and just a console application coverage?

I am having a hard time running code coverage, since most of the tools (including the Visual Studio one) require unit tests.
I don't understand why I need to create unit tests. Why can't I simply run code coverage with my own console exe application?
Just press F5 and get the report, without putting effort into creating unit tests or whatever.
Thanks
In general, with a good test coverage tool, coverage data is collected for whatever causes execution of the code. You should be able to exercise the code by hand and get coverage data, and/or execute the code via a test case and get coverage data.
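For example, coverage.py from the main question can measure an ordinary program run with no test framework involved at all (myprogram.py and its argument are just placeholders):
coverage run myprogram.py --some-arg   # exercise the code by hand, no unit tests involved
coverage report -m                     # coverage is reported for whatever actually executed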
There's a question of how you track the coverage information back to a test. Most coverage tools just collect coverage data; some other mechanism is needed to associate that coverage data with a specific test or set of tests. [Note that none of this discussion cares whether your tests pass. It's up to you whether you want to track coverage data for failed tests or not; this information is often useful when trying to find out why a test failed.]
You can associate coverage data with tests by convention; Semantic Designs' test coverage tools (from my company) do this. After setting up the coverage collection, you run any set of tests, by any means you desire. Each program execution produces a test coverage vector with a date stamp. You can combine the coverage vectors for such a set into a single set (the UI helps you do this, or you can do it in a batch script by invoking the UI as a command-line tool), and then associate the set of tests you ran with the combined coverage data. Some people associate individual tests with the coverage data collected by that test's execution. (Doing this lets you later discover whether one test covers everything another test covers, implying you don't need the second one.)
You can be more organized: you can configure your automated tests to each exercise the code and automatically store the generated vector in a place unique to the test. This is just a matter of adding a bit of scripting to each automated test. Some test coverage tools come with unit test mechanisms that have a way to do this. (Our tools don't insist on a specific test framework, so they work with any framework.)

Difference between Tests and Coverage steps on CI Servers

I've just started playing with setting thresholds when I run my coverage, trying to force our team to adhere to dedicated threshold standards.
My question is this: is there any need for separate tests and coverage steps? To me it looks like they are doing exactly the same thing. I was thinking of merging those two steps into one tests-coverage step; does that make sense?
One reason for running tests and coverage separately is that measuring coverage requires changing the program to support collecting coverage information.
In Java, both Jacoco and Cobertura will modify the bytecode of the class files to add instructions to record coverage. In C++, to use GCov to measure coverage you compile the binaries with different flags from those used to create release binaries.
Therefore, it makes sense to run the tests against the release artifacts to gain confidence that the release artifacts are behaving correctly. Then to measure coverage in a separate run against the instrumented artifacts.
It is, of course, possible to assume that the coverage-enabled artifacts will be functionally equivalent to the release artifacts, in which case running the tests twice is not required. This comes down to your (and your company's) attitude to risk: you can decide to run the tests twice (with and without coverage) or once with coverage enabled.
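As a rough sketch of the two-run approach with g++ and GCov (the file names and the run_tests.sh driver are placeholders):
# step 1: build the release artifact and run the tests against it
g++ -O2 -o app_release main.cpp
./run_tests.sh app_release
# step 2: rebuild with instrumentation and run the same tests to collect coverage
g++ -O0 --coverage -o app_instrumented main.cpp
./run_tests.sh app_instrumented
gcov main.cpp        # or gather the .gcda/.gcno files with lcov/gcovr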

How can I get C++ code coverage from a Google Test suite in the terminal?

I have started using the Google Test unit testing tools which I am building into a CI pipeline. Is there a code coverage tool that runs in the shell and would allow me to set thresholds and add as a job into the pipeline?
For reference I come from a NodeJS background and use a pipeline as follows:
linter (eslint)
unit tests (jasmine)
code coverage (istanbul coverage && istanbul check-coverage)
The bit I'm struggling with is the third step. In NodeJS I can set the acceptable thresholds, and the job fails if these are not met.
I was hoping to replicate this for my C++ code. Is this even possible?
Code coverage is not linked to the test framework you use.
With C++ on Linux, you have to compile your software with special flags to enable code coverage, e.g. with g++ you have to pass the --coverage flag (and disabling all optimisations is also recommended).
When you then run the test programs, you will get a lot of files with the coverage data in them. These can then be collected and evaluated by e.g. lcov.
lcov can create HTML pages with the result, but it also prints the totals of the coverage analysis to stdout. So you would have to build a script that runs lcov, filters the output, and reports success or failure depending on the percentage measured.
Btw, you can set limits for lcov to define when the coverage is sufficient or not, but this is only used for the background color in the HTML output.
You'll find multiple entries here on Stack Overflow describing how each of these tasks can be accomplished.
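A rough sketch of such a script, assuming a Google Test binary already compiled with --coverage (the unit_tests binary name and the 80% threshold are placeholders, and the summary parsing may need adjusting for your lcov version):
./unit_tests                                               # run the instrumented test binary
lcov --capture --directory . --output-file coverage.info   # collect the .gcda/.gcno data into a tracefile
genhtml coverage.info --output-directory coverage_html     # optional HTML report
# extract the line-coverage percentage from the summary and fail below the threshold
PCT=$(lcov --summary coverage.info 2>&1 | awk '/lines/ {gsub("%", "", $2); print $2}')
awk -v p="$PCT" 'BEGIN { exit (p < 80.0) ? 1 : 0 }'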

Golang - Effective test of multiple packages

I want to execute all tests from my application; right now I do it with the command:
go test ./app/...
Unfortunately it takes quite a long time, even though individual tests run quite fast. I think the problem is that go needs to compile every package (with its dependencies) before it runs the tests.
I tried to use the -i flag; it helps a bit, but I'm still not satisfied with the testing time.
go test -i ./app/...
go test ./app/...
Do you have any better ideas for how to efficiently test multiple packages?
This is the nature of go test: it builds a special runtime with additional code to execute (this is how it tracks code coverage).
If it isn't fast enough, you have two options:
1) use bash tooling to compile a list of packages (e.g. using ls), and then execute them each individually in parallel. There are many ways to do this in bash.
The problem with this approach is that the output will be interleaved, making failures difficult to track down.
2) use t.Parallel() in each of your tests to allow the test runtime to execute them in parallel. Since Go 1.5, go test runs with GOMAXPROCS set to the number of cores on your CPU, which allows tests to run concurrently. Tests are still run sequentially by default; you have to call t.Parallel() in each test, telling the runtime it is OK to execute that test in parallel.
The problem with this approach is that it assumes you followed best practices and have used SoC/decoupling, don't have global state that mutates in the middle of another test, have no mutex locks (or very few of them), no race-condition issues (use -race), etc.
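A minimal sketch of option 2 (the package and test names are made up):
package app_test

import "testing"

// Each test that calls t.Parallel() is paused, then resumed and run
// concurrently with the other parallel tests in the package.
func TestAlpha(t *testing.T) {
    t.Parallel()
    // ... assertions ...
}

func TestBeta(t *testing.T) {
    t.Parallel()
    // ... assertions ...
}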
--
Opinion: Personally, I set up my IDE to run gofmt and go test -cover -short on every save. That way, my code is always formatted and my tests are run, only within the package I am in, telling me if something failed. The -cover flag works with my IDE to show me which lines of code have been tested versus not tested. The -short flag allows me to write tests that I know will take a while to run; within those tests I can check testing.Short() to see if I should t.Skip() that test. There should be packages available for your favorite IDE to set this up (I did it in Sublime, VIM and now Atom).
That way, I have instant feedback within the package I'm editing.
Before I commit the code, I then run all tests across all packages. Or, I can just have the C.I. server do it.
Alternatively, you can make use of the -short flag and build tags (e.g. go test -tags integration) to refactor your tests to separate your Unit tests from Integration tests. This is how I write my tests:
tests that are fast and can run in parallel <- I make these run by default with go test and go test -short.
slow tests, or tests that require external components, require additional input to run, e.g. go test -tags integration. This pattern does not run the integration tests with a plain go test; you have to specify the additional tag (see the sketch below). I don't run the integration tests across the board either. That's what my CI servers are for.
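A sketch of the build-tag pattern for the second group (file, package and test names are illustrative):
// integration_test.go
// +build integration

package app_test

import "testing"

// This file is only compiled and run when the tag is supplied:
//   go test -tags integration ./...
func TestWithRealDatabase(t *testing.T) {
    // ... exercise the external components here ...
}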
If you follow a consistent naming scheme for your tests, you can easily reduce the number of them you execute by using the -run flag.
Quoting from go help testflag:
-run regexp
Run only those tests and examples matching the regular expression.
So let's say you have 3 packages: foo, bar and qux. Tests of those packages are all named like TestFoo.., TestBar.. and TestQux.. respectively.
You could do go test -run '^Test(Foo|Bar)' ./... to run only the tests from the foo and bar packages.

Generate new code coverage for a single file without clearing all other coverage reports in PHPUnit

First the question:
In PHPUnit 3.5, is there a way to generate a coverage report for a single test without the report for the entire test suite being overwritten, i.e. only updating the coverage report for the affected files? I still want the output to go to the same folder.
For those that want a bit of background:
Working with PHPUnit 3.5, I have a project which retroactively needs to be covered with unit tests. In order to know which classes still need tests, I run the entire test suite and generate an HTML coverage report from it. Because running the complete suite takes some time, I would like to avoid having to run it every time I want to check which tests still need to be implemented. But at the same time I also want the coverage report for the unit test that I'm currently working on, so that I can make sure I'm executing each line of code in a class (this is of course a very fast back and forth, so it makes no sense to run the entire suite just to generate this report). I can generate the report for a single test, and I can generate it for the entire suite. But what I'm looking for is a hybrid, which would allow me to first generate a report for the entire suite and then just update the report with coverage information for the test I'm currently working on.
I've set up a ruby script which will simply run the test for the current file I'm working on and generate a coverage report on that file. But working like that, it always resets the coverage report for all other files also, even if the test did not execute anything in those classes.
Any ideas?
This isn't possible natively, but if you can figure out how to regenerate the HTML from the XML coverage data files, you could modify your script to:
1) Copy the coverage XML for a full run to a staging area.
2) When running a single test, copy the new XML files over to that area. Note that this will not merge coverage from two tests that cover the same class, but I'm guessing from your description that you're covering a single class from each test and vice versa.
3) Rebuild the HTML from the XML. You might be able to figure out how to do this by looking at the source, but I doubt it's possible natively either.
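A rough sketch of steps 1 and 2 as shell commands (all paths are hypothetical; step 3, regenerating the HTML, is the part with no documented support in PHPUnit 3.5):
cp -r build/coverage-xml/ coverage-staging/          # 1) keep the XML from a full suite run as a baseline
cp build/coverage-xml/MyClass.xml coverage-staging/  # 2) overlay the fresh XML after re-running a single test
# 3) rebuild the HTML report from coverage-staging/ (no built-in command for this step)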