Sonar Kotlin Object 0% coverage - unit-testing

In my project I created an object to hold some constants that are used across the project's modules, for example success/error messages. But when I run the Sonar scan, it reports the file with 0% coverage.
I tried testing the constants' values to check whether Sonar would see that as covered, but it didn't. It actually shows the object Messages {} declaration itself as not covered, not the lines inside.
How can I cover it with tests so Sonar won't report it as 0% covered?

The simplest way to fix it was to create a test that checks that the object is not null.
Using Kotlin and JUnit:
val obj = Messages
assertNotNull(obj)
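For a slightly fuller version, a minimal sketch could look like the following. The contents of Messages are made up here and JUnit 4 imports are assumed; the point is simply that referencing the object runs its initialization, so the coverage agent can record the declaration as executed.
import org.junit.Assert.assertEquals
import org.junit.Assert.assertNotNull
import org.junit.Test

// Hypothetical constants object; in the real project this lives in the main source set.
object Messages {
    const val SUCCESS = "Operation completed successfully"
    const val ERROR = "Something went wrong"
}

class MessagesTest {

    // Referencing the object forces its initialization, so the coverage agent
    // should record the `object Messages` declaration as executed.
    @Test
    fun objectIsInitialized() {
        assertNotNull(Messages)
    }

    // Checking the constant values documents them; note that `const val`s are
    // inlined at call sites, so these asserts mainly guard against accidental edits.
    @Test
    fun constantsHoldExpectedValues() {
        assertEquals("Operation completed successfully", Messages.SUCCESS)
        assertEquals("Something went wrong", Messages.ERROR)
    }
}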

Related

(Google Test) Automatically retry a test if it failed the first time

Our team uses Google Test for automated testing. Most of our tests pass consistently, but a few seem to fail ~5% of the time due to race conditions, network time-outs, etc.
We would like the ability to mark certain tests as "flaky". A flaky test would be automatically re-run if it fails the first time, and will only fail the test suite if it fails both times.
Is this something Google Test offers out-of-the-box? If not, is it something that can be built on top of Google Test?
You have several options:
Use --gtest_repeat for the test executable:
The --gtest_repeat flag allows you to repeat all (or selected) test methods in a program many times. Hopefully, a flaky test will eventually fail and give you a chance to debug.
You can mimic tagging your tests by adding "flaky" somewhere in their names and then use the --gtest_filter option to repeat only those tests. Below are some examples from the Google documentation:
$ foo_test --gtest_repeat=1000
Repeat foo_test 1000 times and don't stop at failures.
$ foo_test --gtest_repeat=-1
A negative count means repeating forever.
$ foo_test --gtest_repeat=1000 --gtest_break_on_failure
Repeat foo_test 1000 times, stopping at the first failure. This
is especially useful when running under a debugger: when the test
fails, it will drop into the debugger and you can then inspect
variables and stacks.
$ foo_test --gtest_repeat=1000 --gtest_filter=Flaky.*
Repeat the tests whose name matches the filter 1000 times.
See here for more info.
Use Bazel to build and run your tests:
Rather than tagging your tests in the test files, you can tag them in the Bazel BUILD files.
You can tag each test individually using the cc_test rule.
You can also define a set of tests (using test_suite) in the BUILD file and tag them together (e.g. "small", "large", "flaky", etc.). See here for an example.
Once you tag your tests, you can use simple commands like this:
% bazel test --test_tag_filters=performance,stress,-flaky //myproject:all
The above command runs all tests in myproject that are tagged performance or stress and are not tagged flaky.
See here for documentation.
Using Bazel is probably cleaner because you don't have to modify your test files, and you can quickly change your test tags if things change.
See this repo and this video for examples of running tests using Bazel.
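If what you want is specifically the automatic re-run behaviour from the question, Bazel can also do the retrying for you: setting flaky = True on a test rule (or passing the --flaky_test_attempts flag on the command line) makes Bazel re-run a failing test and only report it as failed if every attempt fails. For example (the target label here is just a placeholder):
% bazel test --flaky_test_attempts=2 //myproject:my_flaky_test
With flaky = True on the rule itself, Bazel retries a failing test up to three times by default and marks it as FLAKY if it eventually passes.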

Count each subtest for "Failed %" in MSTest's trx file

We are running an automated test where each file counts as its own test. We are using the DynamicData attribute to provide the files that need to be tested. Each file gets tested; however, in the TRX file they are logged essentially as sub-tests, and if any one of them fails, the whole bucket counts as failing. This gives us inaccurate numbers for how many files actually failed or passed when we publish our test results in the Azure pipeline (because if one fails, the entire thing gets marked as failing). Is there a way to mark these sub-tests as actual tests so that the counting is done accurately?
Here you can see that 2 of the sub-tests actually passed, but that is not reflected in the results in the header (the reason it says 0/4 instead of 0/1 is that there are 3 other similar test "buckets" that have passing and failing tests but are also being marked as all failing).

SonarQube: see details of failed tests

In SonarQube 5.6.6, I can see on http://example.com/component_measures/metric/test_failures/list?id=myproject that my unit test results were successfully imported. This is indicated by
Unit Test Failures: 1
which I produced by a fake failing test.
I also see the filename of the failing test class in a long list, and I see the number of failed tests (again: 1).
But I can't find any more information: which method failed, the stack trace, stdout/stderr, in short everything that is also included in the build/reports/test/index.html files generated by Gradle. Clicking the list entry takes me to the code and coverage view, but I can't find any indicator of which test failed.
Am I doing something wrong in the frontend, is it a configuration problem, or am I looking for a feature which doesn't exist in SonarQube?
This is how it looks currently:
http://example.com/component_measures/domain/Coverage: Here I see that one test failed:
http://example.com/component_measures/metric/test_success_density/list: I can see which file it is:
But clicking on the line above only points me to the source file. Below it is the test which "failed": there is no indication that this test failed, and I can't find any way to see the stack trace or the name of the failed test method.
By the way: the page in the first screenshot shows information about unit tests. But if the failing test is an integration test, I don't even see these numbers.
Update
Something like this is probably what I'm looking for:
(found on https://deors.wordpress.com/2014/07/04/individual-test-coverage-sonarqube-jacoco/)
I never saw such a view on my installation, and I don't know how to get it or whether it is implemented in the current version.
Unfortunately, "Test execution details" is a deprecated feature as of SonarQube 5.6.
If you install an older version such as SonarQube 4.x, you will get the following screen, which provides test case result details.
But this screen itself has been removed.
Ref # https://jira.sonarsource.com/browse/SONARCS-657
Basically, the issue is that the unit test case details report requires links back to the source code files, but now the unit test cases are only linked to assemblies.

Code coverage reporting "code run", but not "code covered" by separate-file unit tests

Disclaimer: beginner question!
My project structure, highly simplified for sake of the question, looks like this:
Project/
|-- main.py
|-- test_main.py
After reading Jeff Knupp's blog post on unit testing and writing an assortment of tests, I wanted to see how much of my code was now covered by tests. So I installed coverage.py, and the following confuses me:
$ coverage run main.py (shows me the prints/logging from the script)
$ coverage report main.py
Name      Stmts   Miss  Cover
main.py     114     28    75%
The thing is, I don't run unit tests from within the main script, nor did I think I should. I manually run all tests from test_main.py before a commit and know for a fact that they do not cover 75% of all my statements. After reading the coverage documentation, I am doubting my unit test implementation ... do I need to trigger the tests from main.py?
So I tried the same on my test script:
$ coverage run test_main.py (shows me an 'OK' test run for all tests)
$ coverage report test_main.py
Name           Stmts   Miss  Cover
test_main.py       8      0   100%
But this is simply showing me that I've "hit" 100% of my code in the test statements during execution of that script. So why is coverage listed under "increase test coverage" if it simply displays what code has been executed?
I would really like to see how much of my main.py is covered by test_main.py and am pretty sure I am missing some basic concept. Could someone elaborate?
On my Ubuntu machine, running "coverage run test_main.py; coverage report" only gives me a report on test_main.py. On my Windows machine it gives:
Name           Stmts   Miss  Cover
main.py          114     74    35%
test_main.py       8      0   100%
TOTAL            122     74    39%
The coverage report still doesn't make sense to me:
test_main covers 9 out of 134 lines of code and 1 out of 10 functions in main, so coverage is not 35%.
Why is it reporting the coverage of test_main? These are the tests, and it would be weird if this wasn't 100%, since I'm running all the tests to see my coverage.
Either I am doing something wrong here, or this way of looking at it is bollocks: calculating an average "coverage" while summing the tests together with the code itself offers no insight and, in my beginner opinion, is wrong.
To answer and close my own question: even though I still don't agree with quite a bit of the coverage logic, the 35% is accurate. Thank you #Ned for pointing out that lines are counted when they are merely imported. The count also includes the top-level file description, the argparser and the main reference to the function, which leads up to this percentage: 40 out of 114 lines of code in total, even though the function which I import directly is only 9 lines of code.
I don't really like this way of reporting, since I don't use all the imports in the test statements and the argparser is untouched, yet it still says these are "covered". This is mostly a semantic discussion: I would say these lines are "seen" or "passed", but not actually "covered by tests".
I also made another coverage run with only a different filename, test_main_2.py, testing the same function in exactly the same manner, resulting in a (35+100+100)/3 = 78% coverage average instead of the previous (35+100)/2 = 68% coverage.
But I do understand how it counts the coverage (average) now, which allows me to interpret the numbers more correctly. Maybe this can help another early beginner interpret his or her own first results.
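One extra detail that may help when reading these reports: the TOTAL row is weighted by statement counts rather than being a simple per-file average (in the Windows report above, 122 - 74 = 48 statements hit out of 122 gives the 39%). And if you only care about how much of main.py the tests exercise, you can limit the report to that file. A typical invocation, assuming the tests are written with unittest, would be:
$ coverage run -m unittest test_main
$ coverage report -m --include=main.py
Here -m (--show-missing) lists the line numbers that were never executed, which makes it easy to see which parts of main.py the tests in test_main.py never touch.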

Mark unstable jenkins builds as failed

I'm using the MultiJob plugin to split my job into phases; as a result, the "father" job gets the end result of the worst "child" job.
My problem is that the JUnit tests are marked as UNSTABLE if tests fail, resulting in a "yellow" dot. I need them to be marked as FAILURE, resulting in a "red" dot.
I tried several approaches in order to achieve this goal, including using
Failure Cause Management, looking for the following regexp: .+[jsystem].+\bFAILED\b
and also setting the Health report amplification factor to 100, which is supposed to make 1% failing tests score as 0% health (and 5% failing tests also score as 0% health).
None of the above seems to help.
Thanks in advance
Use the Text Finder Plugin. Add a post-build step and search for something like "setting the build status to unstable", or whatever your Jenkins prints when a build becomes yellow. If the regexp matches, set the build to failure.