How to ignore lombok.@UtilityClass for Jacoco?

I have used lombok.@UtilityClass to:
generate a private constructor
make the class final
make all fields in the utility class static
And Jacoco does not cover Lombok-generated code; if I explicitly define a private constructor myself, Jacoco can recognize it.
So, is there any way to avoid the coverage penalty caused by using @UtilityClass?

Well, it turns out this can be solved the same way we ignore any other Lombok-generated code: add a lombok.config file in the project root with these lines:
# this is the root dir; don't search for a parent
config.stopBubbling = true
# add @Generated so Jacoco will detect Lombok-generated code and ignore it in reports
lombok.addLombokGeneratedAnnotation = true

Related

C++ Google test - exporting additional information like 'author' and 'project' to XML report

I am using the output parameter of google test like
--gtest_output=xml:Reports\fooReport.xml
However, some XML attributes like author and project won't be exported, even though the tool gtest2html might expect them.
Is there a property like 'author' available in gtest at all? If so, where would I assign it?
Explanation
In GoogleTest, the fields that you see in gtest2html like 'author', 'project', etc. are not available in the XML output by default. They are custom fields that gtest2html expects to find.
However, you can add them to the corresponding XML element using the RecordProperty function provided by GoogleTest.
The final section of the documentation explains:
Calling RecordProperty() outside of the lifespan of a test is allowed. If it's called outside of a test but between a test suite's SetUpTestSuite() and TearDownTestSuite() methods, it will be attributed to the XML element for the test suite. If it's called outside of all test suites (e.g. in a test environment), it will be attributed to the top-level XML element.
So, in order to make your xml output compatible with gtest2html, you need those fields in the outermost xml element (which has the tag "testsuites").
Solution
So, in main or in a test environment (you don't have to use one, but it is strongly recommended in the documentation), before any test is started, you need to add the following call:
::testing::Test::RecordProperty("project", "MyProject");
This will then add this to the xml output:
<testsuites ... project="MyProject" name="AllTests">
(I know it is quite late as an answer, but hopefully this will help someone)

Jest: How to ignore __mocks__ folder for specific test and instead use a mock I define in the test file?

I have a __mocks__ folder that mocks out a node module. This works for most of my tests, but in one particular test I need a custom mock. I need my unit-tested code to ignore the mock in the __mocks__ folder, and use a specific mock that I define in the test file.
I tried using jest.unmock(), however this prevents me from defining specific mocks in my unit test (thing.test.js). If I then add some mocks or modify the module I'm mocking, the changes don't get added to the code I'm testing (thing.js).
Example:
thing.js imports AWS.js module
__mocks__/AWS.js contains AWS.js module mock
other tests use __mocks__/AWS.js
thing.test.js wants to create a custom mock that doesn't get overwritten by __mocks__/AWS.js and doesn't affect other tests -> how to do this??
I am using TypeScript, but the same approach applies. I create my normal test files (thing.spec.ts), which test the code. In our code base, we do the basic tests in this file, using it to test non-mocked functions and simple spyOn() calls.
We then create a separate test file (thing.mock.spec.ts), where the 'mock' indicates that the tests in this file are going to use the class from the __mocks__ directory instead. The naming is just our internal standard to make clear what we are using.
In thing.mock.spec.ts we mock the complete class, as you are doing in your test. This file only tests functions that require the mock data, since the main tests have already been done independently in thing.spec.ts.
This would then have:
__mocks__/AWS.js
thing.js
thing.test.js
thing.mock.test.js
This way, when looking at just the file names, you get a sense of what is being used during the testing.

Cause test failure from pytest autouse fixture

pytest allows the creation of fixtures that are automatically applied to every test in a test suite (via the autouse keyword argument). This is useful for implementing setup and teardown actions that affect every test case. More details can be found in the pytest documentation.
In theory, the same infrastructure would also be very useful for verifying post-conditions that are expected to exist after each test runs. For example, maybe a log file is created every time a test runs, and I want to make sure it exists when the test ends.
Don't get hung up on the details, but I hope you get the basic idea. The point is that it would be tedious and repetitive to add this code to each test function, especially when autouse fixtures already provide infrastructure for applying this action to every test. Furthermore, fixtures can be packaged into plugins, so my check could be used by other packages.
The problem is that it doesn't seem to be possible to cause a test failure from a fixture. Consider the following example:
@pytest.fixture(autouse=True)
def check_log_file():
    # Yielding here runs the test itself
    yield
    # Now check whether the log file exists (as expected)
    if not log_file_exists():
        pytest.fail("Log file could not be found")
In the case where the log file does not exist, I don't get a test failure. Instead, I get a pytest error. If there are 10 tests in my test suite, and all of them pass, but 5 of them are missing a log file, I will get 10 passes and 5 errors. My goal is to get 5 passes and 5 failures.
So the first question is: is this possible? Am I just missing something? This answer suggests to me that it is probably not possible. If that's the case, the second question is: is there another way? If the answer to that question is also "no": why not? Is it a fundamental limitation of pytest infrastructure? If not, then are there any plans to support this kind of functionality?
In pytest, a yield-ing fixture has the first half of its definition executed during setup and the latter half executed during teardown. Further, setup and teardown aren't considered part of any individual test and thus don't contribute to its failure. This is why you see your exception reported as an additional error rather than a test failure.
On a philosophical note, as (cleverly) convenient as your attempted approach might be, I would argue that it violates the spirit of test setup and teardown and thus even if you could do it, you shouldn't. The setup and teardown stages exist to support the execution of the test—not to supplement its assertions of system behavior. If the behavior is important enough to assert, the assertions are important enough to reside in the body of one or more dedicated tests.
If you're simply trying to minimize the duplication of code, I'd recommend encapsulating the assertions in a helper method, e.g., assert_log_file_cleaned_up(), which can be called from the body of the appropriate tests. This will allow the test bodies to retain their descriptive power as specifications of system behavior.
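As a minimal sketch of that helper approach (the helper name, file locations, and the write_log stand-in are made up for illustration; a real test body would call the actual code under test):

import os

def assert_log_file_created(log_path):
    # Shared post-condition check, called explicitly at the end of each relevant test
    assert os.path.exists(log_path), f"Log file could not be found: {log_path}"

def write_log(log_path):
    # Hypothetical stand-in for the code under test, assumed to create a log file
    with open(log_path, "w") as f:
        f.write("ok\n")

def test_creates_log_file(tmp_path):
    write_log(tmp_path / "run.log")
    assert_log_file_created(tmp_path / "run.log")

Because the assertion lives in the test body, a missing log file now shows up as a failure of that test rather than as a fixture error.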
AFAIK it isn't possible to tell pytest to treat errors in a particular fixture as test failures.
I also have a case where I would like to use a fixture to minimize test code duplication, but in your case pytest-dependency may be the way to go (a rough sketch follows below).
Moreover, test dependencies aren't bad for non-unit tests. Also, be careful with autouse, because it makes tests harder to read and debug; explicit fixtures in the test function header at least give you some direction for finding the code that actually runs.
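To make the pytest-dependency suggestion concrete, here is a rough sketch, assuming the pytest-dependency plugin is installed (the log location and test names are made up for illustration):

import pytest
from pathlib import Path

LOG_FILE = Path("run.log")  # hypothetical log location

@pytest.mark.dependency()
def test_run_produces_log():
    # Stand-in for the real test, which is assumed to create the log file as a side effect
    LOG_FILE.write_text("ok\n")

@pytest.mark.dependency(depends=["test_run_produces_log"])
def test_log_file_exists():
    # Skipped automatically if test_run_produces_log failed or did not run
    assert LOG_FILE.exists(), "Log file could not be found"

Note that pytest-dependency skips the dependent test when its dependency fails, so this splits the post-condition check into a test of its own rather than turning the original test into a failure.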
I prefer using context managers for this purpose:
from contextlib import contextmanager

@contextmanager
def directory_that_must_be_clean_after_use():
    directory = set()
    yield directory
    assert not directory

def test_foo():
    with directory_that_must_be_clean_after_use() as directory:
        directory.add("file")
If you absolutely can't afford to add this one line for every test, it's easy enough to write this as a plugin.
Put this in your conftest.py:
import pytest

directory = set()

# register the marker so that pytest doesn't warn you about unknown markers
def pytest_configure(config):
    config.addinivalue_line("markers",
        "directory_must_be_clean_after_test: the name says it all")

# this is going to be run on every test
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    directory.clear()
    yield
    if item.get_closest_marker("directory_must_be_clean_after_test"):
        assert not directory
And add the according marker to your tests:
# test.py
import pytest
from conftest import directory

def test_foo():
    directory.add("foo file")

@pytest.mark.directory_must_be_clean_after_test
def test_bar():
    directory.add("bar file")
Running this will give you:
fail.py::test_foo PASSED
fail.py::test_bar FAILED
...
> assert not directory
E AssertionError: assert not {'bar file'}
conftest.py:13: AssertionError
You don't have to use markers, of course, but these allow controlling the scope of the plugin. You can have the markers per-class or per-module as well.
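For reference, a quick sketch of what that per-module or per-class scoping could look like with the same (made-up) marker:

# apply the marker to every test in this module
import pytest

pytestmark = pytest.mark.directory_must_be_clean_after_test

# or apply it to every test in a class
@pytest.mark.directory_must_be_clean_after_test
class TestDirectoryCleanup:
    def test_baz(self):
        pass  # every test method in this class inherits the marker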

Unit and integration tests get the same names by default

In Grails the unit tests and integration tests get generated with exactly the same naming convention, so that if you are testing a domain class named Foo both tests get generated with the name FooTests, in the same package. Is there an expectation I should have either a unit test or an integration test, but not both? Do people put their integration tests in a different package from their unit tests, or rename one? What's the preferred way to get around this?
btw I hacked on grails/scripts/_GrailsCreateArtifacts.groovy to change how the classes are generated in order to get it to generate different names (the value for the suffix property is all that changed). I don't like having to include something that means 'integration' in the name when the class is in a folder called 'integration', but this at least avoids having to do a manual rename.
createIntegrationTest = { Map args = [:] ->
    def superClass = args["superClass"] ?: "GroovyTestCase"
    createArtifact(name: args["name"], suffix: "${args['suffix']}IntTests",
        type: "Tests", path: "test/integration", superClass: superClass)
}
I think giving them a different name is the right choice: *IntTests or *IntegrationTests.

Excluding plugins from Cobertura reports in Grails

I'm using the SpringSecurity plugin in my project and also the Cobertura plugin for code coverage reports. The thing is, I'd like the SpringSecurity-specific classes (login and logout controllers, persistent login token and so on) to be excluded from my reports, since I assume they work properly. I'd like the reports to contain code coverage only for my project-specific classes. Is there any way I can achieve that?
coverage {
    exclusions = ['**/grails-app/conf/**', '**/*any.other.package*', '**/*any.class*']
    xml = true
    enabledByDefault = true
}
Add the above snippet to BuildConfig.groovy in grails-app/conf and configure the exclusions list.
Please see "Exclude code from code coverage with Cobertura".
There is a similar post, "Excluding some classes from the cobertura report doesn't work", but it is still unresolved.