I came across this statement in the documentation of JUnit 5's TestExecutionListener:
"Contrary to JUnit 4, test engines are supposed to report events not only for identifiers that represent executable leaves in the test plan but also for all intermediate containers."
The doc link -> https://junit.org/junit5/docs/5.0.3/api/org/junit/platform/launcher/TestExecutionListener.html
My question is: what are intermediate containers?
Each JUnit 5 test engine is free to determine how the tree of tests is constructed.
A TestIdentifier describes one node in this tree, and it has one of three types:
CONTAINER
TEST
CONTAINER_AND_TEST
A test can be executed, whereas a container has children, which can themselves be of any of the three types.
Let's look at an example using JUnit Jupiter:
class MyTestContainer {
    @Test void test1() { }

    @Nested
    class InnerTestContainer {
        @Test void test2() { }
    }
}
Running this class reports the following events (possibly in a slightly different order):
execution started: MyTestContainer
execution started: MyTestContainer.test1
execution finished: MyTestContainer.test1
execution started: MyTestContainer.InnerTestContainer
execution started: MyTestContainer.InnerTestContainer.test2
execution finished: MyTestContainer.InnerTestContainer.test2
execution finished: MyTestContainer.InnerTestContainer
execution finished: MyTestContainer
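To observe these events yourself, you can implement a TestExecutionListener and register it with the Launcher. A minimal sketch of my own (using the JUnit Platform API, not code from the documentation) could look like this:

import org.junit.platform.engine.TestExecutionResult;
import org.junit.platform.launcher.TestExecutionListener;
import org.junit.platform.launcher.TestIdentifier;

public class LoggingListener implements TestExecutionListener {

    @Override
    public void executionStarted(TestIdentifier identifier) {
        // Unlike JUnit 4 run listeners, this is called for containers and tests alike.
        String kind = identifier.isContainer() ? "container" : "test";
        System.out.println("execution started (" + kind + "): " + identifier.getDisplayName());
    }

    @Override
    public void executionFinished(TestIdentifier identifier, TestExecutionResult result) {
        System.out.println("execution finished: " + identifier.getDisplayName()
                + " -> " + result.getStatus());
    }
}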
Keep in mind that it's completely up to the test engines how they define containers and executable tests. Some (ArchUnit) use member variables for tests, others build up the hierarchy through a DSL of their own. Cucumber, for example, has its own way to group and nest features, specifications etc.
I hope this clarifies the idea behind tests, containers, and the tree they form in the final test plan.
Using Google Test, I want to test the behaviour of a ClientListener.AcceptRequest method:
class ClientListener {
 public:
  // Clients can call this method; I want to test that it works.
  Result AcceptRequest(const Request& request) {
    queue_.Add(request);
    // ... blocks waiting for the result ...
    return result;
  }

 private:
  // Executed by background_thread_.
  void ProcessRequestsInQueue() {
    while (true) {
      Process(queue_.PopEarliest());
    }
  }

  MyQueue queue_;
  std::thread background_thread_{[this] { ProcessRequestsInQueue(); }};
};
The method accepts a client request, queues it, blocks waiting for a result, returns a result when available.
The result is available when the background thread processes the corresponding request from a queue.
I have a test which looks as follows:
TEST(ListenerTest, TwoRequests) {
  ClientListener listener;
  Result r1 = listener.AcceptRequest(request1);
  Result r2 = listener.AcceptRequest(request2);
  ASSERT_EQ(r1, correctResultFor1);
  ASSERT_EQ(r2, correctResultFor2);
}
Since the implementation of a ClientListener class involves multiple threads, this test might pass on one attempt but fail on another. To increase the chance of capturing a bug, I run the test multiple times:
TEST_P(ListenerTest, TwoRequests) {
... same as before ...
}
INSTANTIATE_TEST_CASE_P(Instantiation, ListenerTest, Range(0, 100));
But now the make test command treats each parameterised instantiation as a separate test, and in the logs I see 100 tests:
Test 1: Instantiation/ListenerTest.TwoRequests/1
Test 2: Instantiation/ListenerTest.TwoRequests/2
...
Test 100: Instantiation/ListenerTest.TwoRequests/100
Given that I do not use the parameter value, is there a way to rewrite the testing code such that the make test command would log a single test executed 100 times, rather than 100 tests?
Simple answer: running the test binary with --gtest_repeat=100 does the trick (the default is 1).
Longer answer: unit tests shouldn't be used for this kind of test. GTest is thread-safe by design (as stated in its README), but that doesn't mean it is a good tool for such tests. This may be a good point to start working on real integration tests; I recommend Python's behave framework for that purpose.
Does anyone know how to fail only one step in a test while still allowing the test to finish all steps, using the Allure framework?
For example, I have one test which consists of 3 steps, and each step has its own assertion. It can look like this:
@Test
public void test() {
    step1();
    step2();
    step3();
}

@Step
public void step1() {
    Assert.assertEquals(1, 0);
}

@Step
public void step2() {
    Assert.assertEquals(1, 1);
}

@Step
public void step3() {
    Assert.assertEquals(2, 2);
}
When step1 fails, the test method fails too. Is there a way to still execute the other two steps with their own assertions instead of aborting the test, like TestNG does with SoftAssert (org.testng.asserts.SoftAssert)?
As a result, I would like a report showing all broken and passed steps within one test method, like the report picture in the Allure 1.4.9 release: https://github.com/allure-framework/allure-core/releases/tag/allure-core-1.4.9
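For reference, the TestNG SoftAssert pattern mentioned in the question looks roughly like this (a sketch of my own, not part of the original post):

import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class SoftAssertExample {

    @Test
    public void test() {
        SoftAssert softly = new SoftAssert();
        softly.assertEquals(1, 0, "step1"); // failure is recorded, execution continues
        softly.assertEquals(1, 1, "step2");
        softly.assertEquals(2, 2, "step3");
        softly.assertAll(); // fails the test here, listing every recorded failure
    }
}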
Maybe you can, but you shouldn't. You're breaking the concept of a test. A test is something that either passes or fails with a description of a failure. It is not something that can partially fail.
When you write a test, you should include only those assertions that are bound to each other: if the first assertion fails, the second is not meaningful for your functionality at all. If you have assertions that do not depend on each other, it is better to split them into separate test methods so they are completely independent and fail separately.
In short, the test should not continue after a failed step and that's it. Otherwise – it's a bad test.
P.S. That's why JUnit does not allow soft assertions.
P.P.S. If you really, really need to check all three things, a possible workaround is to use an ErrorCollector.
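If you do go that route, here is a minimal sketch of my own (JUnit 4) of the ErrorCollector rule the postscript refers to; failed checks are recorded while the method runs to the end:

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;
import static org.hamcrest.CoreMatchers.equalTo;

public class StepsTest {

    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void test() {
        // Each failed check is recorded; the test keeps running and is
        // reported as failed at the end with all recorded problems.
        collector.checkThat("step1", 0, equalTo(1));
        collector.checkThat("step2", 1, equalTo(1));
        collector.checkThat("step3", 2, equalTo(2));
    }
}

The test still ends up red, but every recorded failure shows up in the result, which is close to what the question asks for.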
Ideally, a test class is written for every class in the production code. Within a test class, not all test methods may require the same preconditions. How do we solve this problem?
Do we create separate test classes for these?
I suggest creating separate methods that wrap the necessary precondition setup. Do not confuse this approach with the traditional test setup. As an example, assume you wrote tests for a receipt provider, which searches a repository and, depending on some validation steps, returns a receipt. We might end up with:
receipt doesn't exist in repository: return null
receipt exists, but doesn't match validator date: return null
receipt exists, matches validator date, but was not fully committed (i.e. was not processed by some external system): return null
We have several conditions here: the receipt exists or doesn't exist, the receipt has an invalid date, the receipt is not committed. Our happy path is the default setup (for example, done via the traditional test setup). The happy-path test would then be as simple as (some C# pseudo-code):
[Test]
public void GetReceipt_ReturnsReceipt()
{
    receiptProvider.GetReceipt("701").IsNotNull();
}
Now, for the special condition cases we simply write tiny, dedicated methods that would arrange our test environment (eg. setup dependencies) so that conditions are met:
[Test]
public void GetReceipt_ReturnsNull_WhenReceiptDoesntExist()
{
    ReceiptDoesNotExistInRepository("701");
    receiptProvider.GetReceipt("701").IsNull();
}

[Test]
public void GetReceipt_ReturnsNull_WhenExistingReceiptHasInvalidDate()
{
    ReceiptHasInvalidDate("701");
    receiptProvider.GetReceipt("701").IsNull();
}
You'll end up with a couple of extra helper methods, but your tests will be much easier to read and understand. This is especially helpful when the logic is more complicated than a simple yes/no setup:
[Test]
public void GetReceipt_ThrowsException_WhenUncommittedReceiptHasInvalidDate()
{
    ReceiptHasInvalidDate("701");
    ReceiptIsUncommitted("701");
    receiptProvider.GetReceipt("701").Throws<Exception>();
}
One option is to group tests with the same preconditions in the same classes; this also helps avoid test classes of over a thousand lines. You can also group the creation of the preconditions into separate methods and let each test call the applicable method. Do this when most of the methods have different preconditions; otherwise you can simply use a setup method that is called before each test.
I like to use a setup method that is called before each test runs. In this method I instantiate the class I want to test, giving it any dependencies it needs to be created. Then I set the specific details for the individual tests inside the test methods. This moves any common initialization of the class out to the setup method and lets each test focus on what it needs to evaluate.
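A minimal JUnit 4 sketch of that pattern (the class and method names are hypothetical, not taken from the answer):

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertNull;

public class ReceiptProviderTest {

    private ReceiptProvider receiptProvider; // hypothetical class under test

    @Before
    public void setUp() {
        // Common initialization: build the class under test with its dependencies.
        receiptProvider = new ReceiptProvider(new InMemoryReceiptRepository());
    }

    @Test
    public void getReceipt_returnsNull_whenReceiptDoesNotExist() {
        // Test-specific details stay inside the test method.
        assertNull(receiptProvider.getReceipt("701"));
    }
}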
You may find this link valuable, it discusses an approach to Test Setups:
In Defense of Test Setup Methods, by Erik Dietrich
I'm testing a set of classes and my unit tests so far are along the lines
1. read in some data from file X
2. create new object Y
3. sanity assert some basic properties of Y
4. assert advanced properties of Y
There are about 30 of these tests, which differ in the input and in the properties of Y that are checked. However, at the current state of the project, a test sometimes crashes at #2 or already fails at #3. It should never crash at #1. For the time being, I'm accepting all failures at #4.
I'd like, for example, to see a list of unit tests that fail at #3 while ignoring, for now, all those that fail at #4. What's the standard approach/terminology for this? I'm using JUnit for Java with Eclipse.
You need reporting/filtering on your unit test results.
JUnit itself wants your tests to pass, fail, or not run - nothing in between.
However, it doesn't care much about how those results are tied to passing/failing the build, or reported.
Using tools like Maven (the Surefire plugin) and some custom code, you can categorize your tests to distinguish between 'hard failures', 'bad, but let's go on', etc. But that is build validation or reporting based on test results, rather than testing itself.
(Currently, our build process relies on annotations such as @Category(WorkInProgress.class) on each test method to decide what's critical and what's not.)
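A minimal sketch of the JUnit 4 category mechanism that annotation refers to (the marker interface and test names below are my own illustration):

import org.junit.Test;
import org.junit.experimental.categories.Category;

public class AdvancedPropertiesOfYTest {

    // Marker interface used purely as a category label.
    public interface WorkInProgress { }

    @Test
    @Category(WorkInProgress.class)
    public void advancedPropertyOfYHolds() {
        // Failures here can be excluded from the build (for example via the
        // Surefire groups configuration) until the feature is finished.
    }
}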
Another thing I can think of is to create assert methods that check a system property to decide whether to execute the assertion:
public static void assertTrue(boolean assertion, int assertionLevel) {
    // Only run the assertion if the configured level is high enough.
    int level = getSystemProperty(...);
    if (level >= assertionLevel) {
        Assert.assertTrue(assertion);
    }
}
I'm using the Spring JUnit runner and its transaction capabilities to start and roll back a transaction before and after every test.
However, I have a test class with some heavy DB initialization, and I want all test methods to run within one transaction scope, i.e. start a transaction at the beginning of the test class and roll it back after all tests in the class have completed.
Are you aware that running all the test methods in your class within a single transaction will cause a lot of trouble? Basically, you can no longer depend on having a clean database, as other test methods will modify it along the way. And because the order of test methods is not specified, you cannot depend on that either (so you'll never know exactly what the database holds). Essentially, you are giving up all test transactional support; your only guarantee is that after running the whole test case, the database remains clean (so other test cases won't be affected).
End of grumbling, though. I don't think Spring supports such behavior out of the box (partially due to the reasons highlighted above). However, look closely at TransactionalTestExecutionListener, which is responsible for transactional support in Spring-powered tests:
@Override
public void beforeTestMethod(TestContext testContext) throws Exception {
    //...
    startNewTransaction(testContext, txContext);
}

and:

@Override
public void afterTestMethod(TestContext testContext) throws Exception {
    //...
    endTransaction(testContext, txContext);
    //...
}
Now look even closer: there are unimplemented beforeTestClass and afterTestClass methods... You will find detailed instructions on how to wire this all up in chapter 9.3.5 of the Spring reference documentation. Hint: write your own listener and use it instead of TransactionalTestExecutionListener.
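A rough sketch of such a listener (my own, not from the Spring docs; a real implementation would also have to consider how the transaction is bound to the connection your tests use):

import org.springframework.test.context.TestContext;
import org.springframework.test.context.support.AbstractTestExecutionListener;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.DefaultTransactionDefinition;

public class ClassLevelTransactionListener extends AbstractTestExecutionListener {

    private TransactionStatus transaction;

    @Override
    public void beforeTestClass(TestContext testContext) {
        // Start one transaction for the whole test class.
        transaction = transactionManager(testContext)
                .getTransaction(new DefaultTransactionDefinition());
    }

    @Override
    public void afterTestClass(TestContext testContext) {
        // Roll everything back once all tests in the class have run.
        transactionManager(testContext).rollback(transaction);
    }

    private PlatformTransactionManager transactionManager(TestContext testContext) {
        return testContext.getApplicationContext()
                .getBean(PlatformTransactionManager.class);
    }
}

You would then register it on the test class with @TestExecutionListeners in place of TransactionalTestExecutionListener, as described in the reference documentation.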