Flag test as expected to fail in JUnit 5 - junit-jupiter

I have a unit test, written with JUnit 5 (Jupiter), that is failing. I do not currently have time to fix the problem, so I would like to mark the test as an expected failure. Is there a way to do that?
I see @Disabled, which causes the test not to be run. I would like the test to still run (and ideally fail the build if it starts to work), so that I remember that the test is there.
Is there such an annotation in JUnit 5? I could use assertThrows to catch the error, but I would like the build output to indicate that this is not a totally normal test.

You can disable the failing test with the @Disabled annotation. You can then add another test that asserts the first one does indeed fail:
@Test
@Disabled
void fixMe() {
    Assertions.fail();
}

@Test
void fixMeShouldFail() {
    assertThrows(AssertionError.class, this::fixMe);
}
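If you would rather keep this to a single method, the assertThrows the question mentions can also wrap the broken code directly. This is only a sketch of that variant, not part of the original answer; the class name, display name, and wrapped call are illustrative:

import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

class KnownFailureTest {

    @Test
    @DisplayName("EXPECTED FAILURE: fix me later")
    void knownFailure() {
        // Passes while the underlying bug exists; fails (and breaks the build)
        // once the wrapped code stops throwing, reminding you to clean up.
        assertThrows(AssertionError.class, () -> {
            Assertions.fail("stand-in for the currently failing assertion");
        });
    }
}

The @DisplayName makes the intent visible in the build output, which addresses the "not a totally normal test" part of the question.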

Related

Questions about google test and assertion output (test results); can I trust when gtest says a test passed?

When I create a TEST or TEST_F test, how can I know that my assertion is actually executing?
The problem I have is, when I have an empty TEST_F, for example,
TEST_F(myFixture, test1) {}
When it runs, gtest says this test passes. I would have expected the test to fail, until I write test code. Anyway.
So, my problem is that when gtest says a test is "OK" or that it passed, I can't trust it, because a test can "pass" even when there is no test code at all.
It would be nice to print what my EXPECT_ or ASSERT calls are doing and then see that they pass. The problem is that if I add any std::cout calls of my own, their output appears out of sync with the test results printed at the end.
Is there a verbose option to google test? How can I be sure the EXPECT that I coded is actually running?
You might consider looking at TDD, Test-Driven Development: https://en.wikipedia.org/wiki/Test-driven_development
write one test => it will fail
write code to make the test pass => the test passes
Rinse and repeat: express each requirement as a test that initially fails, then write code to make that test pass.
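To make that loop concrete, here is the smallest possible version of it, sketched in Java/JUnit purely for illustration (the question is about googletest, but the rhythm is the same; the Adder class is made up). The assertion is what gives a test its meaning, which is why an empty test body can only ever pass:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class AdderTest {

    // Step 1: write the test first. While Adder.add is unimplemented
    // (or wrong), this test fails -- that failure is the point.
    @Test
    void addsTwoNumbers() {
        assertEquals(5, Adder.add(2, 3));
    }
}

// Step 2: write just enough code to make the test pass.
class Adder {
    static int add(int a, int b) {
        return a + b;
    }
}

An empty TEST_F passes for the same reason an empty @Test passes: no assertion ever ran, so nothing could fail.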

Allure Framework: How to fail only one step in the test method

Does anyone know how to fail only one step in a test and still let the test finish all of its steps, using the Allure framework?
For example, I have one test which consists of 3 steps, and each step has its own assertion. It can look like this:
@Test
public void test() {
    step1();
    step2();
    step3();
}

@Step
public void step1() {
    Assert.assertEquals(1, 0);
}

@Step
public void step2() {
    Assert.assertEquals(1, 1);
}

@Step
public void step3() {
    Assert.assertEquals(2, 2);
}
When step1 fails, the test method fails too. Is there a way to still finish the other two steps with their own assertions and not fail the whole test immediately, like TestNG does with SoftAssert (org.testng.asserts.SoftAssert)?
As a result I would like to see a report showing all broken and passed test steps (within one test method), like in the Allure 1.4.9 release https://github.com/allure-framework/allure-core/releases/tag/allure-core-1.4.9 as shown in the report picture.
Maybe you can, but you shouldn't. You're breaking the concept of a test. A test is something that either passes or fails with a description of a failure. It is not something that can partially fail.
When you write a test you should include only those assertions that are bound to each other: if the first assertion fails, the second one tells you nothing your functionality needs. If you have assertions that are not dependent on each other, you are better off writing a couple of separate test methods, which will fail separately.
In short, the test should not continue after a failed step, and that's it. Otherwise it's a bad test.
P.S. That's why JUnit does not allow soft assertions.
P.P.S. If you really, really need to check all three things, a possible workaround is using an ErrorCollector (a JUnit 4 rule).
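For completeness, that ErrorCollector workaround looks roughly like this; the checks simply mirror the three steps from the question and the class name is illustrative:

import static org.hamcrest.CoreMatchers.equalTo;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class SoftStepsTest {

    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void allStepsAreCheckedEvenIfOneFails() {
        // Each checkThat records a failure instead of throwing immediately,
        // so all three "steps" run; the collected failures are reported
        // together when the test finishes.
        collector.checkThat("step1", 0, equalTo(1));
        collector.checkThat("step2", 1, equalTo(1));
        collector.checkThat("step3", 2, equalTo(2));
    }
}

Note that the test as a whole is still reported as failed (with every collected error), which is consistent with the point above: a test either passes or fails, it does not partially fail.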

How do I use TestNG SkipException?

How do I use TestNG's throw new SkipException() effectively? Does anyone have an example?
I tried throwing this exception at the start of a test method, but it blows up the setup and teardown methods, etc., causes collateral damage by making a few (not all) of the subsequent tests be skipped as well, and shows a bunch of garbage in the TestNG HTML report.
I use TestNG to run my unit tests, and I already know how to use an option on the @Test annotation to disable a test. I would like my test to show up as "existent" on my report but without counting it in the net result. In other words, it would be nice if there were a @Test annotation option to "skip" a test, so that I can sort of mark tests as ignored without having them disappear from the list of all tests.
Is SkipException required to be thrown in @BeforeXXX before the @Test is run? That might explain the weirdness I am seeing.
Yes, my suspicion was correct. Throwing the exception within @Test doesn't work, and neither did throwing it in @BeforeTest while running tests in parallel by classes. If you do that, the exception will break the test setup, your TestNG report will show exceptions in all of the related @Configuration methods, and it may even cause subsequent tests to fail without being skipped.
But when I throw it within @BeforeMethod, it works perfectly. Glad I was able to figure it out. The documentation of the class suggests it will work in any of the @Configuration-annotated methods, but something about what I am doing didn't allow that.
@BeforeMethod
public void beforeMethod() {
    throw new SkipException("Testing skip.");
}
I'm using TestNG 6.8.1.
I have a few @Test methods from which I throw SkipException, and I don't see any weirdness. It seems to work just as expected.
@Test
public void testAddCategories() throws Exception {
    if (SupportedDbType.HSQL.equals(dbType)) {
        throw new SkipException("Using HSQL will fail this test. aborting...");
    }
    ...
}
Maven output:
Results :
Tests run: 85, Failures: 0, Errors: 0, Skipped: 2
When using a DataProvider (for example, test data read from a spreadsheet with Apache POI), you can add a separate check for empty or null data and throw SkipException when a row is empty. With, say, 1000 input rows, the empty ones are then skipped instead of running (and failing) the whole check for them.
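A sketch of that idea with a TestNG @DataProvider; the spreadsheet-reading part is left out and the provider, rows, and empty-row check are all illustrative:

import org.testng.SkipException;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataDrivenTest {

    // In a real setup these rows would come from a spreadsheet (e.g. via Apache POI);
    // here they are hard-coded just to show the shape of the check.
    @DataProvider(name = "rows")
    public Object[][] rows() {
        return new Object[][] {
                {"alice", "secret"},
                {"", ""},          // an empty row that should be skipped, not failed
                {"bob", "hunter2"},
        };
    }

    @Test(dataProvider = "rows")
    public void login(String user, String password) {
        // Skip this particular row instead of failing the whole data-driven run.
        if (user == null || user.isEmpty()) {
            throw new SkipException("Skipping row with empty input data");
        }
        // ... actual assertions for the non-empty rows go here ...
    }
}

Each empty row is then reported as skipped while the remaining rows still run.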
To skip a test case from the @Test annotation itself, you can use the enabled = false attribute, as below:
@Test(enabled = false)
This will skip the test case without running it, but other tests, setup, and teardown will run without any issue.

How to mark a Google Test test-case as "expected to fail"?

I want to add a test case for functionality that is not yet implemented and mark this test case as "it's OK that it fails".
Is there a way to do this?
EDIT:
I want the test to be executed, and the framework should verify that it fails, as long as the test case is in the "expected to fail" state.
EDIT2:
It seems that the feature I am interested in does not exist in Google Test, but it does exist in the Boost Unit Test Framework and in LIT.
EXPECT_NONFATAL_FAILURE is what you want to wrap around the code that you expect to fail. Note that you will have to include the gtest-spi.h header file:
#include "gtest-spi.h"
// ...
TEST_F( testclass, testname )
{
EXPECT_NONFATAL_FAILURE(
// your code here, or just call:
FAIL()
,"Some optional text that would be associated with"
" the particular failure you were expecting, if you"
" wanted to be sure to catch the correct failure mode" );
}
Link to docs: https://github.com/google/googletest/blob/955c7f837efad184ec63e771c42542d37545eaef/docs/advanced.md#catching-failures
You can prefix the test name with DISABLED_.
I'm not aware of a direct way to do this, but you can fake it with something like this:
try {
    // do something that should fail and throw an exception
    ...
    EXPECT_TRUE(false); // this should not be reached!
} catch (...) {
    // return or print a message, etc.
}
Basically, the test will fail if it reaches the contradictory expectation.
It would be unusual to have a unit test in an expected-to-fail state. Unit tests can test for positive conditions ("expect x to equal 2") or negative conditions ("expect save to throw an exception if name is null"), and can be flagged not to run at all (if the feature is pending and you don't want the noise in your test output). But what you seem to be asking for is a way to negate a feature's test while you're working on it. This is against the tenets of Test-Driven Development.
In TDD, what you should do is write tests that accurately describe what a feature should do. If that feature isn't written yet then, by definition, those tests will and should fail. Then you implement the feature until, one by one, all those tests pass. You want all the tests to start as failing and then move to passing. That's how you know when your feature is complete.
Think of how it would look if you were able to mark failing tests as passing, as you suggest: all tests would pass and everything would look complete even though the feature didn't work. Then, once you were done and the feature worked as expected, suddenly your tests would start to fail until you went in and unflagged them. Beyond being a strange way to work, this workflow would be very prone to error and false positives.

How do I write NUnit unit tests without having to surround them with try catch statements?

At my company we are writing a bunch of unit tests. What we'd like is for the unit tests to execute and, whenever one succeeds or fails, write that result somewhere at the end of the test, but we don't want to put that logic in every test.
Any idea how we could just write tests without having to surround the content of the test with the try catch logic that we've been using?
I'm guessing you do something like this:
[Test]
public void FailBecauseOfException()
{
    try
    {
        throw new Exception();
    }
    catch (Exception e)
    {
        Assert.Fail(e.Message);
    }
}
There is no need for this. The tests will fail automatically if they throw an exception. For example, the following test will show up as a failure:
[Test]
public void FailBecauseOfException()
{
    throw new Exception();
}
I'm not entirely sure what you are trying to do here. Are you saying you are wrapping it in a try/catch so that you can catch when an exception occurs and log this?
If so, then a better way, probably, is just to get NUnit to write an output file and use this. I haven't used NUnit for about a year, but IIRC you can redirect its output to any file you like using the /out directive.
If there is a reason why you have to log it the way you say, then you'll either have to add your custom code to each test, or have a common "runner" that takes your code (for each test) as an anonymous method and runs it inside a single try..catch. That would save you from having to repeat the try..catch in every test.
Apologies if I've misunderstood the question.
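A rough sketch of that "common runner" idea, written in Java here purely to show the shape of the pattern (the question is about NUnit/C#, where the equivalent would take a delegate or lambda; the helper and its names are made up):

// A tiny helper that runs a test body once, catches any failure,
// and logs the outcome in one place instead of in every test.
final class LoggingRunner {

    static void run(String testName, Runnable body) {
        try {
            body.run();
            log(testName + ": PASSED");
        } catch (AssertionError | RuntimeException e) {
            log(testName + ": FAILED - " + e.getMessage());
            throw e; // rethrow so the framework still reports the failure
        }
    }

    private static void log(String line) {
        // Replace with whatever sink you need (file, database, etc.).
        System.out.println(line);
    }
}

// Usage inside a test: the try/catch lives only in the runner.
//   LoggingRunner.run("FailBecauseOfException", () -> {
//       throw new RuntimeException("boom");
//   });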
MSTest has TestCleanup, which runs after every test. In NUnit, the attribute to use is TearDown (which runs after every test) or TestFixtureTearDown (which runs once after all the tests in the fixture have completed).
If you want something to run only when a test passes, you could have a member variable shouldRunExtraMethod, initialized to false before each test and set to true at the end of the test; in the TearDown you then decide what to execute based on that variable's value.
If your unit test method covers the scenario in which you expect exceptions to be thrown, use the ExpectedException attribute. There's a post here on SO about using that attribute.
Expect exceptions in nUnit...
NUnit assert statements all take an optional message that is reported when the assertion fails.
If you'd like to write something out somewhere at the end of each test, you can set that up in the teardown of each fixture. Just set a string to whatever you want written inside the test itself, and during teardown (which happens after each test) it can do whatever you want with it.
I'm fairly certain teardown occurs even if an exception is thrown. That should do what you want.
The problem you have is that the NUnit Assert.* methods will throw an AssertionException whenever an assert fails - but it does nothing else. So it doesn't look like you can check anything outside of the unit test to verify whether the test failed or not.
The only alternative I can think of is to use AOP (Aspect Oriented Programming) with a tool such as PostSharp. This tool allows you to create aspects that can act on certain events. For example:
public class ExceptionDialogAttribute : OnExceptionAspect
{
    public override void OnException(MethodExecutionEventArgs eventArgs)
    {
        string message = eventArgs.Exception.Message;
        Window window = Window.GetWindow((DependencyObject) eventArgs.Instance);
        MessageBox.Show(window, message, "Exception");
        eventArgs.FlowBehavior = FlowBehavior.Continue;
    }
}
This aspect is code which runs whenever an exception is raised:
[ExceptionDialog]
[Test]
public void Test()
{
    Assert.AreEqual(2, 4);
}
Since the above test will raise an exception, the code in ExceptionDialogAttribute will run. You can get information about the method, such as its name, so that you can log it to a file.
It's been a long time since I used PostSharp, so it's worth checking out the examples and experimenting with it.