I want to add a test case for functionality that is not yet implemented and mark this test case as "it's OK that I fail".
Is there a way to do this?
EDIT:
I want the test to be executed, and the framework should verify that it fails as long as the test case is in the "expected fail" state.
EDIT2:
It seems that the feature I am interested in does not exist in google-test, but it does exist in the Boost Unit Test Framework, and in LIT.
EXPECT_NONFATAL_FAILURE is what you want to wrap around the code that you expect to fail. Note that you will have to include the gtest-spi.h header file:
#include "gtest-spi.h"
// ...
TEST_F( testclass, testname )
{
    EXPECT_NONFATAL_FAILURE(
        // your code here, or just call:
        FAIL(),
        "Some optional text that would be associated with"
        " the particular failure you were expecting, if you"
        " wanted to be sure to catch the correct failure mode" );
}
Link to docs: https://github.com/google/googletest/blob/955c7f837efad184ec63e771c42542d37545eaef/docs/advanced.md#catching-failures
You can prefix the test name with DISABLED_.
I'm not aware of a direct way to do this, but you can fake it with something like this:
try {
    // do something that should fail and throw an exception
    ...
    EXPECT_TRUE(false); // this should not be reached!
} catch (...) {
    // return or print a message, etc.
}
Basically, the test will fail if it reaches the contradictory expectation.
It would be unusual to have a unit test in an expected-to-fail state. Unit tests can test for positive conditions ("expect x to equal 2") or negative conditions ("expect save to throw an exception if name is null"), and they can be flagged not to run at all (if the feature is pending and you don't want the noise in your test output). But what you seem to be asking for is a way to negate a feature's test while you're working on it. This is against the tenets of Test-Driven Development.
In TDD, what you should do is write tests that accurately describe what a feature should do. If that feature isn't written yet then, by definition, those tests will and should fail. Then you implement the feature until, one by one, all those tests pass. You want all the tests to start as failing and then move to passing. That's how you know when your feature is complete.
Think of how it would look if you were able to mark failing tests as passing as you suggest: all tests would pass and everything would look complete even though the feature didn't work. Then, once you were done and the feature worked as expected, suddenly your tests would start to fail until you went in and unflagged them. Beyond being a strange way to work, this workflow would be very prone to error and false positives.
Recently I noticed that my team follows two approaches to writing tests in Reactor. The first one is with the help of the .block() method, and it looks something like this:
@Test
void set_entity_version() {
    Entity entity = entityRepo.findById(ID)
        .block();
    assertNotNull(entity);
    assertFalse(entity.isV2());

    entityService.setV2(ID)
        .block();

    entity = entityRepo.findById(ID)
        .block();
    assertNotNull(entity);
    assertTrue(entity.isV2());
}
And the second one uses StepVerifier. It looks something like this:
@Test
void set_entity_version() {
    StepVerifier.create(entityRepo.findById(ID))
        .assertNext(entity -> {
            assertNotNull(entity);
            assertFalse(entity.isV2());
        })
        .verifyComplete();

    StepVerifier.create(entityService.setV2(ID)
            .then(entityRepo.findById(ID)))
        .assertNext(entity -> {
            assertNotNull(entity);
            assertTrue(entity.isV2());
        })
        .verifyComplete();
}
In my humble opinion, the second approach looks more reactive. Moreover, the official docs are very clear on that:
A StepVerifier provides a declarative way of creating a verifiable script for an async Publisher sequence, by expressing expectations about the events that will happen upon subscription.
Still, I'm really curious which way should be encouraged as the main road for testing in Reactor. Should the .block() method be abandoned completely, or could it be useful in some cases? If yes, what are those cases?
Thanks!
You should use StepVerifier. It allows more options:
Verify that you expect n elements in a flux
Verify that the flux/mono completes
Verify that an error is expected
Verify that a sequence of n elements followed by an error is expected (impossible to test with .block())
From the official doc:
public <T> Flux<T> appendBoomError(Flux<T> source) {
return source.concatWith(Mono.error(new IllegalArgumentException("boom")));
}
#Test
public void testAppendBoomError() {
Flux<String> source = Flux.just("thing1", "thing2");
StepVerifier.create(
appendBoomError(source))
.expectNext("thing1")
.expectNext("thing2")
.expectErrorMessage("boom")
.verify();
}
Create an initial context
Use virtual time to manipulate time, so that when you have something like Mono.delay(Duration.ofDays(1)) you don't have to wait a day for your test to complete (see the sketch below)
Expect that no events are emitted for a given duration...
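For the virtual-time point above, a minimal sketch (assuming JUnit 5 and the reactor-test dependency; the test name is made up for illustration) looks like this:

import java.time.Duration;

import org.junit.jupiter.api.Test;
import reactor.core.publisher.Mono;
import reactor.test.StepVerifier;

class VirtualTimeSketchTest {

    @Test
    void delayedMonoCompletesWithoutRealWaiting() {
        // The publisher must be created inside the supplier so it picks up the virtual scheduler
        StepVerifier.withVirtualTime(() -> Mono.delay(Duration.ofDays(1)))
                .expectSubscription()
                .expectNoEvent(Duration.ofDays(1)) // the virtual clock advances a day instantly
                .expectNext(0L)                    // Mono.delay emits 0 once the delay elapses
                .verifyComplete();
    }
}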
from https://medium.com/swlh/stepverifier-vs-block-in-reactor-ca754b12846b
There are pros and cons of both block() and StepVerifier testing
patterns. Hence, it is necessary to define a pattern or set of rules
which can guide us on how to use StepVerifier and block().
In order to decide which patterns to use, we can try to answer the
following questions which will provide a clear expectation from the
tests we are going to write:
Are we trying to test the reactive aspect of the code or just the output of the code?
In which of the patterns do we find clarity based on the 3 A's of testing (Arrange, Act, and Assert), in order to make the test understandable?
What are the limitations of the block() API over StepVerifier in testing reactive code? Which API is more fluent for writing tests in case of exceptions?
If you try answering all these questions above, you will find the
answers to “what” and “where”. So, just give it a thought before
reading the following answers:
block() tests the output of the code and not the reactive aspect. In a case where we are concerned about testing the output of the code, rather than the reactive aspect of the code, we can use block() instead of StepVerifier, as it is easy to write and the tests are more readable.
The assertion library for a block() pattern is better organised in terms of the 3 A's pattern (Arrange, Act, and Assert) than StepVerifier. In StepVerifier, while testing a method call for a mock class, or even while testing a Mono output, one has to write expectations in the form of chained methods, unlike assert, which in my opinion decreases the readability of the tests. Also, if you forget to write the terminal step, i.e. verify(), in case of StepVerifier, the code will not get executed and the test will go green. So the developer has to be very careful about calling verify() at the end of the chain.
There are some aspects of reactive code that cannot be tested with the block() API. In such cases one should use StepVerifier: when testing a Flux of data, subscription delays, or subscriptions on different Schedulers, etc., the developer is bound to use StepVerifier.
To verify an exception using the block() API, you need to use the assertThatThrownBy API in an assertion library that catches the exception. With an assertion API, the error message and the instance of the exception can be asserted. StepVerifier also provides assertions on exceptions via the expectError() API, and it supports asserting the elements emitted before the error is thrown in a Flux of elements, which cannot be achieved with block(). So, for asserting exceptions, StepVerifier is better than block() as it can assert both Mono and Flux.
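To illustrate that last point, here is a rough sketch (not from the quoted article; the IllegalStateException and the "boom" message are made up) of the two styles of exception assertion, assuming AssertJ, JUnit 5, and reactor-test on the classpath:

import static org.assertj.core.api.Assertions.assertThatThrownBy;

import org.junit.jupiter.api.Test;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.test.StepVerifier;

class ExceptionAssertionStylesTest {

    @Test
    void blockStyle() {
        Mono<String> failing = Mono.error(new IllegalStateException("boom"));

        // block() surfaces the error as a thrown exception, which AssertJ can catch
        assertThatThrownBy(() -> failing.block())
                .isInstanceOf(IllegalStateException.class)
                .hasMessage("boom");
    }

    @Test
    void stepVerifierStyle() {
        Flux<String> failingAfterElements = Flux.just("thing1", "thing2")
                .concatWith(Mono.error(new IllegalStateException("boom")));

        // StepVerifier can assert the elements emitted before the error, which block() cannot
        StepVerifier.create(failingAfterElements)
                .expectNext("thing1", "thing2")
                .expectErrorMessage("boom")
                .verify();
    }
}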
When I create a TEST or TEST_F test, how can I know that my assertion is actually executing?
The problem I have is, when I have an empty TEST_F, for example,
TEST_F(myFixture, test1) {}
When it runs, gtest says this test passes. I would have expected the test to fail until I write test code. Anyway.
So my problem is that when gtest says a test is "OK" or that it passed, I can't trust it, because a test could "pass" even if there is no test code.
It would be nice to print what my EXPECT_ or ASSERT calls are doing and then see that they pass. The problem is, if I add any std::cout calls, their output seems to be out of sync with the test results printed at the end.
Is there a verbose option to google test? How can I be sure the EXPECT that I coded is actually running?
You might consider looking at TDD, Test Driven Development, https://en.wikipedia.org/wiki/Test-driven_development
write one test => it will fail
write code to make the test pass => test passes
Rinse and repeat: express each requirement as a test that initially fails, then write code to make that test pass.
I'm starting out using unit tests. I have a situation and don't know how to proceed:
For example:
I have a class that opens and reads a file.
In my unit test, I want to test the open method and the read method, but to read the file I need to open the file first.
If the "open file" test fails, the "read file" test would fail too!
So, how do I make it explicit that the read failed because of the open? Do I test the open inside the read?
The key feature of unit tests is isolation: one specific unit test should cover one specific functionality - and if it fails, it should report it.
In your example, read clearly depends on open functionality: if the latter is broken, there's no reason to test the former, as we do know the result. More, reporting read failure will only add some irrelevant noise to your test results.
What can (and should) be reported for read in this case is test skipped or something similar. That's how it's done in PHPUnit, for example:
class DependencyFailureTest extends PHPUnit_Framework_TestCase
{
    public function testOne()
    {
        $this->assertTrue(FALSE);
    }

    /**
     * @depends testOne
     */
    public function testTwo()
    {
    }
}
Here we have testTwo dependant on testOne. And that's what's shown when the test is run:
There was 1 failure:
1) testOne(DependencyFailureTest)
Failed asserting that <boolean:false> is true.
/home/sb/DependencyFailureTest.php:6
There was 1 skipped test:
1) testTwo(DependencyFailureTest)
This test depends on "DependencyFailureTest::testOne" to pass.
FAILURES!
Tests: 2, Assertions: 1, Failures: 1, Skipped: 1.
Explanation:
To quickly localize defects, we want our attention to be focused on
relevant failing tests. This is why PHPUnit skips the execution of a
test when a depended-upon test has failed.
Opening the file is a prerequisite to reading the file, so it's fine to include that in the test. You can throw an exception in your code if the file failed to open. The error message in the test will then make it clear why the test failed.
I would also recommend that you consider creating the file in the test itself to remove any dependencies on existing files. That way you ensure that you always have a valid file to reference.
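As a sketch of both suggestions (JUnit 5 assumed; TextFileReader is a hypothetical stand-in for your class), the test creates its own file, and the class under test throws a descriptive exception when the open fails:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;

class TextFileReaderTest {

    // Hypothetical class under test: opens a file and reads its contents
    static class TextFileReader {
        String read(Path path) throws IOException {
            if (!Files.exists(path)) {
                // Failing to open produces a clear, specific error message
                throw new IOException("Could not open file: " + path);
            }
            return Files.readString(path);
        }
    }

    @TempDir
    Path tempDir;

    @Test
    void readsContentFromFileCreatedByTheTest() throws IOException {
        Path file = Files.writeString(tempDir.resolve("input.txt"), "hello");
        assertEquals("hello", new TextFileReader().read(file));
    }

    @Test
    void reportsClearErrorWhenFileCannotBeOpened() {
        Path missing = tempDir.resolve("missing.txt");
        IOException e = assertThrows(IOException.class, () -> new TextFileReader().read(missing));
        assertEquals("Could not open file: " + missing, e.getMessage());
    }
}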
Generally speaking, you wouldn't find yourself testing your proposed scenario of unit testing the ability to read from a file, since you will usually end up using a file manipulation library of some kind and can usually safely assume that the maintainers of said library have the appropriate unit tests already in place (for example, I feel pretty confident that I can use the File class in .NET without worry).
That being said, the idea of one condition being an impediment to testing a second is certainly valid. That's why mock frameworks were created, so that you can easily set up a mock object that will always behave in a defined manner that can then be substituted for the initial dependency. This allows you to focus squarely on unit testing the second object/condition/etc. in a test scenario.
My doubt is about what we regard as a "unit" in unit testing.
say I have a method like this,
public String myBigMethod()
{
    String resultOne = moduleOneObject.someOperation();
    String resultTwo = moduleTwoObject.someOtherOperation(resultOne);
    return resultTwo;
}
(I have unit tests written for someOperation() and someOtherOperation() separately)
and this myBigMethod() kinda integrates ModuleOne and ModuleTwo by using them as above,
then, is the method "myBigMethod()" still considered as a "unit" ?
Should I be writing a test for this "myBigMethod()" ?
Say I have written a test for myBigMethod(). If testSomeOperation() fails, it would also cause testMyBigMethod() to fail. Now testMyBigMethod()'s failure might point to a not-so-correct location of the bug.
One cause making two tests fail doesn't look so good to me. But I don't know if there's any better way... Is there?
Thanks !
You want to test the logic of myBigMethod without testing the dependencies.
It looks like the specification of myBigMethod is:
Call moduleOneObject.someOperation
Pass the result into moduleTwoObject.someOtherOperation
Return the result
The key to testing just this behavior is to break the dependencies on moduleOneObject and moduleTwoObject. Typically this is done by passing the dependencies into the class under test in the constructor (constructor injection) or setting them via properties (setter injection).
The question isn't just academic because in practice moduleOneObject and moduleTwoObject could go out and hit external systems such as a database. A true unit test doesn't hit external systems as that would make it an "integration test".
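A minimal sketch of that idea using constructor injection with Mockito and JUnit 5 (BigService, ModuleOne, and ModuleTwo are hypothetical names standing in for the classes in the question):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class MyBigMethodTest {

    // Hypothetical collaborator interfaces matching the snippet in the question
    interface ModuleOne { String someOperation(); }
    interface ModuleTwo { String someOtherOperation(String input); }

    // Hypothetical class under test, with its dependencies supplied via the constructor
    static class BigService {
        private final ModuleOne moduleOneObject;
        private final ModuleTwo moduleTwoObject;

        BigService(ModuleOne moduleOneObject, ModuleTwo moduleTwoObject) {
            this.moduleOneObject = moduleOneObject;
            this.moduleTwoObject = moduleTwoObject;
        }

        public String myBigMethod() {
            String resultOne = moduleOneObject.someOperation();
            return moduleTwoObject.someOtherOperation(resultOne);
        }
    }

    @Test
    void passesResultOfModuleOneIntoModuleTwoAndReturnsIt() {
        ModuleOne moduleOne = mock(ModuleOne.class);
        ModuleTwo moduleTwo = mock(ModuleTwo.class);
        when(moduleOne.someOperation()).thenReturn("first");
        when(moduleTwo.someOtherOperation("first")).thenReturn("second");

        // Only the wiring logic of myBigMethod is exercised; the real modules are never touched
        assertEquals("second", new BigService(moduleOne, moduleTwo).myBigMethod());
    }
}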
The test for myBigMethod() should test the combination of the results of the other two methods called. So, yes it should fail if either of the methods it depends on fails, but it should be testing more. There should be some case where someOperation() and someOtherOperation() work correctly, but myBigMethod() can still fail. If that's not possible, then there's no need to test myBigMethod().
At my company we are writing a bunch of unit tests. What we'd like is for the unit tests to execute and, whenever one succeeds or fails, write that result somewhere at the end of the test, but we don't want to put that logic in every test.
Any idea how we could just write tests without having to surround the content of the test with the try catch logic that we've been using?
I'm guessing you do something like this:
[Test]
public void FailBecauseOfException()
{
    try
    {
        throw new Exception();
    }
    catch (Exception e)
    {
        Assert.Fail(e.Message);
    }
}
There is no need for this. The tests will fail automatically if they throw an exception. For example, the following test will show up as a failure:
[Test]
public void FailBecauseOfException()
{
    throw new Exception();
}
I'm not entirely sure what you are trying to do here. Are you saying you are wrapping it in a try/catch so that you can catch when an exception occurs and log this?
If so, then a better way, probably, is just to get NUnit to write an output file and use this. I haven't used NUnit for about a year, but IIRC you can redirect its output to any file you like using the /out directive.
If there is a reason why you have to log it the way you say, then you'll either have to add your custom code to each test, or have a common "runner" that takes your code (for each test) as an anonymous method and runs it inside a single try..catch. That would prevent you having to repeat the try..catch for every test.
Apologies if I've misunderstood the question.
MSTest has TestCleanup, which runs after every test. In NUnit, the attribute to use is TearDown (after every test) or TestFixtureTearDown (after all the tests in the fixture have completed).
If you want something to run just in case a test passes, you could have a member variable shouldRunExtraMethod, which is initialized to false before each test and changed to true at the end of the test. Then, in the TearDown, you only execute it depending on this variable's value.
If your unit test method covers the scenario in which you expect exceptions to be thrown, use the ExpectedException attribute. There's a post here on SO about using that attribute.
Expect exceptions in nUnit...
NUnit assert statements all take an optional message that is printed when the assertion fails.
Although if you'd like to have it write out something at the end of each test, you can set that up in the teardown of each method. Just set a string to what you want written inside the test itself, and during teardown (which happens after each test) it can do whatever you want with it.
I'm fairly certain teardown occurs even if an exception is thrown. That should do what you want.
The problem you have is that the NUnit Assert.* methods will throw an AssertionException whenever an assert fails - but it does nothing else. So it doesn't look like you can check anything outside of the unit test to verify whether the test failed or not.
The only alternative I can think of is to use AOP (Aspect Oriented Programming) with a tool such as PostSharp. This tool allows you to create aspects that can act on certain events. For example:
public class ExceptionDialogAttribute : OnExceptionAspect
{
    public override void OnException(MethodExecutionEventArgs eventArgs)
    {
        string message = eventArgs.Exception.Message;
        Window window = Window.GetWindow((DependencyObject) eventArgs.Instance);
        MessageBox.Show(window, message, "Exception");
        eventArgs.FlowBehavior = FlowBehavior.Continue;
    }
}
This aspect is code which runs whenever an exception is raised:
[ExceptionDialog]
[Test]
public void Test()
{
    Assert.AreEqual(2, 4);
}
Since the above test will raise an exception, the code in ExceptionDialogAttribute will run. You can get information about the method, such as its name, so that you can log it to a file.
It's been a long time since I used PostSharp, so it's worth checking out the examples and experimenting with it.