I cannot think of a better heading for this.
In the following code, if rollBackLogger is nil, the first assertion would fail, but the ones after it would raise an exception (a panic) rather than failing cleanly.
Is there a way to avoid this, other than using an if statement?
I believe this is a very common situation in unit testing, and that there should be some function in assert, or some other way, to avoid it.
assert.NotNil(rollBackLogger)
assert.Equal("Action", rollBackLogger[0].Action)
assert.Equal("random path", rollBackLogger[0].FilePath)
Use require.NotNil instead.
Package require implements the same assertions as the assert package but stops test execution when a test fails.
require.NoError is also particularly useful.
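As an illustration, here is a minimal sketch of the original assertions rewritten with require (assuming the usual testify import paths and a *testing.T named t; buildRollbackLogger is a hypothetical helper that returns the value under test):

import (
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

func TestRollbackLogger(t *testing.T) {
    rollBackLogger := buildRollbackLogger() // hypothetical helper

    // require.NotNil aborts this test immediately when rollBackLogger is nil,
    // so the indexing below can never panic.
    require.NotNil(t, rollBackLogger)
    assert.Equal(t, "Action", rollBackLogger[0].Action)
    assert.Equal(t, "random path", rollBackLogger[0].FilePath)
}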
You can simply use t.FailNow() if you want the test to fail and stop as soon as the condition isn't valid.
I don't think there's a way to stop the test on an assert failure without using a condition or an external package.
if !assert.NotNil(rollBackLogger) {
    t.FailNow()
}
assert.Equal("Action", rollBackLogger[0].Action)
assert.Equal("random path", rollBackLogger[0].FilePath)
or, staying within the testify/assert package,
if !assert.NotNil(rollBackLogger) {
    assert.FailNow(t, "message")
}
assert.Equal("Action", rollBackLogger[0].Action)
assert.Equal("random path", rollBackLogger[0].FilePath)
I've found a lot of this sort of thing when refactoring our Jest test suites:
it('calls the API and throws an error', async () => {
  expect.assertions(2);
  try {
    await login('email', 'password');
  } catch (error) {
    expect(error.name).toEqual('Unauthorized');
    expect(error.status).toEqual(401);
  }
});
I believe the expect.assertions(2) line is redundant here, and can safely be removed, because we already await the async call to login().
Am I correct, or have I misunderstood how expect.assertions works?
expect.assertions is important when testing the error scenarios of asynchronous code, and is not redundant.
If you remove expect.assertions from your example you can't be confident that login did in fact throw the error.
it('calls the API and throws an error', async () => {
  try {
    await login('email', 'password');
  } catch (error) {
    expect(error.name).toEqual('Unauthorized');
    expect(error.status).toEqual(401);
  }
});
Let's say someone changes the behavior of login so that it throws an error based on some other logic, or someone changes the mock for this test so that login no longer throws. The assertions in the catch block won't run, but the test will still pass.
Using expect.assertions at the start of the test ensures that if the assertions inside the catch don't run, we get a failure.
This is from the Jest documentation:
expect.assertions(number) verifies that a certain number of assertions are called during a test. This is often useful when testing asynchronous code, in order to make sure that assertions in a callback actually got called.
So, to put it in other words, expect.assertions makes sure that the expected number of assertions is made by the end of the test.
It's good to use it, especially when writing new tests, so one can easily check that the correct assertions are made during the test. Async tests often pass because the intended assertions were not made before the test runner (Jest, Mocha, etc.) thought the test was finished.
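As a hedged illustration of that last point (fetchData here is a hypothetical async function, and the missing return/await is the bug being caught):

it('looks green but never actually asserts', () => {
  expect.assertions(1); // without this line the test passes even though no assertion ran
  fetchData().then(data => { // note: the promise is neither returned nor awaited
    expect(data).toEqual('peanut butter');
  });
});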
I think we are missing the obvious here.
expect.assertions(3) is simply saying:
"I expect 3 expect statements to be called before the test times out", e.g.
expect(actual1).toEqual(expected1);
expect(actual2).toEqual(expected2);
expect(actual3).toEqual(expected3);
This timing out business is the reason to use expect.assertions. It would be silly to use it in a purely synchronous test. At least one of the expect statements would be found in a subscribe block (or other async block) within the spec file.
To ensure that the assertions in the catch block of an async/await test are adequately tested, expect.assertions(n) must be declared as shown in your code snippet. Such a declaration is unnecessary for async/await tests without the catch block.
It seems quite unintuitive, but that is simply the way it is. Perhaps, for reasons deep within the JavaScript runtime, the test environment can detect when an awaited promise resolved successfully but cannot do the same for an awaited promise that was rejected. The creators of the test environment would know exactly why that is the case.
I have to admit that, apart from error testing, I find it hard to see a real use for expect.assertions. The above snippet can be changed to the following with the same guarantee, but I think it reads more naturally and doesn't require me to count how many times I call expect, which is especially error-prone if a test is complex:
it('calls the API and throws an error', async () => {
  try {
    await login('email', 'password');
    fail('must throw');
  } catch (error) {
    expect(error.name).toEqual('Unauthorized');
    expect(error.status).toEqual(401);
  }
});
Recently I noticed that my team follows two approaches to writing tests in Reactor. The first one uses the .block() method, and it looks something like this:
@Test
void set_entity_version() {
    Entity entity = entityRepo.findById(ID)
        .block();
    assertNotNull(entity);
    assertFalse(entity.isV2());

    entityService.setV2(ID)
        .block();

    entity = entityRepo.findById(ID)
        .block();
    assertNotNull(entity);
    assertTrue(entity.isV2());
}
And the second one uses StepVerifier, and it looks something like this:
@Test
void set_entity_version() {
    StepVerifier.create(entityRepo.findById(ID))
        .assertNext(entity -> {
            assertNotNull(entity);
            assertFalse(entity.isV2());
        })
        .verifyComplete();

    StepVerifier.create(entityService.setV2(ID)
            .then(entityRepo.findById(ID)))
        .assertNext(entity -> {
            assertNotNull(entity);
            assertTrue(entity.isV2());
        })
        .verifyComplete();
}
In my humble opinion, the second approach looks more reactive. Moreover, the official docs are very clear on this:
A StepVerifier provides a declarative way of creating a verifiable script for an async Publisher sequence, by expressing expectations about the events that will happen upon subscription.
Still, I'm really curious which way should be encouraged as the main road for testing in Reactor. Should the .block() method be abandoned completely, or could it be useful in some cases? If so, what are those cases?
Thanks!
You should use StepVerifier. It allows more options:
Verify that you expect n elements in a flux
Verify that the flux/mono completes
Verify that an error is expected
Verify that a sequence of n elements is expected, followed by an error (impossible to test with .block())
From the official doc:
public <T> Flux<T> appendBoomError(Flux<T> source) {
    return source.concatWith(Mono.error(new IllegalArgumentException("boom")));
}

@Test
public void testAppendBoomError() {
    Flux<String> source = Flux.just("thing1", "thing2");

    StepVerifier.create(
            appendBoomError(source))
        .expectNext("thing1")
        .expectNext("thing2")
        .expectErrorMessage("boom")
        .verify();
}
Create an initial context
Use virtual time to manipulate time, so that when you have something like Mono.delay(Duration.ofDays(1)) you don't have to wait 1 day for your test to complete (see the sketch after this list)
Expect that no events are emitted for a given duration...
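As a rough sketch of the virtual-time point (the delayed Mono is only an illustration; withVirtualTime and expectNoEvent come from reactor-test, and the usual Reactor/JUnit imports are assumed):

@Test
void delayed_value_completes_without_waiting_a_day() {
    StepVerifier.withVirtualTime(() -> Mono.delay(Duration.ofDays(1)))
        .expectSubscription()
        .expectNoEvent(Duration.ofDays(1)) // the virtual clock advances; no real waiting happens
        .expectNext(0L)                    // Mono.delay emits 0L once the delay elapses
        .verifyComplete();
}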
from https://medium.com/swlh/stepverifier-vs-block-in-reactor-ca754b12846b
There are pros and cons to both the block() and StepVerifier testing patterns. Hence, it is necessary to define a pattern or set of rules which can guide us on how to use StepVerifier and block().
In order to decide which pattern to use, we can try to answer the following questions, which will provide a clear expectation from the tests we are going to write:
Are we trying to test the reactive aspect of the code or just the output of the code?
In which of the patterns do we find clarity based on the 3 A's of testing, i.e. Arrange, Act, and Assert, in order to make the test understandable?
What are the limitations of the block() API compared to StepVerifier in testing reactive code? Which API is more fluent for writing tests in the case of exceptions?
If you try answering all of the questions above, you will find the answers to "what" and "where". So, just give it a thought before reading the following answers:
block() tests the output of the code and not the reactive aspect. In cases where we are concerned with testing the output of the code rather than its reactive aspect, we can use block() instead of StepVerifier, as it is easy to write and the tests are more readable.
The assertion library for a block() pattern is better organised in terms of the 3 A's pattern, i.e. Arrange, Act, and Assert, than StepVerifier. In StepVerifier, while testing a method call for a mock class, or even while testing a Mono output, one has to write expectations in the form of chained methods, unlike assert, which in my opinion decreases the readability of the tests. Also, if you forget to write the terminal step, i.e. verify(), in the case of StepVerifier, the code will not get executed and the test will go green. So the developer has to be very careful about calling verify() at the end of the chain.
There are some aspects of reactive code that cannot be tested by using the block() API. In such cases one should use StepVerifier: when testing a Flux of data, subscription delays, or subscriptions on different Schedulers, etc., the developer is bound to use StepVerifier.
To verify an exception by using the block() API, you need to use the assertThatThrownBy API in the assertions library, which catches the exception. With the use of an assertion API, the error message and the instance of the exception can be asserted. StepVerifier also provides assertions on exceptions via the expectError() API, and supports asserting the elements emitted before the error is thrown in a Flux, which cannot be achieved with block(). So, for asserting exceptions, StepVerifier is better than block(), as it can assert both Mono and Flux.
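For illustration, a minimal sketch of the two exception-testing styles described in that last point (assuming AssertJ's assertThatThrownBy is statically imported; the failing publisher is purely illustrative):

@Test
void exception_assertions_in_both_styles() {
    Mono<String> failing = Mono.error(new IllegalStateException("boom"));

    // block() style: the thrown exception is caught and asserted by the assertion library.
    assertThatThrownBy(failing::block)
        .isInstanceOf(IllegalStateException.class)
        .hasMessageContaining("boom");

    // StepVerifier style: elements emitted before the error can be asserted as well.
    StepVerifier.create(Flux.just("thing1").concatWith(failing))
        .expectNext("thing1")
        .expectErrorMessage("boom")
        .verify();
}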
I want to add a test case for functionality that is not yet implemented and mark this test case as "it's OK that I fail".
Is there a way to do this?
EDIT:
I want the test to be executed, and the framework should verify that it is failing as long as the test case is in the "expected fail" state.
EDIT2:
It seems that the feature I am interested in does not exist in google-test, but it does exist in the Boost Unit Test Framework, and in LIT.
EXPECT_NONFATAL_FAILURE is what you want to wrap around the code that you expect to fail. Note that you will have to include the gtest-spi.h header file:
#include "gtest-spi.h"
// ...
TEST_F( testclass, testname )
{
    EXPECT_NONFATAL_FAILURE(
        // your code here, or just call:
        ADD_FAILURE() // a non-fatal failure, which is what this macro expects (FAIL() is fatal)
        , "Some optional text that would be associated with"
          " the particular failure you were expecting, if you"
          " wanted to be sure to catch the correct failure mode" );
}
Link to docs: https://github.com/google/googletest/blob/955c7f837efad184ec63e771c42542d37545eaef/docs/advanced.md#catching-failures
You can prefix the test name with DISABLED_.
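For example (names are placeholders): googletest then skips the test and reports it as disabled, and --gtest_also_run_disabled_tests runs it anyway:

TEST_F( testclass, DISABLED_testname )
{
    FAIL() << "not implemented yet";
}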
I'm not aware of a direct way to do this, but you can fake it with something like this:
try {
    // do something that should fail and throw an exception
    ...
    EXPECT_TRUE(false); // this should not be reached!
} catch (...) {
    // return or print a message, etc.
}
Basically, the test will fail if it reaches the contradictory expectation.
It would be unusual to have a unit test in an expected-to-fail state. Unit tests can test for positive conditions ("expect x to equal 2") or negative conditions ("expect save to throw an exception if name is null"), and can be flagged not to run at all (if the feature is pending and you don't want the noise in your test output). But what you seem to be asking for is a way to negate a feature's test while you're working on it. This is against the tenets of Test Driven Development.
In TDD, what you should do is write tests that accurately describe what a feature should do. If that feature isn't written yet then, by definition, those tests will and should fail. Then you implement the feature until, one by one, all those tests pass. You want all the tests to start as failing and then move to passing. That's how you know when your feature is complete.
Think of how it would look if you were able to mark failing tests as passing as you suggest: all tests would pass and everything would look complete when the feature didn't work. Then, once you were done and the feature worked as expected, suddenly your tests would start to fail until you went in and unflagged them. Beyond being a strange way to work, this workflow would be very prone to error and false-positives.
I'm trying to test some C++ code with the NUnit framework, but I keep getting the following exception:
System.Runtime.InteropServices.SEHException : External Component has thrown an exception.
which is supposedly perfectly normal (I assume); anyway, I want to ignore it (i.e. use ExpectedException). This is my .h file:
[Test, Description("Tests if an Entity has been successfully Locked")]
void test_LockOperation();
and the .cpp file:
void TestDmObstacles::test_LockOperation()
{
    lockVal = DbtoDmObstaclesAdapter::lock( CmnGuid::parseString( L"3B6DB8F8-4BA7-DD11-B6A7-001E8CDE165C" ) );
    // When the lock is successful, lockVal is 0
    Assert::AreEqual(0, lockVal);
}
I want to use ExpectedException, but I don't know how to do it in C++. I tried the try/catch method as well, but it didn't work (I just put the assertion in the catch block).
PS: I can't use another framework; it has to be NUnit.
EDIT
Here is the try/catch approach I used
void TestDmObstacles::test_LockOperation()
{
    try
    {
        lockVal = DbtoDmObstaclesAdapter::lock( CmnGuid::parseString( L"3B6DB8F8-4BA7-DD11-B6A7-001E8CDE165C" ) );
    }
    catch (...)
    {
        //Assert::Fail();
        Assert::AreEqual(0, lockVal);
    }
}
Is the exception expected, or is the exception acceptable?
If it is expected, then your unit test framework should have some kind of API that allows you to state the expected exception, and to fail the test if it does not occur. A quick trawl through the documentation yields the incantation:
[ExpectedException( "System.ArgumentException" )]
(replace System.ArgumentException with the exception you're expecting.)
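In the declaration style already used in your .h file, that would look something like the following sketch (whether SEHException is really the exception you want to expect, rather than fix, is a separate question):

[Test, ExpectedException( "System.Runtime.InteropServices.SEHException" )]
void test_LockOperation();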
If the exception is merely acceptable, then I would say that either your code or your test is broken. A unit test is there to check that expected things happen. If some step in your test may or may not yield a particular result, then you are not testing a consistent view of the unit from test to test. Hence, you're not really testing it.
It might indicate, for example, that your code is leaking an unexpected exception that it should be handling instead.
Your code sample doesn't match what you are trying to achieve: if the exception is expected, then catching it is not supposed to fail the test.
Note that I wouldn't recommend (at all) having the test catch (...): any thrown exception will produce the same test result, which I doubt is what you want.
At my company we are writing a bunch of unit tests. What we'd like is for the unit tests to execute and, whenever one succeeds or fails, to write that result somewhere at the end of the test, but we don't want to put that logic in every test.
Any idea how we could just write tests without having to surround the content of each test with the try/catch logic we've been using?
I'm guessing you do something like this:
[Test]
public void FailBecauseOfException()
{
    try
    {
        throw new Exception();
    }
    catch (Exception e)
    {
        Assert.Fail(e.Message);
    }
}
There is no need for this. The tests will fail automatically if they throw an exception. For example, the following test will show up as a failure:
[Test]
public void FailBecauseOfException()
{
    throw new Exception();
}
I'm not entirely sure what you are trying to do here. Are you saying you are wrapping it in a try/catch so that you can catch when an exception occurs and log this?
If so, then a better way, probably, is just to get NUnit to write an output file and use this. I haven't used NUnit for about a year, but IIRC you can redirect its output to any file you like using the /out directive.
If there is a reason why you have to log it the way you say, then you'll either have to add your custom code to each test, or have a common "runner" that takes your code (for each test) as an anonymous method and runs it inside a single try..catch. That would prevent you having to repeat the try..catch for every test.
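A rough sketch of that "common runner" idea (RunLogged and Log are hypothetical helpers, not NUnit APIs):

private static void RunLogged(string testName, Action body)
{
    try
    {
        body();
        Log(testName + ": passed"); // hypothetical logging helper
    }
    catch (Exception e)
    {
        Log(testName + ": failed - " + e.Message);
        throw; // rethrow so NUnit still reports the failure
    }
}

[Test]
public void FailBecauseOfException()
{
    RunLogged("FailBecauseOfException", () => { throw new Exception(); });
}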
Apologies if I've misunderstood the question.
MSTest has TestCleanup, which runs after every test. In NUnit, the attribute to use is TearDown (after every test) or TestFixtureTearDown (after all the tests have completed). TearDown executes after the end of each test.
If you want something to run only when a test passes, you could have a member variable, say shouldRunExtraMethod, which is initialized to false before each test and is changed to true at the end of the test. In the TearDown you then act on it depending on that variable's value.
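A minimal sketch of that idea (testPassed plays the role of shouldRunExtraMethod; WriteResultSomewhere is a hypothetical logging helper, and TestContext.CurrentContext is only available in reasonably recent NUnit versions):

[TestFixture]
public class MyTests
{
    private bool testPassed;

    [SetUp]
    public void SetUp()
    {
        testPassed = false;
    }

    [Test]
    public void SomeTest()
    {
        Assert.AreEqual(2, 1 + 1);
        testPassed = true; // only reached if every assert above succeeded
    }

    [TearDown]
    public void TearDown()
    {
        WriteResultSomewhere(TestContext.CurrentContext.Test.Name, testPassed);
    }
}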
If your unit test method covers the scenario in which you expect exceptions to be thrown, use the ExpectedException attribute. There's a post here on SO about using that attribute.
Expect exceptions in nUnit...
NUnit assert statements all take an optional message that is printed when the assert fails.
Although, if you'd like to have it write out something somewhere at the end of each test, you can set that up in the teardown of the fixture. Just set the string to what you want written inside the test itself, and during teardown (which happens after each test) it can do whatever you want with it.
I'm fairly certain teardown occurs even if an exception is thrown. That should do what you're wanting.
The problem you have is that the NUnit Assert.* methods throw an AssertionException whenever an assert fails, but they do nothing else. So it doesn't look like you can check anything outside of the unit test to verify whether the test failed or not.
The only alternative I can think of is to use AOP (Aspect Oriented Programming) with a tool such as PostSharp. This tool allows you to create aspects that can act on certain events. For example:
public class ExceptionDialogAttribute : OnExceptionAspect
{
    public override void OnException(MethodExecutionEventArgs eventArgs)
    {
        string message = eventArgs.Exception.Message;
        Window window = Window.GetWindow((DependencyObject) eventArgs.Instance);
        MessageBox.Show(window, message, "Exception");
        eventArgs.FlowBehavior = FlowBehavior.Continue;
    }
}
This aspect is code which runs whenever an exception is raised:
[ExceptionDialog]
[Test]
public void Test()
{
    Assert.AreEqual(2, 4);
}
Since the above test will raise an exception, the code in ExceptionDialogAttribute will run. You can get information about the method, such as its name, so that you can log it to a file.
It's been a long time since I used PostSharp, so it's worth checking out the examples and experimenting with it.