Wrong evaluation of function calls in a death test - C++

I'm writing tests using gtest and gmock. Most of my test cases are supposed to crash with a custom assert (which I mock).
Here's the trouble: although the assert is triggered correctly, I have plenty of problems with the expected calls.
The following code shows the steps I went through to make it work (because yes, this part works):
class MyTestedObjectDeathTest : public testing::Test {
public:
    static MockObject * myMockedObject;
};

void assertFailure() {
    exit(1);
}

TEST_F(MyTestedObjectDeathTest, nullInputConstructors) {
    MockAssertHandler assertHandler;
    EXPECT_CALL(assertHandler, failure(_, _, _, _))
        .Times(1)
        .WillRepeatedly(InvokeWithoutArgs(assertFailure));
    setHandler(assertHandler);
    testing::Mock::AllowLeak(myMockedObject);
    testing::Mock::AllowLeak(&assertHandler);
    EXPECT_DEATH(new MyTestedObject(NULL, NULL, 0), ".*");
}
MyTestedObject's constructor starts by checking whether its arguments are NULL, and it's supposed to trigger an assert if at least one of them is. But the test fails because failure is 'never called'. Debugging reveals that it is called.
Then I tried commenting out the Times part, just to be sure the problem came from there and that it was the only issue. That works, but it isn't adequate: I want to be sure the program dies from my assert. Since EXPECT_CALL expectations are evaluated when the mock object is destroyed, I guessed the exit call was messing everything up, so I tried the following, which works:
void testHelper() {
    MockAssertHandler assertHandler;
    EXPECT_CALL(assertHandler, failure(_, _, _, _))
        .Times(1)
        .WillRepeatedly(InvokeWithoutArgs(assertFailure));
    setHandler(assertHandler);
    testing::Mock::AllowLeak(MyTestedObjectDeathTest::myMockedObject);
    testing::Mock::AllowLeak(&assertHandler);
    new MyTestedObject(NULL, NULL, 0);
}

TEST_F(MyTestedObjectDeathTest, nullInputConstructors) {
    EXPECT_DEATH(testHelper(), ".*");
}
Now I'd like to be sure no calls are made to other functions.
I tried wrapping myMockedObject in a StrictMock and adding EXPECT_CALL(...).Times(0), but I ran into the same pattern as at first: the exit call seems to block all EXPECT_CALL evaluation.
Any hint/workaround? :)
EDIT: Forgot to mention: the execution environment is Windows 7 with Visual Studio 2008.

Google Test's wiki explains this:
Since statement runs in the child process, any in-memory side effect
(e.g. modifying a variable, releasing memory, etc) it causes will not
be observable in the parent process.
That includes Google Mock's tracking of calls made in the death test statement. In short, Google Mock and death tests don't mix.
My advice here is to separate those tests. Use Google Mock to verify that a failure handler is invoked and use death tests to verify that your failure handler does indeed do what it's supposed to (terminate the program, print specified output, etc.)
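For example, here is a minimal sketch of that separation, reusing names from the question (MockAssertHandler, setHandler, and assertFailure are assumed from the original code):

// 1) Plain Google Mock test, no death test: the call is recorded in
//    this process, so the expectation can actually be verified.
TEST(FailureHandlerTest, nullInputInvokesHandler) {
    MockAssertHandler assertHandler;
    // The mocked handler just records the call instead of exiting,
    // so the code after the failed assert must survive being run.
    EXPECT_CALL(assertHandler, failure(_, _, _, _)).Times(1);
    setHandler(assertHandler);
    new MyTestedObject(NULL, NULL, 0);  // leaked deliberately, as in the question
}

// 2) Death test: the real handler does terminate the program.
TEST(AssertFailureDeathTest, handlerTerminatesProcess) {
    EXPECT_DEATH(assertFailure(), ".*");
}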

Related

How to verify the number of invocations in Project Reactor with retryWhen

I have the following function:

public Mono<Integer> revertChange() {
    return someService.someMethod()
        .retryWhen(/* 3 times, with 150 ms of delay, if a specific error occurred */)
        .doOnError(e -> log_the_error);
}
And I have a simple unit test that is supposed to verify that someService.someMethod was called exactly 3 times:

class Test {
    @InjectMocks
    SomeService someService;

    @Test
    void shouldCallSomeServiceExactlyThreeTimes_whenErrorOccurred() {
        verify(someService).someMethod(3); // someMethod invoked 3 times
    }
}

The problem is that the verify call does not catch that someMethod was executed 3 times; it says only 1. I am using JUnit 5 and JMockit; maybe there are better alternatives specific to reactive mocks. Any ideas?
If the verification does not catch multiple executions of the method, specify the expected count explicitly:

Mockito.verify(someService, Mockito.times(3)).someMethod();
From https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/Mockito.html :
Arguments passed are compared using equals() method. Read about ArgumentCaptor or ArgumentMatcher to find out other ways of matching / asserting arguments passed.
Although it is possible to verify a stubbed invocation, usually it's just redundant. Let's say you've stubbed foo.bar(). If your code cares what foo.bar() returns then something else breaks (often before even verify() gets executed). If your code doesn't care what foo.bar() returns then it should not be stubbed.
Check also https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/Mockito.html#4
For verification with a timeout, check https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/Mockito.html#verification_timeout
This snippet passes as soon as someMethod() has been called 2 times within 150 ms:

Mockito.verify(someService, Mockito.timeout(150).times(2)).someMethod();
After careful investigation of the problem and a deep dive into Project Reactor internals, the solution looks pretty simple: the call needs to be wrapped with Mono.defer(() -> someService.someMethod()), so that a fresh Mono is created each time the retry resubscribes, instead of the same already-resolved one being reused.
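A minimal sketch of how the fixed chain can be verified with Mockito and reactor-test (SomeService is assumed from the question, and Retry.fixedDelay stands in for the unspecified retry policy; note that 3 retries means 4 invocations in total):

import java.time.Duration;

import org.junit.jupiter.api.Test;
import org.mockito.Mockito;
import reactor.core.publisher.Mono;
import reactor.test.StepVerifier;
import reactor.util.retry.Retry;

class RevertChangeTest {

    interface SomeService {
        Mono<Integer> someMethod();
    }

    @Test
    void shouldRetryThreeTimes_whenErrorOccurs() {
        SomeService someService = Mockito.mock(SomeService.class);
        Mockito.when(someService.someMethod())
               .thenReturn(Mono.error(new IllegalStateException("boom")));

        // Mono.defer makes every retry attempt call someMethod() again,
        // producing a fresh Mono, so the mock records each invocation.
        Mono<Integer> revert = Mono.defer(someService::someMethod)
                .retryWhen(Retry.fixedDelay(3, Duration.ofMillis(150)));

        StepVerifier.create(revert).expectError().verify();

        // 1 original subscription + 3 retries = 4 invocations
        Mockito.verify(someService, Mockito.times(4)).someMethod();
    }
}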

Conflicting results when unit testing MVC controller

I'm writing unit tests (using NUnit & Moq) for my MVC 2 controllers, and am following examples in the Pro ASP.NET MVC 2 Framework book by Steven Sanderson (great book, btw). However, I've run into problems, which I think are just due to my lack of understanding of NUnit.
Here's an excerpt, with the irrelevant parts removed:
[Test]
public void Cannot_Save_Invalid_Event()
{
    ...
    repository.Setup(x => x.SaveEvent(evt)).Callback(Assert.Fail);
    ...
    repository.Verify(x => x.SaveEvent(evt));
}
This test is passing for me, although from what I understand, those two statements should directly conflict with each other. The second one wasn't there originally, but I put it in to verify that it was passing for the right reasons.
From what I understand, my repository is set up to fail if "repository.SaveEvent(evt)" is called. However, later in the test, I try to verify that "repository.SaveEvent(evt)" was called. Since it passes, doesn't this mean that it was both called, and not called? Perhaps those statements don't act as I suspect they do.
Can someone explain how these two statements are not opposites, and how they can both exist and the test still pass?
Maybe your test doesn't fail because it has a catch-everything block that also hides the assertion/verification exception that is necessary for the test to fail.
Note: the following unit test will always pass:
[Test]
public void HidingAssertionFailure()
{
    try {
        Assert.AreEqual(0, 1); // this should fail
    } catch (Exception ex) {
        // this will hide the assertion failure
    }
}
The reason for this behavior was that it was running SaveEvent(); however, since the mocked repository didn't define that action, it was throwing an exception in my controller, which my controller was catching.
So, it seems that the callback will only execute if control returns successfully.
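A sketch of how such a catch-all can swallow the failure (the controller below, its IEventRepository dependency, and the Event type are hypothetical):

using System;
using System.Web.Mvc;

public class EventsController : Controller
{
    private readonly IEventRepository repository; // hypothetical interface

    public EventsController(IEventRepository repository)
    {
        this.repository = repository;
    }

    public ActionResult Save(Event evt)
    {
        try
        {
            repository.SaveEvent(evt);
            return RedirectToAction("Index");
        }
        catch (Exception)
        {
            // catch-everything: swallows the AssertionException thrown by
            // the mock's Callback(Assert.Fail), so the test still passes
            return View("Error");
        }
    }
}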

How do I ignore a test based on another test in NUnit?

I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks deceiving when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, such that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations before the tests for round-tripping data from the UI.
Note: This is a little different from having tests depend on each other; it's more like ensuring that something works before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.Size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that, rather than connecting to a real database during the test, your code use some sort of data interface, and that the test use a mock implementation of it, as sketched below. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
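A rough sketch of that idea with Moq (IDataStore and its members are hypothetical names):

using Moq;
using NUnit.Framework;

public interface IDataStore
{
    void Add(string item);
    string Get(int id);
}

[TestFixture]
public class GetTests
{
    [Test]
    public void GetWorksWithoutRealDatabase()
    {
        var store = new Mock<IDataStore>();
        store.Setup(s => s.Get(42)).Returns("someData");

        // The code under test receives store.Object; no real database
        // connection is involved, so a broken Add() or a dead connection
        // can't cascade into this test.
        Assert.AreEqual("someData", store.Object.Get(42));
    }
}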
I don't think that's possible out of the box.
Anyway, your test class design as you described it will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test class, e.g. AddTests, and put the Get test(s) into another test class, e.g. GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
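A minimal sketch of that arrangement (GetTests and the Database.CanConnect probe are hypothetical names):

[TestFixture]
public class GetTests
{
    [TestFixtureSetUp]
    public void CheckPrerequisites()
    {
        // Hypothetical probe: verify database access works at all
        // before any Get test runs.
        if (!Database.CanConnect())
            Assert.Ignore("Skipping Get tests: no database access.");
    }

    [Test]
    public void GetReturnsStoredData()
    {
        // ... tests that assume a working Add() ...
    }
}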
Create a shared flag that the Add test sets when it fails, and bail out of the Get test if it is set (the catch block below rethrows, so the Add failure is still reported):

private bool addFailed = false;

[Test]
public void TestAdd()
{
    try
    {
        // ... old test code ...
    }
    catch (Exception)
    {
        addFailed = true;
        throw; // don't forget to rethrow
    }
}

[Test]
public void TestGet()
{
    if (addFailed) return;
    // ... old test code ...
}

How do I write NUnit unit tests without having to surround them with try/catch statements?

At my company we are writing a bunch of unit tests. What we'd like is for each unit test to execute and, at the end of the test, write somewhere whether it succeeded or failed, but we don't want to put that logic in every test.
Any idea how we could write tests without having to surround the content of each test with the try/catch logic we've been using?
I'm guessing you do something like this:
[Test]
public void FailBecauseOfException()
{
    try
    {
        throw new Exception();
    }
    catch (Exception e)
    {
        Assert.Fail(e.Message);
    }
}
There is no need for this. The tests will fail automatically if they throw an exception. For example, the following test will show up as a failure:
[Test]
public void FailBecauseOfException()
{
    throw new Exception();
}
I'm not entirely sure what you are trying to do here. Are you saying you are wrapping it in a try/catch so that you can catch when an exception occurs and log this?
If so, then a better way, probably, is just to get NUnit to write an output file and use this. I haven't used NUnit for about a year, but IIRC you can redirect its output to any file you like using the /out directive.
If there is a reason why you have to log it the way you say, then you'll either have to add your custom code to each test, or have a common "runner" that takes your code (for each test) as an anonymous method and runs it inside a single try..catch, as sketched below. That would save you from repeating the try..catch in every test.
Apologies if I've misunderstood the question.
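A rough sketch of that runner idea (the helper name and log file are illustrative, not an NUnit facility):

using System;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class LoggedTests
{
    // Each test passes its body as a delegate, so the pass/fail
    // logging lives in one place instead of in every test.
    private static void RunLogged(string testName, Action testBody)
    {
        try
        {
            testBody();
            File.AppendAllText("results.log", testName + ": PASS\n");
        }
        catch (Exception)
        {
            File.AppendAllText("results.log", testName + ": FAIL\n");
            throw; // rethrow so NUnit still reports the failure
        }
    }

    [Test]
    public void SomeTest()
    {
        RunLogged("SomeTest", delegate
        {
            Assert.AreEqual(4, 2 + 2);
        });
    }
}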
MSTest has TestCleanup, which runs after every test. In NUnit, the attribute to use is TearDown (which runs after every test) or TestFixtureTearDown (which runs after all the tests have completed).
If you want something to run only when a test passes, you could have a member variable shouldRunExtraMethod, initialized to false before each test and set to true in the last line of the test. In TearDown, you then decide what to execute based on that variable's value.
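A minimal sketch of that pattern (all names are illustrative):

using System;
using NUnit.Framework;

[TestFixture]
public class LoggingTests
{
    private bool testPassed;

    [SetUp]
    public void Init()
    {
        testPassed = false;
    }

    [Test]
    public void SomeTest()
    {
        Assert.AreEqual(4, 2 + 2);
        testPassed = true; // last line: only reached if nothing failed
    }

    [TearDown]
    public void LogResult()
    {
        // runs after every test, even when an assert threw
        Console.WriteLine("SomeTest " + (testPassed ? "passed" : "failed"));
    }
}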
If your unit test method covers the scenario in which you expect exceptions to be thrown, use the ExpectedException attribute. There's a post here on SO about using that attribute.
Expect exceptions in nUnit...
NUnit assert statements all have an option to print a message when a test fails.
If you'd like something written out at the end of each test, you can set it up in the teardown of each method. Just set the string to what you want written inside the test itself, and during teardown (which happens after each test) it can do whatever you want with it.
I'm fairly certain teardown occurs even if an exception is thrown. That should do what you're wanting.
The problem you have is that the NUnit Assert.* methods throw an AssertionException whenever an assert fails, but do nothing else. So it doesn't look like you can check anything outside of the unit test to verify whether the test failed or not.
The only alternative I can think of is to use AOP (Aspect Oriented Programming) with a tool such as PostSharp. This tool allows you to create aspects that can act on certain events. For example:
public class ExceptionDialogAttribute : OnExceptionAspect
{
    public override void OnException(MethodExecutionEventArgs eventArgs)
    {
        string message = eventArgs.Exception.Message;
        Window window = Window.GetWindow((DependencyObject) eventArgs.Instance);
        MessageBox.Show(window, message, "Exception");
        eventArgs.FlowBehavior = FlowBehavior.Continue;
    }
}
This aspect is code which runs whenever an exception is raised:
[ExceptionDialog]
[Test]
public void Test()
{
    Assert.AreEqual(2, 4);
}
Since the above test will raise an exception, the code in ExceptionDialogAttribute will run. You can get information about the method, such as its name, so that you can log it to a file.
It's been a long time since I used PostSharp, so it's worth checking out the examples and experimenting with it.

Does a unit test have to have an assertion like assertEquals(...)?

I have a little JUnit test that exports an object to the file system. At first, my test looked like this:

public void exportTest() {
    // ...create a list with some objects to export...
    JAXBService service = new JAXBService();
    service.exportList(list, "output.xml");
}
Usually my tests contain an assertion like assertEquals(...), so I changed the code to the following:

public void exportCustomerListTest() throws Exception {
    // delete the old resulting file, so we can test for a new one at the end
    File file = new File("output.xml");
    file.delete();
    // ...create a list with some objects to export...
    JAXBService service = new JAXBService();
    service.exportCustomers(list, "output.xml");
    // Test if a file has been created and if it contains some bytes
    FileReader fis = new FileReader("output.xml");
    int firstByte = fis.read();
    assertTrue(firstByte != -1);
}
Do I need this, or was the first approach enough? I'm asking because the first one is really just "testing" that the code runs, not testing any results. Or can I rely on the "contract" that the test passes if the export method runs without an exception?
Thanks
Well, you're testing that your code runs to completion without any exceptions, but you're not testing anything about the output.
Why not keep a file with the expected output and compare it with the actual output? Note that this would probably be easier if you had an overload of exportCustomers which took a Writer: then you could pass in a StringWriter and write only to memory. You could test that overload thoroughly in several ways and keep just a single test for the overload that takes a filename, as that one would just create a FileOutputStream wrapped in an OutputStreamWriter and call the more thoroughly tested method. For it, you'd really just need to check that the right file exists.
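A sketch of that overload arrangement (Customer, list, and expectedXml are assumed placeholders from the question):

import java.io.*;
import java.util.List;

public class JAXBService {
    // Thoroughly tested overload: writes to any Writer.
    public void exportCustomers(List<Customer> list, Writer writer) {
        // ... JAXB marshalling ...
    }

    // Thin wrapper: opens the file and delegates.
    public void exportCustomers(List<Customer> list, String filename) throws IOException {
        Writer writer = new OutputStreamWriter(new FileOutputStream(filename), "UTF-8");
        try {
            exportCustomers(list, writer);
        } finally {
            writer.close();
        }
    }
}

The in-memory test then needs no file system at all:

@Test
public void exportWritesExpectedXml() throws Exception {
    StringWriter out = new StringWriter();
    new JAXBService().exportCustomers(list, out);
    assertEquals(expectedXml, out.toString());
}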
You could use:

assertTrue(new File("output.xml").exists());
If you notice problems during the generation of the file, you can unit test the generation process itself (rather than the fact that the file was correctly written to and reloaded from the file system).
You can either go with the "gold file" approach (testing that two files are identical, byte for byte) or test various outputs of your generator (I imagine the generation of the content is separate from the saving into the file).
I agree with the other posts. I will also add that your first test won't tell a test suite or test runner that this particular test has failed.
Sometimes a test only needs to demonstrate that no exceptions were thrown. In that case, relying on an exception to fail the test is good enough. There is certainly nothing gained in JUnit by calling the assertEquals method: a test passes when it doesn't throw an AssertionException, not because that method is called. Consider a method that allows null input; you might write a test like this:
@Test
public void testNullAllowed() {
    new CustomObject().methodThatAllowsNull(null);
}
That might be enough of a test right there (leave what it does with a null value to a separate test, or perhaps there is nothing interesting to test about it), although it is prudent to leave a comment saying that you didn't forget the assert, you left it out on purpose.
In your case, however, you haven't tested very much. Sure, it didn't blow up, but an empty method wouldn't blow up either. Your second test is better: at least you demonstrate that a non-empty file was created. But you can do better than that and check that a reasonable result was created.