Unexpected behaviour of XCTestCase -- creates a different instance to run each test method - unit-testing

I am writing unit tests with XCTestCase and initializing variables in -setUp as follows:
- (void)setUp {
    [super setUp];
    // Put setup code here. This method is called before the invocation of each test method in the class.
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        _myPath = @"Path";
    });
}
But when I try to use _myPath in the test cases, it only works in the first one; in the subsequent tests _myPath is nil.
So I set a breakpoint in -setUp to see what happens, and I found that it creates a new instance for each test method!
To double-check, I created a new project and test target that logs the test case's address, as follows:
@implementation fooTestTests

- (void)setUp
{
    [super setUp];
    // Put setup code here. This method is called before the invocation of each test method in the class.
    NSLog(@"<fooTestTests: %p>", self);
}

- (void)tearDown
{
    // Put teardown code here. This method is called after the invocation of each test method in the class.
    [super tearDown];
}

- (void)testExample
{
    XCTFail(@"No implementation for \"%s\"", __PRETTY_FUNCTION__);
}

- (void)testExample2
{
    XCTFail(@"No implementation for \"%s\"", __PRETTY_FUNCTION__);
}

@end
And the result is:
fooTest[846:303] <fooTestTests: 0x1005ab750>
fooTest[846:303] <fooTestTests: 0x1005ab7f0>
Since an XCTestCase is designed as an object that has one or more test methods, it should not create a different instance for each method.
In this situation, I don't know where to set up my test environment. Even if I write the setup code in -init, it still creates a new instance and calls -init many times. Currently I have only a few unit tests, but as the number of tests grows and the setup procedure becomes more complex, this will become a problem. Could anyone give me a suggestion?
Added question summary:
If I have 2 test methods in one test case class, the behaviour is:
1. Instantiate a new test case as object 1
2. -setUp
3. test 1
4. -tearDown
5. Instantiate a new test case as object 2
6. -setUp
7. test 2
8. -tearDown
Why does it need step 5?
The Answer
The answer provided by Jon Reid
More information:
Testcase class at XUnit Patterns
Apple docs - Testing with Xcode - Flow of Test Execution

To run a single test, this is the regular pattern of xUnit test frameworks, including XCTest:
1. Instantiate a new test object
2. Invoke -setUp
3. Invoke the test method
4. Invoke -tearDown
That concludes a single test. The test object itself will be destroyed at some later time outside of our control.
So don't use -init or -dealloc with test objects. Instead, use -setUp and -tearDown, which are guaranteed to be called before and after each test. But each test does operate on a brand-new instance of the object. This helps keep tests isolated.
So get rid of your dispatch_once.
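Applied to the original example, that means dropping the dispatch_once and assigning the path unconditionally in -setUp. A minimal sketch, assuming _myPath is an instance variable of the test class:

#import <XCTest/XCTest.h>

@interface MyPathTests : XCTestCase {
    NSString *_myPath; // re-created for every test, because every test runs on a fresh instance
}
@end

@implementation MyPathTests

- (void)setUp {
    [super setUp];
    // No dispatch_once: this runs before each test, on that test's own instance.
    _myPath = @"Path";
}

- (void)testPathIsAvailable {
    XCTAssertEqualObjects(_myPath, @"Path");
}

@end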

Related

Mocks vs Stubs in PHPUnit

I know that stubs verify state and mocks verify behavior.
How can I create a mock in PHPUnit to verify the behavior of methods? PHPUnit does not have verification methods (verify()), and I do not know how to create a mock in PHPUnit.
The documentation explains well how to create a stub:
// Create a stub for the SomeClass class.
$stub = $this->createMock(SomeClass::class);
// Configure the stub.
$stub
->method('doSomething')
->willReturn('foo');
// Calling $stub->doSomething() will now return 'foo'.
$this->assertEquals('foo', $stub->doSomething());
But in this case I am verifying state, telling the double what answer to return.
What would an example look like that creates a mock and verifies behavior?
PHPUnit used to support two ways of creating test doubles out of the box: alongside the legacy PHPUnit mocking framework, we could also choose Prophecy.
Prophecy support was removed in PHPUnit 9, but it can be added back by installing phpspec/prophecy-phpunit.
PHPUnit Mocking Framework
The createMock method is used to create the three best-known test doubles. It's how you configure the object that makes it a dummy, a stub, or a mock.
You can also create test stubs with the mock builder (getMockBuilder returns the mock builder). It's just another way of doing the same thing that lets you tweak some additional mock options with a fluent interface (see the documentation for more).
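A minimal sketch of the mock-builder route, assuming the same SomeClass as above (onlyMethods() is the newer method name; older PHPUnit versions use setMethods()):

// Same result as createMock(), but with extra knobs available on the builder.
$stub = $this->getMockBuilder(SomeClass::class)
             ->disableOriginalConstructor()  // don't run SomeClass::__construct()
             ->onlyMethods(['doSomething'])  // only this method is replaced
             ->getMock();

$stub->method('doSomething')->willReturn('foo');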
Dummy
A dummy is passed around but never actually called, or if it is called it responds with a default answer (usually null). It mainly exists to satisfy a list of arguments.
$dummy = $this->createMock(SomeClass::class);
// SUT - System Under Test
$sut->action($dummy);
Stub
Stubs are used with query-like methods - methods that return things, but where it's not important whether they're actually called.
$stub = $this->createMock(SomeClass::class);
$stub->method('getSomething')
->willReturn('foo');
$sut->action($stub);
Mock
Mocks are used with command-like methods - it's important that they're called, and we don't care much about their return value (command methods usually don't return any value).
$mock = $this->createMock(SomeClass::class);
$mock->expects($this->once())
->method('doSomething')
->with('bar');
$sut->action($mock);
Expectations will be verified automatically after your test method finishes executing. In the example above, the test will fail if doSomething wasn't called on SomeClass, or if it was called with arguments different from the ones you configured.
Spy
Not supported.
Prophecy
Prophecy can be used as an alternative to the legacy mocking framework (see the note above about installing phpspec/prophecy-phpunit for PHPUnit 9+). Again, it's the way you configure the object that makes it a specific type of test double.
Dummy
$dummy = $this->prophesize(SomeClass::class);
$sut->action($dummy->reveal());
Stub
$stub = $this->prophesize(SomeClass::class);
$stub->getSomething()->willReturn('foo');
$sut->action($stub->reveal());
Mock
$mock = $this->prophesize(SomeClass::class);
$mock->doSomething('bar')->shouldBeCalled();
$sut->action($mock->reveal());
Spy
$spy = $this->prophesize(SomeClass::class);
// execute the action on system under test
$sut->action($spy->reveal());
// verify expectations after
$spy->doSomething('bar')->shouldHaveBeenCalled();
Dummies
First, look at dummies. A dummy object is both what I look like if you ask me to remember where I left the car keys... and also the object you get if you add an argument with a type-hint in phpspec to get a test double... then do absolutely nothing with it. So if we get a test double and add no behavior and make no assertions on its methods, it's called a "dummy object".
Oh, and inside of their documentation, you'll see things like $prophecy->reveal(). That's a detail that we don't need to worry about because phpspec takes care of that for us. Score!
Stubs
As soon as you start controlling even one return value of even one method... boom! This object is suddenly known as a stub. From the docs: "a stub is an object double" - all of these things are known as test doubles, or object doubles - that when put in a specific environment, behaves in a specific way. That's a fancy way of saying: as soon as we add one of these willReturn() things, it becomes a stub.
And actually, most of the documentation is spent talking about stubs and the different ways to control exactly how it behaves, including the Argument wildcarding that we saw earlier.
Mocks
If you keep reading down, the next thing you'll find are "mocks". An object becomes a mock when you call shouldBeCalled(). So, if you want to add an assertion that a method is called a certain number of times and you want to put that assertion before the actual code - using shouldBeCalledTimes() or shouldBeCalled() - congratulations! Your object is now known as a mock.
Spies
And finally, at the bottom, we have spies. A spy is the exact same thing as a mock, except it's when you add the expectation after the code - like with shouldHaveBeenCalledTimes().
https://symfonycasts.com/screencast/phpspec/doubles-dummies-mocks-spies
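For comparison, here is a compact phpspec-flavoured sketch of all four roles in one example method. The Mailer collaborator and the Newsletter class under spec are hypothetical, and the mock and spy expectations are shown together only to contrast the before/after placement:

use PhpSpec\ObjectBehavior;

class NewsletterSpec extends ObjectBehavior
{
    function it_sends_itself(Mailer $mailer)
    {
        // Dummy: do nothing with $mailer - it only fills the argument slot.

        // Stub: control a return value.
        $mailer->isConfigured()->willReturn(true);

        // Mock: expectation declared *before* the action.
        $mailer->send('hi@example.com')->shouldBeCalled();

        $this->sendTo('hi@example.com', $mailer);

        // Spy: expectation checked *after* the action.
        $mailer->send('hi@example.com')->shouldHaveBeenCalled();
    }
}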

Accessing test info in Google Test parameterized test case (`TEST_P`)

When I create a regular TEST (or TEST_F), I can access the test information stored in test_info_, like:
TEST_F(MyTestSuite, TestCaseOne)
{
    // ...
    test_info_->name(); // will return "TestCaseOne"
}
I would like to access this kind of information when I use the parameterized (TEST_P) variant, which lets me define value-parameterized tests on a fixture.
Looking under the hood, I can see that TEST_P works quite differently from its cousins TEST and TEST_F, as it registers the new test case by calling the ::testing::UnitTest::GetInstance()->parameterized_test_registry().GetTestCasePatternHolder<test_case_name>(#test_case_name, __FILE__, __LINE__)->AddTestPattern(...) method. I understand that the class inheriting from TestWithParam will then run all the registered TEST_P test cases.
Is there a way to access (either at runtime or at compile time) the name (a string) of a TEST_P?
There's actually a getter for the TestInfo instance. From the documentation:
To obtain a TestInfo object for the currently running test, call
current_test_info() on the UnitTest singleton object:
// Gets information about the currently running test.
// Do NOT delete the returned object - it's managed by the UnitTest class.
const ::testing::TestInfo* const test_info =
    ::testing::UnitTest::GetInstance()->current_test_info();

printf("We are in test %s of test case %s.\n",
       test_info->name(), test_info->test_case_name());
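This works inside a TEST_P body as well; for a parameterized test the reported names include the instantiation prefix and the parameter index. A minimal sketch (the suite name, test name, and values are hypothetical; newer googletest spells the last macro INSTANTIATE_TEST_SUITE_P):

#include <cstdio>
#include <gtest/gtest.h>

class MyParamSuite : public ::testing::TestWithParam<int> {};

TEST_P(MyParamSuite, PrintsItsOwnName)
{
    const ::testing::TestInfo* const info =
        ::testing::UnitTest::GetInstance()->current_test_info();
    // Prints e.g. "Ints/MyParamSuite.PrintsItsOwnName/0" for the first parameter.
    printf("%s.%s (param = %d)\n",
           info->test_case_name(), info->name(), GetParam());
}

INSTANTIATE_TEST_CASE_P(Ints, MyParamSuite, ::testing::Values(1, 2, 3));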

OCMock and AVCaptureDeviceInput

I'm in the process of updating our test suite from OCMock 2 to OCMock 3 and am running into a few issues.
One of the issues is that we're trying to mock AVCaptureDeviceInput.
For one of the tests we want to return a mocked AVCaptureDeviceInput instance when a class method is called on AVCaptureDeviceInput.
In our setup method:
self.mockAVCaptureDeviceInputClass = [OCMockObject mockForClass:[AVCaptureDeviceInput class]];
In our test:
id deviceInput = [OCMockObject mockForClass:[AVCaptureDeviceInput class]];
[[[[self.mockAVCaptureDeviceInputClass stub] classMethod] andReturn:deviceInput]
    deviceInputWithDevice:mockDevice error:((NSError __autoreleasing **)[OCMArg setTo:nil])];
The issue seems to be that deviceInput overrides self.mockAVCaptureDeviceInputClass, so that when the class method is stubbed, the stub does not do anything.
An alternative I tried to work around this was to create a mock for an instance of AVCaptureDeviceInput, but that just hangs:
[OCMockObject partialMockForObject: [AVCaptureDeviceInput new]];
with the following stack trace:
0x000000010938a219 in _object_set_associative_reference ()
0x0000000108aed5c3 in OCMSetAssociatedMockForClass at /Users/otusweb/Desktop/dfsa/Pods/OCMock/Source/OCMock/OCMFunctions.m:226
0x00000001144ecce2 in -[OCClassMockObject prepareClassForClassMethodMocking] at /Users/otusweb/Desktop/dfsa/Pods/OCMock/Source/OCMock/OCClassMockObject.m:89
0x00000001144ec934 in -[OCClassMockObject initWithClass:] at /Users/otusweb/Desktop/dfsa/Pods/OCMock/Source/OCMock/OCClassMockObject.m:31
0x00000001144f47f6 in -[OCPartialMockObject initWithObject:] at /Users/otusweb/Desktop/dfsa/Pods/OCMock/Source/OCMock/OCPartialMockObject.m:33
0x00000001144f1cdd in +[OCMockObject partialMockForObject:] at /Users/otusweb/Desktop/dfsa/Pods/OCMock/Source/OCMock/OCMockObject.m:58
0x00000001144e9abe in -[dfsaTests testExample] at /Users/otusweb/Desktop/dfsa/dfsaTests/dfsaTests.m:33
You are running into a common issue: only one mock object can mock class methods for a given class. This is documented in the limitations section (http://ocmock.org/reference/#limitations). Currently the last mock created "wins".
What happens in your case is that you set up the first mock in your setup method (self.mockAVCaptureDeviceInputClass) but then you create a second mock for the same class in your test (deviceInput). At this point only the latter one can stub class methods on AVCaptureDeviceInput.
This problem has become so common that I have decided to add a warning to OCMock. I'm thinking of having the mock object print a warning when it still has active stubs at the point where it gets deactivated for class method stubbing. FWIW, there is some investigation under way to see whether it's possible to have more than one mock object mock class methods on the same class (https://github.com/erikdoe/ocmock/issues/173), but that's not trivial.
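One way to work within the one-mock-per-class limitation is to let a single class mock serve both purposes: it receives the class-method stub and is also the instance the stub returns. A sketch under that assumption (mockDevice is the device double from the original test; this is just one possible workaround, not the only one):

// A single class mock for AVCaptureDeviceInput: the last (and only) one created,
// so its class-method stub stays active.
id deviceInput = [OCMockObject mockForClass:[AVCaptureDeviceInput class]];
[[[[deviceInput stub] classMethod] andReturn:deviceInput]
    deviceInputWithDevice:mockDevice error:((NSError __autoreleasing **)[OCMArg setTo:nil])];

// Code under test that calls +deviceInputWithDevice:error: now gets deviceInput back,
// and instance-level expectations can be set on deviceInput as usual.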

Unit testing code that uses a Singleton

I have code that uses a Singleton (this happens to be for a website, so the Singleton has the scope of just one Request; each Request has its own singleton).
My code (which happens to be in an HttpModule) does the following:
1 - Checks if the Singleton object is null and, if so, initializes it.
2 - Updates a property on this singleton object along the lines of:
if (A)
{
    SingletonHolder.Current.X = Y;
}
else
{
    SingletonHolder.Current.X = Z;
}
I then want to have some unit tests around this method to check that logic is correct. Let's say for argument's sake that we want the following 4 tests:
GivenMethodCall_WhenA_ThenXSetToY
GivenMethodCall_WhenA_ThenXNotSetToZ
GivenMethodCall_WhenNotA_ThenXSetToZ
GivenMethodCall_WhenNotA_ThenXNotSetToY
These tests all pass when run one at a time, but when they are run in VS2013 with the NUnit test runner we get some failures, because the tests run in parallel and the method under test updates the same singleton object's property with different values.
Any advice on a pattern that would solve this?
Thanks
Griff
You probably just need to provide a method within your test fixture decorated with the SetUpAttribute to perform initial setup before each test method is run, and another method decorated with the TearDownAttribute that runs after each test method. The documentation for the NUnit SetUpAttribute is found here.
This allows you to initialize your Singleton object in your SetUp method, and then set it to null in your TearDown method so that SetUp can re-instantiate/initialize the object for the next test.
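A minimal sketch of that shape; SingletonHolder.Reset(), MyHttpModule, and Handle() are hypothetical names standing in for whatever your code exposes, and X/Y are the placeholders from the question:

using NUnit.Framework;

[TestFixture]
public class HttpModuleTests
{
    [SetUp]
    public void ResetSingleton()
    {
        // Give every test a fresh singleton (hypothetical reset hook).
        SingletonHolder.Reset();
    }

    [TearDown]
    public void ClearSingleton()
    {
        // Leave nothing behind for the next test.
        SingletonHolder.Reset();
    }

    [Test]
    public void GivenMethodCall_WhenA_ThenXSetToY()
    {
        var sut = new MyHttpModule();   // hypothetical system under test
        sut.Handle(a: true);            // hypothetical call that runs the if/else above
        Assert.That(SingletonHolder.Current.X, Is.EqualTo(Y)); // Y as in the question's pseudo-code
    }
}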

How do I ignore a test based on another test in NUnit?

I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it is misleading when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, such that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of Theories (thanks clairestreb). In the NUnit.Framework namespace there is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
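A sketch of that suggestion; IDataStore and the in-memory fake are hypothetical names, and the point is that GetTest never depends on a live connection:

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical seam: the code under test depends on this instead of a real database.
public interface IDataStore
{
    void Add(string key, string value);
    string Get(string key);
}

// Hand-rolled in-memory fake: no connection to drop, so Get can safely rely on Add.
public class InMemoryDataStore : IDataStore
{
    private readonly Dictionary<string, string> items = new Dictionary<string, string>();

    public void Add(string key, string value) => items[key] = value;
    public string Get(string key) => items[key];
}

[TestFixture]
public class GetTests
{
    [Test]
    public void GetReturnsWhatWasAdded()
    {
        IDataStore store = new InMemoryDataStore();
        store.Add("some-key", "some-data");
        Assert.That(store.Get("some-key"), Is.EqualTo("some-data"));
    }
}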
I don't think that's possible out of the box.
In any case, the test class design you described will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test-class e.g. AddTests, and put the Get test(s) into another test-class, e.g. class GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
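A sketch of that guard; TestFixtureSetUp is the pre-NUnit-3 name (NUnit 3 calls it OneTimeSetUp), and the connection probe is a hypothetical placeholder:

using NUnit.Framework;

[TestFixture]
public class GetTests
{
    [TestFixtureSetUp]   // [OneTimeSetUp] in NUnit 3+
    public void CheckPrerequisites()
    {
        // Hypothetical probe: verify the database is reachable before running any Get tests.
        if (!Database.CanConnect())
            Assert.Ignore("Skipping GetTests: no database connection available.");
    }

    [Test]
    public void GetReturnsWhatWasAdded()
    {
        // The actual Get tests only run if the fixture setup above didn't bail out.
    }
}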
Create a shared flag that records whether Add failed (set it in a catch block around the Add test's body), and return early from the Get test if it is set:
public boolean addFailed = false;

public void testAdd () {
    try {
        ... old test code ...
    } catch (Throwable t) { // Catch all errors
        addFailed = true;
        throw t; // Don't forget to rethrow
    }
}

public void testGet () {
    if (addFailed) return;
    ... old test code ...
}