I have a unit test (using typemock 5.4.5.0) that is testing a creation service. The creation service is passed in a validation service in its constructor. The validation service returns an object that has a boolean property (IsValid). In my unit test I am mocking the validation service call to return an instance that has IsValid set to true. The creation service has an if statement that checks the value of that property. When I run the unit test, the object returned from the validation service has its property set to true, but when the if statement is executed, it treats it as though it was false.
I can verify this by debugging the unit test. The object returned by the validation service does indeed have its IsValid property set to true, but it skips the body of my if statement entirely and goes to the End If.
Here is a link to the unit test itself - https://gist.github.com/1076372
Here is a link to the creation service function I am testing - https://gist.github.com/1076376
Does anyone know why the hell the IsValid property is true but is treated as if it were false?
P.S. I have also entered this issue in TypeMock's support system, but I think I will probably get a quicker response here!
First, if possible, I'd recommend upgrading to the latest version of Typemock Isolator that you're licensed for. Each version that comes out, even minor releases, contains fixes for interesting edge cases that sometimes make things work differently. I've found upgrading sometimes fixes things.
Next, I see this line in your unit test:
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool(new QuestionViewModel())).WillReturn(new Validation(true));
The red flag for me is the "new QuestionViewModel()" that's inside the "WhenCalled()" block.
Two good rules of thumb I always follow:
Don't put anything in the WhenCalled() that you don't want mocked.
If you don't care about the arguments, don't pass real arguments.
In this case, the first rule makes me think "I don't want the constructor for the QuestionViewModel mocked, so I shouldn't put it in there."
The second rule makes me ask whether the argument to the "ValidateNewQuestionForExistingQuestionPool" method is actually important. In this case, it's not, so I'd pass null rather than a real object. If there's an overload you're specifically looking at, cast the null first.
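For example, reusing the call from the test above (a sketch; the cast is only needed when you're disambiguating an overload):
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool((QuestionViewModel)null)).WillReturn(new Validation(true));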
Finally, sort of based on that first rule, I generally try not to inline my return values, either. That means I'd create the new Validation object before the Isolate call.
var validation = new Validation(true);
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool(null)).WillReturn(validation);
Try that, see how it runs. You might also watch in the Typemock Tracer utility to see what's getting set up expectation-wise when you run your test to ensure additional expectations aren't being set up that you're not... expecting.
I know that stubs verify state and mocks verify behavior.
How can I create a mock in PHPUnit to verify the behavior of a method? PHPUnit does not have explicit verification methods like verify(), and I do not know how to create a mock in PHPUnit.
The documentation explains well how to create a stub:
// Create a stub for the SomeClass class.
$stub = $this->createMock(SomeClass::class);
// Configure the stub.
$stub
->method('doSomething')
->willReturn('foo');
// Calling $stub->doSomething() will now return 'foo'.
$this->assertEquals('foo', $stub->doSomething());
But in this case I am verifying state, by asserting on the returned value.
What would an example look like that creates a mock and verifies behavior?
PHPUnit used to support two ways of creating test doubles out of the box: next to the legacy PHPUnit mocking framework, you could also choose Prophecy.
Prophecy support was removed in PHPUnit 9, but it can be added back by installing phpspec/prophecy-phpunit.
PHPUnit Mocking Framework
The createMock method is used to create the three best-known test doubles. It's how you configure the object that makes it a dummy, a stub, or a mock.
You can also create test stubs with the mock builder (getMockBuilder returns the mock builder). It's just another way of doing the same thing that lets you tweak some additional mock options with a fluent interface (see the documentation for more).
Dummy
A dummy is passed around but never actually called, or if it is called, it responds with a default answer (usually null). It mainly exists to satisfy a list of arguments.
$dummy = $this->createMock(SomeClass::class);
// SUT - System Under Test
$sut->action($dummy);
Stub
Stubs are used with query-like methods - methods that return things, where it's not important whether they're actually called.
$stub = $this->createMock(SomeClass::class);
$stub->method('getSomething')
->willReturn('foo');
$sut->action($stub);
Mock
Mocks are used with command-like methods - it's important that they're called, and we don't care much about their return value (command methods usually don't return any value).
$mock = $this->createMock(SomeClass::class);
$mock->expects($this->once())
->method('doSomething')
->with('bar');
$sut->action($mock);
Expectations are verified automatically after your test method finishes executing. In the example above, the test will fail if doSomething wasn't called on the mock, or if it was called with arguments different from the ones you configured.
Spy
Not supported.
Prophecy
With phpspec/prophecy-phpunit installed, Prophecy can be used as an alternative to the legacy mocking framework. Again, it's the way you configure the object that makes it a specific type of test double.
Dummy
$dummy = $this->prophesize(SomeClass::class);
$sut->action($dummy->reveal());
Stub
$stub = $this->prophesize(SomeClass::class);
$stub->getSomething()->willReturn('foo');
$sut->action($stub->reveal());
Mock
$mock = $this->prophesize(SomeClass::class);
$mock->doSomething('bar')->shouldBeCalled();
$sut->action($mock->reveal());
Spy
$spy = $this->prophesize(SomeClass::class);
// execute the action on system under test
$sut->action($spy->reveal());
// verify expectations after
$spy->doSomething('bar')->shouldHaveBeenCalled();
Dummies
First, look at dummies. A dummy object is both what I look like when you ask me to remember where I left the car keys... and also the object you get if you add a type-hinted argument in phpspec to get a test double... and then do absolutely nothing with it. So if we get a test double, add no behavior, and make no assertions on its methods, it's called a "dummy object".
Oh, and inside of their documentation, you'll see things like $prophecy->reveal(). That's a detail that we don't need to worry about because phpspec takes care of that for us. Score!
Stubs
As soon as you start controlling even one return value of even one method... boom! This object is suddenly known as a stub. From the docs: a stub is an object double that, when put in a specific environment, behaves in a specific way - all of these things are known as test doubles, or object doubles. That's a fancy way of saying: as soon as we add one of these willReturn() calls, it becomes a stub.
And actually, most of the documentation is spent talking about stubs and the different ways to control exactly how it behaves, including the Argument wildcarding that we saw earlier.
Mocks
If you keep reading down, the next thing you'll find are "mocks". An object becomes a mock when you call shouldBeCalled(). So, if you want to add an assertion that a method is called a certain number of times and you want to put that assertion before the actual code - using shouldBeCalledTimes() or shouldBeCalled() - congratulations! Your object is now known as a mock.
Spies
And finally, at the bottom, we have spies. A spy is the exact same thing as a mock, except it's when you add the expectation after the code - like with shouldHaveBeenCalledTimes().
https://symfonycasts.com/screencast/phpspec/doubles-dummies-mocks-spies
I am trying to understand what exactly Verify or VerifyAll does.
I was searching and found the info below on using Moq:
Arrange
  Mock
  Set up expectations for dependencies
  Set up expected results
  Create class under test
Act
  Invoke method under test
Assert
  Assert actual results match expected results
  Verify that expectations were met
So what exactly does Verify do? Can't I test everything using Assert, so that the unit test fails on any mismatch?
What extra work does Verify do? Is it a replacement for Assert?
Some more clarification would help me understand.
Thanks
Assert vs Mock.Verify
Assert is used to do checks on the class/object/subject under test (SUT).
Verify is used to check if the collaborators of the SUT were notified or contacted.
So if you are testing a car object, which has an engine as a collaborator/dependency.
You would use Assert to see if car.IsHumming after you invoke car.PushStart().
You would use Verify to see if _mockEngine.Ignition() received a call.
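Here's a minimal sketch of that car/engine example using Moq and NUnit. The Car class, IEngine interface, and all of their members are hypothetical, invented just to illustrate the distinction:
using Moq;
using NUnit.Framework;

public interface IEngine
{
    void Ignition();
}

public class Car
{
    private readonly IEngine _engine;

    public Car(IEngine engine) { _engine = engine; }

    public bool IsHumming { get; private set; }

    public void PushStart()
    {
        _engine.Ignition(); // contact the collaborator
        IsHumming = true;   // change the SUT's own state
    }
}

[TestFixture]
public class CarTests
{
    [Test]
    public void PushStart_HumsAndFiresIgnition()
    {
        var mockEngine = new Mock<IEngine>();
        var car = new Car(mockEngine.Object);

        car.PushStart();

        // Assert checks state on the SUT itself...
        Assert.IsTrue(car.IsHumming);

        // ...while Verify checks that the collaborator was contacted.
        mockEngine.Verify(e => e.Ignition(), Times.Once());
    }
}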
Verify vs VerifyAll
Approach One:
Explicitly set up all operations you expect to be triggered on the mocked collaborator by the subsequent Act step
Act - do something that will cause the operations to be triggered
call _mock.VerifyAll() : this verifies every expectation you set up in step 1
Approach Two:
Act - do something that will cause the operations to be triggered
call _mock.Verify(m => m.Operation()) : this verifies that Operation was in fact called, one Verify call per verification. It also lets you check the call count, e.g. exactly once, etc.
So if you have multiple operations on the mock, or if you need the mocked method to return a value that will be processed, then Setup + Act + VerifyAll is the way to go.
If you only have a few notifications that you want checked, then Verify is easier.
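To make the two approaches concrete, here's a minimal sketch with Moq, reusing the hypothetical Car and IEngine types (and the same usings) from the sketch above:
[TestFixture]
public class VerifyStyleTests
{
    [Test]
    public void ApproachOne_SetupActVerifyAll()
    {
        var mockEngine = new Mock<IEngine>();

        // 1. Explicitly set up every operation you expect to be triggered.
        mockEngine.Setup(e => e.Ignition());

        // 2. Act.
        new Car(mockEngine.Object).PushStart();

        // 3. One call verifies everything that was set up above.
        mockEngine.VerifyAll();
    }

    [Test]
    public void ApproachTwo_ActThenVerify()
    {
        var mockEngine = new Mock<IEngine>();

        // 1. Act.
        new Car(mockEngine.Object).PushStart();

        // 2. One Verify call per expectation, optionally with a call count.
        mockEngine.Verify(e => e.Ignition(), Times.Once());
    }
}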
This is one of my questions about unit testing.
I'm reading The Art Of Unit Testing and at chapter 3 the author shows how to remove dependency between one or more classes. That seems clear to me. What's not absolutely clear is the following point.
When I configure a test method with a stub, I configure it to return a specific value. Then I call the tested method exposed by the tested class. This method executes some logic and uses the return value of the stub. The problem is: if the stub is configured to return the wrong value, my test will probably fail.
So the question is: when I use stubs, should I ALWAYS configure them to return the expected value? In my opinion this should be the correct way to test, since if the stub always returns the expected value, I'm sure I am testing only the logic inside the tested method.
What do you think about this? Is there any case in which it makes sense to force the stub to return incorrect values?
Thanks a lot,
Marco
You are testing how the SUT (system under test) works under several conditions:
the good path = configuring stubs to return good values and testing that the SUT behaves accordingly
the sad path(s) = configuring stubs with wrong values and verifying that the SUT can handle such cases (e.g. you can test that it throws an exception, using the ExpectedException attribute if you're using NUnit)
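As a minimal sketch of both paths with Moq and NUnit (the IPriceService and OrderProcessor names are hypothetical, invented for illustration):
using System;
using Moq;
using NUnit.Framework;

public interface IPriceService
{
    decimal GetPrice(string sku);
}

public class OrderProcessor
{
    private readonly IPriceService _prices;

    public OrderProcessor(IPriceService prices) { _prices = prices; }

    public decimal Checkout(string sku)
    {
        var price = _prices.GetPrice(sku);
        if (price < 0)
            throw new InvalidOperationException("negative price");
        return price;
    }
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void Checkout_GoodPath_ReturnsThePrice()
    {
        var stub = new Mock<IPriceService>();
        stub.Setup(p => p.GetPrice("sku-1")).Returns(9.99m); // good value

        Assert.AreEqual(9.99m, new OrderProcessor(stub.Object).Checkout("sku-1"));
    }

    [Test]
    public void Checkout_SadPath_ThrowsOnAWrongValue()
    {
        var stub = new Mock<IPriceService>();
        stub.Setup(p => p.GetPrice("sku-1")).Returns(-1m); // wrong value

        Assert.Throws<InvalidOperationException>(
            () => new OrderProcessor(stub.Object).Checkout("sku-1"));
    }
}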
In some scenarios you could configure the stub method to return a value that depends on the test setup; in others, to return a default value, which should be valid.
This is my doubt about what we regard as a "unit" in unit testing.
Say I have a method like this:
public String myBigMethod()
{
    String resultOne = moduleOneObject.someOperation();
    String resultTwo = moduleTwoObject.someOtherOperation(resultOne);
    return resultTwo;
}
(I have unit tests written for someOperation() and someOtherOperation() separately.)
This myBigMethod() kinda integrates ModuleOne and ModuleTwo by using them as above.
So, is the method myBigMethod() still considered a "unit"?
Should I be writing a test for this myBigMethod()?
Say I have written a test for myBigMethod(). If testSomeOperation() fails, it would also cause testMyBigMethod() to fail. Now testMyBigMethod()'s failure might point to a not-so-correct location of the bug.
One cause making two tests fail doesn't look good to me, but I don't know if there's a better way. Is there?
Thanks!
You want to test the logic of myBigMethod without testing the dependencies.
It looks like the specification of myBigMethod is:
Call moduleOneObject.someOperation
Pass the result into moduleTwoObject.someOtherOperation
Return the result
The key to testing just this behavior is to break the dependencies on moduleOneObject and moduleTwoObject. Typically this is done by passing the dependencies into the class under test in the constructor (constructor injection) or setting them via properties (setter injection).
The question isn't just academic because in practice moduleOneObject and moduleTwoObject could go out and hit external systems such as a database. A true unit test doesn't hit external systems as that would make it an "integration test".
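Here's a minimal sketch of that idea, translated to C# with Moq for illustration (IModuleOne, IModuleTwo, and BigService are hypothetical stand-ins for moduleOneObject, moduleTwoObject, and the class that owns myBigMethod):
using Moq;
using NUnit.Framework;

public interface IModuleOne { string SomeOperation(); }
public interface IModuleTwo { string SomeOtherOperation(string input); }

public class BigService
{
    private readonly IModuleOne _one;
    private readonly IModuleTwo _two;

    // Constructor injection: the dependencies are passed in, not created here.
    public BigService(IModuleOne one, IModuleTwo two)
    {
        _one = one;
        _two = two;
    }

    public string MyBigMethod()
    {
        return _two.SomeOtherOperation(_one.SomeOperation());
    }
}

[TestFixture]
public class BigServiceTests
{
    [Test]
    public void MyBigMethod_PassesResultOfModuleOneIntoModuleTwo()
    {
        var one = new Mock<IModuleOne>();
        var two = new Mock<IModuleTwo>();
        one.Setup(m => m.SomeOperation()).Returns("first");
        two.Setup(m => m.SomeOtherOperation("first")).Returns("second");

        var sut = new BigService(one.Object, two.Object);

        // Only MyBigMethod's own wiring is under test; a bug inside the
        // real modules can no longer make this test fail.
        Assert.AreEqual("second", sut.MyBigMethod());
    }
}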
The test for myBigMethod() should test the combination of the results of the other two methods it calls. So yes, it should fail if either of the methods it depends on fails, but it should be testing more than that. There should be some case where someOperation() and someOtherOperation() work correctly but myBigMethod() still fails. If no such case is possible, then there's no need to test myBigMethod().
I am isolating my web-service-related tests from the actual web services with stubs.
How do you/should I incorporate tests to ensure that my crafted responses match the actual web service's responses (I don't have control over the service)?
I don't want to know how to do it, but when and where?
Should I create a dedicated test suite for testing the test data?
I would use something like this excellent tool: Storm.
If you can, install the service in a small, completely controlled environment. Drawback: You must find a way to be notified when a new version is rolled out.
If that's not possible, write a test that calls the real service and checks for vital points (do I get a response? Are all parts there and where I expect them? Can I parse the result?)
Avoid things like checking timestamps, result sizes, etc. - that is, things that can and do change all the time.
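As a minimal sketch of such a "vital points" test in C# with NUnit (the endpoint URL and the "products" field are hypothetical; substitute the real service's details):
using System.Net.Http;
using System.Text.Json;
using NUnit.Framework;

[TestFixture]
public class WebServiceContractTests
{
    [Test]
    public void RealService_StillMatchesOurAssumptions()
    {
        using var client = new HttpClient();

        // Do I get a response? (hypothetical endpoint)
        var response = client.GetAsync("https://example.com/api/products").Result;
        Assert.IsTrue(response.IsSuccessStatusCode);

        // Can I parse the result, and are the parts I rely on present?
        var json = response.Content.ReadAsStringAsync().Result;
        using var doc = JsonDocument.Parse(json);
        Assert.IsTrue(doc.RootElement.TryGetProperty("products", out _));
    }
}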
You can test the possible failures using EasyMock as follows:
public void testDisplayProductsWhenWebServiceThrowsRemoteLookupException() {
    ...
    EasyMock.expect(mockWebService.getProducts(category)).andThrow(new RemoteLookupException());
    ...
    someServiceOrController.someMethodThatUsesMockWebService(...);
}
Repeat for all possible failure scenarios. The other solution is to implement a dummy SEI yourself. Using JAX-WS, you can trivially annotate a Java class to generate an interface consistent with the client you consume. All of the methods can just return dummy data. You can then deploy the services on your own server and point your test environment at the dummy location.
Perhaps more importantly than any of the crap I've said so far, you can take the advice of the authors of The Pragmatic Programmer and program with assertions. That is, given that you must inevitably make certain assumptions about the web service you consume (since you have no control over its implementation), you can add code such as:
if (resultOfWebService == null || resultOfWebService.getId() == null)
    throw new AssertionError("WebService violated contract by doing xyz: result => " + resultOfWebService);
That way, if your assumptions don't hold, you'll at least find out about it instead of potentially failing silently!
You can also turn on schema validations and protocol validations to ensure that the service is operating according to spec.