I'm trying to find the best way to unit test decision-states within a Spring WebFlow context.
<var name="registration" class="*some class path*.Registration"/>
<decision-state id="checkSignedIn">
<if test="*someClass*.isSignedOn(registration)"
then="checkHas*Said*Service"
else="registrationChoice"/>
</decision-state>
<decision-state id="checkHasTCloudService">
<if test="*someClass*Dao.isUserRegisteredFor*saidSvc*(registration)"
then="*svc*Activated"
else="registrationChoice"/>
</decision-state>
<view-state id="registrationChoice" model="registration" view="view.xhtml" >
<on-entry>...
N.B. the someClass and the someClassDao are not within the FlowScope or ConversationScope.
I want to test, via Mockito, that the decision-state expressions are being called and then verify the correct state outcomes.
Normally, one can simply:
1. setCurrentState(someViewState) - the view state where you want to slot your test into the transitional flow
2. define input
3. mock an ExternalContext
4. setEvent within that context
5. resumeFlow(with the given context)
6. verify mocked method calls, & finally
7. assertCurrentState(someViewState) - the state where you would expect to be after the given input has influenced the decision-state's fork within the flow
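For concreteness, a minimal sketch of that usual pattern, assuming the JUnit-3-style AbstractXmlFlowExecutionTests base class (the flow file path, state ids and event id here are illustrative):

import org.springframework.webflow.config.FlowDefinitionResource;
import org.springframework.webflow.config.FlowDefinitionResourceFactory;
import org.springframework.webflow.test.MockExternalContext;
import org.springframework.webflow.test.execution.AbstractXmlFlowExecutionTests;

public class RegistrationFlowTest extends AbstractXmlFlowExecutionTests {

    @Override
    protected FlowDefinitionResource getResource(FlowDefinitionResourceFactory factory) {
        return factory.createFileResource("src/main/webapp/WEB-INF/flows/registration.xml");
    }

    public void testTransitionFromRegistrationChoice() {
        setCurrentState("registrationChoice");      // slot the test into the flow
        MockExternalContext context = new MockExternalContext();
        context.setEventId("submit");               // the event that drives the transition
        resumeFlow(context);
        assertCurrentStateEquals("someNextViewState");
    }
}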
It seems decision-states don't operate as view-states do (fair enough: they aren't a given view state within a flow), so how are we to mock/test them?
Thanks in anticipation of responses.
Well, I've been put in the right direction by a colleague (the venerable Murray MacPherson) who reminded me that the process is:
1. mock your dao calls
2. begin your flow & (now this is the crux)
3. based on the decision outcomes set by your mocked calls, assert your expected outcome state (which will be some view), whether
- an end state (in which case you would also be expecting an end to your flow) or
- an (interim) current state. If it has arrived at the expected point, then you know the decisions have been exercised.
N.B. if your expected outcome is a 'currentState', then you can verify that the mocked (dao) calls have been made; otherwise (as the flow would no longer be active) you cannot make such verifications: the simple fact that you've arrived at your expected end state is verification in itself.
In this exact example, you have an alternative to starting at a particular view state via setCurrentState(): you can use startFlow, which will... start the flow. You can then test which view state you end up at, given the results of your decision states.
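A sketch of that startFlow approach, with Mockito mocks registered on the flow builder context so the EL names in the decision-states resolve (SomeClass, SomeClassDao, Registration and the method names below stand in for the redacted ones):

import org.mockito.Mockito;
import org.springframework.webflow.config.FlowDefinitionResource;
import org.springframework.webflow.config.FlowDefinitionResourceFactory;
import org.springframework.webflow.test.MockExternalContext;
import org.springframework.webflow.test.MockFlowBuilderContext;
import org.springframework.webflow.test.execution.AbstractXmlFlowExecutionTests;

public class DecisionStateFlowTest extends AbstractXmlFlowExecutionTests {

    private final SomeClass someClass = Mockito.mock(SomeClass.class);
    private final SomeClassDao someClassDao = Mockito.mock(SomeClassDao.class);

    @Override
    protected FlowDefinitionResource getResource(FlowDefinitionResourceFactory factory) {
        return factory.createFileResource("src/main/webapp/WEB-INF/flows/registration.xml");
    }

    @Override
    protected void configureFlowBuilderContext(MockFlowBuilderContext builderContext) {
        // Make the mocks visible to the EL in the decision-state expressions.
        builderContext.registerBean("someClass", someClass);
        builderContext.registerBean("someClassDao", someClassDao);
    }

    public void testSignedOnButUnregisteredUserForksToRegistrationChoice() {
        Mockito.when(someClass.isSignedOn(Mockito.any(Registration.class))).thenReturn(true);
        Mockito.when(someClassDao.isUserRegisteredForSvc(Mockito.any(Registration.class))).thenReturn(false);

        startFlow(new MockExternalContext());   // runs straight through both decision-states

        // registrationChoice is an interim view state, so the flow is still
        // active and the mocked calls can also be verified.
        assertCurrentStateEquals("registrationChoice");
        Mockito.verify(someClass).isSignedOn(Mockito.any(Registration.class));
        Mockito.verify(someClassDao).isUserRegisteredForSvc(Mockito.any(Registration.class));
    }
}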
I have seen how to test an Observable using TestSubscriber, but I have no idea how to test a Completable.doOnSuccess callback. Specifically, this method:
fun setAuthToken(authToken: AuthToken): Completable {
    this.authToken = authToken
    return Completable.fromSingle<User>(api
        .getCurrentUser()
        .doOnSuccess {
            user = it
        })
}
This might not need to be tested with RxJava test subscribers at all (depending on the rest of the code).
Remember - you don't want to test internal state, or at least you want to do it as rarely as possible. Internal state and class structure can change, and will probably change often. So it's bad practice to check whether user has been assigned to the field.
So you could make the Completable blocking and then assert the state of the server class (let's call it 'server'), but I would highly discourage doing it this way:
server.setAuthToken(AuthToken("token"))
    .blockingAwait()
assertThat(server.user, equalTo(expectedUser))
What you want to test is behavior.
You are probably not assigning user to the field just for the sake of having some fields. You are doing it to use information from the user later on. So first you should call setAuthToken, and then call the function that really uses information from the user. Then you can assert that the information used is correct and comes from the right user.
So sample tests (depending on the class) could look like this:
server.setAuthToken(AuthToken("token"))
    .andThen(server.sendRequest())
    .blockingAwait()
// assert if correct user info was sent
or
server.setAuthToken(AuthToken("token"))
    .andThen(server.sendRequest())
    .test()
// assert if correct user info was sent
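If you want the .test() variant to bite on something concrete, one option (sketched below in Java, since RxJava itself is Java) is to mock the api and verify the call that sendRequest() makes with the user's data. Here api.send(userId) is a hypothetical collaborator call, and Server, Api, AuthToken and User stand in for the real types:

Api api = Mockito.mock(Api.class);
User expectedUser = new User("user-1");
Mockito.when(api.getCurrentUser()).thenReturn(Single.just(expectedUser));
Server server = new Server(api);

server.setAuthToken(new AuthToken("token"))
      .andThen(server.sendRequest())
      .test()
      .assertNoErrors()
      .assertComplete();

// Behaviour, not state: the request was built from the fetched user.
Mockito.verify(api).send("user-1");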
I am creating an application using MapReduce 2 and testing it using MRUnit 1.1.0. In one of the tests, I am checking the output of a reducer that puts the current system time in its output, i.e. at the time the reducer executes, the timestamp is passed to context.write() along with the rest of the output. Even though I use the same method to find the system time in the test method as in the reducer, the times calculated in the two are generally a second apart, like 2016-05-31 19:10:02 and 2016-05-31 19:10:01. So the two outputs turn out to be different, for example:
test_fe01,2016-05-31 19:10:01
test_fe01,2016-05-31 19:10:02
This causes an assertion error. I wish to ignore this difference in timestamps, so the tests pass if the rest of the output apart from the timestamp is matched. I am looking for a way to mock the method used for returning the system time, so that a hardcoded value is returned, and the reducer and the test both use this mocked method during the test. Is this possible to do? Any help will be appreciated.
Best Regards
EDIT: I have already tried Mockito's spy functionality in my test:
MClass mc = Mockito.spy(new MClass());
Mockito.when(mc.getSysTime()).thenReturn("FakeTimestamp");
However, this gives a runtime error:
org.mockito.exceptions.misusing.MissingMethodInvocationException:
when() requires an argument which has to be 'a method call on a mock'.
For example:
when(mock.getArticles()).thenReturn(articles);
Also, this error might show up because:
1. you stub either of: final/private/equals()/hashCode() methods.
Those methods *cannot* be stubbed/verified.
2. inside when() you don't call method on mock but on some other object.
3. the parent of the mocked class is not public.
It is a limitation of the mock engine.
The method getSysTime() is public and static and class MClass is public and doesn't have any parent classes.
Assuming I understand your question: you could pass a time into the Reducer using the Configuration object (note that plain Mockito cannot stub a static method such as getSysTime(), which is why the spy attempt throws MissingMethodInvocationException). In your reduce you check whether this configuration property is set and use it; otherwise you use the system time.
This way you can pass in a known value for testing and assert that you get that same value back.
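A sketch of that idea with MRUnit (the property name test.fixed.timestamp and the reducer shape are made up for illustration):

import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.Date;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;

public class MyReducer extends Reducer<Text, Text, Text, Text> {

    // The test hook: a fixed timestamp from the configuration wins over the system time.
    static String sysTime(Configuration conf) {
        String fixed = conf.get("test.fixed.timestamp");
        if (fixed != null) {
            return fixed;
        }
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date());
    }

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws java.io.IOException, InterruptedException {
        // ... whatever aggregation the real reducer does ...
        context.write(key, new Text(sysTime(context.getConfiguration())));
    }
}

In the MRUnit test the timestamp is then pinned, so the expected output is deterministic:

ReduceDriver<Text, Text, Text, Text> driver = ReduceDriver.newReduceDriver(new MyReducer());
driver.getConfiguration().set("test.fixed.timestamp", "2016-05-31 19:10:01");
driver.withInput(new Text("test_fe01"), Arrays.asList(new Text("dummy")))
      .withOutput(new Text("test_fe01"), new Text("2016-05-31 19:10:01"))
      .runTest();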
I'm trying to understand what exactly Verify or VerifyAll does.
I was searching and found the below info on using Moq:
Arrange
    Mock
    Set up expectations for dependencies
    Set up expected results
    Create class under test
Act
    Invoke method under test
Assert
    Assert actual results match expected results
    Verify that expectations were met
So what exactly does Verify do? I can test everything using Assert, and in case of any failure the unit test will fail.
What extra work does Verify do? Is it a replacement for Assert?
Some more clarity would help me understand.
Thanks
Assert vs Mock.Verify
Assert is used to do checks on the class/object/subject under test (SUT).
Verify is used to check if the collaborators of the SUT were notified or contacted.
So suppose you are testing a car object, which has an engine as a collaborator/dependency.
You would use Assert to see if car.IsHumming after you invoke car.PushStart().
You would use Verify to see if _mockEngine.Ignition() received a call.
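Expressed in Java/Mockito terms, the same idea looks like this (Car and Engine are hypothetical types; Moq's Verify plays the same role as Mockito's verify):

import static org.junit.Assert.assertTrue;
import org.junit.Test;
import org.mockito.Mockito;

interface Engine { void ignition(); }

class Car {
    private final Engine engine;
    private boolean humming;
    Car(Engine engine) { this.engine = engine; }
    void pushStart() { engine.ignition(); humming = true; }
    boolean isHumming() { return humming; }
}

public class CarTest {
    @Test
    public void pushStartHumsAndFiresIgnition() {
        Engine mockEngine = Mockito.mock(Engine.class);
        Car car = new Car(mockEngine);

        car.pushStart();

        assertTrue(car.isHumming());             // Assert: state of the SUT itself
        Mockito.verify(mockEngine).ignition();   // Verify: the collaborator was contacted
    }
}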
Verify vs VerifyAll
Approach One:
1. Setup - explicitly set up all operations you expect to be triggered on the mocked collaborator by the subsequent Act step
2. Act - do something that will cause those operations to be triggered
3. call _mock.VerifyAll() - causes every expectation that you set up in (1) to be verified
Approach Two:
1. Act - do something that will cause the operations to be triggered
2. call _mock.Verify(m => m.Operation) - verifies that Operation was in fact called; one Verify call per verification. It also allows you to check the call count, e.g. exactly once.
So if you have multiple operations on the mock, or if you need the mocked method to return a value that will be processed, then Setup + Act + VerifyAll is the way to go.
If you just have a few notifications that you want checked, then Verify is easier.
I have a unit test (using typemock 5.4.5.0) that is testing a creation service. The creation service is passed in a validation service in its constructor. The validation service returns an object that has a boolean property (IsValid). In my unit test I am mocking the validation service call to return an instance that has IsValid set to true. The creation service has an if statement that checks the value of that property. When I run the unit test, the object returned from the validation service has its property set to true, but when the if statement is executed, it treats it as though it was false.
I can verify this by debugging the unit test. The object returned by the validation service does indeed have its IsValid property set to true, but it skips the body of my if statement entirely and goes to the End If.
Here is a link to the unit test itself - https://gist.github.com/1076372
Here is a link to the creation service function I am testing - https://gist.github.com/1076376
Does anyone know why the hell the IsValid property is true but is treated like it is false?
P.S. I have also entered this issue in TypeMock's support system, but I think I will probably get a quicker response here!
First, if possible, I'd recommend upgrading to the latest version of Typemock Isolator that you're licensed for. Each version that comes out, even minor releases, contains fixes for interesting edge cases that sometimes make things work differently. I've found upgrading sometimes fixes things.
Next, I see this line in your unit test:
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool(new QuestionViewModel())).WillReturn(new Validation(true));
The red flag for me is the "new QuestionViewModel()" that's inside the "WhenCalled()" block.
Two good rules of thumb I always follow:
Don't put anything in the WhenCalled() that you don't want mocked.
If you don't care about the arguments, don't pass real arguments.
In this case, the first rule makes me think "I don't want the constructor for the QuestionViewModel mocked, so I shouldn't put it in there."
The second rule makes me consider whether the argument to the "ValidateNewQuestionForExistingQuestionPool" method is really important. In this case, it's not, so I'd pass null rather than a real object. If there's an overload you're specifically targeting, cast the null first.
Finally, sort of based on that first rule, I generally try not to inline my return values, either. That means I'd create the new Validation object before the Isolate call.
var validation = new Validation(true);
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool(null)).WillReturn(validation);
Try that and see how it runs. You might also watch in the Typemock Tracer utility to see what's getting set up expectation-wise when you run your test, to ensure additional expectations aren't being set up that you're not... expecting.
I am isolating my webservice-related tests from the actual webservices with stubs.
How do you/should I incorporate tests to ensure that my crafted responses match the actual webservice ones (I don't have control over the service)?
I don't want to know how to do it, but when and where?
Should I create a separate test suite just for testing my test data?...
I would use something like this excellent tool: Storm
If you can, install the service in a small, completely controlled environment. Drawback: You must find a way to be notified when a new version is rolled out.
If that's not possible, write a test that calls the real service and checks for vital points (do I get a response? Are all parts there and where I expect them? Can I parse the result?)
Avoid checking things like timestamps, result size, etc. - things that can and do change all the time.
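A minimal "vital points" test might look like this (the URL and the expected fragment are illustrative, and this assumes a plain HTTP endpoint):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import org.junit.Test;

public class LiveServiceContractTest {

    @Test
    public void respondsAndHasVitalParts() throws Exception {
        URL url = new URL("https://example.com/service/products");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);

        assertEquals(200, conn.getResponseCode());   // do I get a response?

        String body = new String(conn.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
        assertTrue(body.contains("<products>"));     // are the parts I rely on there?
    }
}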
You can test the possible failures using EasyMock as follows:
public void testDisplayProductsWhenWebServiceThrowsRemoteLookupException() {
    ...
    EasyMock.expect(mockWebService.getProducts(category)).andThrow(new RemoteLookupException());
    ...
    someServiceOrController.someMethodThatUsesMockWebService(...);
}
Repeat for all possible failure scenarios. The other solution is to implement a dummy SEI yourself. Using JAX-WS, you can trivially annotate a Java class so that it exposes an interface consistent with the client you consume. All of the methods can just return dummy data. You can then deploy the services on your own server and point your test environment at the dummy location.
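A sketch of such a dummy endpoint (all names here are illustrative; mirror the operations of the real service):

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

@WebService
public class DummyProductService {

    @WebMethod
    public String getProducts(String category) {
        return "canned-product-list";   // dummy data is enough for client tests
    }

    public static void main(String[] args) {
        // Point the test environment's client at this URL instead of the real service.
        Endpoint.publish("http://localhost:8099/products", new DummyProductService());
    }
}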
Perhaps more importantly than any of the crap I've said so far, you can take the advice of the authors of The Pragmatic Programmer and program with assertions. That is, given that you must inevitably make certain assumptions about the web service you consume (since you have no control over its implementation), you can add code such as:
if (resultOfWebService == null || resultOfWebService.getId() == null)
    throw new AssertionError("WebService violated contract by doing xyz: result => " + resultOfWebService);
That way, if your assumptions don't hold, you'll at least find out about it instead of potentially failing silently!
You can also turn on schema validations and protocol validations to ensure that the service is operating according to spec.
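For example, with the JDK's built-in schema validator (the schema path and responseXml are placeholders; this assumes you have the service's XSD on hand):

import java.io.File;
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
Schema schema = factory.newSchema(new File("src/test/resources/service.xsd"));
Validator validator = schema.newValidator();
// Throws SAXException if the live response has drifted from the schema.
validator.validate(new StreamSource(new StringReader(responseXml)));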