Context
We want to emit test result metrics in near real time. Therefore, we publish metrics from the pytest_runtest_makereport hook. However, in the pytest_runtest_makereport hook, tests that xfailed or xpassed appear to be marked as skipped.
Question
Is there any way to identify whether a test result is xfailed or xpassed from the pytest_runtest_makereport hook?
We can differentiate xfailed and xpassed tests from skipped tests in the pytest_terminal_summary hook through the terminalreporter object, which offers a way to get the list of test node IDs for each result category. For example:
terminalreporter.stats.get("xfailed", [])
In your pytest_runtest_makereport hook you can use
if call.excinfo and call.excinfo.typename == "XFailed":
to detect a call to pytest.xfail("...") from within the test. If the test was marked with @pytest.mark.xfail("..."), you will have to detect the test failure and then look for the mark (e.g. via item.get_closest_marker("xfail")), or check the mark earlier and remember it.
Related
I am trying to understand what exactly Verify or VerifyAll does.
I was searching and found the following guidance on using Moq:
Arrange
- Mock
- Set up expectations for dependencies
- Set up expected results
- Create class under test
Act
- Invoke method under test
Assert
- Assert actual results match expected results
- Verify that expectations were met
So what exactly does Verify do? I can test everything using Assert, and if anything fails, the unit test will fail, right?
What extra work does Verify do? Is it a replacement for Assert?
Some more clarification would help me understand.
Thanks
Assert vs Mock.Verify
Assert is used to do checks on the class/object/subject under test (SUT).
Verify is used to check if the collaborators of the SUT were notified or contacted.
Suppose you are testing a car object, which has an engine as a collaborator/dependency.
You would use Assert to check that car.IsHumming is true after you invoke car.PushStart().
You would use Verify to see if _mockEngine.Ignition() received a call.
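The question is about Moq (C#), but the Assert-versus-Verify distinction is language-agnostic. Here is a minimal sketch of the same car/engine example using JUnit and Mockito (Moq's closest Java counterpart); the Car and Engine types are made up for illustration.

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class CarTest {

    interface Engine { void ignition(); }          // collaborator, will be mocked

    static class Car {                             // subject under test (SUT)
        private final Engine engine;
        private boolean humming;
        Car(Engine engine) { this.engine = engine; }
        void pushStart() { engine.ignition(); humming = true; }
        boolean isHumming() { return humming; }
    }

    @Test
    public void pushStartHumsAndFiresIgnition() {
        Engine mockEngine = mock(Engine.class);
        Car car = new Car(mockEngine);

        car.pushStart();

        assertTrue(car.isHumming());   // Assert: checks the state of the SUT itself
        verify(mockEngine).ignition(); // Verify: checks the interaction with the collaborator
    }
}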
Verify vs VerifyAll
Approach One:
Explicitly set up all operations you expect to be triggered on the mocked collaborator by the subsequent Act step
Act - do something that will cause the operations to be triggered
Call _mock.VerifyAll() to have every expectation that you set up in (1) verified.
Approach Two:
Act - do something that will cause the operations to be triggered
Call _mock.Verify(m => m.Operation) to verify that Operation was in fact called; one Verify call per verification. It also allows you to check the number of calls, e.g. exactly once.
So if you have multiple operations on the mock, or if you need the mocked method to return a value that will be processed, then Setup + Act + VerifyAll is the way to go.
If you only have a few interactions that you want checked, then Verify is easier.
I'm testing a set of classes, and my unit tests so far are along the lines of:
1. read in some data from file X
2. create new object Y
3. sanity assert some basic properties of Y
4. assert advanced properties of Y
There are about 30 of these tests, which differ in the input and in the properties of Y that are checked. However, in the current state of the project, it sometimes crashes at #2 or already fails at #3. It should never crash at #1. For the time being, I'm accepting all failures at #4.
I'd like to, for example, see a list of unit tests that fail at #3, while for now ignoring all those that fail at #4. What's the standard approach/terminology for setting this up? I'm using JUnit for Java with Eclipse.
You need reporting/filtering on your unit test results.
JUnit itself wants your tests to pass, fail, or not run - nothing in between.
However, it doesn't care much about how those results are tied to passing/failing the build, or reported.
Using tools like Maven (the Surefire plugin) and some custom code, you can categorize your tests to distinguish between 'hard failures', 'bad, but let's go on', etc. But that's build validation or reporting based on test results rather than testing.
(Currently, our build process relies on annotations such as @Category(WorkInProgress.class) on each test method to decide what's critical and what's not.)
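For illustration, a categorised test in that style might look like the following minimal sketch; the class, method, and category names are made up, and only the step-4 test is tagged so a build tool can exclude it:

import org.junit.Test;
import org.junit.experimental.categories.Category;

public class ObjectYTest {

    // marker interface used purely as a JUnit category (hypothetical name)
    public interface WorkInProgress {}

    @Test
    public void basicPropertiesOfY() {
        // step 3 checks go here: these must always pass, and a failure should break the build
    }

    @Test
    @Category(WorkInProgress.class)
    public void advancedPropertiesOfY() {
        // step 4 checks go here: known to fail for now, so this category is
        // excluded from build-breaking runs (e.g. via Surefire's <excludedGroups>)
    }
}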
What I could think of is to create assert methods that check some system property to decide whether to execute the assertion:
public static void assertTrue(boolean assertion, int assertionLevel) {
    // the property name is only an example; use whatever suits your build
    int level = Integer.getInteger("test.assertion.level", 0);
    if (level >= assertionLevel) {
        Assert.assertTrue(assertion);
    }
}
I'm trying to find the best way to unit test decision-states within a Spring WebFlow context.
<var name="registration" class="*some class path*.Registration"/>
<decision-state id="checkSignedIn">
<if test="*someClass*.isSignedOn(registration)"
then="checkHas*Said*Service"
else="registrationChoice"/>
</decision-state>
<decision-state id="checkHasTCloudService">
<if test="*someClass*Dao.isUserRegisteredFor*saidSvc*(registration)"
then="*svc*Activated"
else="registrationChoice"/>
</decision-state>
<view-state id="registrationChoice" model="registration" view="view.xhtml" >
<on-entry>...
N.B. the someClass and the someClassDao are not within the FlowScope or ConversationScope.
I want to test, via Mockito, that the decision-state expressions are being called and then verify the correct state outcomes.
Normally, one can simply
setCurrentState(someViewState: where you want to slot the test in within a transitional flow)
define input
mock an ExternalContext
setEvent within that context
resumeFlow(with given context)
verify mocked method calls & finally
assertCurrentState(someViewState: the state you would expect to be at within the flow, after the given input has influenced how the decision-state forks)
It seems decision-states don't operate like a view-state (fair enough: they aren't a view state within a flow), so how are we to mock/test them?
Thanks in anticipation of responses.
Well, I've been put in the right direction by a colleague (the venerable Murray MacPherson) who reminded me that the process is:
1. mock your dao calls
2. begin your flow & (now this is the crux)
3. based on the decision outcomes set by your mocked calls, assert your expected outcome state (which will be some view),
- whether an end state (in which case you would also be expecting an end to your flow) or
- an (interim) current state. If it has arrived at the expected point, then you know the decisions have been exercised.
N.B. if your expected outcome is a 'currentState', then you can verify that the mocked (DAO) calls have been made; otherwise (as the flow would no longer be active) you cannot make such verifications: the simple fact that you've arrived at your expected end state is verification in itself.
In this exact example, you have an alternative to starting at a particular view state via setCurrentState() - you can use startFlow - which will... start the flow. You can then test which view state you end up at, due to the results of your decision states.
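As a concrete illustration, here is a minimal sketch of such a test using Spring Web Flow's AbstractXmlFlowExecutionTests together with Mockito. The flow file path, the SomeClass/SomeClassDao stand-ins, the bean names, and the state ids are assumptions based on the (anonymised) flow above, not working code for it:

import org.mockito.Mockito;
import org.springframework.webflow.config.FlowDefinitionResource;
import org.springframework.webflow.config.FlowDefinitionResourceFactory;
import org.springframework.webflow.test.MockExternalContext;
import org.springframework.webflow.test.MockFlowBuilderContext;
import org.springframework.webflow.test.execution.AbstractXmlFlowExecutionTests;

public class RegistrationFlowExecutionTests extends AbstractXmlFlowExecutionTests {

    // hypothetical stand-ins for the collaborators referenced in the flow's expressions
    public interface SomeClass { boolean isSignedOn(Object registration); }
    public interface SomeClassDao { boolean isUserRegisteredForService(Object registration); }

    private final SomeClass someClass = Mockito.mock(SomeClass.class);
    private final SomeClassDao someClassDao = Mockito.mock(SomeClassDao.class);

    @Override
    protected FlowDefinitionResource getResource(FlowDefinitionResourceFactory resourceFactory) {
        // placeholder path to the flow definition shown above
        return resourceFactory.createFileResource("src/main/webapp/WEB-INF/flows/registration-flow.xml");
    }

    @Override
    protected void configureFlowBuilderContext(MockFlowBuilderContext builderContext) {
        // register the mocks under the names the decision-state expressions use
        builderContext.registerBean("someClass", someClass);
        builderContext.registerBean("someClassDao", someClassDao);
    }

    // JUnit 3 style test method, as used by the AbstractFlowExecutionTests base class
    public void testNotSignedOnForksToRegistrationChoice() {
        // the decision outcome is driven entirely by the mocked call
        Mockito.when(someClass.isSignedOn(Mockito.any())).thenReturn(false);

        startFlow(null, new MockExternalContext());

        // checkSignedIn should have routed to the registrationChoice view-state
        assertCurrentStateEquals("registrationChoice");
        Mockito.verify(someClass).isSignedOn(Mockito.any());
    }
}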
I use an interceptor annotated @With(Secure.class) that checks access to a controller.
There is some regular logic performed in the controller that I want to test.
However, when I run my test (startTest), it is intercepted by Secure, which causes the test to fail.
startTest A play.mvc.results.Redirect has been caught, null
play.mvc.results.Redirect
Is there any way to avoid redirection?
I have a unit test (using typemock 5.4.5.0) that is testing a creation service. The creation service is passed in a validation service in its constructor. The validation service returns an object that has a boolean property (IsValid). In my unit test I am mocking the validation service call to return an instance that has IsValid set to true. The creation service has an if statement that checks the value of that property. When I run the unit test, the object returned from the validation service has its property set to true, but when the if statement is executed, it treats it as though it was false.
I can verify this by debugging the unit test. The object returned by the validation service does indeed have its IsValid property set to true, but it skips the body of my if statement entirely and goes to the End If.
Here is a link to the unit test itself - https://gist.github.com/1076372
Here is a link to the creation service function I am testing - https://gist.github.com/1076376
Does anyone know why the hell the IsValid property is true but is treated like it is false?
P.S. I have also entered this issue in TypeMock's support system, but I think I will probably get a quicker response here!
First, if possible, I'd recommend upgrading to the latest version of Typemock Isolator that you're licensed for. Each version that comes out, even minor releases, contains fixes for interesting edge cases that sometimes make things work differently. I've found upgrading sometimes fixes things.
Next, I see this line in your unit test:
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool(new QuestionViewModel())).WillReturn(new Validation(true));
The red flag for me is the "new QuestionViewModel()" that's inside the "WhenCalled()" block.
Two good rules of thumb I always follow:
Don't put anything in the WhenCalled() that you don't want mocked.
If you don't care about the arguments, don't pass real arguments.
In this case, the first rule makes me think "I don't want the constructor for the QuestionViewModel mocked, so I shouldn't put it in there."
The second rule makes me consider whether the argument to the "ValidateNewQuestionForExistingQuestionPool" method is really important. In this case, it's not, so I'd pass null rather than a real object. If there's a specific overload you're targeting, cast the null first.
Finally, sort of based on that first rule, I generally try not to inline my return values, either. That means I'd create the new Validation object before the Isolate call.
var validation = new Validation(true);
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool(null)).WillReturn(validation);
Try that, see how it runs. You might also watch in the Typemock Tracer utility to see what's getting set up expectation-wise when you run your test to ensure additional expectations aren't being set up that you're not... expecting.