I'm writing a simple Postman test that even checks if true == false, but it always passes and the test editor shows a green light.
A single assertion on its own, without the wrapper function, will fail (good!), but that doesn't seem like a scalable way to write a lot of tests.
So wrapping assertions in pm.test() with either a function() or an () => arrow function means everything false passes... ???
If I use a test runner, or check the test results below, I can see the failures. So maybe that little happy green light in the test authoring panel is just buggy and should be ignored? Or maybe it means a syntax error rather than a results error? Confusing.
I think there is a misunderstanding here.
pm.expect(true).to.eql(false); throws an error.
If it is wrapped in a test, this error is caught.
If there is no test wrapper, it is not caught.
The red/green dot next to "Tests" just indicates whether the JavaScript was executed without problems.
So if you execute this as a test, the JavaScript went through without errors, hence the green dot: the error was caught by the test function.
If you only execute the .expect() without a test, the error is not caught, so the JavaScript fails, hence the red dot.
Did you check the Test Results area at the bottom?
There you can clearly see that a test which expects true to equal false is failing.
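A minimal sketch of the difference, as it would behave in a request's Tests tab:
// Bare assertion: the thrown error aborts the script, so the dot next to
// "Tests" turns red, but nothing is reported under Test Results.
// pm.expect(true).to.eql(false);

// Wrapped assertion: pm.test() catches the error and reports a failing
// test under Test Results, while the script itself finishes cleanly,
// so the dot stays green.
pm.test("true equals false (expected to fail)", function () {
    pm.expect(true).to.eql(false);
});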
I am trying to understand what exactly Verify or VerifyAll does.
I was searching and I found the information below on using Moq:
Arrange
- Mock
- Set up expectations for dependencies
- Set up expected results
- Create class under test
Act
- Invoke method under test
Assert
- Assert actual results match expected results
- Verify that expectations were met
So what exactly does Verify do? I can test everything using Assert, and in case of any failure the unit test will fail, right?
What extra work does Verify do? Is it a replacement for Assert?
Some more clarification would help me understand.
Thanks
Assert vs Mock.Verify
Assert is used to do checks on the class/object/subject under test (SUT).
Verify is used to check if the collaborators of the SUT were notified or contacted.
So suppose you are testing a car object, which has an engine as a collaborator/dependency.
You would use Assert to see if car.IsHumming is true after you invoke car.PushStart().
You would use Verify to see if _mockEngine.Ignition() received a call.
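A rough sketch of that distinction (Car, IEngine, the NUnit-style [Test] attribute, and Assert.IsTrue are illustrative assumptions, not taken from the question):
using Moq;
using NUnit.Framework;

// Hypothetical collaborator and subject under test, for illustration only.
public interface IEngine
{
    void Ignition();
}

public class Car
{
    private readonly IEngine _engine;
    public bool IsHumming { get; private set; }

    public Car(IEngine engine) { _engine = engine; }

    public void PushStart()
    {
        _engine.Ignition();
        IsHumming = true;
    }
}

[TestFixture]
public class CarTests
{
    [Test]
    public void PushStart_HumsAndFiresIgnition()
    {
        var mockEngine = new Mock<IEngine>();
        var car = new Car(mockEngine.Object);

        car.PushStart();

        // Assert inspects the state of the subject under test...
        Assert.IsTrue(car.IsHumming);

        // ...Verify checks that the collaborator was contacted.
        mockEngine.Verify(e => e.Ignition(), Times.Once());
    }
}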
Verify vs VerifyAll
Approach One:
Explicitly set up all operations that you expect the subsequent Act step to trigger on the mocked collaborator
Act - do something that will cause the operations to be triggered
call _mock.VerifyAll(): this causes every expectation that you set up in the first step to be verified
Approach Two:
Act - do something that will cause the operations to be triggered
call _mock.Verify(m => m.Operation()): this verifies that Operation was in fact called. One Verify call per verification. It also allows you to check the call count, e.g. exactly once (Times.Once), etc.
So if you have multiple operations on the Mock, or if you need the mocked method to return a value which will be processed, then Setup + Act + VerifyAll is the way to go.
If you have a few notifications that you want to be checked, then Verify is easier.
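A rough sketch of the two approaches (IRepository, Service, DoWork, and the NUnit attributes are hypothetical names used only to show the shape of each style):
using Moq;
using NUnit.Framework;

public interface IRepository
{
    string Load(string key);
    void Save(string value);
}

public class Service
{
    private readonly IRepository _repo;
    public Service(IRepository repo) { _repo = repo; }

    public void DoWork() { _repo.Save(_repo.Load("key")); }
}

[TestFixture]
public class VerifyStyles
{
    [Test]
    public void ApproachOne_SetupActVerifyAll()
    {
        var repo = new Mock<IRepository>();
        // Explicitly set up everything the Act step should trigger.
        repo.Setup(r => r.Load("key")).Returns("value");
        repo.Setup(r => r.Save("value"));

        new Service(repo.Object).DoWork();

        // Every Setup above must have been invoked, or this throws.
        repo.VerifyAll();
    }

    [Test]
    public void ApproachTwo_ActThenTargetedVerify()
    {
        var repo = new Mock<IRepository>();

        new Service(repo.Object).DoWork();

        // One Verify per interaction you care about, with an optional call count.
        repo.Verify(r => r.Save(It.IsAny<string>()), Times.Once());
    }
}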
I want to add a test case for functionality not yet implemented and mark this test case as "it's ok that it fails".
Is there a way to do this?
EDIT:
I want the test to be executed, and the framework should verify that it is failing for as long as the test case is in the "expected fail" state.
EDIT2:
It seems that the feature I am interested in does not exist in google-test, but it does exist in the Boost Unit Test Framework and in LIT.
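For reference, with Boost.Test's decorator syntax (Boost 1.59 and later) this looks roughly like the following:
#define BOOST_TEST_MODULE expected_failure_example
#include <boost/test/included/unit_test.hpp>

namespace utf = boost::unit_test;

// The decorator declares that exactly one assertion failure is expected,
// so the overall run still counts as passing while the feature is
// unimplemented.
BOOST_AUTO_TEST_CASE(not_implemented_yet, * utf::expected_failures(1))
{
    BOOST_TEST(false);
}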
EXPECT_NONFATAL_FAILURE is what you want to wrap around the code that you expect to fail. Note that you will have to include the gtest-spi.h header file:
#include "gtest-spi.h"
// ...
TEST_F( testclass, testname )
{
EXPECT_NONFATAL_FAILURE(
// your code here, or just call:
FAIL()
,"Some optional text that would be associated with"
" the particular failure you were expecting, if you"
" wanted to be sure to catch the correct failure mode" );
}
Link to docs: https://github.com/google/googletest/blob/955c7f837efad184ec63e771c42542d37545eaef/docs/advanced.md#catching-failures
You can prefix the test name with DISABLED_.
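For example (the fixture and test names below are placeholders):
TEST_F( testclass, DISABLED_testname )
{
    // Compiled but skipped by default; pass --gtest_also_run_disabled_tests
    // on the command line when you do want to run it and see it fail.
    FAIL() << "feature not implemented yet";
}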
I'm not aware of a direct way to do this, but you can fake it with something like this:
try {
    // do something that should fail and throw an exception
    ...
    EXPECT_TRUE(false); // this should not be reached!
} catch (...) {
    // return or print a message, etc.
}
Basically, the test will fail if it reaches the contradictory expectation.
It would be unusual to have a unit test in an expected-to-fail state. Unit tests can test for positive conditions ("expect x to equal 2") or negative conditions ("expect save to throw an exception if name is null"), and they can be flagged not to run at all (if the feature is pending and you don't want the noise in your test output). But what you seem to be asking for is a way to negate a feature's test while you're working on it. This is against the tenets of Test-Driven Development.
In TDD, what you should do is write tests that accurately describe what a feature should do. If that feature isn't written yet then, by definition, those tests will and should fail. Then you implement the feature until, one by one, all those tests pass. You want all the tests to start as failing and then move to passing. That's how you know when your feature is complete.
Think of how it would look if you could mark failing tests as passing, as you suggest: all the tests would pass and everything would look complete even though the feature didn't work. Then, once you were done and the feature worked as expected, suddenly your tests would start to fail until you went in and unflagged them. Beyond being a strange way to work, this workflow would be very prone to error and false positives.
I have what I think are standard functional tests set up for the Intern, and I can get them to pass consistently in several browsers. I'm still evaluating whether it makes sense to use the Intern for a project, so I'm trying to see what happens when tests fail. Currently, if I make one test fail, it always seems to cause all the tests in the suite to fail.
My tests look a bit like:
registerSuite({
    name: 'demo',
    'thing that works': function () {
        return this.remote.get('http://foo.com')
            .waitForCondition("typeof globalThing !== 'undefined'", 5000)
            .elementById('bigRedButton')
            .clickElement()
            .end()
            .eval('jsObj.isTrue()')
            .then(function (result) {
                assert.isTrue(result);
            })
            .end(); // not sure if this is necessary...
    },
    'other thing that works': function () {
        // more of the same
    }
});
I'm going to try to debug and figure this out for myself, but I was just wondering if anyone knows whether this is expected behaviour (one test failure causes the whole test suite to fail and report that all tests in the suite have failed), or whether it's more likely that my setup is wrong and I have bad interactions between the promises or something.
Any help would be awesome, and happy to provide any more info if helpful :)
Thanks!
I ran into the exact same issue a few weeks ago and created a ticket on GitHub for it: https://github.com/theintern/intern/issues/46.
It's tagged 'needs-triage' at the moment; I have no idea what that means.
I'm using the Google Test and Google Mock frameworks for a project's unit tests. I have various unit test projects and want to automate my build so that it runs all of them.
I was expecting the unit test executable to return 0 on success and 1 (or some other value) on any test failure, but I'm getting 1 even when all tests pass. I'm getting some GMOCK warnings, but I couldn't find any documentation about warnings affecting the return value.
I tried running the tests with a filter so that only one test case runs, one where no GMOCK warnings are triggered, and I still get 1 as the return value.
I had a couple of DISABLED test cases, so I commented them out. Still getting 1 as the return value.
According to the documentation and the code comments for the RUN_ALL_TESTS macro, the return value should be 0.
I can't think of anything else that could cause a return value of 1. Am I missing anything?
If you look at the definition of the RUN_ALL_TESTS() macro in gtest.h, it clearly states that 0 is returned when there are no failures:
// Use this macro in main() to run all tests. It returns 0 if all
// tests are successful, or 1 otherwise.
//
// RUN_ALL_TESTS() should be invoked after the command line has been
// parsed by InitGoogleTest().
#define RUN_ALL_TESTS()\
(::testing::UnitTest::GetInstance()->Run())
Apparently even warnings (from gmock) may result in a return value of 1. Try what happens if you get rid of the gmock warnings (e.g. using something like NiceMock<> to wrap your mock class instance).
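A minimal sketch of both points: an explicit main() that returns RUN_ALL_TESTS(), plus a NiceMock<> wrapper (Turtle/MockTurtle here are illustrative stand-ins for your own mock class):
#include "gmock/gmock.h"
#include "gtest/gtest.h"

using ::testing::NiceMock;

// A minimal interface and mock, purely for illustration.
class Turtle {
public:
    virtual ~Turtle() = default;
    virtual void PenDown() = 0;
};

class MockTurtle : public Turtle {
public:
    MOCK_METHOD(void, PenDown, (), (override));
};

// NiceMock<> suppresses "uninteresting mock function call" warnings for
// calls that have no matching EXPECT_CALL.
TEST(ReturnValueExample, NiceMockSuppressesUninterestingCallWarnings) {
    NiceMock<MockTurtle> turtle;
    turtle.PenDown();  // no warning printed, the test still passes
}

int main(int argc, char** argv) {
    // Initializes Google Mock (and Google Test) and parses their flags.
    ::testing::InitGoogleMock(&argc, argv);
    // Returns 0 if every test passed, 1 otherwise.
    return RUN_ALL_TESTS();
}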
I have a unit test (using Typemock 5.4.5.0) that is testing a creation service. The creation service is passed a validation service in its constructor. The validation service returns an object that has a boolean property (IsValid). In my unit test I am mocking the validation service call to return an instance that has IsValid set to true. The creation service has an if statement that checks the value of that property. When I run the unit test, the object returned from the validation service has its property set to true, but when the if statement is executed, it is treated as though it were false.
I can verify this by debugging the unit test. The object returned by the validation service does indeed have its IsValid property set to true, but it skips the body of my if statement entirely and goes to the End If.
Here is a link to the unit test itself - https://gist.github.com/1076372
Here is a link to the creation service function I am testing - https://gist.github.com/1076376
Does anyone know why the hell the IsValid property is true but is treated like it is false?
P.S. I have also entered this issue in TypeMock's support system, but I think I will probably get a quicker response here!
First, if possible, I'd recommend upgrading to the latest version of Typemock Isolator that you're licensed for. Each version that comes out, even minor releases, contains fixes for interesting edge cases that sometimes make things work differently. I've found upgrading sometimes fixes things.
Next, I see this line in your unit test:
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool(new QuestionViewModel())).WillReturn(new Validation(true));
The red flag for me is the "new QuestionViewModel()" that's inside the "WhenCalled()" block.
Two good rules of thumb I always follow:
Don't put anything in the WhenCalled() that you don't want mocked.
If you don't care about the arguments, don't pass real arguments.
In this case, the first rule makes me think "I don't want the constructor for the QuestionViewModel mocked, so I shouldn't put it in there."
The second rule makes me consider whether the argument to the "ValidateNewQuestionForExistingQuestionPool" method really isn't important. In this case, it's not, so I'd pass null rather than a real object. If there's an overload you're specifically looking at, cast the null first.
Finally, sort of based on that first rule, I generally try not to inline my return values, either. That means I'd create the new Validation object before the Isolate call.
var validation = new Validation(true);
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool(null)).WillReturn(validation);
Try that, see how it runs. You might also watch in the Typemock Tracer utility to see what's getting set up expectation-wise when you run your test to ensure additional expectations aren't being set up that you're not... expecting.