theintern: 1 test failure causes all tests to fail - is this expected behaviour? - acceptance-testing

I have what I think are standard functional tests set up for Intern, and I can get them to pass consistently in several browsers. I'm still evaluating whether it makes sense to use Intern for a project, so I'm trying to see what happens when tests fail. Currently, if I make one test fail, it always seems to cause all the tests in the suite to fail.
My tests look a bit like:
registerSuite({
    name: 'demo',

    'thing that works': function () {
        return this.remote.get('http://foo.com')
            .waitForCondition("typeof globalThing !== 'undefined'", 5000)
            .elementById('bigRedButton')
            .clickElement()
            .end()
            .eval('jsObj.isTrue()')
            .then(function (result) {
                assert.isTrue(result);
            })
            .end(); // not sure if this is necessary...
    },

    'other thing that works': function () {
        // more of the same
    }
});
I'm going to try to debug this and figure it out for myself, but I was wondering if anyone knows whether this is expected behaviour (one test failure causes the whole test suite to fail, and to report that all tests in the suite have failed), or whether it's more likely that my setup is wrong and I have bad interactions between the promises or something?
Any help would be awesome, and happy to provide any more info if helpful :)
Thanks!

I ran into the exact same issue a few weeks ago and created a ticket on GitHub for it: https://github.com/theintern/intern/issues/46.
It's tagged 'needs-triage' at the moment; I have no idea what that means.


Is it considered good practice to use expect.assertions in tests with Jest? [duplicate]

I've found a lot of this sort of thing when refactoring our Jest test suites:
it('calls the API and throws an error', async () => {
    expect.assertions(2);
    try {
        await login('email', 'password');
    } catch (error) {
        expect(error.name).toEqual('Unauthorized');
        expect(error.status).toEqual(401);
    }
});
I believe the expect.assertions(2) line is redundant here, and can safely be removed, because we already await the async call to login().
Am I correct, or have I misunderstood how expect.assertions works?
expect.assertions is important when testing the error scenarios of asynchronous code, and is not redundant.
If you remove expect.assertions from your example you can't be confident that login did in fact throw the error.
it('calls the API and throws an error', async () => {
    try {
        await login('email', 'password');
    } catch (error) {
        expect(error.name).toEqual('Unauthorized');
        expect(error.status).toEqual(401);
    }
});
Let's say someone changes the behavior of login to throw an error based on some other logic, or someone changes the mock for this test so that login no longer throws. The assertions in the catch block won't run, but the test will still pass.
Using expect.assertions at the start of the test ensures that if the assertions inside the catch don't run, we get a failure.
This is from the Jest documentation:
expect.assertions(number) verifies that a certain number of assertions are called during a test. This is often useful when testing asynchronous code, in order to make sure that assertions in a callback actually got called.
So, to put it in other words, expect.assertions makes sure that n assertions are made by the end of the test.
It's good to use it, especially when writing new tests, so one can easily check that the correct assertions are made during the test. Async tests often pass because the intended assertions were not made before the test runner (Jest, Mocha, etc.) thought the test was finished.
I think we are missing the obvious here.
expect.assertions(3) is simply saying...
I expect 3 expect statements to be called before the test times out, e.g.
expect(actual1).toEqual(expected1);
expect(actual2).toEqual(expected2);
expect(actual3).toEqual(expected3);
This timing out business is the reason to use expect.assertions. It would be silly to use it in a purely synchronous test. At least one of the expect statements would be found in a subscribe block (or other async block) within the spec file.
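To make that concrete, here is a minimal sketch of the usual failure mode in promise-based tests (fetchUser is a hypothetical helper that returns a promise):
it('resolves with the requested user', () => {
    expect.assertions(1);
    // If the `return` below were forgotten, Jest would consider the test
    // finished before the .then() callback runs; expect.assertions(1) turns
    // that silent pass into a failure because zero assertions were made.
    return fetchUser('id-123').then((user) => {
        expect(user.id).toEqual('id-123');
    });
});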
To ensure that the assertions in the catch block of an async/await test actually run, expect.assertions(n) must be declared as shown in your code snippet. Such a declaration is unnecessary for async/await tests without a catch block.
It seems quite unintuitive, but it is simply the way it is: an awaited promise that rejects without a catch block fails the test on its own, whereas a catch block swallows the rejection, so the test can only fail through the assertions inside it, and expect.assertions is what guarantees those assertions actually ran.
I have to admit that apart from error testing, I find it challenging to see a real use for expect.assertions. The above snippet can be changed to the following with the same guarantee, but I think it reads more naturally and doesn't require me to count how many times I call expect, which is especially error-prone if a test is complex:
it('calls the API and throws an error', async () => {
    try {
        await login('email', 'password');
        fail('must throw');
    } catch (error) {
        expect(error.name).toEqual('Unauthorized');
        expect(error.status).toEqual(401);
    }
});

How to get Postman tests to fail - mine are always passing?

I'm writing a simple Postman test that even checks whether true == false, but it always passes. What am I doing wrong? You can see the green light next to Tests in the editor.
Just a single assertion on its own, without the wrapper function, will fail [good!], but that doesn't seem like a scalable way to write a lot of tests.
So wrapping stuff in pm.test( ) with either a function() or an () => arrow function means every failing check passes... ???
If I use a test runner, or check the test results below, I can see the failures. So maybe that little happy green light in the test-authoring panel is just buggy / should be ignored? Or maybe it indicates "ran without syntax errors" rather than "tests passed"? Confusing.
I think there is a misunderstanding here.
pm.expect(true).to.eql(false); throws an error.
If it is wrapped by a test, this error is being caught.
If no test wrapper, it's not being caught.
The red/green dot next to "Tests" just indicates whether the JavaScript was executed without problems.
So if you execute this as a test, the JavaScript went through without errors, hence the green dot, because the error has been caught by the test function.
If you only execute the .expect() without a test, the error is not caught, so the JavaScript failed, hence the red dot.
Did you check the Test Results area at the bottom?
There you can clearly see, that a test which expects true to equal false is failing.
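For reference, a minimal sketch of a Tests-tab script (the test names are arbitrary): the script executes without errors, so the dot stays green, while the failing expectation shows up under Test Results.
// pm.test() catches the AssertionError thrown by pm.expect(), records it as
// a failed entry in the Test Results panel, and lets the rest of the script
// keep running.
pm.test("status code is 200", function () {
    pm.expect(pm.response.code).to.eql(200);
});

pm.test("deliberately failing check", function () {
    pm.expect(true).to.eql(false); // reported as a failure under Test Results
});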

How to mark a Google Test test-case as "expected to fail"?

I want to add a test case for functionality that is not yet implemented and mark this test case as "it's OK that it fails".
Is there a way to do this?
EDIT:
I want the test to be executed and the framework should verify it is failing as long as the testcase is in the "expected fail" state.
EDIT2:
It seems that the feature I am interested in does not exist in Google Test, but it does exist in the Boost Unit Test Framework and in LIT.
EXPECT_NONFATAL_FAILURE is what you want to wrap around the code that you expect to fail. Note you will have to include the gtest-spi.h header file:
#include "gtest-spi.h"
// ...
TEST_F( testclass, testname )
{
EXPECT_NONFATAL_FAILURE(
// your code here, or just call:
FAIL()
,"Some optional text that would be associated with"
" the particular failure you were expecting, if you"
" wanted to be sure to catch the correct failure mode" );
}
Link to docs: https://github.com/google/googletest/blob/955c7f837efad184ec63e771c42542d37545eaef/docs/advanced.md#catching-failures
You can prefix the test name with DISABLED_.
I'm not aware of a direct way to do this, but you can fake it with something like this:
try {
    // do something that should fail and throw an exception
    ...
    EXPECT_TRUE(false); // this should not be reached!
} catch (...) {
    // return or print a message, etc.
}
Basically, the test will fail if it reaches the contradictory expectation.
It would be unusual to have a unit test in an expected-to-fail state. Unit tests can test for positive conditions ("expect x to equal 2") or negative conditions ("expect save to throw an exception if name is null"), and can be flagged not to run at all (if the feature is pending and you don't want the noise in your test output). But what you seem to be asking for is a way to negate a feature's test while you're working on it. This is against the tenets of Test Driven Development.
In TDD, what you should do is write tests that accurately describe what a feature should do. If that feature isn't written yet then, by definition, those tests will and should fail. Then you implement the feature until, one by one, all those tests pass. You want all the tests to start as failing and then move to passing. That's how you know when your feature is complete.
Think of how it would look if you were able to mark failing tests as passing, as you suggest: all tests would pass and everything would look complete even though the feature didn't work. Then, once you were done and the feature worked as expected, suddenly your tests would start to fail until you went in and unflagged them. Beyond being a strange way to work, this workflow would be very prone to error and false positives.

Conflicting results when unit testing MVC controller

I'm writing unit tests (using NUnit & Moq) for my MVC 2 controllers, and am following examples in the Pro ASP.NET MVC 2 Framework book by Steven Sanderson (great book, btw). However, I've run into problems, which I think are just due to my lack of understanding of NUnit.
Here's an excerpt, with the irrelevant parts removed:
[Test]
public void Cannot_Save_Invalid_Event()
{
    ...
    repository.Setup(x => x.SaveEvent(evt)).Callback(Assert.Fail);
    ...
    repository.Verify(x => x.SaveEvent(evt));
}
This test is passing for me, although from what I understand, those two statements should directly conflict with each other. The second one wasn't there originally, but I put it in to verify that it was passing for the right reasons.
From what I understand, my repository is set up to fail if "repository.SaveEvent(evt)" is called. However, later in the test, I try to verify that "repository.SaveEvent(evt)" was called. Since it passes, doesn't this mean that it was both called, and not called? Perhaps those statements don't act as I suspect they do.
Can someone explain how these two statements are not opposites, and how they can both exist and the test still pass?
Maybe your test doesn't fail because it has a catch-everything block that also hides the assert/verify exception that is necessary for the test to fail.
Note: the following unit test will always pass.
[Test]
public void HidingAssertionFailure()
{
    try {
        Assert.AreEqual(0, 1); // this should fail
    } catch (Exception ex) {
        // this will hide the assertion failure
    }
}
The reason for this behavior was that the controller was calling SaveEvent(); however, since the mocked repository didn't define that action, it threw an exception inside my controller, which my controller was catching.
So it seems that the callback will only execute if control returns successfully.

JUnit - Testing a method that in turn invokes a few more methods

My doubt is about what we regard as a "unit" in unit testing.
Say I have a method like this:
public String myBigMethod()
{
    String resultOne = moduleOneObject.someOperation();
    String resultTwo = moduleTwoObject.someOtherOperation(resultOne);
    return resultTwo;
}
(I have unit tests written for someOperation() and someOtherOperation() separately.)
This myBigMethod() kind of integrates ModuleOne and ModuleTwo by using them as above.
So, is the method myBigMethod() still considered a "unit"?
Should I be writing a test for this myBigMethod()?
Say I have written a test for myBigMethod()... If testSomeOperation() fails, it would also cause testMyBigMethod() to fail... and testMyBigMethod()'s failure might point to a not-so-correct location of the bug.
One cause making two tests fail doesn't look so good to me. But I don't know if there's a better way...? Is there?
Thanks!
You want to test the logic of myBigMethod without testing the dependencies.
It looks like the specification of myBigMethod is:
Call moduleOneObject.someOperation
Pass the result into moduleTwoObject.someOtherOperation
Return the result
The key to testing just this behavior is to break the dependencies on moduleOneObject and moduleTwoObject. Typically this is done by passing the dependencies into the class under test in the constructor (constructor injection) or setting them via properties (setter injection).
The question isn't just academic because in practice moduleOneObject and moduleTwoObject could go out and hit external systems such as a database. A true unit test doesn't hit external systems as that would make it an "integration test".
The test for myBigMethod() should test the combination of the results of the other two methods called. So, yes, it should fail if either of the methods it depends on fails, but it should be testing more than that. There should be some case where someOperation() and someOtherOperation() work correctly but myBigMethod() can still fail. If that's not possible, then there's no need to test myBigMethod().