One of my tests is failing intermittently when running the whole suite, but it doesn't fail when running it by itself.
I created a very basic repository with a vanilla application that reproduces the issue:
https://github.com/juanazam/ember-cli-test-issue.
Basically, I created a component with a text field and a button. The button is disabled while the text is empty.
The issue happens when two tests use the fillIn helper on the input.
Here is the testing code taken from the vanilla app:
test('test 1', function(assert) {
  visit('/');
  fillIn('input[type=text]', "Algo");

  andThen(function() {
    assert.equal(currentRouteName(), "index");
  });
});
test('test 2', function(assert) {
  visit('/');

  andThen(function() {
    assert.ok(find('input[type=submit]').is(':disabled'));
  });

  fillIn('input[type=text]', "Algo");

  andThen(function() {
    assert.ok(!find('input[type=submit]').is(':disabled'));
  });
});
As you can see, test 1 only fills the input but doesn't assert anything about it. Test 2 checks that the button is disabled while the input is empty and enabled once it has been filled.
Test 2 fails intermittently when running the whole suite. If you run ember test -s it fails; if you then reload the browser tab (re-running the whole suite without restarting the server process) it passes. The same alternation happens across multiple runs (one run fails, the next passes).
I didn't create a twiddle reproduction case because the test runner doesn't behave the same way.
With your app:
ember test will always fail.
ember test --filter 'test 1' will always pass.
ember test --filter 'test 2' will always pass.
If you split your 2 test functions into different acceptance tests ember test will always pass.
QUnit in the browser tries to execute previously failing tests first (I think to reduce the time until the most interesting, failing tests are executed). With ember test -s your tests are always executed in order, and they fail (I guess test 2 fails because test 1 already filled your input, so it is not initially disabled as expected).
When reloading QUnit in the browser after the first failed run, the failing test 2 is executed first (and passes).
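That reordering is QUnit's reorder feature: it remembers failed tests in sessionStorage and runs them first on the next load. If you want browser runs to use declaration order, matching ember test, you can turn it off, e.g. in tests/test-helper.js (exactly where you put it is up to you; it just has to run before the tests do):

// Disable failed-tests-first reordering so browser runs match the
// order used by `ember test`.
QUnit.config.reorder = false;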
Also have a look at https://dockyard.com/blog/2014/04/17/ember-object-self-troll . There might be a problem within your component definition leading to the unexpectedly filled input in test 2.
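The gotcha that post describes, roughly, is that object and array literals in an Ember class definition live on the prototype and are therefore shared by every instance. A minimal sketch (the values property is made up for illustration):

import Ember from 'ember';

export default Ember.Component.extend({
  // Shared-state gotcha: this array literal is evaluated once, when the
  // class is defined, so every component instance shares the same array.
  values: [],

  // Safer: give each instance its own copy in init().
  init: function() {
    this._super.apply(this, arguments);
    this.set('values', []);
  }
});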
Related
I have been trying to configure offline unit tests for polymer web components that use the latest release of Firebase distributed database. Some of my tests are passing, but others—that look nigh identical to passing ones—are not running properly.
I have set up a project on github that demonstrates my configuration, and I'll provide some more commentary below.
Sample:
https://github.com/doctor-g/wct-firebase-demo
In that project, there are two suites of tests that work fine. The simplest is offline-test, which doesn't use web components at all. It simply shows that it's possible to use the Firebase database's offline mode to run some unit tests. The heart of this trick is in the suiteSetup method shown below, a trick I picked up from nfarina's work on firebase-server.
suiteSetup(function() {
  app = firebase.initializeApp({
    apiKey: 'fake',
    authDomain: 'fake',
    databaseURL: 'https://fakeserver.firebaseio.com',
    storageBucket: 'fake'
  });
  db = app.database();
  db.goOffline();
});
All the tests in offline-test pass.
The next suite is wct-firebase-demo-app_test.html, which tests the eponymous web component. This suite contains a series of unit tests that are set up like offline-test, and they pass. Following the idea of dependency injection, the wct-firebase-demo-app component has a database attribute into which the Firebase database reference is passed, and this is used to make all the Firebase calls. Here's an example from the suite:
test('offline set string from web component attribute', function(done) {
  element.database = db;
  element.database.ref('foo').set('bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val(), 'bar');
    done();
  });
});
I have some very simple methods in the component as well, in my attempt to triangulate toward the broken pieces I'll talk about in a moment. Suffice it to say that this test passes:
test('offline push string from web component function', function(done) {
  element.database = db;
  let resultRef = element.pushIt('foo', 'bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val()[resultRef.key], 'bar');
    done();
  });
});
and is backed by this implementation in wct-firebase-demo-app:
pushIt: function(at, value) {
  return this.database.ref(at).push(value);
},
Once again, these all pass. Now we get to the real quandary. There's a suite of tests for another element, x-element, which has a method pushData:
pushData: function(at, data) {
  this.database.ref(at).push(data);
}
The test for this method is the only test in its suite:
test('pushData has an effect', function(done) {
  element.database = db;
  element.pushData('foo', 'xyz');
  db.ref('foo').once('value', function(snapshot) {
    expect(snapshot.val()).not.to.be.empty;
    done();
  });
});
This test does not pass. While this test is running, the console comes up with an error message:
Your API key is invalid, please check you have copied it correctly.
By setting some breakpoints and walking through the execution, it seems to me that this error comes up after the call to once but before the callback is triggered. Note, again, this doesn't happen with the same test structure described above that's in wct-firebase-demo-app.
That's where I'm stuck. Why do offline-test and wct-firebase-demo-app_test suites work fine, but I get this API key error in x-element_test? The only other clue I have is that if I copy in a valid API key into my initializeApp configuration, then I get a test timeout instead.
UPDATE:
Here is a (patched-together) image of my console log when running the tests:
To illustrate the issue brought up by tony19 below, here's the console log with just pushData has an effect in x-element_test commented out:
The offline-test results are apparently false positives. If you check the Chrome console, offline-test actually throws the same error:
The error doesn't affect the test results most likely because the API key validation occurs asynchronously, after the test has already completed. If you could somehow hook into that validation, you'd be able to catch the error in your tests.
Commenting out all tests except for offline firebase is ok shows the error still occurring, which points to suiteSetup(). Narrowing the problem down further by commenting out 2 of the 3 function calls in the setup, we'll see the error is caused by the call to firebase.initializeApp() (and is not necessarily related to once() as you had suspected).
One workaround to consider is wrapping the Firebase library in a class/interface, and mocking that for unit tests.
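A minimal sketch of that workaround, using a hypothetical FirebaseClient wrapper (all names here are made up, and it assumes pushData is rewritten to call this.client.push(at, data)): the component talks only to the wrapper, so a test can swap in a stub that never touches the SDK or its API-key validation.

// Hypothetical wrapper: the only place the real Firebase SDK is touched.
function FirebaseClient(database) {
  this._database = database;
}
FirebaseClient.prototype.push = function(at, value) {
  return this._database.ref(at).push(value);
};

// In production code, the component would get a real client:
//   element.client = new FirebaseClient(app.database());

// In a unit test, a stub records calls without touching the SDK:
var pushed = [];
element.client = {
  push: function(at, value) {
    pushed.push({ at: at, value: value });
  }
};
element.pushData('foo', 'xyz');
assert.equal(pushed.length, 1);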
I have these two Ember integration tests, A and B. (I have a lot more, but in debugging this I have literally removed every other test in order to isolate the problem. There are 9 tests in the same file as A, and I commented out the other 8.) If A runs before B, B will fail. If B runs by itself, or before A, it will pass.
From this description it seems pretty clear that A is doing something to the test environment that screws up B. After liberally salting the tests and the production code involved with log messages, however, I'm no closer to figuring out what's up, and I'm hoping someone else can spot if there's an obvious issue.
Right now I'm looking closely at the afterEach blocks in both tests. Here's an outline of what the beforeEach and afterEach blocks look like for test A:
beforeEach: function() {
  server = new Pretender(function() {
    // Pretender setup removed for brevity
  });
  App = startApp();
},

afterEach: function() {
  server.shutdown();
  Ember.run(App, App.destroy);
}
That afterEach is pretty much the stock ember-cli code, but it baffles me a bit. The documentation on Ember.run() suggests it should get a function as an argument, but we're not giving it one here, so I'm not sure how that works. And, should the Pretender shutdown() call be inside the Ember.run (or in its own Ember.run)?
The versions, for the record: ember-cli 0.2.0, Ember 1.10.1.
ETA: The issue goes away when I update to ember-cli 0.2.3 and Ember 1.11.3. Now if only I could figure out the other failing tests we have with that update...
Your setup and teardown look fine. They are commonly used and properly defined.
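On the Ember.run confusion: besides a plain function, Ember.run also accepts a (target, method) pair, so the stock teardown is equivalent to wrapping the destroy call yourself:

// These two calls do the same thing: destroy the app inside a run
// loop, with App as the `this` context.
Ember.run(App, App.destroy);

Ember.run(function() {
  App.destroy();
});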
However, there is (still) an open issue on ember-qunit about not tearing down the app properly - take a look here to see the progress.
As you said, it does not happen in Ember 1.13.
I have been working with Ember for a little over a month now, and I have yet to find a solution to some testing inconsistencies I have been experiencing.
The problem is that when I run ember test from the command line and visit /tests in the browser, I sometimes see a different total number of tests. It seems like ember test with PhantomJS as the test runner is skipping some tests. On top of that, the results seem to be inconsistent as well.
For instance, I have a simple acceptance test:
import Ember from 'ember';
import startApp from '../helpers/start-app';

var App;

module('Acceptance: Login', {
  setup: function() {
    App = startApp();
  },
  teardown: function() {
    Ember.run(App, 'destroy');
  }
});
test('Page contents', function() {
  visit('/login');

  andThen(function() {
    equal(find('form.login').length, 1);
  });
});
When I visit /tests, all of my tests pass; however, when I run ember test I get one failure:
not ok 1 PhantomJS 1.9 - Acceptance: Login: Page contents
---
actual: >
0
expected: >
1
Log: >
...
Thanks in advance for any help.
I had the same frustration as you until I looked a bit closer at what was being counted.
When I run my tests in a browser, it shows how many assertions are being run. When I run PhantomJS (via the 'ember test' command line), the log only reports how many tests are run. There can be many assertions in a test.
If I scroll to the very bottom of the page after a test run is complete in a browser, I see that the number next to the final test matches the total number of tests run in phantomjs.
As for why your test is breaking in PhantomJS, it could be due to a number of things. Without seeing your handlebars and implementation it's hard to tell, but I've seen problems with timing and also jQuery binding issues that fail only in a headless browser (i.e. PhantomJS).
If you post some more specifics, I may be able to help.
Using Ember.js v1.5.1.
I use Karma and QUnit to test my Ember application. In several of my tests I have situations where (1) a user clicks, (2) an async call is made to our server, and then (3) a transition is made via this.transitionToRoute('someroute') in the controller. When the test hits the transitionToRoute call, Karma hangs. I tried wrapping it in an Ember.run call, but that didn't seem to help.
When I comment out the transition call it runs, and fails accordingly.
Example test code where it hangs and never reaches the equal calls:
test('successful registration request', function() {
  setupMockRegistrationRequests();

  visit("/register")
    .fillIn('#email', 'test2')
    .fillIn('#password', 'password')
    .click('#submit')
    .andThen(function() {
      equal(find(".register-page .form-alert").length, 0, "Should be no error");
      equal(find(".login-page").length, 1, "Should be on login screen");
    });
});
Controller code. With the transition commented out, the test case runs:
//this.transitionToRoute('login');
With the transition in place, the test case hangs:
this.transitionToRoute('login');
Anybody know why it is hanging, or what I can do to allow it to continue?
The problem was that it was transitioning, but the next route made further async requests that were not handled by my mockjax handlers. This caused the testing environment to hang without any errors being thrown.
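So the fix is to stub every request the destination route fires, not just the registration call itself. A sketch with jQuery Mockjax (the endpoint and response are made up; match whatever your login route actually requests):

// Stub whatever the login route requests on entry, so the async queue
// can drain and the transition can settle during the test.
$.mockjax({
  url: '/api/current_user',
  responseText: { user: null }
});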
I'm trying to troubleshoot a unit test issue.
I used to have a mostly working Maven -> PhantomJS -> QUnit setup, but it was unpredictable, so I took it apart to try to fix it.
I upgraded the software:
QUnit: 1.11.0
PhantomJS: 1.8
QUnit PhantomJS runner (latest): https://github.com/jquery/qunit/tree/master/addons/phantomjs
I see the web GUI working. It runs and passes all 102 tests. The console prints this:
$ phantomjs --disk-cache=false runner.js http://localhost/ui/dcx/test.html
$ Took 16ms to run 0 tests. 0 passed, 0 failed.
If I comment out the exit command in the runner, it prints the console output for QUnit.done multiple times.
$ phantomjs --disk-cache=false runner.js http://localhost/ui/dcx/test.html
$ PhantomJS successfully loaded a page
$ QUnit.done callback fired
$ Took 15ms to run 0 tests. 0 passed, 0 failed.
$ QUnit.done callback fired
$ Took 1840ms to run 102 tests. 102 passed, 0 failed.
$ QUnit.done callback fired
$ Took 1841ms to run 102 tests. 102 passed, 0 failed.
$ QUnit.done callback fired
$ Took 1842ms to run 102 tests. 102 passed, 0 failed.
$ QUnit.done callback fired
$ Took 1848ms to run 102 tests. 102 passed, 0 failed.
$ ^C
$
It looks to me like the QUnit.done callback is getting executed too soon, and then multiple times.
Does anyone know why that callback fires like this?
My test inclusions and login delay might be relevant. I use AMD modules to define tests and curl.js to bring them in. Nothing happens until the security login does:
curl(['dolla'], function($) {
  $.ajax({
    type: 'POST',
    url: '/svc/j_spring_security_check',
    data: {
      j_username: '7',
      j_password: '7'
    },
    success: function() {
      loadTests();
    }
  });
});
var loadTests = function () {
  curl([
    // Unit tests
    'dcx/dataControls/activity.test',
    'dcx/dataControls/eventList.test',
    'dcx/dataControls/mapViewer.view.test',
    'dcx/pages/deviceDetails.view.test',
    'dcx/pages/login.test',
    'dcx/pages/nodeProfiles.test',
    'dcx/pages/settings.view.test'
  ], function() {}, function(ex) { throw new Error(ex); });
};
EDIT:
I'm down to a root cause, I think.
If you include QUnit on a blank page, it calls QUnit.begin and QUnit.done right away.
I need to delay execution of QUnit until after the security login is successful and curl has brought in my unit tests. Is there a way to delay the start of QUnit but still keep the QUnit object available? I can't use stop() because there are many async tests that will call start().
Found the answer. You can configure QUnit not to start automatically, then start it manually once all your tests are loaded. This prevents the duplicate calls to QUnit.done that were the root cause of this issue.
http://forum.jquery.com/topic/are-qunit-and-requirejs-compatible#14737000001967123
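In short, and adapted to the curl setup above (set the flag before any test module loads, then start once the last module is in):

// In test.html, before QUnit or any test module runs:
QUnit.config.autostart = false;

// Start the suite from curl's success callback, once every test
// module has been loaded:
var loadTests = function () {
  curl([
    'dcx/dataControls/activity.test'
    // ... the rest of the test modules ...
  ], function() {
    QUnit.start();
  }, function(ex) { throw new Error(ex); });
};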
This is one way to do it: modify the runner not to exit if there are no test results.
https://gist.github.com/SimpleAsCouldBe/5059623
This doesn't work, though: QUnit.done fires whenever the test stack is cleared, and in an asynchronously loaded environment like curl/Require.js that can happen at any time.
This is for when you don't want to use Require.js, for example in a plain browser context.
Having spent ages finding a method to turn the loading of scripts (and stylesheets) into Promises (see here), I then found big problems with QUnit test suites starting to run before all of these had loaded. Typically a handful of tests at the start would complain that a certain variable or class was undefined, although later tests wouldn't have that difficulty.
You can stop automatic starting by going like this:
QUnit.config.autostart = false;
... seemingly just putting it in one of several files will suffice.
To start the QUnit tests, then, you have to call QUnit.start(). But, understandably perhaps, you can't execute this from inside any code that is being run by a QUnit test. This gets complicated. In the end I did this in my app-starting code:
await this.loadInjectedFile( GLOBAL_SCRIPT );
await this.loadInjectedFile( DBFORM_SCRIPT );
await this.loadInjectedFile( UDV_SCRIPT );
await this.loadInjectedFile( REACTIVITY_SCRIPT );
console.log( '... injected files loaded' );
// to allow QUnit to start testing
window.QUnitGreenLight = true;
... strictly speaking a naughty thing to do (allowing test-related code to sneak into your app code). A more compartmentalised approach could probably be found.
Then, inline in the HTML file from where you launch your testing:
<script>
  const tryToStartTesting = function() {
    setTimeout( function() {
      if ( window.QUnitGreenLight ) {
        QUnit.start();
      }
      else {
        console.log( 'QUnit green light not yet given!' );
        tryToStartTesting();
      }
    }, 10 );
  };

  tryToStartTesting();
</script>
... in practice it seems to take maybe a few hundredths of a second before the green light is given.
A bit scrappy, perhaps, but it seems to work.
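One more compartmentalised alternative, sketched with a made-up event name and assuming the test page's inline script runs before the app code does: have the app fire a custom DOM event once its injected files have loaded, and let the test page listen for it instead of polling a global flag.

// In the app-starting code, instead of window.QUnitGreenLight = true:
window.dispatchEvent( new Event( 'app-scripts-loaded' ) );

// Inline in the test HTML, instead of the polling loop:
QUnit.config.autostart = false;
window.addEventListener( 'app-scripts-loaded', function() {
  QUnit.start();
}, { once: true } );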