I want to write a simple spec involving DOM manipulation, to confirm that PhantomJS is working:
/*global describe it */
'use strict';
(function () {
  describe('DOM Tests', function () {
    var strong = 'Viva!';
    var text = document.getElementById('test').innerHTML;
    console.log(text);
    it("is in DOM", function () {
      expect(strong).equal('Viva!');
    });
  });
})();
But after running grunt test, no assertions were run:
Running "clean:server" (clean) task
Cleaning ".tmp"...OK
Running "haml:app" (haml) task
Running "coffee:dist" (coffee) task
File .tmp/scripts/app.js created.
Running "coffee:test" (coffee) task
Running "compass:dist" (compass) task
directory .tmp/styles/
create .tmp/styles/main.css
Running "compass:server" (compass) task
unchanged app/styles/main.scss
Running "connect:test" (connect) task
Starting connect web server on localhost:9000.
Running "mocha:all" (mocha) task
Testing index.htmlOK
>> 0 assertions passed (0s)
Done, without errors.
Here is the code repository, in case you need to check my code
Hi all. I'm trying to implement an app that uses Parse Server as the backend,
and I'm trying to use mocha/chai to write unit tests for the cloud code functions,
like the code below.
const { expect } = require('chai');
const { server } = require('../index.js');
const Parse = require('parse/node');

let loggedUser;
let loggedUserSessionToken;

describe('SMS APIs', function () {
  before('Initialize parse server.', function (done) {
    Parse.initialize("appId");
    Parse.serverURL = 'http://localhost:1337/parse';
    done();
  });

  after('Close server', function (done) {
    server.close();
    done();
  });

  it('Pass', function (done) {
    expect(1).to.equal(1);
    done();
  });
});
After I run yarn mocha, the command line shows lots of log messages, which makes it hard to read the mocha test results (like the picture below). Is there any method to turn off the Parse logger?
command line logger image
Take a look at how the parse-server repo does it: helper.js.
The key is to set silent: true in the parse-server configuration.
I do this by using the wonderful config package, creating a test.js config that sets silent to true and then setting NODE_ENV=test when running my unit tests. Sounds like a lot to do, but this pattern is commonly reused for many things. Good luck!
I am using gulp and mocha to run unit tests as part of a gulp workflow for my ReactJS app. The unit tests work:
gulp.task('mocha', function () {
  return gulp
    .src(['test/*.js'])
    .pipe(mocha({
      compilers: {
        js: babel
      }
    }));
});
However, if a unit test fails, I would like to exit the whole gulp workflow. How can I do this?
You could try to just kill the process and restart it when you want? Perhaps I do not fully understand your question; if so, please elaborate.
In the cmd window where you run the gulp script you can press CTRL + C, then Y to confirm. This stops the current script.
Mocha runs the tests. Gulp simply groups files and folder locations, and pipe invokes mocha with this grouped information. What you are asking for is a mechanism for mocha to communicate the test results back to gulp, instead of writing them to its stdout, if I read your question correctly. gulp automatically exits when mocha exits, but if it does not, then either you have a watch task, or there is a callback in your gulp file that has not been resolved, or you are hitting this issue: https://github.com/sindresorhus/gulp-mocha/issues/1
You can use
.on('error', process.exit.bind(process, 1))
to exit the process when a test fails.
Or, if it is a callback issue, resolve the call with a done()
gulp.task('taskname', function (done) {
  gulp.src('test/testfile.js')
    .pipe(gulpmocha())
    .on('error', process.exit.bind(process, 1));
  // resolve the task callback once the tests have had time to finish
  setTimeout(function () {
    done(null);
  }, 5000);
});
One of my tests is failing intermittently when running the whole suite, but it doesn't fail when running it by itself.
I created a very basic repository with a vanilla application that reproduces the issue:
https://github.com/juanazam/ember-cli-test-issue.
Basically, I created a component with a text field and a button. The button is disabled while the text is empty.
The issue happens when two tests use the fillIn helper on the input.
Here is the testing code taken from the vanilla app:
test('test 1', function(assert) {
  visit('/');
  fillIn('input[type=text]', "Algo");
  andThen(function() {
    assert.equal(currentRouteName(), "index");
  });
});

test('test 2', function(assert) {
  visit('/');
  andThen(function() {
    assert.ok(find('input[type=submit]').is(':disabled'));
  });
  fillIn('input[type=text]', "Algo");
  andThen(function() {
    assert.ok(!find('input[type=submit]').is(':disabled'));
  });
});
As you can see, test 1 only fills the input but doesn't do anything with it. The second test checks whether the button is disabled.
Test 2 fails intermittently when running the whole suite. If you run ember test -s it fails; if you reload the browser tab (re-running the whole suite without restarting the server process), it passes. The same behavior repeats across runs (one run fails, the next succeeds).
I didn't create a twiddle reproduction case because the test runner doesn't behave the same way.
With your app:
ember test will always fail.
ember test --filter 'test 1' will always pass.
ember test --filter 'test 2' will always pass.
If you split your 2 test functions into different acceptance tests ember test will always pass.
QUnit in the browser tries to execute previously failing tests first (I think to reduce the time until the most interesting, failing, tests are executed). With ember test -s your tests are always executed in order, and the tests fail (I guess test 2 fails because test 1 already filled your input, so it is not initially disabled as expected).
When reloading QUnit in the browser after the first failed run, the failing test 2 is executed first (and passes).
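If that reordering is what makes browser runs differ from fresh runs, QUnit has a configuration flag to disable it. A minimal sketch (reorder is a documented QUnit config option; where to put it in an Ember app, e.g. the test helper file, is my assumption):

```javascript
// In a file loaded before the tests run (e.g. tests/test-helper.js):
// run tests in declaration order instead of "previously failed first"
QUnit.config.reorder = false;
```
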
Also have a look at https://dockyard.com/blog/2014/04/17/ember-object-self-troll. There might be a problem in your component definition leading to the unexpectedly filled input in test 2.
I'm trying to run some Jasmine tests in Karma, but the run is failing because it says it ran 0 of 0 tests. Can someone tell me what I'm doing wrong?
The async request mock fires and hits the callback. When I go to the debugger, it even says 2 tests completed, but the console still reports a failure. What gives?
describe('User Info Tests:', function () {
  describe('Fetch User Info:', function () {
    it("User Name should match", function (done) {
      // mock async request
      getUserProfile(1, 2, function (userProfile) {
        var match = userProfile.displayName === 'Unit Test User';
        expect(match).toBeTruthy();
        done();
      }, function (msg) {
        // report the failure instead of throwing after done()
        done.fail(msg);
      });
    });
  });
});
See the screenshot below of the debug console of the tests running. You will see the tests ran with a status of SUCCESS.
So the problem was that I wasn't including the karma-requirejs plugin in the karma.conf.js file. Apparently it doesn't want you to include your own copy of require.js in the files collection. Once I added that plugin, everything just worked.
frameworks: ['jasmine-jquery', 'jasmine', 'requirejs'],
plugins: [
  'karma-phantomjs-launcher',
  'karma-chrome-launcher',
  'karma-jasmine-jquery',
  'karma-jasmine',
  'karma-requirejs'
],
Make sure the karma-requirejs plugin is actually installed through npm and in your package.json as well!
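If it is missing, one way to add it (a sketch, assuming npm; RequireJS itself is also needed as a dev dependency):

```shell
npm install --save-dev karma-requirejs requirejs
```
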
I'm running a set of QUnit tests that use module-level setup and teardown methods. I've noticed that using start() and stop() inside my tests appears to disrupt when these get called, which causes problems, as certain items made available in my setup are not available to some of the tests that run.
Edit: I've noticed that this happens exclusively when I load my test scripts programmatically (I'm using a script loader: LABjs). I have modified the subject and content of this question accordingly. I am loading tests like this:
$LAB.script('/static/tests.js')
Still not sure why this happens.
Here's a sample of my test module:
module('Class Foo', {
  setup: function() {
    console.log('setup called');
  },
  teardown: function() {
    console.log('teardown called');
  }
});

test('Test1', function() {
  stop();
  console.log('test1');
  ok(true);
  start();
});

test('Test2', function() {
  stop();
  console.log('test2');
  ok(true);
  start();
});

test('Test3', function() {
  stop();
  console.log('test3');
  ok(true);
  start();
});
This yields the following console output (note that setup is called twice in a row, then not again):
setup called
test1
teardown called
setup called
setup called
test3
teardown called
test2
teardown called
Removing the start/stop calls, or modifying my test files so they are not loaded programmatically (i.e. using traditional script tags):
test('Test3', function() {
  console.log('test3');
  ok(true);
});
Yields a more expected order of execution:
setup called
test1
teardown called
setup called
test2
teardown called
setup called
test3
teardown called
Am I misunderstanding something about how this should be functioning?
It appears that QUnit expects your test scripts to be loaded by the time it kicks itself off. This behavior is configurable, so I found the following setup worked to defer QUnit's start until all test scripts were available:
QUnit.config.autostart = false;

$LAB.script('/static/tests.js').wait(function() {
  QUnit.start();
});
I'm still not sure why this happens so would be interested to see any answers in that regard (or I'll update this when I figure it out!) but this solution gets me by for now.