Jasmine tests with a Require/Backbone project

My objective is to try out TDD (test-driven development), but after a weekend on it I really need your help :)
First question: what is the best way to do TDD, a browser runner or a headless runner?
Second: I really want to test my project without a browser before putting it into production. So far I haven't succeeded :(
For example, how would I test my Projects model, which looks like this:
define([
  'underscore',
  'backbone'
], function(_, Backbone) {
  var projectsModel = Backbone.Model.extend({
    defaults: {
      score: 10
    },
    initialize: function() {
    }
  });
  return projectsModel;
});
How can I do this?
I have already checked jasmine-node, JsTestDriver, and others, but without success :/
jasmine-node looks great, but I need some help, because every tutorial I've found on the web only works for a simple model without Require dependencies...
Thank you :)
PS: I also checked the link here, but I hit the same error :/

Node has issues emulating a real browser, with all its quirks, Ajax, etc. Something like PhantomJS works damn well though. You use a script to open your test-runner page and let it run in PhantomJS, and have some other code pull out the results.
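For the RequireJS part, the usual approach is to write the spec itself as an AMD module, so that the test-runner page (opened in a browser or through PhantomJS) pulls in both the spec and the model via require. A minimal sketch, assuming your model is mapped to 'models/projects' in your RequireJS config (that path is an assumption, adjust it to your setup):
define(['models/projects'], function(ProjectsModel) {
  // ProjectsModel is whatever the AMD module above returns.
  describe('ProjectsModel', function() {
    it('has a default score of 10', function() {
      var model = new ProjectsModel();
      expect(model.get('score')).toBe(10);
    });
  });
});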


How can I unit test a MassTransit consumer that builds and executes a routing slip?

In .NET Core 2.0 I have a fairly simple MassTransit routing slip that contains 2 activities. This is built and executed in a consumer, and it all ties back to an Automatonymous state machine. It all works great, albeit with a few final clean-up tweaks needed.
However, I can't quite figure out the best way to write unit tests for my consumer as it builds a routing slip. I have the following code in my consumer:
public async Task Consume(ConsumeContext<ProcessRequest> context)
{
    var builder = new RoutingSlipBuilder(NewId.NextGuid());
    SetupRoutingSlipActivities(builder, context);
    var routingSlip = builder.Build();
    await context.Execute(routingSlip).ConfigureAwait(false);
}
I created the SetupRoutingSlipActivities method because I thought it would help me write tests to make sure the right activities were being added. It simply looks like this:
public void SetupRoutingSlipActivities(RoutingSlipBuilder builder, ConsumeContext<IProcessCreateLinkRequest> context)
{
    builder.AddActivity(
        nameof(ActivityOne),
        new Uri("execute_activity_one_example_address"),
        new ActivityOneArguments(
            context.Message.Id,
            context.Message.Name)
    );
    builder.AddActivity(
        nameof(ActivityTwo),
        new Uri("execute_activity_two_example_address"),
        new ActivityTwoArguments(
            context.Message.AnotherId,
            context.Message.FileName)
    );
}
I tried to write tests for SetupRoutingSlipActivities using a Moq mock of the builder and a MassTransit InMemoryTestHarness, but I found that the AddActivity method is not virtual, so I can't verify it like this:
aRoutingSlipBuilder.Verify(x => x.AddActivity(
    nameof(ActivityOne),
    new Uri("execute_activity_one_example_address"),
    It.Is<ActivityOne>(y => y.Id == 1 && y.Name == "A test name")));
Please ignore some of the odd data in the code examples; this is a simplified version.
Does anyone have any recommendations on how to do this? I also wanted to verify that the RoutingSlipBuilder was created, but as that instance is created inside the Consume method, I wasn't sure how to do it! I've searched a lot online and through the MassTransit repo, but nothing stood out.
Look at how the Courier tests are written; there are a number of test fixtures available for testing routing slip activities. While they aren't well documented, the unit tests are a working testament to how the test harness is used.
https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit.Tests/Courier/TwoActivityEvent_Specs.cs

Testing Redux: should tests of a reducer use actions or not?

I'm currently learning Redux.
So far, as I'm discovering how to manage an app's state, I don't want to focus on any integration with a framework (like React). I just want to understand the ideas and concepts behind Redux.
I followed the courses given by Dan Abramov on egghead.io.
I like the way he explains things by testing his app, so I started playing with Redux the same way.
I built an app with Redux. Of course it has multiple reducers and actions.
I won't share any code here because it has no particular interest.
It's more a matter of how to deal with tests and Redux.
I don't know if it makes sense to test reducers with their corresponding actions or if I should mock the actions in my tests.
I started by mocking the actions because, at first, I thought it was a good idea to keep my tests separate and not have dependencies between reducers and actions. (It's also what I've seen in most tutorials, but tutorials often build small apps.)
Now I realize that I sometimes end up with a mock that differs from the corresponding action, and even if my tests are fine, the app could break when I use dispatch(myAction()), as it will be something different than expected.
Should I use my actions in my reducer tests?
Thanks a lot for any explanation about that.
EDIT: Some code for a better explanation
REDUCER
case CREATE_USER_IN_PROJECT:
  currentProject = state.filter(p => p.id === action.payload.idProjet)[0]
  indexCurrentProject = state.indexOf(currentProject)
  people = [
    ...currentProject.people,
    action.payload.idUser
  ]
  return [
    ...state.slice(0, indexCurrentProject),
    Object.assign({}, currentProject, {people}),
    ...state.slice(indexCurrentProject + 1)
  ]
REDUCER'S TEST
it('CREATE_PROJECT if no project should only have the new project', done => {
  let idNewProject = uuid.v4()
  expect(
    projects(undefined, {
      type: CREATE_PROJECT,
      payload: {
        id: idNewProject,
        name: 'New project !'
      }
    })
  )
  .toEqual([{
    id: idNewProject,
    name: 'New project !',
    people: [],
    money: '€',
    operations: [],
    archived: false,
    closed: false
  }])
  done()
})
So here, instead of having
{
  type: CREATE_PROJECT,
  payload: {
    id: idNewProject,
    name: 'New project !'
  }
}
should I call my action createProject('New project !') instead?
Thanks for the clarification. It turns out I misunderstood you in my comment. Here's a hopefully more helpful explanation.
You shouldn't use your actual actions, e.g. createProject('New project !'), in testing your reducers.
Reducers are simple state machines that take an input and return an output. Your tests should check that they do exactly that, where the input is the previous state plus an action, and the output is the next state. And yes, it still counts as unit testing (I don't see why it wouldn't).
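In other words, keep the test like the one in your question and hand the reducer a literal action object. A minimal sketch for the CREATE_USER_IN_PROJECT case from your reducer (the initial state here is made up):
it('CREATE_USER_IN_PROJECT adds the user to the project', done => {
  let initialState = [{ id: 'p1', people: [] }]
  let nextState = projects(initialState, {
    type: CREATE_USER_IN_PROJECT,
    payload: { idProjet: 'p1', idUser: 'u1' }
  })
  expect(nextState[0].people).toEqual(['u1'])
  done()
})
If createProject() later changes shape, you catch that in the action creator's own tests rather than by coupling every reducer test to it.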
I found this to be a good read on how to test reducers.

Unit test a polymer web component that uses firebase

I have been trying to configure offline unit tests for Polymer web components that use the latest release of the Firebase distributed database. Some of my tests are passing, but others, which look nigh identical to the passing ones, are not running properly.
I have set up a project on github that demonstrates my configuration, and I'll provide some more commentary below.
Sample:
https://github.com/doctor-g/wct-firebase-demo
In that project, there are two suites of tests that work fine. The simplest is offline-test, which doesn't use web components at all. It simply shows that it's possible to use the Firebase database's offline mode to run some unit tests. The heart of this trick is in the suiteSetup method shown below, a trick I picked up from nfarina's work on firebase-server.
suiteSetup(function() {
  app = firebase.initializeApp({
    apiKey: 'fake',
    authDomain: 'fake',
    databaseURL: 'https://fakeserver.firebaseio.com',
    storageBucket: 'fake'
  });
  db = app.database();
  db.goOffline();
});
All the tests in offline-test pass.
The next suite is wct-firebase-demo-app_test.html, which tests the eponymous web component. This suite contains a series of unit tests that are set up like offline-test and that pass. Following the idea of dependency injection, the wct-firebase-demo-app component has a database attribute into which the Firebase database reference is passed, and this is used to make all the Firebase calls. Here's an example from the suite:
test('offline set string from web component attribute', function(done) {
  element.database = db;
  element.database.ref('foo').set('bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val(), 'bar');
    done();
  });
});
I have some very simple methods in the component as well, in an attempt to triangulate toward the broken pieces I'll talk about in a moment. Suffice it to say that this test passes:
test('offline push string from web component function', function(done) {
  element.database = db;
  let resultRef = element.pushIt('foo', 'bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val()[resultRef.key], 'bar');
    done();
  });
});
and is backed by this implementation in wct-firebase-demo-app:
pushIt: function(at, value) {
  return this.database.ref(at).push(value);
},
Once again, these all pass. Now we get to the real quandary. There's a suite of tests for another element, x-element, which has a method pushData:
pushData: function(at, data) {
  this.database.ref(at).push(data);
}
The test for this method is the only test in its suite:
test('pushData has an effect', function(done) {
  element.database = db;
  element.pushData('foo', 'xyz');
  db.ref('foo').once('value', function(snapshot) {
    expect(snapshot.val()).not.to.be.empty;
    done();
  });
});
This test does not pass. While this test is running, the console comes up with an error message:
Your API key is invalid, please check you have copied it correctly.
By setting some breakpoints and stepping through the execution, it seems to me that this error comes up after the call to once but before the callback is triggered. Note, again, that this doesn't happen with the same test structure described above in wct-firebase-demo-app.
That's where I'm stuck. Why do the offline-test and wct-firebase-demo-app_test suites work fine, while I get this API key error in x-element_test? The only other clue I have is that if I copy a valid API key into my initializeApp configuration, I get a test timeout instead.
UPDATE:
Here is a (patched-together) image of my console log when running the tests:
To illustrate the issue brought up by tony19 below, here's the console log with just pushData has an effect in x-element_test commented out:
The offline-test results are apparently false positives. If you check the Chrome console, offline-test actually throws the same error.
The error doesn't affect the test results, most likely because the API key validation occurs asynchronously, after the test has already completed. If you could somehow hook into that validation, you'd be able to catch the error in your tests.
Commenting out all tests except offline firebase is ok shows the error still occurring, which points to suiteSetup(). Narrowing the problem down further by commenting out 2 of the 3 function calls in that setup, we see the error is caused by the call to firebase.initializeApp() (and is not necessarily related to once() as you had suspected).
One workaround to consider is wrapping the Firebase library in a class/interface of your own and mocking that for unit tests.
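A rough sketch of that wrapper idea, with hypothetical names: the elements talk to the wrapper instead of the Firebase SDK, so a unit test can inject a hand-rolled fake and never call firebase.initializeApp() at all.
// Thin wrapper exposing only the operations the elements actually use.
function DatabaseWrapper(db) {
  this._db = db;
}

DatabaseWrapper.prototype.push = function(at, value) {
  return this._db.ref(at).push(value);
};

DatabaseWrapper.prototype.once = function(at, callback) {
  return this._db.ref(at).once('value', callback);
};
In a test, element.database would then be a plain fake object with the same push/once surface, sidestepping API key validation entirely.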

One passing Ember integration test breaks the next one: bad teardown block?

I have these two Ember integration tests, A and B. (I have a lot more, but in debugging this I have literally removed every other test in order to isolate the problem. There are 9 tests in the same file as A, and I commented out the other 8.) If A runs before B, B will fail. If B runs by itself, or before A, it will pass.
From this description it seems pretty clear that A is doing something to the test environment that screws up B. After liberally salting the tests and the production code involved with log messages, however, I'm no closer to figuring out what's going on, and I'm hoping someone else can spot whether there's an obvious issue.
Right now I'm looking closely at the afterEach blocks in both tests. Here's an outline of what the beforeEach and afterEach blocks look like for test A:
beforeEach: function() {
  server = new Pretender(function() {
    // Pretender setup removed for brevity
  });
  App = startApp();
},

afterEach: function() {
  server.shutdown();
  Ember.run(App, App.destroy);
}
That afterEach is pretty much the stock ember-cli code, but it baffles me a bit. The documentation on Ember.run() suggests it should be given a function as an argument, but we're not giving it one here, so I'm not sure how that works. And should the Pretender shutdown() call be inside the Ember.run (or in its own Ember.run)?
The versions, for the record: ember-cli 0.2.0, Ember 1.10.1.
ETA: The issue goes away when I update to ember-cli 0.2.3 and Ember 1.11.3. Now if only I could figure out the other failing tests we have with that update...
Your setup and teardown look fine. They are commonly used and properly defined.
However, there is (still) an open issue on ember-qunit about not tearing down the app properly - take a look here to see the progress.
As you said, it does not happen in Ember 1.13.
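As for the Ember.run(App, App.destroy) form that puzzled you: Ember.run also accepts a target and a method (plus optional arguments) and invokes the method with the target as this, inside a run loop. So the stock teardown is just shorthand for the callback form:
// These two calls are equivalent; both destroy the app inside a run loop.
Ember.run(App, App.destroy);

Ember.run(function() {
  App.destroy();
});
Pretender isn't Ember-aware, so its shutdown() call shouldn't need a run loop of its own.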

When I run ember test and visit /tests the results are inconsistent, how can I troubleshoot why these are different?

I have been working with Ember for a little over a month now, and I have yet to find a solution to some testing inconsistencies I have been experiencing.
The problem is that when I run ember test from the command line and visit /tests in the browser, I sometimes see a different total number of tests. It seems like ember test, with PhantomJS as the test runner, is skipping some tests. On top of that, the results seem to be inconsistent as well.
For instance, I have a simple acceptance test:
import Ember from 'ember';
import startApp from '../helpers/start-app';

var App;

module('Acceptance: Login', {
  setup: function() {
    App = startApp();
  },
  teardown: function() {
    Ember.run(App, 'destroy');
  }
});

test('Page contents', function() {
  visit('/login');
  andThen(function() {
    equal(find('form.login').length, 1);
  });
});
When I visit /tests, all of my tests pass; however, when I run ember test I get one failure:
not ok 1 PhantomJS 1.9 - Acceptance: Login: Page contents
---
actual: >
0
expected: >
1
Log: >
...
Thanks in advance for any help.
I had the same frustration as you until I looked a bit closer at what was being counted.
When I run my tests in a browser, the page shows how many assertions were run. When I run PhantomJS (via the ember test command line), the log only reports how many tests were run. There can be many assertions in a single test.
If I scroll to the very bottom of the page after a test run is complete in a browser, I see that the number next to the final test matches the total number of tests run in phantomjs.
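For example, a single test with three assertions counts as three in the browser's assertion total but as one test in the PhantomJS/TAP output (the extra selectors here are invented for illustration):
test('Page contents', function() {
  visit('/login');
  andThen(function() {
    // Three assertions, still one test.
    equal(find('form.login').length, 1);
    equal(find('form.login input[type=password]').length, 1);
    equal(find('form.login button').length, 1);
  });
});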
As for why your test is breaking in PhantomJS, it could be due to a number of things. Without seeing your Handlebars templates and implementation it is hard to tell, but I've seen problems with timing and also jQuery binding issues that fail only in a headless browser (i.e., PhantomJS).
If you post some more specifics, I may be able to help.