I'm currently learning Redux and writing unit tests as part of a TDD process using Jest.
I'm writing tests for action creators and reducers, but I'm struggling with one question: can I make use of action creators in the reducer tests?
import * as types from './../../constants/auth';
import * as actions from './../../actions/auth';
import reducer, {initialState} from './../auth';
Can I do this:
it('should set isFetching to true', () => {
  const expectedState = {
    ...initialState,
    isFetching: true
  }
  expect(
    reducer(initialState, actions.loginPending())
  ).toEqual(expectedState)
});
instead of this?
it('should set isFetching to true', () => {
  const expectedState = {
    ...initialState,
    isFetching: true
  }
  expect(
    reducer(initialState, {type: types.LOGIN_PENDING})
  ).toEqual(expectedState)
});
I came to this doubt because the official documentation uses hard-coded actions in the reducer tests:
expect(
  reducer([], {
    type: types.ADD_TODO,
    text: 'Run the tests'
  })
).toEqual([{
  text: 'Run the tests',
  completed: false,
  id: 0
}])
I guess using hard-coded actions is the best practice, isn't it?
Interesting question, and I would say it depends on how you run your test suite. Personally, I hard-code the actions because, if you think about it, they declaratively explain what the reducer is expecting. The argument in favor of importing the actions is that if you ever change their source, the tests will not need to be updated. However, this also means you're expecting your actions to always be correct BEFORE running these tests.
If that's the case (if you always run your actions test suite before this one), then it would be reasonable to import them in your reducer test suite. The only argument against this logic is that it's not as easy for a new developer to learn how your reducer works by looking only at the reducer test suite; they would also need to look at the actions source file to see what types of actions are dispatched.
On the other hand, hard-coding your actions is more declarative but does require you to update each reducer test if your action changes. The reason I still recommend this approach is that it allows you to send more controlled data, but I do agree that it increases maintenance costs.
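For reference, a minimal sketch of the kind of action creator the imported-actions version assumes (the name matches the question; the implementation is an assumption):

// Hypothetical action creator assumed by actions.loginPending() above.
export const loginPending = () => ({
  type: types.LOGIN_PENDING
});

With the hard-coded style, a change to this creator (say, adding a payload) fails the action creator tests, while the reducer tests keep asserting against the exact action shape the reducer was designed for.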
I was brought in on the "back-end" of a project and asked to help write tests for an app. I am very new to Ember and need just a little help getting started. We are trying to write unit tests for the routes, so we can have more granular coverage of the app instead of acceptance tests. I have looked at some tutorials and gone through every possible scenario I could think of. I just need a bit of a jumpstart.
Here is part of the route.js for this route.
Downstream of this parent route, we have another nested route that shows a list of contacts. When a user clicks on a show button, it calls model and returns the variable rec for the template and the URL.
export default Route.extend(ScrollTo, {
  flashMessages: service(),

  model: function(params) {
    let rec = this.store.peekRecord('contact', params.contact_id);
    return rec;
  },

  actions: {
    saveContact: function() {
      let model = this.currentRouteModel();
      model
        .save()
        .then(() => {
          //this.refresh();
          //this.setModelHash();
          this.flashMessages
            .success(`Saved Contact: ${model.get('title')}`);
          //this.transitionTo('contacts');
        });
    }
  }
});
Here is the pre-generated code for the test. I really haven't made any modifications, because I didn't know where to start.
This app doesn't have a back-end; its design is to take in information and produce an ISO file based on whatever standard the user wants.
I am sure I need to provide some mock data for the test and send it to the method, but again I am not sure which QUnit pieces to use.
import { module, test } from 'qunit';
import { setupTest } from 'ember-qunit';

module('Unit | Route | contact/show', function(hooks) {
  setupTest(hooks);

  test('it exists', function(assert) {
    var route = this.owner.lookup('route:contact/show');
    assert.ok(route, 'contact.show route works');
  });
});
I struggled with this as well when I first moved to front-end testing. With a few years' experience, I'm confident in saying you shouldn't unit test route objects in this way. Instead, the testing of an Ember app should focus on two types of tests:
What Ember calls integration tests (but which are actually closer to UI unit tests)
Acceptance tests
Integration tests on components allow you to get into the smaller details and edge cases, and they can be super valuable.
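For example, here is a minimal sketch of such a rendering ("integration") test; the contact-form component, its template, and its output are assumptions, not code from the app above:

import { module, test } from 'qunit';
import { setupRenderingTest } from 'ember-qunit';
import { render } from '@ember/test-helpers';
import hbs from 'htmlbars-inline-precompile';

module('Integration | Component | contact-form', function(hooks) {
  setupRenderingTest(hooks);

  test('it shows the contact title', async function(assert) {
    // Hypothetical component and data shape, for illustration only.
    this.set('contact', { title: 'Jane Doe' });
    await render(hbs`{{contact-form contact=this.contact}}`);
    assert.ok(this.element.textContent.includes('Jane Doe'));
  });
});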
What I think you want in this case is actually an acceptance test.
If your testing experience is at all like mine, it will feel like you're testing too many things at first, but these types of tests, where the browser actually loads the entire application, have the most value in hunting down bugs and are the most maintainable over time.
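A rough sketch of what that could look like for the save flow above (the URL and selector are assumptions about your app, not known values):

import { module, test } from 'qunit';
import { setupApplicationTest } from 'ember-qunit';
import { visit, click } from '@ember/test-helpers';

module('Acceptance | contact/show', function(hooks) {
  setupApplicationTest(hooks);

  test('saving a contact shows a flash message', async function(assert) {
    await visit('/contacts/1');   // hypothetical URL for the show route
    await click('.save-contact'); // hypothetical selector for the save button
    // The flash message text comes from saveContact() in the route above.
    assert.ok(this.element.textContent.includes('Saved Contact'));
  });
});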
I have a CloudFormation stack that defines a GraphQL API powered by DynamoDB. I would like to run a test script that:
Creates a standard set of fixtures.
Runs various tests, including creating, modifying, and deleting data.
Deletes all of the fixtures and any other objects created during the test.
The “clean way” to do this would be to create a new stage for the tests, but this is extremely time-consuming (in terms of wall-clock time spent waiting for the result).
The “hard way” would be to keep precise track of every DynamoDB record created during the testing process and then delete them afterward one by one (and/or using many batch updates). This would be a huge pain to code, and the likelihood of error is very high.
An intermediate approach would be to use a dedicated pre-existing stage for integration tests, wipe it clean at the end of the tests, and make sure that only one set of tests is running at a time. This would require writing a script to manually clear out the tables, which sounds moderately less tedious and error-prone than the “hard way”.
Is there an established best practice for this? Are there other approaches I haven't considered?
How long does it take to deploy the stack?
If it is only a small portion of the time it takes to run the tests, use the “clean way”; otherwise, use the intermediate approach of having a dedicated test stack already deployed.
You don't have to write scripts.
I actually wrote a testing library for exactly this purpose:
https://github.com/erezrokah/aws-testing-library/blob/master/src/jest/README.md#tohaveitem
Usage example (TypeScript):
import { clearAllItems } from 'aws-testing-library/lib/utils/dynamoDb';
import { invoke } from 'aws-testing-library/lib/utils/lambda';

describe('db service e2e tests', () => {
  const region = 'us-east-1';
  const table = 'db-service-dev';

  beforeEach(async () => {
    await clearAllItems(region, table);
  });

  afterEach(async () => {
    await clearAllItems(region, table);
  });

  test('should create db entry on lambda invoke', async () => {
    const result = await invoke(region, 'db-service-dev-create', {
      body: JSON.stringify({ text: 'from e2e test' }),
    });
    const lambdaItem = JSON.parse(result.body);

    expect.assertions(1);
    await expect({ region, table, timeout: 0 }).toHaveItem(
      { id: lambdaItem.id },
      lambdaItem,
    );
  });
});
If you do write the scripts yourself, you might need to consider eventual consistency and retries (as the data might not be available immediately after a write).
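For reference, a minimal sketch of such a table-wipe helper, assuming the AWS SDK for JavaScript (v2) and a table whose primary key is a single attribute named id; adjust the key extraction to your schema:

const AWS = require('aws-sdk');

async function clearTable(region, tableName) {
  const db = new AWS.DynamoDB.DocumentClient({ region });
  let lastKey;
  do {
    // Scan one page of items, then delete them in batches of 25
    // (the BatchWriteItem limit).
    const page = await db
      .scan({ TableName: tableName, ExclusiveStartKey: lastKey })
      .promise();
    for (let i = 0; i < page.Items.length; i += 25) {
      await db
        .batchWrite({
          RequestItems: {
            [tableName]: page.Items.slice(i, i + 25).map((item) => ({
              DeleteRequest: { Key: { id: item.id } },
            })),
          },
        })
        .promise();
    }
    lastKey = page.LastEvaluatedKey;
  } while (lastKey);
  // Production code should also retry any UnprocessedItems returned
  // by batchWrite, for the eventual-consistency reasons noted above.
}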
I'd simply delete this "test stack" at the end of the test and let CloudFormation clean up DynamoDB for you; check the DeletionPolicy documentation.
You might want to trigger/hook stack deletion from your CI environment, whatever it is. As an example, I found this CodePipeline walkthrough: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-basic-walkthrough.html
I know Mocha has global before and after hooks, and per-test beforeEach and afterEach hooks, but what I would like is test-specific before and after. Something like SoapUI has.
For example, say that I have a test checking that the creation of a user works.
I want to remove the user, should it exist, from the database BEFORE the test. And I want the test to ensure that the user is removed AFTER the test. But I do not want to do this for EACH test, as only one test will actually create the user. Other tests will delete users, update users, fail to create an already-existing user, etc.
Is this possible, or do I have to include the setup and teardown code in the test? If so, how do I ensure that both the setup and teardown execute properly, independent of the test result?
For tests where I need special setup and teardown code but that are not otherwise distinguishable from their siblings, I just put a describe block with an empty title:
describe("SomeClass", () => {
describe("#someMethod", () => {
it("does something", () => {});
it("does something else", () => {});
describe("", () => {
// The before and after hooks apply only to the tests in
// this block.
before(() => {});
after(() => {});
it("does something more", () => {});
});
});
});
Is this possible, or do I have to include the setup and teardown code in the test? If so, how do I ensure that both the setup and teardown execute properly, independent of the test result?
You can put setup and teardown code in the test itself (i.e. inside the callback you pass to it). However, Mocha will treat any failure there as a failed test, period; it does not matter where in the callback the failure occurs. Assertion libraries allow you to provide custom error messages, which can help you figure out what exactly failed, but Mocha sees all failures inside it the same way: the test failed. If you want Mocha to treat failures in setup/teardown code differently from test failures, then you have to use the hooks as I've shown above.
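If you do keep setup and teardown inside a single test, a try/finally block at least guarantees the teardown runs regardless of the test result. A sketch, where removeUserIfExists, createUser, and removeUser are hypothetical helpers:

const assert = require('assert');

it('creates a user', async function() {
  await removeUserIfExists('alice');           // setup: start from a clean state
  try {
    const created = await createUser('alice'); // the behavior under test
    assert.ok(created);
  } finally {
    await removeUser('alice');                 // teardown: runs even if the assertion throws
  }
});

Mocha will still report any failure in the setup or teardown as a failure of this test, which is exactly the limitation described above.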
I'm currently learning Redux.
So far, as I'm discovering how to manage an app's state, I don't want to focus on any integration with a framework (like React). I just want to understand the ideas and concepts behind Redux properly.
I followed courses given by Dan Abramov on egghead.io.
I like the way he explains by testing his app, so I started playing with Redux the same way.
I built an app with Redux. Of course it has multiple reducers and actions.
I won't share any code here because it has no particular interest.
It's more a matter of how to deal with tests and Redux.
I don't know if it makes sense to test reducers with their corresponding actions, or if I should mock the actions in my tests.
I started by mocking the actions because, at first, I thought it was a good idea to separate my tests and not have dependencies between reducers and actions (and it's what I've seen in most tutorials, but tutorials often build small apps).
Now I realize that I sometimes end up with a mock that differs from the corresponding action, and even if my tests are fine, the app could break when I use dispatch(myAction()), as it will dispatch something different than expected.
Should I use my actions in my reducer tests?
Thanks a lot for any explanation about that.
EDIT: Some code to explain better.
REDUCER
case CREATE_USER_IN_PROJECT:
  currentProject = state.filter(p => p.id === action.payload.idProjet)[0]
  indexCurrentProject = state.indexOf(currentProject)
  people = [
    ...currentProject.people,
    action.payload.idUser
  ]
  return [
    ...state.slice(0, indexCurrentProject),
    Object.assign({}, currentProject, {people}),
    ...state.slice(indexCurrentProject + 1)
  ]
REDUCER'S TEST
it('CREATE_PROJECT if no project should only have the new project', done => {
  let idNewProject = uuid.v4()

  expect(
    projects(undefined, {
      type: CREATE_PROJECT,
      payload: {
        id: idNewProject,
        name: 'New project !'
      }
    })
  )
  .toEqual([{
    id: idNewProject,
    name: 'New project !',
    people: [],
    money: '€',
    operations: [],
    archived: false,
    closed: false
  }])

  done()
})
So here, instead of having
{
  type: CREATE_PROJECT,
  payload: {
    id: idNewProject,
    name: 'New project !'
  }
}
should I call my action createProject('New project !')?
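For reference, such an action creator would presumably look something like this (a hypothetical sketch, mirroring the payload used in the test above):

// Hypothetical createProject action creator implied by the question.
const createProject = (name) => ({
  type: CREATE_PROJECT,
  payload: {
    id: uuid.v4(),
    name
  }
})

Note that because this creator generates its own id, the test could no longer assert the exact expected state without stubbing uuid, which already hints at one practical argument for hard-coding the action in reducer tests.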
Thanks for the clarification. Turns out I misunderstood you in my comment. Here's a hopefully more helpful explanation.
You shouldn't use your actual actions, e.g. createProject('New project !'), in testing your reducers.
Reducers are simple state machines that take an input and return an output. Your tests should check that they do exactly that, where the input is the previous state plus an action, and the output is the next state. And yes, it still counts as unit testing (I don't see why it wouldn't).
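One way to make that pure input-to-output property explicit is to assert determinism directly, e.g. a quick sketch with the projects reducer above:

// The same (state, action) input must always produce the same output.
const action = { type: CREATE_PROJECT, payload: { id: '42', name: 'New project !' } }
expect(projects(undefined, action)).toEqual(projects(undefined, action))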
I found this a good read on how to test reducers
I have been trying to configure offline unit tests for Polymer web components that use the latest release of the Firebase distributed database. Some of my tests are passing, but others that look nigh identical to the passing ones are not running properly.
I have set up a project on github that demonstrates my configuration, and I'll provide some more commentary below.
Sample:
https://github.com/doctor-g/wct-firebase-demo
In that project, there are two suites of tests that work fine. The simplest is offline-test, which doesn't use web components at all. It simply shows that it's possible to use the Firebase database's offline mode to run some unit tests. The heart of this trick is in the suiteSetup method shown below, a trick I picked up from nfarina's work on firebase-server.
suiteSetup(function() {
  app = firebase.initializeApp({
    apiKey: 'fake',
    authDomain: 'fake',
    databaseURL: 'https://fakeserver.firebaseio.com',
    storageBucket: 'fake'
  });
  db = app.database();
  db.goOffline();
});
All the tests in offline-test pass.
The next suite is wct-firebase-demo-app_test.html, which tests the eponymous web component. This suite contains a series of unit tests that are set up like offline-test and that pass. Following the idea of dependency injection, the wct-firebase-demo-app component has a database attribute into which the Firebase database reference is passed, and this is used to make all the Firebase calls. Here's an example from the suite:
test('offline set string from web component attribute', function(done) {
  element.database = db;
  element.database.ref('foo').set('bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val(), 'bar');
    done();
  });
});
I have some very simple methods in the component as well, in my attempt to triangulate toward the broken pieces I'll talk about in a moment. Suffice it to say that this test passes:
test('offline push string from web component function', function(done) {
  element.database = db;
  let resultRef = element.pushIt('foo', 'bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val()[resultRef.key], 'bar');
    done();
  });
});
and is backed by this implementation in wct-firebase-demo-app:
pushIt: function(at, value) {
  return this.database.ref(at).push(value);
},
Once again, these all pass. Now we get to the real quandary. There's a suite of tests for another element, x-element, which has a method pushData:
pushData: function(at, data) {
  this.database.ref(at).push(data);
}
The test for this method is the only test in its suite:
test('pushData has an effect', function(done) {
  element.database = db;
  element.pushData('foo', 'xyz');
  db.ref('foo').once('value', function(snapshot) {
    expect(snapshot.val()).not.to.be.empty;
    done();
  });
});
This test does not pass. While this test is running, the console comes up with an error message:
Your API key is invalid, please check you have copied it correctly.
By setting some breakpoints and walking through the execution, it seems to me that this error comes up after the call to once but before the callback is triggered. Note, again, that this doesn't happen with the same test structure described above in wct-firebase-demo-app.
That's where I'm stuck. Why do the offline-test and wct-firebase-demo-app_test suites work fine, but I get this API key error in x-element_test? The only other clue I have is that if I copy a valid API key into my initializeApp configuration, then I get a test timeout instead.
UPDATE:
Here is a (patched-together) image of my console log when running the tests:
To illustrate the issue brought up by tony19 below, here's the console log with just pushData has an effect in x-element_test commented out:
The offline-test results are apparently false positives. If you check the Chrome console, offline-test actually throws the same error.
The error doesn't affect the test results, most likely because the API key validation occurs asynchronously, after the test has already completed. If you could somehow hook into that validation, you'd be able to catch the error in your tests.
Commenting out all tests except for offline firebase is ok shows the error still occurring, which points to suiteSetup(). Narrowing the problem down further by commenting 2 of the 3 function calls in the setup, we'll see the error is caused by the call to firebase.initializeApp() (and not necessarily related to once() as you had suspected).
One workaround to consider is wrapping the Firebase library in a class/interface, and mocking that for unit tests.
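A rough sketch of that workaround, with entirely hypothetical names: the component depends on a small wrapper instead of the Firebase SDK, and unit tests inject an in-memory fake that never calls firebase.initializeApp(), so no API key validation can fail asynchronously:

// Production wrapper around the real Firebase database reference.
class DatabaseWrapper {
  constructor(firebaseDatabase) {
    this.db = firebaseDatabase;
  }
  push(at, value) {
    return this.db.ref(at).push(value);
  }
  once(at, callback) {
    return this.db.ref(at).once('value', callback);
  }
}

// In-memory fake with the same surface, for unit tests.
class FakeDatabase {
  constructor() {
    this.data = {};
    this.counter = 0;
  }
  push(at, value) {
    const key = 'key' + this.counter++;
    this.data[at] = Object.assign({}, this.data[at], { [key]: value });
    return { key: key };
  }
  once(at, callback) {
    callback({ val: () => this.data[at] || null });
  }
}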