Unit testing reducers: test each slice reducer or the combined reducer?

Suppose I have a reducer file reducers/group1.js like this
export default combineReducers({
  A: combineReducers({ A1, A2 }),
  B: reducerB,
  C: reducerC
})
Is there any difference between testing each slice reducer (A1, A2, reducerB and reducerC) and testing the combined one?
import group1 from 'reducers/group1'

describe('reducers', () => {
  describe('group1', () => {
    it('should provide the initial state', () => {
      expect(group1(undefined, {})).to.deep.equal({ A: { ... }, B: ..., C: ... })
    })
    it(...)
    // ...
  })
})
or
import { A1, A2, reducerB, reducerC } from 'reducers/group1'

describe('reducers', () => {
  describe('group1', () => {
    describe('A1', () => {
      it('should provide the initial state', () => {
        expect(A1(undefined, {})).to.equal(0) // if A1 is just a number
      })
    })
    describe('A2', () => { ... })
    describe('reducerB', () => { ... })
    describe('reducerC', () => { ... })
  })
})

Your second example is usually better because it allows for simpler unit tests. I can imagine a scenario where a developer might want to write a bunch of tests for reducer C without knowing anything about reducers A and B. The second code sample allows that developer to write a suite of C tests without being concerned with what A or B even are. It also helps when rewriting tests after a reducer's behavior changes drastically: all those tests live in one place instead of being scattered all over the test file.
However, there might be some instances where you want to write a test for the entire reducer. For example, if you have a global reset action, you would want to test that the entire reducer properly responds to that action instead of writing an individual test for each reducer. Most of the time it's probably going to be cleaner to write tests for individual reducers though.
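For instance, a whole-reducer test for a global reset could look like the sketch below. The RESET action and the slice reducers are hypothetical, and a minimal inline stand-in for Redux's combineReducers is used so the snippet is self-contained:

```javascript
// Minimal stand-in for Redux's combineReducers, just to keep the sketch self-contained
const combineReducers = (reducers) => (state = {}, action) =>
  Object.fromEntries(
    Object.entries(reducers).map(([key, reduce]) => [key, reduce(state[key], action)])
  );

// Hypothetical slice reducers that all handle a global RESET action
const A1 = (state = 0, action) => (action.type === 'RESET' ? 0 : state);
const reducerB = (state = [], action) => (action.type === 'RESET' ? [] : state);

const group1 = combineReducers({ A: combineReducers({ A1 }), B: reducerB });

// A single test against the combined reducer verifies that every slice resets at once
const dirty = { A: { A1: 42 }, B: ['stale'] };
const result = group1(dirty, { type: 'RESET' });
// result is { A: { A1: 0 }, B: [] }
```

One such whole-reducer test replaces N near-identical per-slice reset tests, and it also catches a slice that forgot to handle the reset action.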

Related

Jest: Create a generic mock where I can pass the custom data

I am using the following in all my individual test case files which runs successfully.
jest.mock('next-i18next', () => {
  return {
    useTranslation: () => {
      return {
        t: (key) => {
          const translations = {
            title: 'May I help?',
            showErrors: 'Something went wrong.',
            noResultFound: `Oops`,
          };
          return translations[key];
        },
      };
    },
  };
});
The translations change based on different test cases, and hence I have duplicated them everywhere I need them.
But I want to make this mock generic (maybe by creating a function) so that it sits in one place while I call the created function from every test file that needs it and pass my custom dictionary to it. Something like this, but it doesn't work:
const mockTranslations = (mockdictionary) => ({
  jest.mock('next-i18next', () => {
    return {
      useTranslation: () => {
        return {
          t: (key) => {
            return mockdictionary[key];
          },
        };
      },
    };
  })
})
I am not sure if this is possible in Jest; I am still trying to figure it out. Any help here, or another approach to doing it, would be really appreciated.
I would suggest two strategies:
1. Modify your mock so the t function always returns the translation key (basically, t: (key) => key). In your tests, check that the correct translation key is present in the DOM (expect(getByText("myTranslationKey")).toBeTruthy()). This way you can validate that the correct key is used, but not that the corresponding text is correct.
2. Even better: do not mock react-i18next in your test at all. More realistic, more confidence. If you load JSON files dynamically, you will need to modify your react-i18next configuration for the Jest tests. In a Jest configuration file, add:
// assuming that you have multiple JSON namespace files in /public/locales/en
import i18next from 'i18next';
import { initReactI18next } from 'react-i18next';
import * as namespaces from '/public/locales/en';

i18next.use(initReactI18next).init({
  // [...] your other options
  lng: 'en',
  resources: { en: namespaces },
});
Now you can test your real texts in your Jest suites.
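A third option, closer to what the question originally asked for (a sketch; the helper file name and path are hypothetical): move the mock's module shape into a shared helper and require it inside the jest.mock factory. jest.mock calls are hoisted, but the factory itself runs lazily, so a require inside it is evaluated at mock time, which is what makes this pattern work:

```javascript
// testUtils/i18nMock.js -- hypothetical shared helper
const makeUseTranslationMock = (dictionary) => ({
  useTranslation: () => ({
    // fall back to the key itself when the test dictionary has no entry
    t: (key) => (key in dictionary ? dictionary[key] : key),
  }),
});
module.exports = { makeUseTranslationMock };

// In each test file (shown as a comment because jest.mock only exists under Jest):
// jest.mock('next-i18next', () =>
//   require('./testUtils/i18nMock').makeUseTranslationMock({ title: 'May I help?' })
// );
```

Each test file then passes only the dictionary it cares about, and the mock shape lives in one place.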

Jest 27: How to reset mock for jest.spyOn(window, "setTimeout")?

I am updating a project from jest version 26 to jest version 27. As part of the update I had to switch from assertions on setTimeout to assertions on jest.spyOn(window, "setTimeout").
I want to define the spy globally and reset it before each test, something like:
const timeoutSpy = jest.spyOn(window, "setTimeout");

beforeEach(() => {
  jest.resetAllMocks();
});
This code doesn't work as I expected. Assertions for expect(timeoutSpy).toHaveBeenCalledTimes(n) fail due to mismatch of expected (n) and received (0) number of calls.
What is the correct way to reset a globally defined timeoutSpy before each test?
Thank you.
You should use jest.restoreAllMocks().
Restores all mocks back to their original value. Equivalent to calling .mockRestore() on every mocked function. Beware that jest.restoreAllMocks() only works when the mock was created with jest.spyOn; other mocks will require you to manually restore them.
Using jest.resetAllMocks(); in the beforeEach should be sufficient. The code below can be used to prove it:
function callTimeout() {
  setTimeout(() => {
    console.log('hello');
  })
}

const timeoutSpy = jest.spyOn(window, 'setTimeout');

describe("test timeout", () => {
  beforeEach(() => {
    jest.resetAllMocks();
  });
  it("test 1", () => {
    callTimeout();
    callTimeout();
    expect(timeoutSpy).toHaveBeenCalledTimes(2);
  })
  it("test 2", () => {
    callTimeout();
    callTimeout();
    expect(timeoutSpy).toHaveBeenCalledTimes(2);
  })
});
I had to set the testEnvironment property in jest.config.js to jsdom for the window variable to work. However, you can replace window with global to make it work with the default testEnvironment of node.

How to structure Tests for Vue Components to have a good coverage?

I was wondering in general how to properly test Vue Components to cover almost everything, to confidently tell a customer that it works and it will not fail unless something big happened.
In my case I am using Jest as the test framework. It comes with coverage reports that follow the Istanbul pattern.
However, what I am uncertain about is how to structure the tests to minimize the overhead, maximize the test speed (or at least not create bottlenecks), maximize extendability, and maximize the test coverage.
Furthermore, I am also uncertain about integration tests, where you handle the interaction of multiple components, mixins, the Vuex store, and other JS files.
Should they have a separate, dedicated file, or would you put everything into one file?
So my question is how to put everything together: I can write good single tests in each area, but finding a good structure that helps me and my co-workers with coverage and low overhead is a bit difficult for me.
Here is an example of a structure I recently developed to test a certain Vue-component with jest & vue-test-utils:
// some imports
describe("Test Vue-Component X", () => {
  let wrapper = null;
  let localVue = null;
  let vuetify = null;
  beforeAll(async () => { // could also be beforeEach, depending on the component
    localVue = createLocalVue();
    localVue.use(Vuex);
    vuetify = new Vuetify();
    wrapper = mount(somecomponent, {
      mocks: {
        $t: (key) => translations["messages"]["EN"][key],
      },
      localVue,
      store,
      vuetify,
    });
  });
  describe("Initial State test", () => {
    /**
     * GIVEN
     * WHEN
     * THEN
     */
    test("checks if Component X exists", () => {
      expect(wrapper.vm.$options.name).toBe("X");
      expect(wrapper.exists()).toBe(true);
    });
    describe("Created", () => {});
    describe("Mounted", () => {});
    // Does it generally make sense to go through each lifecycle hook, or is it rather test-specific?
  });
  describe("computed Properties: ", () => {
    describe("Property Y", () => {
      /**
       * GIVEN
       * WHEN
       * THEN
       */
      test("Test some expected outcome", () => {});
      /**
       * GIVEN
       * WHEN
       * THEN
       */
      test("Test error cases", () => {});
    });
    describe("Property Z", () => {});
  });
  describe("methods ", () => {
    describe("Method 1", () => {
      /**
       * GIVEN
       * WHEN
       * THEN
       */
      test("Test some expected outcome given a specific input", () => {});
      /**
       * GIVEN
       * WHEN
       * THEN
       */
      test("Test error case 1", () => {});
    });
    describe("Method 2 ", () => {});
    describe("Method 3... ", () => {});
  });
  describe("Watch ", () => {
    describe("Watcher A", () => {});
  });
  describe("Interaction with Vuex store? ", () => {
    // e.g. test if the store yields some stuff that is breaking the UI
  });
  describe("Interaction with template? ", () => {
    // would go in the direction of UI tests, similar to Selenium tests
  });
  describe("Dedicated Bug Area? ", () => {
    // all tests made because of bug reports
  });
  describe("Events or interaction with other components? ", () => {
    // e.g. tests for emitted events consumed by other components
  });
});
The parts I am uncertain about are marked with a "?".
It would also be great if you could link me to a test of a complex Vue component with 100% coverage (covering unit and integration tests) in a mid-size to big project.

Writing tests for RxJS that uses retryWhen operator (understanding difference from retry operator)

I'm trying to write tests for the following function that uses retryWhen operator:
import { of } from "rxjs";
import { filter, switchMap, mergeMap, map, retryWhen, take } from "rxjs/operators";
// some API I'm using and mocking out in test
import { geoApi } from "api/observable";

export default function retryEpic(actions$) {
  return actions$.pipe(
    filter(action => action === 'A'),
    switchMap(action => {
      return of(action).pipe(
        mergeMap(() => geoApi.ipLocation$()),
        map(data => ({ data })),
        retryWhen(errors => {
          return errors.pipe(take(2));
        }),
      );
    }),
  );
}
The code is supposed to perform a request to some remote API geoApi.ipLocation$(). If it gets an error, it retries 2 times before giving up.
I have written the following test code that uses Jest and RxJS TestScheduler:
import { TestScheduler } from 'rxjs/testing';

function basicTestScheduler() {
  return new TestScheduler((actual, expected) => {
    expect(actual).toEqual(expected);
  });
}

const mockApi = jest.fn();
jest.mock('api/observable', () => {
  return {
    geoApi: {
      ipLocation$: (...args) => mockApi(...args),
    },
  };
});

describe('retryEpic()', () => {
  it('retries fetching 2 times before succeeding', () => {
    basicTestScheduler().run(({ hot, cold, expectObservable, expectSubscriptions }) => {
      const actions$ = hot('-A');
      // The first two requests fail, the third one succeeds
      const stream1 = cold('-#', {}, new Error('Network fail'));
      const stream2 = cold('-#', {}, new Error('Network fail'));
      const stream3 = cold('-r', { r: 123 });
      mockApi.mockImplementationOnce(() => stream1);
      mockApi.mockImplementationOnce(() => stream2);
      mockApi.mockImplementationOnce(() => stream3);
      expectObservable(retryEpic(actions$)).toBe('----S', {
        S: { data: 123 },
      });
      expectSubscriptions(stream1.subscriptions).toBe('-^!');
      expectSubscriptions(stream2.subscriptions).toBe('--^!');
      expectSubscriptions(stream3.subscriptions).toBe('---^');
    });
  });
});
This test fails.
However, when I replace retryWhen(...) with simply retry(2), then the test succeeds.
Looks like I don't quite understand how to implement retry with retryWhen. I suspect this take(2) is closing the stream and kind of preventing everything from continuing. But I don't quite understand it.
I actually want to write some additional logic inside retryWhen(), but first I need to understand how to properly implement retry() with retryWhen(). Or perhaps that's actually not possible?
Additional resources
My implementation of retryWhen + take was based on this SO answer:
How to create an RXjs RetryWhen with delay and limit on tries
Official docs:
retryWhen
You can use retryWhen for both purposes: holding your custom retry logic and limiting the number of retries (no need for the retry operator):
import { of } from "rxjs";
import { filter, switchMap, mergeMap, map, retryWhen } from "rxjs/operators";
// some API I'm using and mocking out in test
import { geoApi } from "api/observable";

export default function retryEpic(actions$) {
  return actions$.pipe(
    filter(action => action === 'A'),
    switchMap(action => {
      return of(action).pipe(
        mergeMap(() => geoApi.ipLocation$()),
        map(data => ({ data })),
        retryWhen(errors =>
          errors.pipe(
            mergeMap((error, i) => {
              if (i === 2) {
                throw Error();
              }
              // return your condition code
            })
          )
        )
      );
    }),
  );
}
Here is a simple DEMO of that.
As for understanding this logic: according to the official docs you referenced, retryWhen and retry work by
resubscribing to the source Observable (if no error or complete executes)
When the notifier you return from retryWhen completes, the output stream completes too. With take(2), the notifier completes as soon as the second error arrives, so the stream completes before the third attempt can emit, which is why your test fails. This is also why you can't pipe retry and retryWhen together; you could say these operators are chain breakers...
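If it helps to see the retry-counting contract outside of RxJS, the behavior the answer's mergeMap((error, i) => ...) notifier implements is essentially "attempt, and on failure resubscribe up to two more times, then rethrow". A plain-JavaScript sketch of that contract (the helper name is made up, and it is synchronous here for brevity):

```javascript
// Attempt fn up to maxRetries + 1 times; rethrow the last error if all attempts fail.
function withRetries(fn, maxRetries) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return fn();
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError;
}

// Two failures followed by a success, mirroring the marble test above
let calls = 0;
const result = withRetries(() => {
  calls += 1;
  if (calls < 3) throw new Error('Network fail');
  return 123;
}, 2);
// result is 123 after exactly 3 calls
```

The i === 2 check in the notifier plays the role of the loop bound here: on the third error it rethrows instead of resubscribing.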

How to unit test a redux application that uses redux-thunk

Here is a test stub that I have written in Mocha/Chai. I can easily dispatch an action and assert that the state equals what I expect, but how do I validate that it followed the expected process along the way (i.e. the earlier tests)?
/**
 * This test describes the INITIALIZE_STATE action.
 * The action is asynchronous, using the async/await pattern to query
 * the database. The action creator returns a thunk which should in turn
 * return the new state with a list of tables and their relationships with each other.
 **/
describe('Initialize state', () => {
  it('Should check if state is empty', () => {});
  it('Should check if tables/relationships exist', () => {});
  it('Should check if new tables have been added', () => {});
  it('Should merge new and existing tables and relationships', () => {
    // Here is where we would dispatch the INITIALIZE_STATE
    // action and assert that the new state is what I expect it to be.
  });
});
I haven't written any code for the actual action itself yet, because I want the code to pass these validations. Some pseudo-code might look like this:
export function initializeState() {
  return async function(dispatch, getState) {
    let empty = getState().empty
    let state = empty ? await getLastPersistedState() : getState()
    let tables = state.tables;
    let payload = tables.concat(await getNewTables(tables));
    dispatch({type: 'INITIALIZE_STATE', payload});
  }
}

function getLastPersistedState() {
  return mongodb.findall(state, (s) => s);
}

function getNewTables(tableFilter) {
  return sql.query("select table_name from tables where table_name not in (" + tableFilter + ")");
}
Here is the solution that I came up with. There may be a better one, but so far no one has provided one. I decided to go with a refactored set of actions and a separate store for my testing. The actions are generator functions rather than thunks: they yield the actions that thunk would dispatch in the production code. In my test I can then dispatch those actions myself and verify that the resulting state is what I'd expect. This is exactly what thunk would do, but it allows me to insert myself as the middleman rather than depending on the thunk middleware.
This is also super useful because it makes separating the action logic from the dispatch and state logic extremely easy, even when you are testing asynchronous flow.
For the database, I automatically generate a stub and use promises to simulate the async query. Since this project is using sequelize anyway, I just used sequelize to generate the stub.
Here is the code
_actions.js
export function* initializeState() {
  // We will yield the current step so that we can test it step by step
  // Should check if state is empty
  yield {type: 'UPDATE_STEP', payload: 'IS_STATE_EMPTY'};
  yield {type: 'UPDATE_STEP_RESULT', payload: stateIsEmpty()};
  if (stateIsEmpty()) {
    // todo: implement branch logic if state is empty
  }
  // ...
}
sequelize/_test/generate.js
async function createMockFromSql(db, sql, filename) {
  let results = await db.query(sql, {type: db.Sequelize.QueryTypes.SELECT});
  return new Promise((resolve, reject) => {
    // Trim the results to a reasonable size. Keep it unique each time
    // for more rigorous testing
    console.log('trimming result set');
    while (results.length > 50) {
      results.splice(results.length * Math.random() | 0, 1);
    }
    fs.writeFile(path.resolve(__dirname, '../../sequelize/_test', filename), JSON.stringify(results, null, 2), err => {
      if (err) {
        console.error(err);
        reject(false);
      }
      resolve(true);
    })
  })
}
test/actions.js
...
it('Should check if state is empty', () => {
  let action = initializeState();
  // .next() wraps each yielded action in { value, done }, so assert on .value
  expect(action.next().value).to.deep.equal({type: 'UPDATE_STEP', payload: 'IS_STATE_EMPTY'})
});
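For comparison, a thunk like the one in the question can also be unit-tested without any middleware by injecting its data sources and calling it with a hand-rolled dispatch spy. This is only a sketch (the api parameter and its method names are made up for illustration, and the fakes are synchronous to keep it short; real ones would return promises and be awaited):

```javascript
// Hypothetical: the thunk takes its data sources as a parameter so tests can fake them
const initializeState = (api) => (dispatch, getState) => {
  const state = getState().empty ? api.getLastPersistedState() : getState();
  const payload = state.tables.concat(api.getNewTables(state.tables));
  dispatch({ type: 'INITIALIZE_STATE', payload });
};

// In a test: collect every dispatched action in a plain array
const dispatched = [];
const fakeApi = {
  getLastPersistedState: () => ({ tables: ['users'] }),
  getNewTables: () => ['orders'],
};
initializeState(fakeApi)(
  (action) => dispatched.push(action),   // dispatch spy
  () => ({ empty: true })                // fake getState
);
// dispatched[0] is { type: 'INITIALIZE_STATE', payload: ['users', 'orders'] }
```

Like the generator approach above, this lets you assert on each dispatched action directly, just with ordinary function calls instead of yields.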