What if tests share and mutate some common state, and their logic depends on previous tests? Is it acceptable practice?
A simple example in JavaScript:
describe('Some tests', () => {
  const state = {
    value: 'test',
    addMe() {
      this.value = this.value + ' me'
    },
    addPlease() {
      this.value = this.value + ', please'
    }
  }

  it('Some test', () => {
    state.addMe()
    expect(state.value).toBe('test me')
  })

  it('Another test', () => {
    state.addPlease()
    expect(state.value).toBe('test me, please')
  })
})
Typically, tests should be designed not to depend on each other. This is certainly not a law, but a good practice, because it gives your test suite a number of nice properties:
With independent tests, you can add tests at any place, delete tests, re-order tests without unexpected impacts on other tests.
You can improve tests individually without having to think about impact on other tests, for example simplifying the internal working of a test.
Every test can be understood without looking at other tests, and in case of failing tests the reason for the failure is easier to find.
Independent tests succeed and fail individually. With dependent tests, if one test fails, subsequent tests likely also fail.
You can execute your tests selectively, for example to save time during test case execution.
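For contrast, here is a minimal sketch of an independent version of the example above, assuming Jest's beforeEach: the shared object is rebuilt before every test, and each test establishes its own preconditions instead of relying on a previous test.

describe('Some tests', () => {
  let state

  // Recreate the state before every test so no test depends on another.
  beforeEach(() => {
    state = {
      value: 'test',
      addMe() {
        this.value = this.value + ' me'
      },
      addPlease() {
        this.value = this.value + ', please'
      }
    }
  })

  it('Some test', () => {
    state.addMe()
    expect(state.value).toBe('test me')
  })

  it('Another test', () => {
    // This test now sets up its own 'test me' precondition.
    state.addMe()
    state.addPlease()
    expect(state.value).toBe('test me, please')
  })
})

With this version the tests can be re-ordered, deleted, or run selectively without affecting each other.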
I've found a lot of this sort of thing when refactoring our Jest test suites:
it('calls the API and throws an error', async () => {
  expect.assertions(2);
  try {
    await login('email', 'password');
  } catch (error) {
    expect(error.name).toEqual('Unauthorized');
    expect(error.status).toEqual(401);
  }
});
I believe the expect.assertions(2) line is redundant here, and can safely be removed, because we already await the async call to login().
Am I correct, or have I misunderstood how expect.assertions works?
expect.assertions is important when testing the error scenarios of asynchronous code, and is not redundant.
If you remove expect.assertions from your example, you can't be confident that login did in fact throw the error.
it('calls the API and throws an error', async () => {
  try {
    await login('email', 'password');
  } catch (error) {
    expect(error.name).toEqual('Unauthorized');
    expect(error.status).toEqual(401);
  }
});
Let's say someone changes the behavior of login to throw based on some other logic, or changes the mock for this test so that login no longer throws. The assertions in the catch block won't run, but the test will still pass.
Using expect.assertions at the start of the test ensures that if the assertions inside the catch don't run, we get a failure.
This is from the Jest documentation:
expect.assertions(number) verifies that a certain number of assertions are called during a test. This is often useful when testing asynchronous code, in order to make sure that assertions in a callback actually got called.
To put it in other words, expect.assertions makes sure that n assertions are made by the end of the test.
It's especially good to use when writing a new test, so you can easily check that the correct assertions are made during the test. Async tests often pass because the intended assertions were not made before the test runner (Jest, Mocha, etc.) considered the test finished.
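A minimal sketch of that spurious pass, with fetchData as a hypothetical callback-based function used purely for illustration: without the expect.assertions(1) line the test finishes before the callback runs and passes with zero assertions; with it, Jest fails the test and points you at the missing done callback (or missing returned promise).

// Hypothetical callback-based async function, for illustration only.
const fetchData = cb => setTimeout(() => cb('some data'), 100);

it('fires the callback with data', () => {
  expect.assertions(1); // without this, the test would pass despite asserting nothing
  fetchData(data => {
    expect(data).toBe('some data'); // runs only after the test has already finished
  });
});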
I think we are missing the obvious here.
expect.assertions(3) is simply saying: I expect 3 expect statements to be called before the test times out, e.g.
expect(actual1).toEqual(expected1);
expect(actual2).toEqual(expected2);
expect(actual3).toEqual(expected3);
This timing-out business is the reason to use expect.assertions. It would be silly to use it in a purely synchronous test; at least one of the expect statements would be found in a subscribe block (or other async block) within the spec file.
To ensure that the assertions in the catch block of an async/await test actually run, expect.assertions(n) must be declared as shown in your code snippet. Such a declaration is unnecessary for async/await tests without a catch block.
It seems quite unintuitive, but it is simply the way it is: if the awaited promise unexpectedly resolves instead of rejecting, the catch block is skipped entirely and the test finishes having made no assertions at all. The test environment has no way of knowing that the assertions inside the catch were supposed to run, so the test passes.
I have to admit that, apart from error testing, I find it challenging to see a real use for expect.assertions. The snippet above can be changed to the following with the same guarantee, but I think it reads more naturally and doesn't require me to count how many times I call expect. That counting is especially error-prone if a test is complex:
it('calls the API and throws an error', async () => {
  try {
    await login('email', 'password');
    fail('must throw');
  } catch (error) {
    expect(error.name).toEqual('Unauthorized');
    expect(error.status).toEqual(401);
  }
});
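For completeness: newer versions of Jest also ship a built-in way to express this with the .rejects modifier, which avoids both the counting and the explicit try/catch. A minimal sketch, assuming the same login function and that the rejection value carries name and status as matchable properties:

it('calls the API and throws an error', async () => {
  // .rejects fails the test if login resolves instead of rejecting.
  await expect(login('email', 'password')).rejects.toMatchObject({
    name: 'Unauthorized',
    status: 401,
  });
});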
I know Mocha has suite-level before and after hooks, and per-test beforeEach and afterEach hooks, but what I would like is test-specific before and after. Something like SoapUI has.
For example, say that I have a test checking that the creation of a user works.
I want to remove the user, should it exist, from the database BEFORE the test. And I want the test to ensure that the user is removed AFTER the test. But I do not want to do this for EACH test, as only one test will actually create the user. Other tests will delete user/s, update user/s, fail to create an already existing user etc.
Is this possible, or do I have to include the setup and tear down code in the test? If so, how do I ensure that both the setup and tear down executes properly, independent of the test result?
For tests that need special setup and teardown code but are not otherwise distinguishable from their siblings, I just put them in a describe block with an empty title:
describe("SomeClass", () => {
describe("#someMethod", () => {
it("does something", () => {});
it("does something else", () => {});
describe("", () => {
// The before and after hooks apply only to the tests in
// this block.
before(() => {});
after(() => {});
it("does something more", () => {});
});
});
});
Is this possible, or do I have to include the setup and tear down code in the test? If so, how do I ensure that both the setup and tear down executes properly, independent of the test result?
You can put setup and teardown code in the test itself (i.e. inside the callback you pass to it). However, Mocha will treat any failure there as a failed test, period. It does not matter where in the callback the failure occurs. Assertion libraries let you provide custom error messages, which can help you figure out what exactly failed, but Mocha will report every failure inside it the same way: the test failed. If you want Mocha to treat failures in setup/teardown code differently from test failures, then you have to use the hooks as I've shown above.
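If you do inline everything, a try/finally block is one way to guarantee that the teardown runs regardless of the test result. A minimal sketch, assuming Chai-style assertions and hypothetical createUser/deleteUser helpers; note that, per the above, Mocha will still report a setup failure here as a failed test:

it("creates a user", async () => {
  await deleteUser("alice"); // setup: make sure the user does not exist yet
  try {
    const user = await createUser("alice");
    expect(user.name).to.equal("alice");
  } finally {
    await deleteUser("alice"); // teardown runs even if the assertions throw
  }
});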
I'm currently learning Redux and writing unit tests as part of a TDD process using Jest.
I'm writing tests for action creators and reducers, but I'm struggling with one question: can I make use of action creators in the reducer tests?
import * as types from './../../constants/auth';
import * as actions from './../../actions/auth';
import reducer, {initialState} from './../auth';
Can I do this:
it('should set isFetching to true', () => {
  const expectedState = {
    ...initialState,
    isFetching: true
  }

  expect(
    reducer(initialState, actions.loginPending())
  ).toEqual(expectedState)
});
instead of this?
it('should set isFetching to true', () => {
  const expectedState = {
    ...initialState,
    isFetching: true
  }

  expect(
    reducer(initialState, {type: types.LOGIN_PENDING})
  ).toEqual(expectedState)
});
I came to this doubt because the official documentation uses hard-coded actions in the reducer tests:
expect(
  reducer([], {
    type: types.ADD_TODO,
    text: 'Run the tests'
  })
).toEqual([{
  text: 'Run the tests',
  completed: false,
  id: 0
}])
I guess using hard-coded actions is the best practice, isn't it?
Interesting question, and I would say it depends on how you run your test suite. Personally, I hard-code the actions because, if you think about it, they declaratively explain what the reducer is expecting. The argument in favor of importing the actions is that if you ever change their source, the tests will not need to be updated. However, this also means you're expecting your actions to always be correct BEFORE running these tests.
If that's the case (if you always run your actions test suite before this one), then it would be reasonable to import them in your reducer test suite. The only argument against this logic is that a new developer can't learn how your reducer works by looking only at the reducer test suite; they would also need to look at the actions source file to see what types of actions are dispatched.
On the other hand, hard-coding your actions is more declarative but does require you to update each reducer test if your action changes. The reason I still recommend this approach is that it allows you to send more controlled data, but I do agree that it increases maintenance costs.
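One middle ground is to cover each action creator in its own small suite, so the reducer tests can safely hard-code action objects without losing coverage. A minimal sketch, assuming loginPending returns a plain { type } action:

it('loginPending creates a LOGIN_PENDING action', () => {
  // If this holds, hard-coding {type: types.LOGIN_PENDING} in the reducer
  // tests is guaranteed to match what the app actually dispatches.
  expect(actions.loginPending()).toEqual({type: types.LOGIN_PENDING})
})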
tl;dr is it possible to unit test this code without rewriting it?
http://jsbin.com/jezosegopo/edit?js,console
const keyUpObserver = ($input, fetchResults) => {
  const source = Rx.Observable.fromEvent($input, 'keyup')
    .map(e => e.target.value)
    .filter(text => text.length > 2)
    .debounce(300)
    .distinctUntilChanged();

  return source.flatMapLatest(text => Rx.Observable.fromPromise(fetchResults(text)));
};
keyUpObserver in the above code is heavily based on the RxJS autocomplete example, and uses a debounce to prevent hammering the server.
I'm trying to unit test this function, but Sinon's useFakeTimers doesn't appear to be working.
const clock = sinon.useFakeTimers();
const $input = $('<input>');
const fetchResults = (text) => new Promise(resolve => resolve(text + ' done!'));
keyUpObserver($input, fetchResults).subscribe(text => console.log(text));
$input.val('some text').trigger('keyup');
clock.tick(500);
// Enough time should have elapsed to make this a new event
$input.val('some more text').trigger('keyup');
I'm guessing this isn't Sinon-related either; rather, RxJS uses some internal clock which must be unaffected by an external fake clock.
Given that, is there any way to unit test my keyUpObserver code without rewriting it to also take a scheduler (default in production, test scheduler in unit tests)?
...to approach an answer: it seems that RxJS uses the default/global setTimeout implementation, which Sinon should be able to overwrite. At least that's what I'd say from reading the default scheduler's code, which RxJS uses if you don't pass (as mentioned) a custom scheduler.
Still, I am a bit confused about the intent. From this little fork I'd expect only the third trigger to actually fire something, which it does, or am I out of line? 👀
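If fake timers turn out not to reach RxJS's internal scheduling, the conventional fallback is the small rewrite the asker hoped to avoid: injecting a scheduler. A minimal sketch, assuming RxJS 4 (Rx.Scheduler.default, Rx.TestScheduler); debounce runs on the injected scheduler and the test advances virtual time by hand:

// Production code: the scheduler parameter defaults to the real async
// scheduler, so existing callers are unaffected.
const keyUpObserver = ($input, fetchResults, scheduler = Rx.Scheduler.default) => {
  const source = Rx.Observable.fromEvent($input, 'keyup')
    .map(e => e.target.value)
    .filter(text => text.length > 2)
    .debounce(300, scheduler) // debounce on the injected scheduler
    .distinctUntilChanged();

  return source.flatMapLatest(text => Rx.Observable.fromPromise(fetchResults(text)));
};

// In the test ($input and fetchResults as in the snippet above):
const results = [];
const scheduler = new Rx.TestScheduler();
keyUpObserver($input, fetchResults, scheduler).subscribe(text => results.push(text));

$input.val('some text').trigger('keyup');
scheduler.advanceBy(301); // flush the 300ms debounce in virtual time, no real waiting

// Caveat: fromPromise still resolves on the real microtask queue, so assert
// on results only after yielding to it (e.g. await Promise.resolve()).

This keeps production behavior identical while giving tests full control over time.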
I've got a bunch of methods in my application service layer that are doing things like this:
public void Execute(PlaceOrderOnHoldCommand command)
{
    var order = _repository.Load(command.OrderId);
    order.PlaceOnHold();
    _repository.Save(order);
}
And at present, I have a bunch of unit tests like this:
[Test]
public void PlaceOrderOnHold_LoadsOrderFromRepository()
{
    var repository = new Mock<IOrderRepository>();
    const int orderId = 1;
    var order = new Mock<IOrder>();
    repository.Setup(r => r.Load(orderId)).Returns(order.Object);

    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository.Object);
    service.Execute(command);

    repository.Verify(r => r.Load(It.Is<int>(x => x == orderId)), Times.Exactly(1));
}

[Test]
public void PlaceOrderOnHold_CallsPlaceOnHold()
{
    /* blah blah */
}

[Test]
public void PlaceOrderOnHold_SavesOrderToRepository()
{
    /* blah blah */
}
It seems to be debatable whether these unit tests add value that's worth the effort. I'm quite sure that the application service layer should be integration tested, though.
Should the application service layer be tested to this level of granularity, or are integration tests sufficient?
I'd write a unit test despite there also being an integration test. However, I'd likely make the test much simpler by eliminating the mocking framework, writing my own simple mock, and then combining all those tests into one that checks that the order in the mock repository was placed on hold.
[Test]
public void PlaceOrderOnHold_LoadsOrderFromRepository()
{
    const int orderId = 1;
    var repository = new MyMockRepository();
    repository.Save(new MyMockOrder(orderId));

    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository);
    service.Execute(command);

    Assert.IsTrue(repository.GetOrder(orderId).IsOnHold());
}
There's really no need to check that Load and/or Save is called. Instead, I'd just make sure that the only way MyMockRepository will return the updated order is if Load and Save are called.
This kind of simplification is one of the reasons that I usually don't use mocking frameworks. It seems to me that you have much better control over your tests, and a much easier time writing them, if you write your own mocks.
Exactly: it's debatable! It's really good that you are weighing the expense and effort of writing and maintaining your tests against the value they will bring you, because that's exactly the consideration you should make for every test you write. Too often I see tests written for the sake of testing that only add ballast to the code base.
As a guideline, I want a full integration test of every important successful scenario/use case. Beyond that, I write tests for the parts of the code that are likely to break with future changes, or that have broken in the past. And that is definitely not all code. That's where your judgement and insight into the system and requirements come into play.
Assuming that you have an (integration) test for service.Execute(placeOrderOnHoldCommand), I'm not really sure if it adds value to test if the service loads an order from the repository exactly once. But it could be! For instance when your service previously had a nasty bug that would hit the repository ten times for a single order, causing performance issues (just making it up). In that case, I'd rename the test to PlaceOrderOnHold_LoadsOrderFromRepositoryExactlyOnce().
So for each and every test you have to decide for yourself ... hope that helps.
Notes:
The tests you show can be perfectly valid and look well written.
Your sequence of test methods seems to be inspired by the way the Execute(...) method is currently implemented. When you structure your tests this way, you may be tying yourself to a specific implementation, and the tests can actually make the code harder to change. Make sure you're only testing the important external behavior of your class.
I usually write a single integration test of the primary scenario. By primary scenario I mean the successful path through all the code being tested. Then I write unit tests of all the other scenarios, like checking all the cases in a switch, testing exceptions, and so forth.
I think it is important to have both. Yes, it is possible to test everything with integration tests only, but that makes your tests long-running and harder to debug. On average I have about 10 unit tests per integration test.
I don't bother to test one-liner methods unless something business-logic-like happens in that line.
Update: just to make it clear, because I'm doing test-driven development I always write the unit tests first and typically do the integration test at the end.