I'm currently learning Redux.
So far, while discovering how to manage an app's state, I don't want to focus on integration with any framework (like React). I just want to properly understand the ideas and concepts behind Redux.
I followed courses given by Dan Abramov on egghead.io.
I like the way he explains by testing his app so I started playing with Redux the same way.
I built an app with Redux. Of course it has multiple reducers and actions.
I won't share any code here because it's of no particular interest; this is more a question of how to deal with tests and Redux.
I don't know if it makes sense to test reducers with their corresponding actions or if I should mock the actions in my tests.
I started by mocking the actions because at first I thought it was a good idea to keep my tests separate and avoid dependencies between reducers and actions (and that's what I've seen in most tutorials, although tutorials often build small apps).
Now I realize that I sometimes end up with a mock that differs from the corresponding action, and even though my tests pass, the real app could break when I call dispatch(myAction()), since the dispatched action will be different from what the tests expected.
Should I use my actions in my reducer tests?
Thanks a lot for any explanation about that.
EDIT: some code to explain better.
REDUCER
case CREATE_USER_IN_PROJECT: {
  // Find the project targeted by the action
  const currentProject = state.find(p => p.id === action.payload.idProjet)
  const indexCurrentProject = state.indexOf(currentProject)
  // Append the new user to the project's people without mutating state
  const people = [
    ...currentProject.people,
    action.payload.idUser
  ]
  return [
    ...state.slice(0, indexCurrentProject),
    Object.assign({}, currentProject, {people}),
    ...state.slice(indexCurrentProject + 1)
  ]
}
REDUCER'S TEST
it('CREATE_PROJECT if no project should only have the new project', done => {
  let idNewProject = uuid.v4()
  expect(
    projects(undefined, {
      type: CREATE_PROJECT,
      payload: {
        id: idNewProject,
        name: 'New project !'
      }
    })
  ).toEqual([{
    id: idNewProject,
    name: 'New project !',
    people: [],
    money: '€',
    operations: [],
    archived: false,
    closed: false
  }])
  done()
})
So here, instead of having
{
  type: CREATE_PROJECT,
  payload: {
    id: idNewProject,
    name: 'New project !'
  }
}
should I call my action createProject('New project !')?
Thanks for the clarification. Turns out I misunderstood you in my comment. Here's a hopefully more helpful explanation.
You shouldn't use your actual actions, e.g. createProject('New project !'), in testing your reducers.
Reducers are simple state machines that take an input and return an output. Your tests should check that they do exactly that, where input = previous state (plus an action) and output = next state. And yes, it still counts as unit testing (I don't see why it wouldn't).
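For instance, here's a minimal sketch of such a test (counter is a hypothetical reducer; the action object is written inline rather than built by an action creator):
// The reducer is exercised as a pure function: given a state and an
// action, assert on the state it returns.
import counter from '../reducers/counter'

it('increments the count', () => {
  expect(counter({count: 0}, {type: 'INCREMENT'}))
    .toEqual({count: 1})
})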
I found this a good read on how to test reducers
I have a CloudFormation stack that defines a GraphQL API powered by DynamoDB. I would like to run a test script that:
Creates a standard set of fixtures.
Runs various tests, including creating, modifying, and deleting data.
Deletes all of the fixtures and any other objects created during the test.
The “clean way” to do this would be to create a new stage for the tests, but this is extremely time-consuming (in terms of wall-clock time spent waiting for the result).
The “hard way” would be to keep precise track of every DynamoDB record created during the testing process and then delete them afterward one by one (and/or using many batch updates). This would be a huge pain to code, and the likelihood of error is very high.
An intermediate approach would be to use a dedicated pre-existing stage for integration tests, wipe it clean at the end of the tests, and make sure that only one set of tests is running at a time. This would require writing a script to manually clear out the tables, which sounds moderately less tedious and error-prone than the “hard way”.
Is there an established best practice for this? Are there other approaches I haven't considered?
How long does it take to deploy the stack?
If deployment takes only a small portion of the total test run time, use the "clean way"; otherwise, use the intermediate approach of having a dedicated test stack already deployed.
You don't have to write scripts.
I actually wrote a testing library for exactly this purpose:
https://github.com/erezrokah/aws-testing-library/blob/master/src/jest/README.md#tohaveitem
Usage example (TypeScript):
import { clearAllItems } from 'aws-testing-library/lib/utils/dynamoDb';
import { invoke } from 'aws-testing-library/lib/utils/lambda';
describe('db service e2e tests', () => {
  const region = 'us-east-1';
  const table = 'db-service-dev';

  beforeEach(async () => {
    await clearAllItems(region, table);
  });

  afterEach(async () => {
    await clearAllItems(region, table);
  });

  test('should create db entry on lambda invoke', async () => {
    const result = await invoke(region, 'db-service-dev-create', {
      body: JSON.stringify({ text: 'from e2e test' }),
    });
    const lambdaItem = JSON.parse(result.body);
    expect.assertions(1);
    await expect({ region, table, timeout: 0 }).toHaveItem(
      { id: lambdaItem.id },
      lambdaItem,
    );
  });
});
If you do write the scripts yourself, you might need to consider eventual consistency and retries (as the data might not be available immediately after a write).
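For reference, here's a minimal sketch of such a table-wiping script, assuming the Node.js AWS SDK v2 and a table keyed by a single id attribute (a robust version would also retry any UnprocessedItems returned by batchWrite):
const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

async function clearTable(table) {
  let lastKey;
  do {
    // Page through the table; ConsistentRead helps pick up very recent writes
    const page = await db.scan({
      TableName: table,
      ConsistentRead: true,
      ExclusiveStartKey: lastKey,
    }).promise();

    // batchWrite accepts at most 25 delete requests per call
    for (let i = 0; i < page.Items.length; i += 25) {
      await db.batchWrite({
        RequestItems: {
          [table]: page.Items.slice(i, i + 25).map(item => ({
            DeleteRequest: { Key: { id: item.id } },
          })),
        },
      }).promise();
    }
    lastKey = page.LastEvaluatedKey;
  } while (lastKey);
}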
I'd simply delete this "test stack" at the end of the test and let CloudFormation clean up DynamoDB for you; check the DeletionPolicy documentation.
You might want to trigger/hook stack deletion from your CI environment, whatever it is. As an example, I found this CodePipeline walkthrough: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-basic-walkthrough.html
In .NET Core 2.0 I have a fairly simple MassTransit routing slip that contains two activities. It is built and executed in a consumer, and it all ties back to an Automatonymous state machine. It all works great, albeit with a few final tweaks still needed.
However, I can't quite figure out the best way to write unit tests for my consumer as it builds a routing slip. I have the following code in my consumer:
public async Task Consume(ConsumeContext<ProcessRequest> context)
{
    var builder = new RoutingSlipBuilder(NewId.NextGuid());
    SetupRoutingSlipActivities(builder, context);
    var routingSlip = builder.Build();
    await context.Execute(routingSlip).ConfigureAwait(false);
}
I created the SetupRoutingSlipActivities method because I thought it would help me write tests to make sure the right activities were being added, and it simply looks like:
public void SetupRoutingSlipActivities(RoutingSlipBuilder builder, ConsumeContext<IProcessCreateLinkRequest> context)
{
    builder.AddActivity(
        nameof(ActivityOne),
        new Uri("execute_activity_one_example_address"),
        new ActivityOneArguments(
            context.Message.Id,
            context.Message.Name)
    );
    builder.AddActivity(
        nameof(ActivityTwo),
        new Uri("execute_activity_two_example_address"),
        new ActivityTwoArguments(
            context.Message.AnotherId,
            context.Message.FileName)
    );
}
I tried to just write tests for SetupRoutingSlipActivities using a Moq mock of the builder and a MassTransit InMemoryTestHarness, but I found that the AddActivity method is not virtual, so I can't verify it like this:
aRoutingSlipBuilder.Verify(x => x.AddActivity(
    nameof(ActivityOne),
    new Uri("execute_activity_one_example_address"),
    It.Is<ActivityOne>(y => y.Id == 1 && y.Name == "A test name")));
Please ignore some of the weird data in the code examples, as I just put up a simplified version.
Does anyone have any recommendations on how to do this? I also wanted to test that the RoutingSlipBuilder was created, but as that instance is created in the Consume method, I wasn't sure how to do it! I've searched a lot online and through the MassTransit repo, but nothing stood out.
Look at how the Courier tests are written; there are a number of test fixtures available for testing routing slip activities. While they aren't well documented, the unit tests are a working example of how the test fixtures are used.
https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit.Tests/Courier/TwoActivityEvent_Specs.cs
I'm currently learning Redux and writing unit tests as part of a TDD process using Jest.
I'm writing tests for action creators and reducers, but I'm struggling with one question: can I make use of action creators in the reducer tests?
import * as types from './../../constants/auth';
import * as actions from './../../actions/auth';
import reducer, {initialState} from './../auth';
Can I do this:
it('should set isFetching to true', () => {
  const expectedState = {
    ...initialState,
    isFetching: true
  }
  expect(
    reducer(initialState, actions.loginPending())
  ).toEqual(expectedState)
});
instead of this?
it('should set isFetching to true', () => {
  const expectedState = {
    ...initialState,
    isFetching: true
  }
  expect(
    reducer(initialState, {type: types.LOGIN_PENDING})
  ).toEqual(expectedState)
});
I came to wonder about this because the official documentation uses hard-coded actions in its reducer tests:
expect(
  reducer([], {
    type: types.ADD_TODO,
    text: 'Run the tests'
  })
).toEqual([{
  text: 'Run the tests',
  completed: false,
  id: 0
}])
I guess using hard-coded actions is the best practice, isn't it?
Interesting question, and I would say it depends on how you run your test suite. Personally, I hard-code the actions because, if you think about it, they declaratively describe what the reducer expects. The argument in favor of importing the actions is that if you ever change their source, the tests will not need to be updated. However, this also means you're expecting your actions to always be correct BEFORE running these tests.
If that's the case (if you always run your actions test suite before this one), then it would be reasonable to import them in your reducer test suite. The only argument against this logic is that it's harder for a new developer to learn how your reducer works by looking only at the reducer test suite, as they would also need to look at the actions source file to see what types of actions are dispatched.
On the other hand, hard-coding your actions is more declarative but does require you to update each reducer test if an action changes. The reason I still recommend this approach is that it allows you to send more controlled data, though I do agree that it increases maintenance costs.
I have been trying to configure offline unit tests for Polymer web components that use the latest release of the Firebase distributed database. Some of my tests are passing, but others, which look nearly identical to the passing ones, are not running properly.
I have set up a project on GitHub that demonstrates my configuration, and I'll provide some more commentary below.
Sample:
https://github.com/doctor-g/wct-firebase-demo
In that project, there are two suites of tests that work fine. The simplest is offline-test, which doesn't use web components at all. It simply shows that it's possible to use the Firebase database's offline mode to run some unit tests. The heart of this trick is in the suiteSetup method shown below, a trick I picked up from nfarina's work on firebase-server.
suiteSetup(function() {
  app = firebase.initializeApp({
    apiKey: 'fake',
    authDomain: 'fake',
    databaseURL: 'https://fakeserver.firebaseio.com',
    storageBucket: 'fake'
  });
  db = app.database();
  db.goOffline();
});
All the tests in offline-test pass.
The next suite is wct-firebase-demo-app_test.html, which tests the eponymous web component. This suite contains a series of unit tests that are set up like offline-test and that pass. Following the idea of dependency injection, the wct-firebase-demo-app component has a database attribute into which the Firebase database reference is passed, and this is used to make all the Firebase calls. Here's an example from the suite:
test('offline set string from web component attribute', function(done) {
  element.database = db;
  element.database.ref('foo').set('bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val(), 'bar');
    done();
  });
});
I have some very simple methods in the component as well, in my attempt to triangulate toward the broken pieces I'll talk about in a moment. Suffice it to say that this test passes:
test('offline push string from web component function', function(done) {
  element.database = db;
  let resultRef = element.pushIt('foo', 'bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val()[resultRef.key], 'bar');
    done();
  });
});
and is backed by this implementation in wct-firebase-demo-app:
pushIt: function(at, value) {
  return this.database.ref(at).push(value);
},
Once again, these all pass. Now we get to the real quandary. There's a suite of tests for another element, x-element, which has a method pushData:
pushData: function(at, data) {
  this.database.ref(at).push(data);
}
The test for this method is the only test in its suite:
test('pushData has an effect', function(done) {
  element.database = db;
  element.pushData('foo', 'xyz');
  db.ref('foo').once('value', function(snapshot) {
    expect(snapshot.val()).not.to.be.empty;
    done();
  });
});
This test does not pass. While this test is running, the console comes up with an error message:
Your API key is invalid, please check you have copied it correctly.
By setting some breakpoints and stepping through the execution, it seems to me that this error comes up after the call to once but before the callback is triggered. Note, again, that this doesn't happen with the same test structure described above in wct-firebase-demo-app.
That's where I'm stuck. Why do the offline-test and wct-firebase-demo-app_test suites work fine, while I get this API key error in x-element_test? The only other clue I have is that if I copy a valid API key into my initializeApp configuration, I get a test timeout instead.
UPDATE:
Here is a (patched-together) image of my console log when running the tests:
To illustrate the issue brought up by tony19 below, here's the console log with just pushData has an effect in x-element_test commented out:
The offline-test results are apparently false positives. If you check the Chrome console, offline-test actually throws the same error:
The error most likely doesn't affect the test results because the API key validation occurs asynchronously, after the test has already completed. If you could somehow hook into that validation, you'd be able to catch the error in your tests.
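For example, one hypothetical way to surface such asynchronous errors is to trap window.onerror during the suite and fail the next teardown if anything was caught (a sketch, not something the Firebase SDK provides):
var asyncError = null;

suiteSetup(function() {
  // Record any uncaught async error raised between assertions
  window.onerror = function(message) {
    asyncError = message;
  };
});

teardown(function() {
  // Fail fast if an async error (e.g. the API key validation) fired
  if (asyncError) {
    var message = asyncError;
    asyncError = null;
    throw new Error('Uncaught async error: ' + message);
  }
});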
Commenting out all tests except offline firebase is ok shows the error still occurring, which points to suiteSetup(). Narrowing the problem down further by commenting out 2 of the 3 function calls in the setup shows that the error is caused by the call to firebase.initializeApp() (and not necessarily related to once() as you had suspected).
One workaround to consider is wrapping the Firebase library in a class/interface, and mocking that for unit tests.
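A minimal sketch of that idea (all names hypothetical): the component talks to a thin store interface, and unit tests swap in an in-memory fake so firebase.initializeApp() is never called:
// Thin wrapper the component depends on, instead of the Firebase SDK itself
function FirebaseStore(db) {
  this._db = db;
}
FirebaseStore.prototype.push = function(at, value) {
  return this._db.ref(at).push(value);
};
FirebaseStore.prototype.once = function(at, callback) {
  return this._db.ref(at).once('value', callback);
};

// In-memory fake with the same surface, for unit tests
function FakeStore() {
  this.data = {};
}
FakeStore.prototype.push = function(at, value) {
  this.data[at] = this.data[at] || {};
  var key = 'key-' + Object.keys(this.data[at]).length; // fake push id
  this.data[at][key] = value;
  return { key: key };
};
FakeStore.prototype.once = function(at, callback) {
  var value = this.data[at];
  callback({ val: function() { return value; } }); // mimic a snapshot
};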
My objective is to try out TDD (test-driven development), but after one weekend on it I really need your help :)
First question: what is the best way to do TDD, a browser runner or a headless runner?
Second: I really want to test my project without a browser before putting it into production. So far I haven't succeeded :(
For example, if I want to test my Projects model, which looks like:
define([
  'underscore',
  'backbone'
], function(_, Backbone) {
  var projectsModel = Backbone.Model.extend({
    defaults: {
      score: 10
    },
    initialize: function() {
    }
  });
  return projectsModel;
});
How can I do that?
I have already checked jasmine-node, JsTestDriver, ... but without success :/
jasmine-node looks great, but I need some help, because every tutorial I found on the web only works for simple models without RequireJS dependencies...
Thank you :)
PS: I also checked this link here, but ran into the same error :/
Node has issues emulating a real browser, with all its quirks, AJAX, etc. Something like PhantomJS works damn well, though. You use a script to open your test-runner page, let it run in PhantomJS, and have some other code pull out the results.
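Here's a minimal sketch of such a runner script (the URL and the reporter selector are assumptions; adjust them to your Jasmine setup):
var page = require('webpage').create();

// Load the Jasmine spec-runner page served by your dev server
page.open('http://localhost:8000/SpecRunner.html', function(status) {
  if (status !== 'success') {
    console.log('Could not load the test page');
    phantom.exit(1);
  }
  // Poll until the reporter has rendered its summary line
  var interval = setInterval(function() {
    var summary = page.evaluate(function() {
      var bar = document.querySelector('.jasmine-bar'); // reporter-specific
      return bar ? bar.textContent : null;
    });
    if (summary) {
      clearInterval(interval);
      console.log(summary);
      phantom.exit(/failure/i.test(summary) ? 1 : 0);
    }
  }, 100);
});
Run it with phantomjs run-tests.js; the exit code then tells your CI whether the suite passed.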