I'm trying to understand how functional programmers unit test functions that have dependencies without using dependency injection.
In order to unit test with mocks, you can either provide your dependency through the method signature or through a constructor/constructor-like mechanism.
So if you have function composition like this:
a -> b -> c -> d
If you have d talking to some dependency, how does a get unit tested?
Wherever the dependency is kept, I'd want to have it unit tested.
I want to know what approach functional programmers take.
You can unit test dependent code with proper scoping, mocking, and dependency injection. Let me show you what I mean.
const square = x => x ** 2
const isOdd = x => x % 2 === 1
const addDependentValue = dependency => x => x + dependency.value
const mockDependency = { value: 5 }
it('filters out evens, squares, and adds value from dependency', () => {
  const output = pipe([
    filter(isOdd),
    map(square),
    map(addDependentValue(mockDependency)),
  ])([1, 2, 3, 4, 5])
  assert.deepEqual(output, [6, 14, 30]) // OK
})
We scoped addDependentValue so that the function it returns can "see" the dependency via its closure. In the test, we mock the dependency with mockDependency, which lets us derive a predictable result. In a less contrived example, your mock should mirror the interface of whatever dependency you are mocking. Finally, we inject mockDependency in the test, which lets us test this pipeline in a reasonable way.
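The pipe, map, and filter used above are assumed to be curried, array-friendly helpers (they could come from an FP utility library); a minimal sketch of what the example relies on:
// Minimal stand-ins for the assumed helpers (not from any particular library)
const pipe = fns => input => fns.reduce((acc, fn) => fn(acc), input)
const map = fn => array => array.map(fn)
const filter = predicate => array => array.filter(predicate)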
how does a get unit tested?
a is pure, so you can just write a simple unit test for it
// if a is square
it('squares', () => {
  assert.strictEqual(square(3), 9) // OK
})
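The dependent function at the end of the chain (d in the question) can also get its own solitary test by injecting the mock directly; a small sketch reusing the definitions above:
it('adds value from dependency', () => {
  assert.strictEqual(addDependentValue(mockDependency)(1), 6) // 1 + 5
})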
I'm very new to unit testing and Moq. In my .NET Core 3.1 project I'm using xUnit and Moq to write unit tests. I have the scenario below, and I can't figure out why Moq doesn't match my call.
I have configured my unit test as below:
VASTest t = new VASTest()
{
    RecurringAndOneOffChargeID = 2
};

_dataserviceMock.Setup(x => x.CreateVASBillingRunRecurringChargesTest(t))
    .ReturnsAsync(() => true);
_dataserviceMock.Setup(x => x.CreateVASBillingRunRecurringChargesTest(2))
    .ReturnsAsync(() => true);
In my test method I have the two calls below, which I'm trying to mock with the above setup:
var result1 = _VASBillingDataAccess.CreateVASBillingRunRecurringChargesTest(tvt).Result;
var result2 = _VASBillingDataAccess.CreateVASBillingRunRecurringChargesTest(2).Result;
I have the class below in my models:
public class VASTest
{
    public int RecurringAndOneOffChargeID { get; set; }
}
When I run the unit test, result1 is always false, but result2 is always true.
Could you please give me some suggestions on how to fix result1?
Thank you.
Your setup:
VASTest t = new VASTest()
{
    RecurringAndOneOffChargeID = 2
};

_dataserviceMock.Setup(x => x.CreateVASBillingRunRecurringChargesTest(t)).ReturnsAsync(() => true);
will only work if that exact instance of VASTest - t - is used in the SUT invocation, or if you implement your own equality for the VASTest class so that two instances with the same values compare equal. If neither of those is the case, the argument match falls back to an equality check by reference, and that will not be satisfied. Your other setup using the int (2) works because int is a value type, so the equality check is always done on the value itself.
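For illustration, value-based equality on VASTest might look like the sketch below (assuming that comparing RecurringAndOneOffChargeID is enough); with this in place, Moq's argument matching would treat your t and the instance used in the SUT as equal:
public class VASTest : IEquatable<VASTest>
{
    public int RecurringAndOneOffChargeID { get; set; }

    // Value-based equality: two instances with the same ID are considered equal
    public bool Equals(VASTest other) =>
        other != null && RecurringAndOneOffChargeID == other.RecurringAndOneOffChargeID;

    public override bool Equals(object obj) => Equals(obj as VASTest);

    public override int GetHashCode() => RecurringAndOneOffChargeID.GetHashCode();
}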
I'd just do the following if I wasn't implementing IEquatable<T>:
_dataserviceMock.Setup(x =>
        x.CreateVASBillingRunRecurringChargesTest(
            It.Is<VASTest>(t => t.RecurringAndOneOffChargeID.Equals(2))))
    .ReturnsAsync(() => true);
I’m working on a Rust library that provides access to some hardware devices. There are two device types, 1 and 2, and the functionality for type 2 is a superset of the functionality for type 1.
I want to provide different test suites for different circumstances:
tests with no connected device (basic sanity checks, e.g. for CI servers)
tests for the shared functionality (requires a device of type 1 or 2)
tests for the type 2 exclusive functionality (requires a device of type 2)
I’m using features to represent this behavior: a default feature test-no-device and optional features test-type-one and test-type-two. Then I use the cfg_attr attribute to ignore the tests based on the selected features:
#[test]
#[cfg_attr(not(feature = "test-type-two"), ignore)]
fn test_exclusive() {
    // ...
}

#[test]
#[cfg_attr(not(any(feature = "test-type-two", feature = "test-type-one")), ignore)]
fn test_shared() {
    // ...
}
This is rather cumbersome as I have to duplicate this condition for every test and the conditions are hard to read and maintain.
Is there any simpler way to manage the test suites?
I tried to set the ignore attribute when declaring the module, but apparently it can only be set for each test function. I think I could disable compilation of the excluded tests by using cfg on the module, but as the tests should always compile, I would like to avoid that.
Is there a simple way to conditionally enable or ignore entire test suites in Rust?
The easiest is to not even compile the tests:
#[cfg(test)]
mod test {
    #[test]
    fn no_device_needed() {}

    #[cfg(feature = "test1")]
    mod test1 {
        #[test]
        fn device_one_needed() {}
    }

    #[cfg(feature = "test2")]
    mod test2 {
        #[test]
        fn device_two_needed() {}
    }
}
I have to duplicate this condition for every test and the conditions are hard to read and maintain.
Can you represent the desired functionality in pure Rust? yes
Is the existing syntax overly verbose? yes
This is a candidate for a macro.
macro_rules! device_test {
    (no-device, $name:ident, {$($body:tt)+}) => (
        #[test]
        fn $name() {
            $($body)+
        }
    );
    (device1, $name:ident, {$($body:tt)+}) => (
        #[test]
        #[cfg_attr(not(feature = "test-type-one"), ignore)]
        fn $name() {
            $($body)+
        }
    );
    (device2, $name:ident, {$($body:tt)+}) => (
        #[test]
        #[cfg_attr(not(feature = "test-type-two"), ignore)]
        fn $name() {
            $($body)+
        }
    );
}
device_test!(no-device, one, {
    assert_eq!(2, 1+1)
});

device_test!(device1, two, {
    assert_eq!(3, 1+1)
});
the functionality for type 2 is a superset of the functionality for type 1
Reflect that in your feature definitions to simplify the code:
[features]
test1 = []
test2 = ["test1"]
If you do this, you shouldn't need to have any or all in your config attributes.
a default feature test-no-device
This doesn't seem useful; instead use normal tests guarded by the normal test config:
#[cfg(test)]
mod test {
    #[test]
    fn no_device_needed() {}
}
If you follow this, you can remove this case from the macro.
I think if you follow both suggestions, you don't even need the macro.
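Putting both suggestions together, each feature-gated test only needs a single cfg_attr check (a sketch, assuming the simplified test1/test2 feature names from above):
#[test]
#[cfg_attr(not(feature = "test1"), ignore)]
fn shared_functionality() {
    // needs a device of type 1 or 2; test2 implies test1
}

#[test]
#[cfg_attr(not(feature = "test2"), ignore)]
fn type_two_exclusive() {
    // needs a device of type 2
}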
I'm having an issue designing black-box unit tests without redundancy.
Here is an example:
class A {
    public float operationA(int aNumber) {
        if (aNumber > 0) {
            return aNumber * 10 + 5.2f;
        } else if (aNumber < 0) {
            return aNumber * 7 - 5.2f;
        } else {
            return aNumber * 78 + 9.3f;
        }
    }
}

class B {
    A a = new A();
    boolean status = true;

    public float operationB(int theNumber) {
        if (status) {
            return a.operationA(theNumber);
        }
        return 0; // fallback so the method always returns
    }
}
In order to correctly test A.operationA(), I would have to write at least three unit tests (aNumber = 0, aNumber > 0 and aNumber < 0).
Now let's say I want to test B.operationB using a black-box strategy: should I rewrite the same three unit tests (theNumber = 0, theNumber > 0, and theNumber < 0)? In that case, I would have to create a lot of tests every time I use the method A.operationA ...
If the black-box constraint can be loosened, you can remove all the duplication. I really like Jay Fields' definitions of solitary vs. sociable unit tests, explained here.
It should be trivial to test class A in isolation: it has no side effects and no collaborators. Ideally class B could also be tested in isolation (solitary), where its collaborator, class A, is stubbed out. Not only does this let you exercise class B on its own, it also helps contain cascading failures: if class B is tested with the real class A, a change to class A could cause a failure in class B's tests.
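A solitary test for B might look roughly like the sketch below (using JUnit and Mockito, and assuming B's A collaborator can be replaced, e.g. via a field or setter):
@Test
public void operationBDelegatesToA() {
    // Stub out the collaborator so only B's own logic is exercised
    A stubbedA = mock(A.class);
    when(stubbedA.operationA(5)).thenReturn(55.2f);

    B b = new B();
    b.a = stubbedA; // assumes the collaborator is replaceable

    assertEquals(55.2f, b.operationB(5), 0.0001f);
    verify(stubbedA).operationA(5);
}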
At some point the collaboration (sociable) should probably be checked as well; a couple of ways to do that:
a single sociable test that calls B through its public interface and triggers the default case in class A (see the sketch after this list)
higher-level tests that exercise a specific user story or external flow that goes through class B
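For example, a single sociable test going through B with the real A (a sketch that hits the default branch in operationA):
@Test
public void operationBWithRealCollaborator() {
    B b = new B(); // uses a real A instance
    assertEquals(9.3f, b.operationB(0), 0.0001f); // 0 hits the default branch: 0 * 78 + 9.3
}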
Sorry, I didn't answer your direct question.
How can I use the Sinon package to stub/mock a method call where the argument I need to match is an arrow function? E.g.:
let objWithMethod = { method: function (x) {} };

function SUT() {
  // use case
  let x = 'some value';
  let y = { anotherMethod: function (func) {} };

  // I want to test that `y.anotherMethod()` is called with
  // `(x) => objWithMethod.method(x)` as the argument
  y.anotherMethod((x) => objWithMethod.method(x));
}
let mockObj = sinon.mock(objWithMethod);
// Both of these fail with a "never called" error
mockObj.expects('method').once().withArgs(objWithMethod.method.bind(this, x));
mockObj.expects('method').once().withArgs((x) => objWithMethod.method(x));
SUT();
mockObj.verify();
I couldn't find anything in the Sinon docs, nor after a few attempts at a Google search.
The loose matching you're trying to do can be done with matchers; to compare against any function it should be:
mockObj.expects('method').withArgs(sinon.match.func)
And it will fail, because objWithMethod.method isn't called at all.
This
// I want to test that y.anotherMethod() is called with
// (x) => objWithMethod.method(x) as the argument
cannot be done, because the code wasn't written with tests in mind. JS can't reflect on local variables, and the SUT function is a black box.
In order to be reachable by tests and to get 100% coverage, every such variable and closure has to be exposed to the outer world.
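As a hedged sketch of what "exposing it to the outer world" could mean here: if SUT received its collaborator as a parameter instead of creating y locally, the sinon.match.func matcher above would be enough to verify that a function is passed:
// Hypothetical refactoring: the collaborator is injected, so the test can reach it
function SUT(collaborator) {
  let x = 'some value';
  collaborator.anotherMethod((x) => objWithMethod.method(x));
}

// In the test
let y = { anotherMethod: function (func) {} };
let mockY = sinon.mock(y);
mockY.expects('anotherMethod').once().withArgs(sinon.match.func);

SUT(y);
mockY.verify();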
I want to be able to split a big test into smaller tests, so that when the smaller tests pass they imply that the big test would also pass (and there is then no reason to run the original big test). I want to do this because smaller tests usually take less time and less effort, and are less fragile. I would like to know if there are test design patterns or verification tools that can help me achieve this test splitting in a robust way.
I fear that the connection between the smaller tests and the original test is lost when someone changes something in the set of smaller tests. Another fear is that the set of smaller tests doesn't really cover the big test.
An example of what I am aiming at:
// Class under test
class A {
    public void setB(B b) { this.b = b; }

    public Output process(Input i) {
        return b.process(doMyProcessing(i));
    }

    private InputFromA doMyProcessing(Input i) { .. }
    ..
}

// Another class under test
class B {
    public Output process(InputFromA i) { .. }
    ..
}
// The Big Test
@Test
public void theBigTest() {
    A systemUnderTest = createSystemUnderTest(); // <-- expect that this is expensive
    Input i = createInput();
    Output o = systemUnderTest.process(i); // <-- .. or expect that this is expensive
    assertEquals(o, expectedOutput());
}
// The split-up smaller tests
@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest1() {
    // this method is a bit too long but it's just an example..
    Input i = createInput();
    InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
    Output expected = expectedOutput();  // this should be the same in both tests and it should be ensured somehow

    B b = mock(B.class);
    when(b.process(x)).thenReturn(expected);

    A classUnderTest = createInstanceOfClassA();
    classUnderTest.setB(b);

    Output o = classUnderTest.process(i);

    assertEquals(o, expected);
    verify(b).process(x);
    verifyNoMoreInteractions(b);
}

@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest2() {
    InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
    Output expected = expectedOutput();  // this should be the same in both tests and it should be ensured somehow

    B classUnderTest = createInstanceOfClassB();
    Output o = classUnderTest.process(x);
    assertEquals(o, expected);
}
The first suggestion I'll make is to refactor your tests on red (failing). To do so, you'll have to break your production code temporarily. This way, you know the tests are still valid.
One common pattern is to use a separate test fixture per collection of "big" tests. You don't have to stick to the "all tests for one class in one test class" pattern. If a set of tests are related to each other, but unrelated to another set of tests, then put them in their own class.
The biggest advantage to using a separate class to hold the individual small tests for the big test is that you can take advantage of setup and tear-down methods. In your case, I would move the lines you have commented with:
// this should be the same in both tests and it should be ensured somehow
to the setup method (in JUnit, a method annotated with @Before). If you have some unusually expensive setup that needs to be done, most xUnit testing frameworks have a way to define a setup method that runs once before all of the tests. In JUnit, this is a public static void method that has the @BeforeClass annotation.
If the test data is immutable, I tend to define the variables as constants.
Putting all this together, you might have something like:
public class TheBigTest {

    // If InputFromA is immutable, it could be declared as a constant
    private InputFromA x;

    // If Output is immutable, it could be declared as a constant
    private Output expected;

    // You could use
    // @BeforeClass public static void setupExpectations()
    // instead if it is very expensive to setup the data
    @Before
    public void setUpExpectations() throws Exception {
        x = expectedInputFromA();
        expected = expectedOutput();
    }

    @Test
    public void smallerTest1() {
        // this method is a bit too long but it's just an example..
        Input i = createInput();

        B b = mock(B.class);
        when(b.process(x)).thenReturn(expected);

        A classUnderTest = createInstanceOfClassA();
        classUnderTest.setB(b);

        Output o = classUnderTest.process(i);

        assertEquals(o, expected);
        verify(b).process(x);
        verifyNoMoreInteractions(b);
    }

    @Test
    public void smallerTest2() {
        B classUnderTest = createInstanceOfClassB();
        Output o = classUnderTest.process(x);
        assertEquals(o, expected);
    }
}
All I can suggest is the book xUnit Test Patterns. If there is a solution it should be in there.
theBigTest is missing the dependency on B. Also, smallerTest1 mocks the B dependency. In smallerTest2 you should mock InputFromA.
Why did you create a dependency graph like you did?
A takes a B; then, when A processes an Input, the resulting InputFromA is post-processed by B.
Keep the big test and refactor A and B to change the dependency mapping.
[EDIT] In response to remarks:
@mkorpela, my point is that looking at the code and its dependencies is how you start to get an idea of how to create smaller tests. A has a dependency on B: in order for it to complete its process(), it must use B's process(). Because of this, B has a dependency on A.