State vs Interaction based testing - unit-testing

Assume we have an Order class with a method called Approve. When this method is called, it checks certain conditions and either puts the Order in the state of Approved or throws an exception. In the service layer, we've got something like this:
var order = _repository.Single(o => o.ID == orderID);
order.Approve();
_context.SaveChanges(); // or _session.SaveChanges();
There are 2 ways to test this method and I'd like to hear your insight on this:
Solution 1: Stub the repository to return an Order object. Then assert the Order is in the state of "Approved".
Solution 2: Stub the repository to return a Mock Order object. Assert that Approve() method was called.
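For concreteness, here is roughly how the two solutions could be written with a mocking library such as Moq (my choice for illustration only; the question does not prescribe it). OrderService, IOrderRepository, IUnitOfWork, OrderStatus and the Status property are hypothetical stand-ins for the service, _repository and _context above, and Solution 2 assumes Approve() is virtual so the mock can intercept it:

using System;
using System.Linq.Expressions;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class OrderServiceTests
{
    // Solution 1: stub the repository to return a real Order, then assert on its state.
    [TestMethod]
    public void Approve_PutsTheOrderInTheApprovedState()
    {
        var order = new Order();
        var repository = new Mock<IOrderRepository>();
        repository.Setup(r => r.Single(It.IsAny<Expression<Func<Order, bool>>>()))
                  .Returns(order);
        var service = new OrderService(repository.Object, new Mock<IUnitOfWork>().Object);

        service.Approve(order.ID);

        Assert.AreEqual(OrderStatus.Approved, order.Status); // hypothetical Status property
    }

    // Solution 2: have the repository return a mock Order and verify the interaction.
    [TestMethod]
    public void Approve_DelegatesApprovalToTheOrder()
    {
        var order = new Mock<Order>();
        var repository = new Mock<IOrderRepository>();
        repository.Setup(r => r.Single(It.IsAny<Expression<Func<Order, bool>>>()))
                  .Returns(order.Object);
        var service = new OrderService(repository.Object, new Mock<IUnitOfWork>().Object);

        service.Approve(order.Object.ID);

        order.Verify(o => o.Approve(), Times.Once());
    }
}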
Solution 1 is easier, and I personally favor state-based testing over interaction-based testing, as the latter can target implementation details and should be avoided. However, I believe that verifying the given Order ends up in the Approved state is not the concern of this service method. I think we need a separate test for the Order class to check whether an exception is thrown or the Order's state changes to Approved.
Solution 2 may sound logical, as we are delegating the responsibility of approving an Order to the Order class itself. So perhaps we need two tests for this service method: one to ensure it delegates the task of approving the Order to the Order class, and one to ensure it saves the changes.
What's your insight on this? Which solution do you prefer?
Cheers

Unit tests are there to check whether the observed behavior conforms to the expectations/specification.
The answer to your question boils down to what you consider "expected behavior" in this case: (a) if the expected behavior is that the Order is in the Approved state after calling the service method, then test the state; (b) if the expected behavior is that the approve action is delegated, then test the method call.
You will need to test the Order object's behavior as well (so that calling Approve() changes the state to approved) in either case.
The second solution plays well because it decouples the behavior of the two objects, but if there is more than one way the order can end up in the Approved state (and that is what you are testing, i.e. case (a)), then you limit the accepted behavior needlessly.
Also, I would create a separate test for the saving part, if that is not essential to the approval part.
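A separate test for the saving part could then look something like this, under the same assumptions and with the same hypothetical types as in the sketch above (SaveChanges pulled behind an IUnitOfWork so it can be verified):

[TestMethod]
public void Approve_SavesTheChanges()
{
    var repository = new Mock<IOrderRepository>();
    repository.Setup(r => r.Single(It.IsAny<Expression<Func<Order, bool>>>()))
              .Returns(new Order());
    var unitOfWork = new Mock<IUnitOfWork>();
    var service = new OrderService(repository.Object, unitOfWork.Object);

    service.Approve(1); // hypothetical order ID

    unitOfWork.Verify(u => u.SaveChanges(), Times.Once());
}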

Related

How to centralize refetch and update logic in Apollo Client

When performing mutations, it is possible to set the refetchQueries, update, and updateQueries options to react to the current app's query results.
However, I'd like to centralize this reactive logic, so the various app components performing mutations will not have to deal, likely redundantly, with the mutation/query relations.
I noticed the ApolloClient constructor's ApolloClientOptions.defaultOptions.mutate.* options, which are functions that give a chance to return RefetchQueryDescriptions or operate on a DataProxy. However, they react to an ExecutionResult argument alone, which, in my opinion, does not provide enough context for the function implementation, as the triggering Operation would also be necessary.
I then considered ApolloLink, which seems a perfect candidate since it gets the chance to hook into and override the Operation object, but unfortunately the Operation object does not define any of those reacting properties.
Any advice on how to implement my use case?

How can I tell GoogleMock to stop checking an expectation after the test finished?

I have two unit tests that share some state (unfortunately I can't change this since the point is to test the handling of this very state).
TEST(MySuite, test1)
{
    shared_ptr<MockObject> first(make_shared<MockObject>());
    SubscribeToFooCallsGlobal(first);
    EXPECT_CALL(*first, Foo(_)); // .RetiresOnSaturation();
    TriggerFooCalls(); // will call Foo in all subscribed
}

TEST(MySuite, test2)
{
    shared_ptr<MockObject> second(make_shared<MockObject>());
    SubscribeToFooCallsGlobal(second);
    EXPECT_CALL(*second, Foo(_)).Times(1);
    TriggerFooCalls(); // will call Foo in all subscribed
}
If I run the tests separately, both are successful. If I run them in the order test1, test2, I will get the following error in test2:
mytest.cpp(42): error: Mock function called more times than expected - returning directly.
Function call: Foo(0068F65C)
Expected: to be called once
Actual: called twice - over-saturated and active
The expectation that fails is the one in test1. The call does take place, but I would like to tell GoogleMock to not care after test1 is complete (in fact, I only want to check expectations in a test while the test is running).
I was under the impression that RetiresOnSaturation would do this, but with it I get:
Unexpected mock function call - returning directly.
Function call: Foo(005AF65C)
Google Mock tried the following 1 expectation, but it didn't match:
mytest.cpp(42): EXPECT_CALL(first, Foo(_))...
Expected: the expectation is active
Actual: it is retired
Expected: to be called once
Actual: called once - saturated and retired
Which I have to admit, confuses me. What does it mean? How can I solve this?
You can read in the Google Mock documentation an almost literal description of your case:
Forcing a Verification
When it's being destroyed, your friendly mock object will automatically verify that all expectations on it have been satisfied, and will generate Google Test failures if not. This is convenient as it leaves you with one less thing to worry about. That is, unless you are not sure if your mock object will be destroyed.
How could it be that your mock object won't eventually be destroyed? Well, it might be created on the heap and owned by the code you are testing. Suppose there's a bug in that code and it doesn't delete the mock object properly - you could end up with a passing test when there's actually a bug.
So you should not expect that, at the end of the test case, the expectations will somehow be magically "deactivated". As cited above, the mock's destructor is the point of verification.
In your case, your mocks are not local variables: they are created in dynamic memory (the heap in the cited doc) and kept alive by your tested code via SubscribeToFooCallsGlobal(), so a mock created in one test is still alive in the next test.
The easy and proper solution is to unsubscribe at the end of each test case. I do not know whether you have an UnsubscribeToFooCallsGlobal(); if not, create such a function. To be sure it is always called, use the ScopeGuard pattern.
There is one function to manually enforce verification, Mock::VerifyAndClearExpectations(&mock_object), but use it only if you need that verification somewhere other than the last line of your test case, because the end of the test should be the point of destruction.

Consistently using the value of "now" throughout the transaction

I'm looking for guidelines to using a consistent value of the current date and time throughout a transaction.
By transaction I loosely mean an application service method, such methods usually execute a single SQL transaction, at least in my applications.
Ambient Context
One approach described in answers to this question is to put the current date in an ambient context, e.g. DateTimeProvider, and use that instead of DateTime.UtcNow everywhere.
However, the purpose of this approach is only to make the design unit-testable, whereas I also want to prevent errors caused by unnecessarily querying DateTime.UtcNow multiple times, an example of which is this:
// In an entity constructor:
this.CreatedAt = DateTime.UtcNow;
this.ModifiedAt = DateTime.UtcNow;
This code creates an entity with slightly differing created and modified dates, whereas one expects these properties to be equal right after the entity was created.
Also, an ambient context is difficult to implement correctly in a web application, so I've come up with an alternative approach:
Method Injection + DeterministicTimeProvider
The DeterministicTimeProvider class is registered as an "instance per lifetime scope" AKA "instance per HTTP request in a web app" dependency.
It is constructor-injected into an application service and passed into constructors and methods of entities.
The IDateTimeProvider.UtcNow property is used instead of the usual DateTime.UtcNow / DateTimeOffset.UtcNow everywhere to get the current date and time.
Here is the implementation:
/// <summary>
/// Provides the current date and time.
/// The provided value is fixed when it is requested for the first time.
/// </summary>
public class DeterministicTimeProvider : IDateTimeProvider
{
    private readonly Lazy<DateTimeOffset> _lazyUtcNow =
        new Lazy<DateTimeOffset>(() => DateTimeOffset.UtcNow);

    /// <summary>
    /// Gets the current date and time in the UTC time zone.
    /// </summary>
    public DateTimeOffset UtcNow => _lazyUtcNow.Value;
}
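To illustrate the intended usage, here is a rough sketch; the interface shape is inferred from the implementation above, while the entity and service below are hypothetical names, not part of my actual code:

using System;

public interface IDateTimeProvider
{
    DateTimeOffset UtcNow { get; }
}

// Hypothetical entity: it receives the already-fixed timestamp,
// so CreatedAt and ModifiedAt are guaranteed to be equal on creation.
public class SomeEntity
{
    public SomeEntity(DateTimeOffset utcNow)
    {
        CreatedAt = utcNow;
        ModifiedAt = utcNow;
    }

    public DateTimeOffset CreatedAt { get; }
    public DateTimeOffset ModifiedAt { get; private set; }
}

// Hypothetical application service: the provider is constructor-injected with a
// per-request lifetime, so every timestamp taken in this request is the same value.
public class SomeAppService
{
    private readonly IDateTimeProvider _timeProvider;

    public SomeAppService(IDateTimeProvider timeProvider)
    {
        _timeProvider = timeProvider;
    }

    public SomeEntity CreateEntity()
    {
        return new SomeEntity(_timeProvider.UtcNow);
    }
}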
Is this a good approach? What are the disadvantages? Are there better alternatives?
Sorry for the logical fallacy of appeal to authority here, but this is rather interesting:
John Carmack once said:
"There are four principle inputs to a game: keystrokes, mouse moves, network packets, and time. (If you don't consider time an input value, think about it until you do -- it is an important concept)"
Source: John Carmack's .plan posts from 1998 (scribd)
(I have always found this quote highly amusing, because the suggestion that if something does not seem right to you, you should think of it really hard until it seems right, is something that only a major geek would say.)
So, here is an idea: consider time as an input. It is probably not included in the XML that makes up the web service request (you wouldn't want it to be, anyway), but in the handler where you convert the XML into an actual request object, obtain the current time and make it part of your request object.
So, as the request object is being passed around your system during the course of processing the transaction, the time to be considered as "the current time" can always be found within the request. So, it is not "the current time" anymore, it is the request time. (The fact that it will be one and the same, or very close to one and the same, is completely irrelevant.)
This way, testing also becomes even easier: you don't have to mock the time provider interface, the time is always in the input parameters.
Also, this way, other fun things become possible, for example servicing requests to be applied retroactively, at a moment in time which is completely unrelated to the actual current moment in time. Think of the possibilities. (Picture of SpongeBob SquarePants-with-a-rainbow goes here.)
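A minimal sketch of the "time as an input" idea; the request type and factory below are made-up names, and the boundary could just as well be a model binder or message handler:

using System;

// The current time is read exactly once, at the system boundary, and then
// travels with the request object for the rest of the transaction.
public class ApproveOrderRequest
{
    public ApproveOrderRequest(int orderId, DateTimeOffset requestTime)
    {
        OrderId = orderId;
        RequestTime = requestTime;
    }

    public int OrderId { get; }
    public DateTimeOffset RequestTime { get; }
}

public static class RequestFactory
{
    // Hypothetical boundary code: the only place that touches the clock.
    public static ApproveOrderRequest FromIncoming(int orderId)
    {
        return new ApproveOrderRequest(orderId, DateTimeOffset.UtcNow);
    }
}

Everything downstream reads request.RequestTime, and a test simply constructs the request with whatever time it needs.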
Hmmm.. this feels like a better question for CodeReview.SE than for StackOverflow, but sure - I'll bite.
Is this a good approach?
If used correctly, in the scenario you described, this approach is reasonable. It achieves the two stated goals:
Making your code more testable. This is a common pattern I call "Mock the Clock", and is found in many well-designed apps.
Locking the time to a single value. This is less common, but your code does achieve that goal.
What are the disadvantages?
Since you are creating another new object for each request, it will create a mild amount of additional memory usage and additional work for the garbage collector. This is somewhat of a moot point since this is usually how it goes for all objects with per-request lifetime, including the controllers.
There is a tiny fraction of time added before you take the reading from the clock, caused by the extra work of loading the object and by the lazy initialization. It's negligible though - probably on the order of a few milliseconds.
Since the value is locked down, there's always the risk that you (or another developer who uses your code) might introduce a subtle bug by forgetting that the value won't change until the next request. You might consider a different naming convention. For example, instead of "now", call it "requestReceivedTime" or something like that.
Similar to the previous item, there's also the risk that your provider might be loaded with the wrong lifecycle. You might use it in a new project and forget to set the instancing, loading it up as a singleton. Then the values are locked down for all requests. There's not much you can do to enforce this, so be sure to comment it well. The <summary> tag is a good place.
You may find you need the current time in a scenario where constructor injection isn't possible - such as a static method. You'll either have to refactor to use instance methods, or will have to pass either the time or the time-provider as a parameter into the static method.
Are there better alternatives?
Yes, see Mike's answer.
You might also consider Noda Time, which has a similar concept built in, via the IClock interface, and the SystemClock and FakeClock implementations. However, both of those implementations are designed to be singletons. They help with testing, but they don't achieve your second goal of locking the time down to a single value per request. You could always write an implementation that does that though.
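If you went the Noda Time route, a per-request implementation that locks the value on first read might look roughly like this (a sketch assuming NodaTime 2.x, where IClock exposes GetCurrentInstant(); it would be registered per lifetime scope, just like your provider):

using System;
using NodaTime;

// Wraps another IClock (normally SystemClock.Instance) and freezes the first
// reading, so every caller within the same lifetime scope sees the same instant.
public class FixedPerRequestClock : IClock
{
    private readonly Lazy<Instant> _instant;

    public FixedPerRequestClock(IClock inner)
    {
        _instant = new Lazy<Instant>(inner.GetCurrentInstant);
    }

    public Instant GetCurrentInstant() => _instant.Value;
}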
Code looks reasonable.
Drawback: most likely the lifetime of the object will be controlled by the DI container, and hence the user of the provider can't be sure that it will always be configured correctly (per-invocation, and not any longer lifetime like app/singleton).
If you have a type representing the "transaction", it may be better to put a "Started" time there instead.
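For example (a sketch only; the transaction type stands in for whatever unit-of-work abstraction the application already has):

using System;

public class AppTransaction
{
    // Captured once when the transaction starts; read from here afterwards.
    public DateTimeOffset Started { get; } = DateTimeOffset.UtcNow;
}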
This isn't something that can be solved with a realtime clock and a query, or by testing; the developer may have figured out some obscure way of reaching the underlying library call...
So don't do that. Dependency injection also won't save you here; the issue is that you want a standard pattern for time at the start of the 'session'.
In my view, the fundamental problem is that you are expressing an idea and looking for a mechanism to express it. The right mechanism is to name it, say what you mean in the name, and then set it only once. readonly is a good way to handle setting this only once in the constructor, and it lets the compiler and runtime enforce what you mean, namely that it is set only once.
// In an entity constructor: read the clock once, then assign that single value.
var now = DateTime.UtcNow;
this.CreatedAt = now;
this.ModifiedAt = now;

What is the difference in OCMock expect and stub methods?

I am trying to use OCMock for testing my app, but I am confused about where we should use expect and where to use stub. Can anyone help, please?
The basic difference is this: you expect things that must happen, and stub things that might happen.
There are 2 ways mock objects fail: either an unexpected/unstubbed method is called, or an expected method is not called.
Unexpected invocations. When a mock object receives a message that hasn't been either stubbed or expected, it throws an exception immediately and your test fails.
Expected invocations. When you call verify on your mock (generally at the end of your test), it checks to make sure all of the methods you expected were actually called. If any were not, your test will fail.
There are a couple of types of mocks that change this behavior: nice mocks and partial mocks. Nice mocks prevent you from having to stub methods; basically, they let unexpected invocations occur. Partial mocks are a way of intercepting messages sent to actual objects. Any messages you expect or stub on a partial mock will be sent to the mock object. All other messages are sent to the actual object. For both nice mocks and partial mocks, you won't get a test failure on unexpected invocations (rule #1 above).

Correct Approach for Unit Testing Complex Interactions

I had to start writing some unit tests, using QualityTools.UnitTestFramework, for a web service layer we have developed, and my approach seemed to be incorrect from the beginning.
It seems that unit tests should be able to run in any order and not rely on other tests.
My initial thought was to have something similar to the following tests (a simplified example), which would run as an ordered test in the same order.
AddObject1SuccessTest
AddObject2WithSameUniqueCodeTest (relies on the first test having created object 1 first, then expects a failure)
AddObject2SuccessTest
UpdateObject2WithSameUniqueCodeTest (relies on the first test having created object 1 and the third test having created object 2 first, then expects a failure)
UpdateObject2SuccessTest
GetObjectListTest
DeleteObjectsTest (using the added IDs)
However, there is no state between tests and no apparent way of passing, say, the added IDs to the delete test.
So, is it then the case that the correct approach for unit testing complex interactions is by scenario?
For example
AddObjectSuccessTest
(which creates an object, gets it to validate the data and then deletes it)
AddObjectWithSameUniqueCodeTest
(which creates object 1, then attempts to create object 2 expecting a failure, and then deletes object 1)
UpdateObjectWithSameUniqueCodeTest
(which creates object 1, then creates object 2, then attempts to update object 2 to have the same unique code as object 1 expecting a failure, and then deletes object 1 and object 2)
Am I coming at this wrong?
Thanks
It is a tenet of unit testing that each test case should be independent of any other test case. MSTest (like all other unit testing frameworks) enforces this by not guaranteeing the order in which tests are run - some (xUnit.net) even go so far as to randomize the order between each test run.
It is also a recommended best practice that units are condensed into simple interactions. Although no hard and fast rule can be provided, it's not a unit test if the interaction is too complex. In any case, complex tests are brittle and have a very high maintenance overhead, which is why simple tests are preferred.
It sounds like you have a case of shared state between your tests. This leads to interdependent tests and should be avoided. Instead you can write reusable code that sets up the pre-condition state for each test, ensuring that this state is always correct.
Such a pre-condition state is called a Fixture. The book xUnit Test Patterns contains lots of information and guidance on how to manage Fixtures in many different scenarios.
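In MSTest terms, that reusable fixture-setup code typically ends up in a [TestInitialize] method, so each test builds its own pre-condition state instead of relying on a previous test (all type names below are illustrative, not from the question):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ObjectServiceTests
{
    private ObjectService _service;

    // Runs before every test: each test starts from a fresh, fully known fixture.
    [TestInitialize]
    public void SetUp()
    {
        _service = new ObjectService(new InMemoryObjectRepository());
    }

    [TestMethod]
    public void AddObject_WithUniqueCode_Succeeds()
    {
        var result = _service.Add(new BusinessObject { UniqueCode = "A1" });
        Assert.IsTrue(result.Succeeded);
    }

    [TestMethod]
    public void AddObject_WithDuplicateCode_Fails()
    {
        _service.Add(new BusinessObject { UniqueCode = "A1" });
        var result = _service.Add(new BusinessObject { UniqueCode = "A1" });
        Assert.IsFalse(result.Succeeded);
    }
}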
As a complement to what Mark said, yes, each test should be completely independent from the others, and, to use your terms, each test should be a self-contained scenario, which can run independently of the others.
I assume from what you describe that you are testing persistence, because your steps include deleting the entities you created at the end of the test, to clean up the state. Ideally, a unit test runs completely in memory, with no shared state between tests. One way to achieve that is to use Mocks. I assume you have something like a Repository in place, so that your class calls Repository.Add(myNewObject), which calls something like Repository.ValidateObjectCanBeAdded(myNewObject). Rather than testing against the real repository, which will add objects to the database and require deleting them to clean up the state after the test, you can create an interface IRepository with the same two methods, and use a Mock to check that when your class calls IRepository, it is exercising the right methods, with the right arguments, in the right order. It also gives you the ability to set the "fake" repository to any state you want, in memory, without having to physically add or delete records from real storage.
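With a mocking library such as Moq (my choice for illustration only), that could look roughly like this; IRepository is the hypothetical interface from the paragraph above, and ObjectService/BusinessObject are made-up names standing in for your class under test and its entity:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

public interface IRepository
{
    bool ValidateObjectCanBeAdded(BusinessObject obj);
    void Add(BusinessObject obj);
}

[TestClass]
public class ObjectServiceMockTests
{
    [TestMethod]
    public void Add_ValidatesTheObjectAndAddsIt()
    {
        var repository = new Mock<IRepository>();
        repository.Setup(r => r.ValidateObjectCanBeAdded(It.IsAny<BusinessObject>()))
                  .Returns(true);
        var service = new ObjectService(repository.Object);
        var newObject = new BusinessObject { UniqueCode = "A1" };

        service.Add(newObject);

        // No database involved: we only check that the class exercised the
        // repository correctly, with the right argument.
        repository.Verify(r => r.ValidateObjectCanBeAdded(newObject), Times.Once());
        repository.Verify(r => r.Add(newObject), Times.Once());
    }
}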
Hope this helps!