Rails / Rspec - writing spec for delegate method (allow_nil option) - delegation

Given the code below:
(1) How would you go about writing a spec to test the :allow_nil => false option?
(2) Is it even worth writing a spec to test?
class Event < ActiveRecord::Base
  belongs_to :league
  delegate :name, :to => :league, :prefix => true, :allow_nil => false
end
describe Event do
  context 'when delegating methods to league object' do
    it { should respond_to(:league_name) }
  end
end
It would actually be nice if you could extend shoulda to do:
it { should delegate(:name).to(:league).with_options(:prefix => true, :allow_nil => false) }

According to the documentation for the Rails delegate module:
If the delegate object is nil an exception is raised, and that happens no matter whether nil responds to the delegated method. You can get a nil instead with the :allow_nil option.
I would create an Event object event with a nil league, or set event.league = nil, then try to call event.league_name, and check that it raises an exception, since that is what is supposed to happen when allow_nil is false (which is also the default). I know RSpec has this idiom for exception testing:
lambda{dangerous_operation}.should raise_exception(optional_exception_class)
I'm not sure if shoulda has this construct, though there are some articles, kinda old, about how to get this behavior in shoulda.
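Putting that together, a minimal sketch of such a spec (assuming an Event can be built with its league unset):
describe Event do
  context 'with a nil league' do
    it 'raises when league_name is called' do
      event = Event.new
      event.league = nil
      lambda { event.league_name }.should raise_exception
    end
  end
end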
I think this is worth testing if it is behavior that users of this class can expect or assume will happen - which I think is probably true in this case. I wouldn't extend shoulda to test "should delegate", because that seems more implementation-dependent: you're really saying your Event should raise an exception if you try to call #league_name when it has a nil league. How you make that happen is unimportant to users of Event. If you wish to assert and document this behavior, I would even go so far as to test that Event#league_name has the same semantics as League#name, without mentioning delegate at all, since this is a behavior-centric approach.
Build your tests based on how your code behaves, not on how it's built - testing this way is better documentation for those who stumble into your tests, as they're more interested in the question "why is my Event throwing?" or "what can cause Event to throw?" than "is this Event delegating?".
You can highlight this sort of situation by imagining what failures might happen if you change your code in a way that users of Event shouldn't care about. If they don't care about it, the test shouldn't break when you change it. What if, for example, you want to handle the delegation yourself by writing a #league_name method that first logs or increments a counter and then delegates to league? By testing the exception-raising behavior, you are protected across this change; by testing whether Event is a delegate, you will break that test when you make the change - and so your test wasn't looking at what is really important about the call to #league_name.
Anyway, that's all just soapbox talk. tl;dr: test it if it's behavior someone might rely upon. Anything that's not tested is Schrödinger's cat: broken and not broken at the same time. Truthfully, much of the time this can be OK: it's a matter of taste whether you want to say something rigorous and definitive about how the system should behave, or just let it be "unspecified behavior".
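One way to express that behavior-centric idea as a spec, without mentioning delegation at all (a sketch; it assumes League's name is a plain attribute):
describe Event do
  it 'exposes its league name with the same semantics as League#name' do
    league = League.new(:name => 'Premier')
    event = Event.new
    event.league = league
    event.league_name.should == league.name
  end
end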

So a couple of things here on whether or not to test this:
1) I don't think there is anything wrong with spec'ing out this behavior. If you have users who need to learn your software/library, it's often very helpful to ensure that all the methods that are part of your public-facing contract are spec'd. If you don't want to make this part of this model's API, I might recommend doing the delegation manually (see the sketch at the end of this answer) so as not to expose more methods to the outside world than you need to.
2) Specs of this sort help to ensure that the contract you have with the object you're delegating to remains enforced. This is particularly helpful if you are using stubs and mocks in your tests, since they are often implementing the same contract, so at least you are aware when that contract changes.
In terms of how you test the allow_nil portion of it, I would agree with Matt: the best idea is to ensure that league is nil, then attempt to call the delegated method and assert that the expected exception is raised (with :allow_nil => false you get an exception, not nil).
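As an aside, a hypothetical sketch of the manual delegation mentioned in point 1):
class Event < ActiveRecord::Base
  belongs_to :league

  # hand-rolled equivalent of delegate :name, :to => :league,
  # :prefix => true, :allow_nil => false - a nil league still raises
  def league_name
    league.name
  end
end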
Hope this helps.

Shoulda matchers now check the allow_nil option.
class Account
  delegate :name, to: :league, allow_nil: true
end

# RSpec
describe Account do
  it { should delegate_method(:name).to(:league).allow_nil }
end
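For the original question's :prefix => true, shoulda-matchers also provides a with_prefix qualifier (to my knowledge there is no qualifier for asserting that allow_nil is false):
describe Event do
  it { should delegate_method(:name).to(:league).with_prefix }
end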

I would have tested the delegation, as effectively what we have created is a contract between two classes:
describe '#league_name' do
  let(:event) { create(:event) }

  after { event.league_name }

  it "delegates to league's name with prefix" do
    expect(event.league).to receive(:name)
  end
end

Related

Elixir mock only one function from a file

I have a test case where I need to mock downloading an image. The issue is when I mock this download function, it makes the other functions in that file undefined, but I also need to call the other functions in the test as they originally exist without mocking.
Is there a way to mock only one function from App.Functions in the example below and keep the rest of the functions working the same?
The code looks like this for setting up the mock:
setup_with_mocks(
  [
    {App.Functions, [], [download_file: fn _url -> :ok end]}
  ],
  context
)
Seems that you are using Mock (https://hexdocs.pm/mock/Mock.html). In that case you can use the passthrough option:
test_with_mock "test_name", App.Functions, [:passthrough], [download_file: fn _url -> :ok end] do
end
I don't know whether the option is also available for setup_with_mocks.
More info here: https://github.com/jjh42/mock#passthrough---partial-mocking-of-a-module
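Untested, but since each mock tuple has the shape {module, opts, mocks}, the passthrough option should presumably slot into setup_with_mocks the same way:
setup_with_mocks(
  [
    # [:passthrough] here is an assumption based on test_with_mock's signature
    {App.Functions, [:passthrough], [download_file: fn _url -> :ok end]}
  ],
  context
)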
Sometimes difficulty in mocking functions for testing can indicate an organizational problem in your code, e.g. a violation of the single-responsibility principle. Pondering things like this ventures into more philosophical territory (which Stack Overflow is not geared towards), but generally it's helpful to isolate your modules in a way that is compatible with testing -- some of the common code/repo organizational patterns fall into place more easily when you give due consideration to facilitating testing.
As already noted, Mock allows the passthrough option.
The Mox package does not have a viable solution to this particular use case -- even its skipping-optional-callbacks option does not really fit the bill.
Another option is to go the more manual route: pass an opt (or read one out of the Application config) that can be overridden at runtime to facilitate testing. This tactic smells to me a bit like Javascript's heavy reliance on passing callback functions, but it can work in a pinch, e.g. something like:
def download(url, opts \\ []) do
  http_client = Keyword.get(opts, :client, HTTPoison)
  http_client.get(url)
end

# OR

def download(url) do
  http_client = Application.get_env(:myapp, :http_client, HTTPoison)
  http_client.get(url)
end
Then in your tests:
test "download a file" do
assert {:ok, _} = MyApp.download("http://example", client: HttpClientMock)
end
# OR...
setup do
starting_value = Application.get_env(:myapp, :http_client)
on_exit(fn ->
Application.put_env(:myapp, :http_client, starting_value)
end)
end
test "download a file" do
Application.put_env(:myapp, :http_client, ClientMock)
# ...
end
This has the disadvantage of punting compile-time errors into runtime (which might be a worthwhile tradeoff to achieve test coverage), and this approach can become disorganized, so use with care.
Generally, I've found Mox's approach to rely on behaviours/callbacks to lead to cleaner tests and cleaner code, but your mileage and use-cases may vary.
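For contrast, a minimal sketch of the behaviour/callback shape that Mox builds on (module names are illustrative, not from the question):
# the behaviour that both the real client and the mock implement
defmodule MyApp.HttpClient do
  @callback get(String.t()) :: {:ok, term()} | {:error, term()}
end

defmodule MyApp.HTTPoisonClient do
  @behaviour MyApp.HttpClient

  @impl true
  def get(url), do: HTTPoison.get(url)
end

# in test_helper.exs:
#   Mox.defmock(MyApp.HttpClientMock, for: MyApp.HttpClient)
# and in config/test.exs, point the app at the mock:
#   config :myapp, http_client: MyApp.HttpClientMock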

Correct testing pattern in Project Reactor block() vs StepVerifier

Recently I noticed that my team follows two approaches to writing tests in Reactor. The first one uses the .block() method and looks something like this:
@Test
void set_entity_version() {
    Entity entity = entityRepo.findById(ID)
        .block();
    assertNotNull(entity);
    assertFalse(entity.isV2());

    entityService.setV2(ID)
        .block();

    entity = entityRepo.findById(ID)
        .block();
    assertNotNull(entity);
    assertTrue(entity.isV2());
}
The second one uses StepVerifier and looks something like this:
@Test
void set_entity_version() {
    StepVerifier.create(entityRepo.findById(ID))
        .assertNext(entity -> {
            assertNotNull(entity);
            assertFalse(entity.isV2());
        })
        .verifyComplete();

    StepVerifier.create(entityService.setV2(ID)
            .then(entityRepo.findById(ID)))
        .assertNext(entity -> {
            assertNotNull(entity);
            assertTrue(entity.isV2());
        })
        .verifyComplete();
}
In my humble opinion, the second approach looks more reactive. Moreover, the official docs are very clear on that:
A StepVerifier provides a declarative way of creating a verifiable script for an async Publisher sequence, by expressing expectations about the events that will happen upon subscription.
Still, I'm really curious which way should be encouraged as the main road for testing in Reactor. Should the .block() method be abandoned completely, or could it be useful in some cases? If so, what are those cases?
Thanks!
You should use StepVerifier. It allows more options:
Verify that you expect n elements in a flux
Verify that the flux/mono completes
Verify that an error is expected
Verify that a sequence of n elements followed by an error is expected (impossible to test with .block())
From the official doc:
public <T> Flux<T> appendBoomError(Flux<T> source) {
    return source.concatWith(Mono.error(new IllegalArgumentException("boom")));
}

@Test
public void testAppendBoomError() {
    Flux<String> source = Flux.just("thing1", "thing2");

    StepVerifier.create(appendBoomError(source))
        .expectNext("thing1")
        .expectNext("thing2")
        .expectErrorMessage("boom")
        .verify();
}
Create an initial context
Use virtual time to manipulate time, so when you have something like Mono.delay(Duration.ofDays(1)) you don't have to wait 1 day for your test to complete (see the sketch below)
Expect that no events are emitted for a given duration...
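For instance, a minimal sketch of the virtual-time option (the supplier is required so the delay is created inside the virtual scheduler):
@Test
void delays_one_day_without_waiting() {
    StepVerifier.withVirtualTime(() -> Mono.delay(Duration.ofDays(1)))
        .expectSubscription()
        .thenAwait(Duration.ofDays(1)) // advances virtual time instantly
        .expectNextCount(1)
        .verifyComplete();
}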
from https://medium.com/swlh/stepverifier-vs-block-in-reactor-ca754b12846b
There are pros and cons of both the block() and StepVerifier testing patterns. Hence, it is necessary to define a pattern or set of rules which can guide us on how to use StepVerifier and block(). In order to decide which pattern to use, we can try to answer the following questions, which will provide a clear expectation from the tests we are going to write:
Are we trying to test the reactive aspect of the code or just the output of the code?
In which of the patterns do we find clarity based on the 3 A's of testing (Arrange, Act, Assert), in order to make the test understandable?
What are the limitations of the block() API over StepVerifier in testing reactive code? Which API is more fluent for writing tests in case of exceptions?
If you try answering all these questions, you will find the answers to "what" and "where". So, just give it a thought before reading the following answers:
block() tests the output of the code and not the reactive aspect. Where we are concerned with testing the output of the code rather than its reactive aspect, we can use block() instead of StepVerifier, as it is easy to write and the tests are more readable.
The assertion library for the block() pattern is better organised in terms of the 3 A's pattern (Arrange, Act, Assert) than StepVerifier. With StepVerifier, while testing a method call for a mock class, or even while testing a Mono output, one has to write expectations as chained methods, unlike assert, which in my opinion decreases the readability of the tests. Also, if you forget to write the terminal step, i.e. verify(), the code will not get executed and the test will go green, so the developer has to be very careful about calling verify at the end of the chain.
There are some aspects of reactive code that cannot be tested using the block() API. When testing a Flux of data, subscription delays, or subscriptions on different Schedulers, etc., the developer is bound to use StepVerifier.
To verify an exception using the block() API you need the assertThatThrownBy API from an assertions library, which catches the exception. With that assertion API, the error message and the instance of the exception can be asserted. StepVerifier also provides assertions on exceptions via the expectError() API, and supports asserting the elements emitted before the error is thrown in a Flux, which cannot be achieved with block(). So, for asserting exceptions, StepVerifier is better than block(), as it can assert both Mono and Flux.
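To make that last point concrete, a hypothetical sketch contrasting the two styles (it assumes setV2 signals IllegalArgumentException for an unknown id; UNKNOWN_ID is illustrative):
@Test
void set_entity_version_unknown_id() {
    // block() style: the error surfaces as a thrown exception
    assertThatThrownBy(() -> entityService.setV2(UNKNOWN_ID).block())
        .isInstanceOf(IllegalArgumentException.class);

    // StepVerifier style: the error is asserted as a terminal signal
    StepVerifier.create(entityService.setV2(UNKNOWN_ID))
        .expectError(IllegalArgumentException.class)
        .verify();
}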

Mocking/stubbing whether or not a debug log is enabled

How do I write a mock test that allows me to validate that an inaccessible property (debugLog) is set to true? Do I try to find a way to find the value of the property? Do I verify that console.debug is set? Does a spy make sense in this situation or should I use a stub?
Class X
let showDebugLogs = false,
    debugLog = _.noop;

/**
 * Configures Class X instances to output or not output debug logs.
 * @param {Boolean} state The state.
 */
exports.showDebugLogs = function (state) {
    showDebugLogs = state;
    debugLog = showDebugLogs ? console.debug || console.log : _.noop;
};
Unit Test
describe('showDebugLogs(state)', function () {
    let spy;

    it('should configure RealtimeEvents instances to output or not output debug logs', function () {
        spy = sinon.spy(X, 'debugLog');
        X.showDebugLogs(true);

        assert.strictEqual(spy.calledOnce, true, 'Debug logging was not enabled as expected.');
        spy.restore();
    });
});
Mock testing is used for "isolating" a class under test from its environment, to decrease its side effects and to increase its testability. For example, if you are testing a class which makes AJAX calls to a web server, you probably do not want to:
1) wait for AJAX calls to complete (waste of time)
2) watch your tests fall apart because of possible networking problems
3) cause data modifications on the server side
and so on.
So what you do is "mock" the part of your code which makes the AJAX call, and depending on your test you either:
1) return success and the response accompanying a successful request, or
2) return an error and report the nature of the failure to see how your code is handling it.
For your case, what you need is just a simple unit test. You could use introspection techniques to assert the internal state of your object, if that is what you really want, but this comes with a warning: it is discouraged (please see the notes at the bottom).
Unit testing should test the behavior or public state of an object, so you should really NOT care about the internals of a class.
Therefore, I suggest you reconsider what you are trying to test and find a better way of testing it.
Suggestion: instead of checking a flag in your class, you can mock up the logger for your test and write at least two test cases, as sketched below:
1) When showDebugLogs = true, make sure the log statement of your mock logger is fired
2) When showDebugLogs = false, the log statement of your mock logger is not called
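A hedged sketch of that suggestion (note that showDebugLogs captures console.debug at call time, so the stub must be installed first; the "exercise" steps depend on whatever internally calls debugLog):
describe('showDebugLogs(state)', function () {
    let debugStub;

    beforeEach(function () {
        // stub BEFORE showDebugLogs() runs, since debugLog captures console.debug then
        debugStub = sinon.stub(console, 'debug');
    });

    afterEach(function () {
        debugStub.restore();
    });

    it('fires the logger when enabled', function () {
        X.showDebugLogs(true);
        // ...exercise a code path that calls debugLog() internally, then:
        // assert.strictEqual(debugStub.called, true);
    });

    it('does not fire the logger when disabled', function () {
        X.showDebugLogs(false);
        // ...exercise the same code path, then:
        // assert.strictEqual(debugStub.called, false);
    });
});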
Notes: There has been a long debate between two schools of people: a group advocating that private members/methods are implementation details and should NOT be tested directly, and another group which opposes this idea:
Excerpt from a wikipedia article:
There is some debate among practitioners of TDD, documented in their blogs and other writings, as to whether it is wise to test private methods and data anyway. Some argue that private members are a mere implementation detail that may change, and should be allowed to do so without breaking numbers of tests. Thus it should be sufficient to test any class through its public interface or through its subclass interface, which some languages call the "protected" interface.[29] Others say that crucial aspects of functionality may be implemented in private methods and testing them directly offers the advantage of smaller and more direct unit tests.

How to unit test a method whose side effect is to call another method?

Here is my example:
void doneWithCurrentState(State state) {
    switch (state) {
        case State.Normal:
            // this method is never actually called with State.Normal
            break;
        case State.Editing:
            controller.updateViewState(State.Normal);
            database.updateObjectWithDetails(controller.getObjectDetailsFromViews());
            break;
        case State.Focus:
            controller.updateViewState(State.Editing);
            break;
    }
}
My controller calls the doneWithCurrentState when a specific button is pressed. The states are different positions on screen that the views for that controller can assume.
If the current state is Normal, the button will be hidden.
If the button is pressed with the current state as Editing, the doneWithCurrentState method (I say method because it is actually inside a class) will be called, and it should change the controller's view state to Normal and update the Object in the database using the ObjectDetails (which is just a struct with data that will be used to update the Object) retrieved from the controller's views (i.e., text fields, checkboxes, etc.).
If the button is pressed with the current state as Focus, it should just send back to the Editing state.
I am unit testing it like this:
void testDoneWithCurrentStateEditing() {
    mockController.objectDetails = ...;
    myClass.doneWithCurrentState(State.Editing);

    AssertEqual(mockController.viewState, State.Normal, "controller state should change to Normal");
    AssertTrue(mockDatabase.updateObjectWithDetailsWasCalled, "updateObjectWithDetails should be called");
    AssertEqual(mockDatabase.updatedWithObjectDetail, mockController.objectDetails, "database should be updated with corresponding objectDetails");
}

void testDoneWithCurrentStateFocus() {
    myClass.doneWithCurrentState(State.Focus);

    AssertEqual(mockController.viewState, State.Editing, "controller state should change to Editing");
    AssertFalse(mockDatabase.updateObjectWithDetailsWasCalled, "updateObjectWithDetails should not be called");
}
But it seems wrong, it seems like I'm asserting a method call is made and then I'm making the call... it's just like asserting setter and getter methods.
What would be the right way of testing that doneWithCurrentState method?
As part of the answer, I do accept something like "first you should refactor the method to better separate these concerns...".
Thank you.
If you wrote this not test-first, an obvious way to write it would be to write one case, then copy-paste it into the next case. An easy mistake to make there would be forgetting to update the parameter to updateViewState(), so you might (for instance) find yourself going from State.Focus to State.Normal. The test you've written, although it may seem weak to you, protects against mistakes of that nature. So I think it's doing what it should.
First of all, please consider using a state machine for your state transitions; you will get out of the switch-statement branching business, which will greatly simplify your tests.
Next, treat your tests as a potential source of code and design smells. If it is hard to write a test for a piece of code, the code is probably lacking in quality (breaking SRP, too coupled, etc.) and can be simplified/improved.
void doneWithCurrentState(State state) {
    State nextState = this.stateMachine.GetNextState(state);
    controller.updateViewState(nextState);

    // the database write belongs to the Editing -> Normal transition
    if (state == State.Editing)
        database.updateObjectWithDetails(controller.getObjectDetailsFromViews());
}
Then you can notice that you can pull the call to the state machine out of the method and pass in the nextState.
// whoever calls this method should get nextState from the state machine
void doneWithCurrentState(State nextState) {
    controller.updateViewState(nextState);

    // Editing -> Normal is the only transition that reaches Normal here,
    // so this fires exactly when the old Editing case did
    if (nextState == State.Normal)
        database.updateObjectWithDetails(controller.getObjectDetailsFromViews());
}
And so forth... You will write simple tests for the state transitions in your state machine tests, your overall code complexity goes down, and all is goodness!? Well, there is hardly a limit to the level of goodness you can achieve, and I can see multiple ways the code can be cleaned up even further.
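For completeness, a hypothetical minimal state machine behind GetNextState above (the transition table is inferred from the question's switch):
import java.util.Map;

class StateMachine {
    // Editing -> Normal, Focus -> Editing, per the original switch
    private static final Map<State, State> TRANSITIONS = Map.of(
        State.Editing, State.Normal,
        State.Focus, State.Editing
    );

    State GetNextState(State current) {
        return TRANSITIONS.get(current);
    }
}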
As per your original question - how to test that your class calls 'database' or 'controller' with the proper parameters when a specific state is passed in - you are doing it "right"; that is what mocks are meant to do. However, there are better ways. Consider an event-based design: what if your controller could fire events like "NextState" and your 'database' object could just subscribe to them? Then all your test needs to verify is that the proper event is fired, without involving the database at all (eliminating dependencies :))
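A hypothetical sketch of that event-based idea (names are illustrative):
import java.util.ArrayList;
import java.util.List;

interface StateListener {
    void onStateChanged(State nextState);
}

class Controller {
    private final List<StateListener> listeners = new ArrayList<>();

    void addStateListener(StateListener listener) {
        listeners.add(listener);
    }

    // fired instead of calling collaborators like the database directly
    void fireNextState(State nextState) {
        for (StateListener listener : listeners) {
            listener.onStateChanged(nextState);
        }
    }
}

// the test then only registers a listener and asserts the event fired,
// with no database mock involved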
I think Paul is spot on: put the state changes based on the incoming state into a state machine, i.e. an object whose responsibility is to determine what comes next. This may sound dumb, because you kind of move the same code to another object, but at least it puts the controller on a diet. It shouldn't worry about too many details itself if it is to stay maintainable.
I worry about updateViewState, though. Why does it take the same kind of parameter as the controller's callback for user interaction? Can you model this differently? It's hard to say anything specific without seeing the flow of information (a detailed sequence diagram with comments might help), because the real insight into problems like these usually lies several levels deeper in the call stack. Without knowledge of what all of this means, it's hard to come up with a canned solution that fits.
Questions that might help:
if State represents 3 (?) user interactions which all go through the same tunnel, can you model the actions to take as Strategy or Command?
if doneWithCurrentState represents finishing one of many interaction modes, do you really need to use a shared doneWithCurrentState method? Couldn't you use three different callbacks? Maybe this is the wrong kind of abstraction. ("Don't Repeat Yourself" isn't about code but about things that change (in)dependently)

Mocked object set to true is being treated like it is false

I have a unit test (using typemock 5.4.5.0) that is testing a creation service. The creation service is passed in a validation service in its constructor. The validation service returns an object that has a boolean property (IsValid). In my unit test I am mocking the validation service call to return an instance that has IsValid set to true. The creation service has an if statement that checks the value of that property. When I run the unit test, the object returned from the validation service has its property set to true, but when the if statement is executed, it treats it as though it was false.
I can verify this by debugging the unit test. The object returned by the validation service does indeed have its IsValid property set to true, but it skips the body of my if statement entirely and goes to the End If.
Here is a link to the unit test itself - https://gist.github.com/1076372
Here is a link to the creation service function I am testing - https://gist.github.com/1076376
Does anyone know why the hell the IsValid property is true but is treated like it is false?
P.S. I have also entered this issue in TypeMock's support system, but I think I will probably get a quicker response here!
First, if possible, I'd recommend upgrading to the latest version of Typemock Isolator that you're licensed for. Each version that comes out, even minor releases, contains fixes for interesting edge cases that sometimes make things work differently. I've found upgrading sometimes fixes things.
Next, I see this line in your unit test:
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool(new QuestionViewModel())).WillReturn(new Validation(true));
The red flag for me is the "new QuestionViewModel()" that's inside the "WhenCalled()" block.
Two good rules of thumb I always follow:
Don't put anything in the WhenCalled() that you don't want mocked.
If you don't care about the arguments, don't pass real arguments.
In this case, the first rule makes me think "I don't want the constructor for the QuestionViewModel mocked, so I shouldn't put it in there."
The second rule makes me consider whether the argument to the "ValidateNewQuestionForExistingQuestionPool" method really isn't important. In this case, it's not, so I'd pass null rather than a real object. If there's an overload you're specifically looking at, cast the null first.
Finally, sort of based on that first rule, I generally try not to inline my return values, either. That means I'd create the new Validation object before the Isolate call.
var validation = new Validation(true);
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool(null)).WillReturn(validation);
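If you also needed the overload cast mentioned above, the combined call might look like this (the parameter type is assumed from the test's use of QuestionViewModel):
var validation = new Validation(true);
// casting null pins down which overload the expectation targets
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool((QuestionViewModel)null))
    .WillReturn(validation);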
Try that, see how it runs. You might also watch in the Typemock Tracer utility to see what's getting set up expectation-wise when you run your test to ensure additional expectations aren't being set up that you're not... expecting.