validate_presence_of shoulda spec and default values - ruby-on-rails-4

I have some code in a Rails model that does this:
has_one :plan
validates_presence_of :plan
And used to do this:
after_initialize :set_plan
def set_plan
  self.plan ||= FreePlan.new
end
and now does:
def plan
  super || (self.plan = FreePlan.new)
end
Now, however, this test fails:
it { is_expected.to validate_presence_of(:plan) }
The new code is nicer in that it doesn't have to look up a plan object in the DB on every instantiation of this object, but I'm curious what the test is doing in terms of the object under test and its lifecycle.

Here's what is happening:
With the older code, after_initialize is called and the value is populated automatically. But it is still possible to subsequently set plan to nil and then try to save.
With the new code, valid? is going to call the plan method and therefore populate it, so when the test suite inspects errors, it won't find any for plan.
Perhaps the best course of action is to remove the line from the test suite and add one ensuring that plan is never nil.
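The suggested replacement spec can be sketched Rails-free like this (FreePlan is from the question; Account is a stand-in, since the question's model class isn't named):

```ruby
# Stand-ins for the question's models; plain Ruby, no ActiveRecord.
class FreePlan; end

class Account
  attr_writer :plan

  # Lazy default, mirroring the `super || (self.plan = FreePlan.new)`
  # reader override: any read of #plan repopulates a nil value.
  def plan
    @plan ||= FreePlan.new
  end
end

account = Account.new
account.plan        # lazily built FreePlan
account.plan = nil
account.plan        # repopulated: #plan can never return nil
```

A spec asserting that `plan` is never nil (even after being explicitly cleared) captures the behavior the presence validation was standing in for.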

Related

How to unit test a method not invoking another object method in RSpec

I have a Rails controller with a method test_method that I would like to test.
class ActiveController < ActionController::Base
  def test_method
    user = acc_users.all_users.find params[:id]
    if !user.active?
      user.call_method!
    end
  end
end
I have to test that call_method! isn't being called. This is what I have come up with, but I don't think it will work.
it "should not call call_method" do
  u = user(@acc)
  put :test_method, :id => u.id
  expect(u).not_to have_received(:call_method!)
end
I followed this question here and found it almost identical, except that the method being called is on another class. When I try the above code, I get an error message like "expected Object to respond to has_received?".
I believe I will not be able to test this with the given setup, as the user is not being injected into test_method.
call_method! is a call to a method that enqueues a job, so I want to be sure it doesn't get invoked.
How would I go about testing this method?
You could use the expect_any_instance_of method on the User model with a count, say n, to test that the model receives a particular method n times.
Also, you would have to set this expectation BEFORE actually calling your action because the expectation is based on something that happens inside the action itself, and not on something that the action returns.
The following line should work, assuming your user variable is an instance of the class User:
u = user(@acc)
expect_any_instance_of(User).to receive(:call_method!).once
put :test_method, :id => u.id
Alternately, you could change the spec to behave like a black box rather than test invocations of particular methods. For example, you could mock call_method! to always return a value such as <some_object> and then continue your test based on that, if that is possible. That could be achieved using expect_any_instance_of(User).to receive(:call_method!).and_return(<some_object>). Your test could later be assumed to behave according to the value of <some_object> that you have set. This alternate solution is just a suggestion, and it may not work for your specific needs.
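Since the user is looked up inside the action rather than injected, the have_received spy pattern can't see it. A Rails-free sketch of the injection alternative (UserSpy and the free-standing test_method are illustrative, not from the original code):

```ruby
# A hand-rolled spy that records calls instead of enqueuing a job.
class UserSpy
  attr_reader :calls

  def initialize(active)
    @active = active
    @calls  = []
  end

  def active?
    @active
  end

  def call_method!
    @calls << :call_method!
  end
end

# The controller logic with the user passed in, so a test can inject
# the spy and assert on what was (not) called.
def test_method(user)
  user.call_method! if !user.active?
end

active_spy = UserSpy.new(true)
test_method(active_spy)
active_spy.calls.empty?    # true: active users are left alone

inactive_spy = UserSpy.new(false)
test_method(inactive_spy)
inactive_spy.calls         # [:call_method!]
```

In a real controller the equivalent of this injection is stubbing the lookup (e.g. the `find` call) to return your double, after which `have_received` works as expected.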

Spock Testing when method under test contains closure

I'm using grails plugin multi-tenant-single-db. Within that context I need to write a spock test in which we temporarily remove the tenant restrictions. Location is my Tenant, so my method looks like this:
def loadObjectDetails(){
  Location.withoutTenantRestriction{
    // code here to retrieve specific items for the object to be loaded
    render(template: "_loadDetails", model: [ ... ])
  }
}
The method runs as expected, but when I try to put the method under test coverage, the error output suggests:
groovy.lang.MissingMethodException: No signature of method: com.myPackage.myController.Location.withoutTenantRestriction() is applicable for argument types:
and a stacktrace that stems on from there.
Do I need to Stub this? The withoutTenantRestriction is a wrapper around my entire method logic.
UPDATE:
The test code looks like this:
given:
params.id = 3002
currentUser = Mock(User)
criteriaSetup()
controller.getSalesOrder >> salesOrders[2]

when:
controller.loadOrderManageDetails()

then:
(1.._) * controller.springSecurityService.getCurrentUser() >> currentUser

expect:
view == 'orderMange/orderManageDetail'
model.orderInstance == salesOrders[2]
Yes! You should be stubbing it, as it is created at run time, not compile time.
You could stub it like below:
Your_Domain.metaClass.static.withoutTenantRestriction = { Closure closure ->
  closure.call()
}
This way your regular code will work in test cases. Note that withoutTenantRestriction normally starts a new Hibernate session; since you have stubbed the closure, that doesn't matter much, and you could perform any desired action in the stub instead of only calling closure.call().
The same approach can be applied to withThisTenant.
In integration tests you don't need to stub it, as the whole environment is loaded.
Hope it helps!!

Moving transactional operations away from the controller

def cancel
  begin
    to_bank = @transfer.main_to_bank
    to_bank.with_lock do
      to_bank.locked_balance -= @transfer.amount
      to_bank.available_balance += @transfer.amount
      to_bank.save!
      @transfer.cancel
      @transfer.save!
    end
  rescue ActiveRecord::ActiveRecordError => e
    redirect_to admin_transfer_url(@transfer), alert: "Error while cancelling."
    return
  end
  redirect_to admin_transfer_url(@transfer), notice: 'Transfer was successfully cancelled.'
end
I would want to refactor the above code to the Transfer model or some other place, because this same code is used elsewhere. However, ActiveRecord does some transaction magic within the model, so I'm worried I might introduce some unexpected side effects by simply moving the code under the model.
Are my worries unfounded and how would the above code typically be refactored to be outside of the controller for reusability?
Update: This seems like the perfect spot for a service object, as described here http://blog.codeclimate.com/blog/2012/10/17/7-ways-to-decompose-fat-activerecord-models/.
1) As you mention in your update, this is a perfect fit for service objects. Put them in a directory like app/services since anything in /app is autoloaded. There are two popular ways of implementing them:
As a static class:
AccountService.transfer(from_account, to_account, amount)
As an object:
service = AccountService.new(from_account, to_account)
service.transfer(amount)
I prefer option one coming from a Java enterprise development background where you would use Java beans similarly.
I would also recommend returning result objects from all services as a rule. This means you create a small class, say ServiceResult, which contains a boolean flag for whether the call was successful, a user-friendly message, and optionally a result object (which would be the method's return value if you weren't using service result objects). In other words, checking the result from the controller or anywhere else becomes:
response = AccountService.transfer(from, to, amount)
if response.success?
  flash[:notice] = response.message
else
  flash[:alert] = response.message
end
You can always refactor this into a method:
flash_service_result response
After adding some helper methods to your services, a service method could look like this:
def self.transfer(from_account, to_account, amount)
  ActiveRecord::Base.transaction do
    ..do stuff..
    from_account.save!
    to_account.save!
    service_success("Transfer successful...")
  end
rescue SomeException => error
  service_error("Failed to transfer...")
rescue ActiveRecord::RecordInvalid => invalid_record_error
  service_validation_error(invalid_record_error)
end
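A minimal ServiceResult and helpers consistent with the description above (the class shape and helper signatures are assumptions, not from the original answer):

```ruby
# Value object wrapping a service call's outcome: a success flag,
# a user-friendly message, and an optional result value.
class ServiceResult
  attr_reader :message, :value

  def initialize(success, message, value = nil)
    @success = success
    @message = message
    @value   = value
  end

  def success?
    @success
  end
end

# Helper methods of the kind a service base class might provide.
def service_success(message, value = nil)
  ServiceResult.new(true, message, value)
end

def service_error(message)
  ServiceResult.new(false, message)
end

result = service_success("Transfer successful", 42)
result.success?   # => true
result.value      # => 42
```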
When using result objects, never raise an exception from the service that you expect the caller to handle (the equivalent of checked exceptions in other languages).
2) Using the active record transactional methods will behave the same regardless of where it's called from. It will not add any side effects. So yes you can call it from a controller or service.
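A Rails-free sketch of the controller's cancel block moved onto the model (Bank here is a plain stand-in whose with_lock simply yields; in ActiveRecord, with_lock opens a transaction and takes a row lock, and that behavior is unchanged by the move):

```ruby
class Bank
  attr_accessor :locked_balance, :available_balance

  def initialize(locked, available)
    @locked_balance    = locked
    @available_balance = available
  end

  # Stand-in for ActiveRecord's with_lock (transaction + row lock).
  def with_lock
    yield
  end

  def save!
    true
  end
end

class Transfer
  attr_reader :amount, :main_to_bank, :state

  def initialize(amount, bank)
    @amount       = amount
    @main_to_bank = bank
    @state        = :pending
  end

  def cancel
    @state = :cancelled
  end

  def save!
    true
  end

  # The controller's cancel block, relocated onto the model unchanged.
  def cancel_and_refund!
    to_bank = main_to_bank
    to_bank.with_lock do
      to_bank.locked_balance    -= amount
      to_bank.available_balance += amount
      to_bank.save!
      cancel
      save!
    end
  end
end
```

The controller action then shrinks to calling the model method inside its existing begin/rescue and redirecting on the outcome.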

Rails / Rspec - writing spec for delegate method (allow_nil option)

Given the code below:
(1) How would you go about writing a spec to test the :allow_nil => false option?
(2) Is it even worth writing a spec to test?
class Event < ActiveRecord::Base
  belongs_to :league
  delegate :name, :to => :league, :prefix => true, :allow_nil => false
end

describe Event do
  context 'when delegating methods to league object' do
    it { should respond_to(:league_name) }
  end
end
It would actually be nice if you could extend shoulda to do:
it { should delegate(:name).to(:league).with_options(:prefix => true, :allow_nil => false) }
According to the documentation for the delegate rails module:
If the delegate object is nil an exception is raised, and that happens no matter whether nil responds to the delegated method. You can get a nil instead with the :allow_nil option.
I would create an Event object event with a nil league, or set event.league = nil, then try to call event.league_name (the delegated method, given the :prefix => true option), and check that it raises an exception, since that is what is supposed to happen when allow_nil is false (which is also the default). I know rspec has this idiom for exception testing:
lambda{dangerous_operation}.should raise_exception(optional_exception_class)
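A Rails-free sketch of the semantics under test (hand-rolled delegation; ActiveSupport's delegate raises Module::DelegationError in this situation, so a real spec would assert on that class):

```ruby
class League
  attr_reader :name

  def initialize(name)
    @name = name
  end
end

class Event
  attr_accessor :league

  # Hand-rolled equivalent of
  #   delegate :name, :to => :league, :prefix => true, :allow_nil => false
  def league_name
    raise "league_name delegated to league.name, but league is nil" if league.nil?
    league.name
  end
end

event = Event.new
event.league = League.new("Premier")
event.league_name   # => "Premier"

event.league = nil
# event.league_name now raises, which is the behavior the spec pins down
```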
I'm not sure if shoulda has this construct, though there are some articles, kinda old, about how to get this behavior in shoulda.
I think this is worth testing if it is behavior that users of this class can expect or rely on, which I think is probably true in this case. I wouldn't extend shoulda to test "should delegate", because that seems more implementation-dependent: you're really saying your Event should raise an exception if you try to call #league_name when it has a nil league. It's really unimportant to users of Event how you make that happen. If you wish to assert and document this behavior, I would even go so far as to test that Event#league_name has the same semantics as League#name, without mentioning delegation at all, since this is a behavior-centric approach.
Build your tests based on how your code behaves, not on how it's built - testing this way is better documentation for those who stumble into your tests, as they're more interested in the question "why is my Event throwing?" or "what can cause Event to throw?" than "is this Event delegating?".
You can highlight this sort of situation by imagining what failures might happen if you change your code in a way that users of Event shouldn't care about. If they don't care about it, the test shouldn't break when you change it. What if you want to, for example, handle the delegation yourself, by writing a #league_name method that first logs or increments a counter and then delegates to league.name? By testing the exception-raising behavior, you are protected from this change; by testing whether Event is a delegate, you will break that test when you make this change, and so your test wasn't looking at what is really important about the call to #league_name.
Anyway, that's all just soapbox talk. tl;dr: test it if it's behavior someone might rely upon. Anything that's not tested is Schrödinger's cat: broken and not broken at the same time. Truthfully, much of the time this can be OK: it's a matter of taste whether you want to say something rigorous and definitive about how the system should behave, or just leave it as "unspecified behavior".
So a couple of things here on whether or not to test this:
1) I don't think there is anything wrong with spec'ing out this behavior. If you have users who need to learn your software/library, it's often very helpful to ensure that all the methods that are part of your public-facing contract are spec'd. If you don't want to make this part of the model's API, I might recommend doing the delegation manually so as not to expose more methods to the outside world than you need to.
2) Specs of this sort help ensure that the contract you have with the object you're delegating to remains enforced. This is particularly helpful if you are using stubs and mocks in your tests, since they often implement the same contract, so at least you are aware when that contract changes.
In terms of how you test the allow_nil portion, I agree with Matt: the best idea is to ensure that league is nil, then attempt to call the delegated method and check that an exception is raised.
Hope this helps.
Shoulda matchers now check allow nil option.
class Account
  delegate :name, to: :league, allow_nil: true
end

# RSpec
describe Account do
  it { should delegate_method(:name).to(:league).allow_nil }
end
I would have tested the delegation, as effectively what we have created is a contract between two classes:
describe '#league_name' do
  let(:event) { create(:event) }

  after { event.league_name }

  it "delegates to league's name with prefix" do
    expect(event.league).to receive(:name)
  end
end

Mocked object set to true is being treated like it is false

I have a unit test (using typemock 5.4.5.0) that is testing a creation service. The creation service is passed in a validation service in its constructor. The validation service returns an object that has a boolean property (IsValid). In my unit test I am mocking the validation service call to return an instance that has IsValid set to true. The creation service has an if statement that checks the value of that property. When I run the unit test, the object returned from the validation service has its property set to true, but when the if statement is executed, it treats it as though it was false.
I can verify this by debugging the unit test. The object returned by the validation service does indeed have its IsValid property set to true, but it skips the body of my if statement entirely and goes to the End If.
Here is a link to the unit test itself - https://gist.github.com/1076372
Here is a link to the creation service function I am testing - https://gist.github.com/1076376
Does anyone know why the hell the IsValid property is true but is treated like it is false?
P.S. I have also entered this issue in TypeMock's support system, but I think I will probably get a quicker response here!
First, if possible, I'd recommend upgrading to the latest version of Typemock Isolator that you're licensed for. Each version that comes out, even minor releases, contains fixes for interesting edge cases that sometimes make things work differently. I've found upgrading sometimes fixes things.
Next, I see this line in your unit test:
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool(new QuestionViewModel())).WillReturn(new Validation(true));
The red flag for me is the "new QuestionViewModel()" that's inside the "WhenCalled()" block.
Two good rules of thumb I always follow:
Don't put anything in the WhenCalled() that you don't want mocked.
If you don't care about the arguments, don't pass real arguments.
In this case, the first rule makes me think "I don't want the constructor for the QuestionViewModel mocked, so I shouldn't put it in there."
The second rule makes me consider whether the argument to the "ValidateNewQuestionForExistingQuestionPool" method really is important. In this case it's not, so I'd pass null rather than a real object. If there's an overload you're specifically targeting, cast the null first.
Finally, sort of based on that first rule, I generally try not to inline my return values, either. That means I'd create the new Validation object before the Isolate call.
var validation = new Validation(true);
Isolate.WhenCalled(() => validator.ValidateNewQuestionForExistingQuestionPool(null)).WillReturn(validation);
Try that, see how it runs. You might also watch in the Typemock Tracer utility to see what's getting set up expectation-wise when you run your test to ensure additional expectations aren't being set up that you're not... expecting.