def cancel
  begin
    to_bank = @transfer.main_to_bank
    to_bank.with_lock do
      to_bank.locked_balance -= @transfer.amount
      to_bank.available_balance += @transfer.amount
      to_bank.save!
      @transfer.cancel
      @transfer.save!
    end
  rescue ActiveRecord::ActiveRecordError => e
    redirect_to admin_transfer_url(@transfer), alert: "Error while cancelling."
    return
  end
  redirect_to admin_transfer_url(@transfer), notice: 'Transfer was successfully cancelled.'
end
I would like to refactor the above code into the Transfer model or some other place, because the same code is used elsewhere. However, ActiveRecord does some transaction magic within the model, so I'm worried I might introduce unexpected side effects by simply moving the code into the model.
Are my worries unfounded and how would the above code typically be refactored to be outside of the controller for reusability?
Update: This seems like the perfect spot for a service object, as described here http://blog.codeclimate.com/blog/2012/10/17/7-ways-to-decompose-fat-activerecord-models/.
1) As you mention in your update, this is a perfect fit for service objects. Put them in a directory like app/services since anything in /app is autoloaded. There are two popular ways of implementing them:
As a static class:
AccountService.transfer(from_account, to_account, amount)
As an object:
service = AccountService.new(from_account, to_account)
service.transfer(amount)
I prefer option one, coming from a Java enterprise development background where you would use Java beans similarly.
I would also recommend returning result objects from all services as a rule. This means you create a small class called "ServiceResult" which contains a boolean flag for whether or not the call was successful, a user-friendly message, and optionally a result object (which is what the method would return if you didn't have service result objects). In other words, checking the result from the controller or any other place would be:
response = AccountService.transfer(from, to, amount)
if response.success?
flash[:notice] = response.message
else
flash[:alert] = response.message
end
You can always refactor this into a method:
flash_service_result response
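A minimal sketch of that helper, assuming the success?/message interface shown above:
def flash_service_result(response)
  if response.success?
    flash[:notice] = response.message
  else
    flash[:alert] = response.message
  end
end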
After adding some helper methods to your services, a service method could look like this:
def self.transfer(from_account, to_account, amount)
  ActiveRecord::Base.transaction do
    # ... do stuff ...
    from_account.save!
    to_account.save!
    service_success("Transfer successful...")
  end
rescue SomeException => error
  service_error("Failed to transfer...")
rescue ActiveRecord::RecordInvalid => invalid_record_error
  service_validation_error(invalid_record_error)
end
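The ServiceResult class and the service_success/service_error/service_validation_error helpers aren't defined in the answer; a minimal sketch of what they could look like, assuming the success?/message interface used above:
# Hypothetical ServiceResult matching the success?/message interface above
class ServiceResult
  attr_reader :message, :result

  def initialize(success, message, result = nil)
    @success = success
    @message = message
    @result = result
  end

  def success?
    @success
  end
end

# Hypothetical helpers mixed into (or defined on) each service
def service_success(message, result = nil)
  ServiceResult.new(true, message, result)
end

def service_error(message)
  ServiceResult.new(false, message)
end

def service_validation_error(error)
  # RecordInvalid carries the invalid record; surface its validation messages
  ServiceResult.new(false, error.record.errors.full_messages.to_sentence)
end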
When using result objects, never raise an exception from the service that you expect the caller to handle (the equivalent of checked exceptions in other languages).
2) Using the ActiveRecord transactional methods will behave the same regardless of where they are called from. They will not add any side effects. So yes, you can call them from a controller or a service.
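Applied to the cancel action from the question, the extracted service could look roughly like this (a sketch reusing the question's model methods and the hypothetical result helpers above):
class TransferService
  def self.cancel(transfer)
    to_bank = transfer.main_to_bank
    to_bank.with_lock do
      to_bank.locked_balance -= transfer.amount
      to_bank.available_balance += transfer.amount
      to_bank.save!
      transfer.cancel
      transfer.save!
    end
    service_success("Transfer was successfully cancelled.")
  rescue ActiveRecord::ActiveRecordError
    service_error("Error while cancelling.")
  end
end
The controller action then shrinks to calling TransferService.cancel(@transfer) and flashing the returned result.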
I created a simple test case in Symfony.
The test creates a client and adds a listener for an event that will be dispatched during a request.
But nothing happens, because the request has its own scope, or I don't know why I'm not able to access the dispatcher inside it.
$this->client = static::createClient();
self::$container = $this->client->getContainer();
$dispatcher = self::$container->get('event_dispatcher');
$dispatcher->addListener('example', function ($event) {
    // Never executed
});
$this->client->request('POST', $endpoint, $this->getNextRequestParameters($i), [$file], $this->requestHeaders);
$this->client->getResponse();
The listener is never called.
When I debugged it a bit, I found out that the object hash via spl_object_hash($dispatcher) is different at the top level than within the request level.
So it seems that the request has its own world and ignores everything outside of it.
But then the question is: how can I put my listener into this "world"?
I think part of the problem is the mixing of testing styles. You have a WebTestCase, which is intended for a very high level of testing (requests & responses). It should not really care about internals, i.e. which services or listeners are called. It only cares that given input x (your request) you will get output y (your response). This ensures the basic functionality as perceived by your users is always met, without caring how it is done, which makes these tests very flexible.
By looking into the container and the services, you are going into a lower level of testing, which tests interconnected services. This is usually only done within the same process, for the reasons you already found out. The higher-level test has two separate lifecycles, one for the test itself and one for the simulated web request to your application, hence the different object hashes.
The solution is either to emit something to the higher level, e.g. by setting headers or changing the output, so you can inspect the response body. You could also write into some log file and check the logs before/after the request for that message.
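As a sketch of the header approach (the listener class and header name are made up for illustration, and assume the listener can hook kernel.response):
use Symfony\Component\HttpKernel\Event\ResponseEvent;

class ExampleListener
{
    // Hypothetical production listener that marks the response when it has run
    public function onKernelResponse(ResponseEvent $event): void
    {
        $event->getResponse()->headers->set('X-Example-Listener', 'called');
    }
}
The WebTestCase can then assert on $this->client->getResponse()->headers->get('X-Example-Listener') without reaching into the container.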
A different option would be to move the whole test to a lower level, where you do not need requests and instead work only with the services. For this you can use the KernelTestCase (instead of the WebTestCase), and instead of calling createClient() you call bootKernel(). This will give you access to your container, where you can modify the EventDispatcher. Rather than sending a request, you can then either call the code directly, e.g. dispatch an event if you only want to test the listeners, or you can make your controller accessible as a service and then manually create a request, call the action, and check the response or whatever else you want to assert on. This could look roughly like this:
public function testActionFiresEvent()
{
    $kernel = static::bootKernel();
    $eventDispatcher = $kernel->getContainer()->get('event_dispatcher');
    // ...
    $request = Request::create('/my-endpoint', 'POST'); // placeholder URI and method
    // This might not work when the controller is not registered as a service.
    // You can create a service configuration only used by tests,
    // e.g. "config/services_test.yaml" and provide the controller service there.
    $controller = $kernel->getContainer()->get(MyController::class);
    $response = $controller->endpointAction($request);
    // ...Do assertions...
}
I have a Rails controller with a method test_method that I would like to test.
class ActiveController < ActionController::Base
  def test_method
    user = acc_users.all_users.find params[:id]
    if !user.active?
      user.call_method!
    end
  end
end
I have to test that call_method! isn't being called. This is what I have come up with, but I don't think it will work.
it "should not call call_method" do
u = user(#acc)
put :test_method, :id => u.id
expect(u).not_to have_received(:call_method!)
end
I followed this question here and found it almost identical, except that the method being called is in another class. When I try the above code I get an error message like "expected Object to respond to has_received?"
I believe I will not be able to test this with the given setup, as the user is not being injected into test_method.
call_method! is a call to a method that enqueues a job, so I want to be sure it doesn't get invoked.
How would I go about testing this method?
You could use the expect_any_instance_of method on the User model with a count, say "n", to test that the model receives a particular method "n" times.
Also, you would have to set this expectation BEFORE actually calling your action because the expectation is based on something that happens inside the action itself, and not on something that the action returns.
The following line should work, assuming your user variable is an instance of the class User:
u = user(@acc)
expect_any_instance_of(User).to receive(:call_method!).once
put :test_method, :id => u.id
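Since the question actually wants to assert the method is not invoked, the negated form works too (standard rspec-mocks, shown as a sketch):
expect_any_instance_of(User).not_to receive(:call_method!)
put :test_method, :id => u.id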
Alternately, you could change the spec to behave like a black box, rather than test invocations of particular methods. For example, you could stub call_method! to always return a known value and then continue your test based on that, if that is possible. That could be achieved using expect_any_instance_of(User).to receive(:call_method!).and_return(<some_object>). Your test could then assert behavior based on the value you have set. This alternate solution is just a suggestion, and it may not work according to your specific needs.
I have some code in a Rails model that does this:
has_one :plan
validates_presence_of :plan
And used to do this:
after_initialize :set_plan
def set_plan
  self.plan ||= FreePlan.new
end
and now does:
def plan
  super || (self.plan = FreePlan.new)
end
Now, however, this test fails:
it { is_expected.to validate_presence_of(:plan) }
The new code is nicer in that it doesn't always have to look up a plan object in the DB for every instantiation of this object, but I'm curious what the test is doing in terms of the object under test and its lifecycle.
Here's what is happening:
With the older code, after_initialize is called and the value is automatically populated. But it's possible to subsequently set plan to nil and then try to save the record.
With the new code, valid? is going to call the plan method and therefore populate it, so when the test suite inspects the errors, it's not going to find any for plan.
Perhaps the best course of action is to remove the line from the test suite and add one ensuring that plan is never nil.
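Such a test could look like this (a sketch, assuming the reader method shown above):
it "always has a plan" do
  expect(subject.plan).not_to be_nil
end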
I have read Is Rails shared-nothing or can separate requests access the same runtime variables? and it explains my problem:
class variables may be shared between two requests to my Rails server, but where is the solution!?
How can I implement a safe singleton between requests?
class Foo
  @@instances = []
end
How can I be sure instances will be reset for each HTTP request?!
EDIT:
I found the "config.reload_classes_only_on_change = false" solution, but I'm not sure it's the best for performance.
What are the consequences of this option?
Here is an example to test the safety of class-level variables:
class Test
  def self.log
    @test ||= false
    puts @test
    @test = true
  end
end
class ApplicationController < ActionController::Base
  def index
    Test.log
    Test.log
  end
end
If I trigger this code and reload the page (F5), I want to read "false" each time in the Rails server log. But by default it's "false" only the first time.
EDIT 2:
In fact this option reloads classes, but it does not resolve the concurrency problem between threads.
Class variables are reset, but they can be modified by other threads.
How can class variables be made thread-safe?
I use the request_store gem; it works great.
My use case is adding methods to the user model class, like the current user, their locale, location, etc., as my other models often need this information.
I just set up the current user from my application controller:
User.current = the_authenticated_user
User.request = request
And in my user model class:
class User
  def self.current
    RequestStore.store[:current_user]
  end

  def self.current=(user)
    RequestStore.store[:current_user] = user
  end

  def self.request
    RequestStore.store[:current_request]
  end

  def self.request=(request)
    # Stash the request so things like the IP address and GeoIP-based location
    # are available to other models.
    RequestStore.store[:current_request] = request
  end

  def self.location
    # Resolve the location just once per request.
    RequestStore.store[:current_location] ||= self.request.try(:location)
  end
end
I don't enable the class-reloading option as it causes too many problems; I've witnessed multiple versions of classes hanging around. If you use model inheritance (i.e. STI), lazy loading and/or dynamic class loading will often break how model classes are resolved. You need to use require_dependency in your base and intermediate model classes to ensure downstream classes also get loaded.
My development settings mirror my production settings with regard to class handling, which is not convenient (it requires a server restart after a change) but more convenient than chasing non-existent bugs. The rerun gem can monitor file-system changes and restart your server for you, so that you get reliable change handling in development, albeit slower than Rails' broken class reloading.
config/environment/development.rb:
# Rails class reloading is broken, anytime a class references another you get multiple
# class instances for the same named class and that breaks everything. This is especially
# important in Sequel as models resolve classes once.
# So always cache classes (true)
config.cache_classes = true
# Always eager load so that all model classes are known and STI works
config.eager_load = true
Q: How can class variables be made thread-safe?
A: No variables are thread-safe unless protected by synchronization.
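A minimal sketch of that (the class and its contents are made up for illustration), guarding shared class-level state with a Mutex from Ruby's core library:
class Registry
  @instances = []
  @lock = Mutex.new

  def self.add(instance)
    # Only one thread at a time may mutate the shared array
    @lock.synchronize { @instances << instance }
  end

  def self.all
    # Return a copy so callers can't mutate the shared state outside the lock
    @lock.synchronize { @instances.dup }
  end
end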
From an architectural perspective, threading in Rails is a waste of time. The only way I have been able to get true parallel performance/concurrency is with multiple processes. It also avoids the locking and threading-related overheads that just don't exist with long-running processes. I tested parallel CPU-intensive code using threads with Ruby 2.x (MRI) and got no parallelism at all, since MRI's global VM lock prevents Ruby code from running on more than one core at once. With one Ruby process per core I got real parallelism.
I would seriously consider Thin with multiple processes, and then decide if you want to use Thin+EventMachine to increase overall throughput per process.
I have a Grails (3+) service where a domain object is retrieved from the DB, modified, and then updated in the DB.
class MyService {
    def modifyObject(String uuid) {
        def md = MyDomain.findByUuid(uuid)
        md.someField = true
        if (!md.save()) {
            throw new MyException()
        }
    }
}
Now I need to test this service method with a negative test, to be sure that the exception is thrown. But I can't figure out how to force a failing save in the service method.
@TestFor(MyService)
@Mock(MyDomain)
class MyServiceSpec extends Specification {

    void "test An exceptional situation was found"() {
        given: "An uuid"
        def md = new MyDomain(uuid: "123")
        md.save(failOnError: true)

        when: "service is called"
        service.modifyObject("123")

        then: "An exception is thrown"
        thrown MyException
    }
}
Obviously, redefining the service method in a more functional way (the object is passed directly to the method, modified, and returned without save, e.g. MyDomain modifyObject(MyDomain md)) would be a good solution, since I could create an invalid ad hoc object outside, or even invalidate it after the method execution.
But the question is: "is there a way to test the service code as is?"
Assuming that you really do want to throw an exception and not just handle validation errors, then sure. You'll want to utilize Spock's support for interaction-based testing and leverage static method stubs. See the similar question, Unit test grails with domain objects using GORM functions.
You need some way to isolate the service method and stub out the GORM functionality. This can be tricky with static methods, but it can be accomplished with a global GroovyMock or GroovySpy. In essence, you're replacing all instances/references to MyDomain for the duration of the method (though the GroovySpy will fall back on the actual domain class unless an interaction matches).
With the Mock/Spy in place, you can specify the interactions you expect to occur and what those interactions should return. In this case, we expect findByUuid to be invoked once with an argument of "123", and we return a mock MyDomain object. That mock object then has its save() method invoked once, where we return null, i.e. the save failed.
void "test An exceptional situation was found"() {
setup:
GroovySpy(MyDomain, global: true)
def mockDomain = Mock(MyDomain)
when: "service is called"
service.modifyObject("123")
then: "An exception is thrown"
1 * MyDomain.findByUuid("123") >> mockDomain
1 * mockDomain.save() >> null
thrown Exception
}
If you violate a constraint on MyDomain, the save should fail, no?
If modifyObject is really that simple and you can't pass in a bogus value to force a constraint violation, maybe you need to metaClass MyDomain.save and force a failure there.
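A sketch of that metaClass approach (assuming a unit test where globally overriding save is acceptable):
// Force every save() on MyDomain instances to report failure
MyDomain.metaClass.save = { Map args = [:] -> null }

service.modifyObject("123") // takes the !md.save() branch and throws MyException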