I am using the factory_boy package and DjangoModelFactory to generate a factory model with muted signals:
@factory.django.mute_signals(signals.post_save)
class SomeModelTargetFactory(DjangoModelFactory):
    name = factory.Sequence(lambda x: "Name #{}".format(x))
    ...
I have a post_save signal connected to the model:
def send_notification(sender, instance, created, **kwargs):
    if created:
        send_email(...)

post_save.connect(send_notification, SomeModel)
How can I test that the signal works when I create an instance of the model using the factory class?
Some solutions to the direct question, followed by a caution.
A) Instead of turning off the signals, mock the side effects
@mock.patch('send_email')
def test_mocking_signal_side_effects(self, mocked_send_email):
    my_obj = SomeModelTargetFactory()
    # mocked version of send_email was called
    self.assertEqual(mocked_send_email.call_count, 1)

    my_obj.foo = 'bar'
    my_obj.save()
    # didn't call send_email again
    self.assertEqual(mocked_send_email.call_count, 1)
Note: mock was a separate package before joining the standard library as unittest.mock in Python 3.3.
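So the import depends on your Python version; a common compatibility guard, if you need to support both:
try:
    from unittest import mock  # Python 3.3+: part of the standard library
except ImportError:
    import mock  # Python 2: the separate backported package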
B) Use mute_signals as a context manager so you can selectively disable signals in your tests
This leaves the signals on by default, but lets you selectively disable them:
def test_without_signals(self):
    with factory.django.mute_signals(signals.post_save):
        my_obj = SomeModelTargetFactory()
        # ... perform actions w/o signals and assert ...
C) Mute signals on an extended version of the base factory
class SomeModelTargetFactory(DjangoModelFactory):
    name = factory.Sequence(lambda x: "Name #{}".format(x))
    # ...

@factory.django.mute_signals(signals.post_save)
class SomeModelTargetFactoryNoSignals(SomeModelTargetFactory):
    pass
I've never tried this, but it seems like it should work. Additionally, if you just need the objects for a quick unit test where persistence isn't required, FactoryBoy's BUILD strategy may be a viable option.
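For instance (a sketch reusing the factory above): build() returns an unsaved instance, so save() is never called and post_save never fires:
def test_with_unsaved_instance(self):
    my_obj = SomeModelTargetFactory.build()  # built, not created: no save(), no post_save
    self.assertIsNone(my_obj.pk)
    # ... assert on my_obj without any signal side effects ...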
Caution: muting signals, especially post_save, can hide nasty bugs
There are easily findable references about how using signals in your own code can create a false sense of decoupling (post_save, for example, is essentially the same as overriding and extending the save method). I'll let you research that to see if it applies to your use case.
I would definitely think twice about making it the default.
A safer approach is to "mute"/mock the receiver/side effect, not the sender.
The default Django model signals are used frequently by third party packages. Muting those can hide hard to track down bugs due to intra-package interaction.
Defining and calling (and then muting if needed) your own signals is better, but it is often just re-inventing a method call. Sentry is a good example of signals being used well in a large codebase.
Solution A is by far the most explicit and safe. Solutions B and C, without the addition of your own signal, require care and attention.
I won't say there are no use cases for muting post_save entirely, but it should be an exception, and a prompt to double-check the need in the first place.
I have seen how to test an observable using TestSubscriber, but I have no idea how to test a Completable.doOnSuccess callback. Specifically this method:
fun setAuthToken(authToken: AuthToken): Completable {
    this.authToken = authToken
    return Completable.fromSingle<User>(api
        .getCurrentUser()
        .doOnSuccess {
            user = it
        })
}
This might not need to be tested with RxJava test subscribers at all (depending on the rest of the code).
Remember: you don't want to test internal state, or at least you want to do it as rarely as possible. Internal state and class structure can change, and they will probably change often. So it's bad practice to check whether user is assigned to the field.
You could make the Completable blocking and then assert the state of the server class (let's call it server), but I would highly discourage doing it this way:
server.setAuthToken(AuthToken("token"))
    .blockingAwait()

assertThat(server.user, equalTo(expectedUser))
What you want to test is behavior.
You are probably not assigning user to a field just for the sake of having some fields. You are doing it to use information from the user later on. So first you should call setAuthToken, and then call a function that actually uses information from the user. Then you can assert that the information used is correct and comes from the correct user.
So sample tests (depending on the class) could look like this:
server.setAuthToken(AuthToken("token"))
    .andThen(server.sendRequest())
    .blockingAwait()
// assert if correct user info was sent
or
server.setAuthToken(AuthToken("token"))
    .andThen(server.sendRequest())
    .test()
// assert if correct user info was sent
Suppose I have an entity that creates an SVN branch during its work. To perform functional testing I create multiple almost identical methods (I use the Python unittest framework, but the question relates to any test framework):
class Tester(unittest.TestCase):
    def test_valid1_url(self):
        url = "valid1"
        BranchCreator().create_branch(url)
        self.assertUrlExists(url)  # assume I have this method implemented

    def test_valid2_url(self):
        url = "valid2"
        BranchCreator().create_branch(url)
        self.assertUrlExists(url)  # assume I have this method implemented

    def test_invalid_url(self):
        url = "invalid"
        self.assertRaises(ValueError, BranchCreator().create_branch, url)
After each test I want to remove the resulting branch, or do nothing if the test failed. Ideally I would use something like the following:
@teardown_params(url='valid1')
def test_valid1_url(self):
    ...

def tearDown(self, url):
    if url_exists(url):
        remove_branch(url)
But tearDown does not accept any parameters.
I see a few rather dirty solutions:
a) Create a field "used_url" in Tester, set it in every test method, and use it in tearDown:
def test_valid1_url(self):
    self.used_url = "valid1"
    BranchCreator().create_branch(self.used_url)
    self.assertUrlExists(self.used_url)

...

def tearDown(self):
    if url_exists(self.used_url):
        remove_branch(self.used_url)
It should work because (at least in my environment) all tests run sequentially, so there would be no conflicts. But this solution violates the test-independence principle because of the shared variable, and if I ever manage to run the tests in parallel, it will not work.
b) Use a separate method like cleanup(self, url) and call it from every test.
Is there any other approach?
I think solution b) could work, even though it forces you to call the helper method in every test, which sounds to me like a form of duplication.
Another approach could be calling the helper method inside the assertUrlExists function. That removes the duplication, and you avoid checking the URL's existence a second time just to manage the cleanup: you already have the assertion result and can use it.
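A minimal sketch of that idea, assuming the url_exists and remove_branch helpers from the question; the assertion registers the cleanup itself via unittest's addCleanup, which runs after the test regardless of later failures:
class Tester(unittest.TestCase):
    def assertUrlExists(self, url):
        if not url_exists(url):
            self.fail("branch {} was not created".format(url))
        # the branch exists, so schedule its removal; cleanups run
        # even if a later assertion in the test fails
        self.addCleanup(remove_branch, url)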
I have a command line application written in a scripting language. The startup script does something like this:
import 'App'
app = new App()
app.run()
The run() method is responsible for instantiating all the required objects and then actually starting the application:
import 'Artist', 'Song', 'Listener'

class App
    method run()
        artist = new Artist()
        song = new Song()
        listener = new Listener(artist, song)
        listener->listen()
    end
end
How can I write a test to make sure that run() is doing what it's supposed to do?
My initial thought was to add an optional argument so that I could pass a mock of Listener and expect listen() to be called, but that does not tell me whether the actual Listener class will be instantiated correctly when running the application.
Another idea is to pass all the objects to run(), but then I would have to create them in the startup script, which I'd also have to test, and the same problem arises.
I would say don't test that objects are created correctly. I presume you test the constructor for the Listener class in its own unit tests. Given that, I would say you can trust the interpreter to construct your object correctly. If you want to test that the interpreter can construct classes, then you're testing your scripting language, not your app.
If you want to functionally test the App class that would mean checking that listen() has done whatever it is supposed to do. How you do that would in turn depend on what listen() is supposed to do.
The other option is to expose what you need to test. Of course there are lots of arguments about changing your code to make it more testable; I won't go into them here. But you could expose your Listener instance on the App so you can query it in your tests.
[Slight aside: some languages provide the means to expose properties to specific assemblies so that you don't have to expose them publicly (InternalsVisibleTo, I'm looking at you, C#).]
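A minimal sketch of that last suggestion, written in Python since the question uses pseudocode (Artist, Song, and Listener are the question's classes): run() keeps the collaborator as an attribute so a test can inspect it afterwards:
class App:
    def run(self):
        artist = Artist()
        song = Song()
        # stored on the instance instead of in a local variable,
        # purely so tests can query the wiring after run()
        self.listener = Listener(artist, song)
        self.listener.listen()
A test could then patch Listener.listen, call app.run(), and assert that app.listener was built from the expected collaborators.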
I have read Is Rails shared-nothing or can separate requests access the same runtime variables? and it explains my problem: class variables may be shared between two requests to my Rails server, but where is the solution?!
How can I implement a singleton that is safe between requests?
class Foo
  @@instances = []
end
How can I be sure instances will be reset for each HTTP request?!
EDIT:
I find "config.reload_classes_only_on_change = false" solution but i'm not sure it's the best for performance.
What are the consequences to this option?
I have an exemple to test safe classes variables :
class Test
  def self.log
    @test ||= false
    puts @test
    @test = true
  end
end

class ApplicationController < ActionController::Base
  def index
    Test.log
    Test.log
  end
end
If I trigger this code by reloading the page (F5), I want to see "false" each time in the Rails server log. But by default it is "false" only the first time.
EDIT 2:
In fact this option reloads classes, but it does not solve the concurrency problem between threads.
Class variables are reset, but they can still be modified by other threads.
How can class variables be made thread-safe?
I use the request_store gem; it works great.
My use case is adding methods to the user model class, like the current user, their locale, their location, etc., since my other models often need this information.
I just set up the current user from my application controller:
User.current = the_authenticated_user
User.request = request
And in my user model class:
class User
  def self.current
    RequestStore.store[:current_user]
  end

  def self.current=(user)
    RequestStore.store[:current_user] = user
  end

  def self.request
    RequestStore.store[:current_request]
  end

  def self.request=(request)
    # stash the request so things like the IP address and GeoIP-based location are available to other models
    RequestStore.store[:current_request] = request
  end

  def self.location
    # resolve the location just once per request
    RequestStore.store[:current_location] ||= self.request.try(:location)
  end
end
I don't enable the class-reloading option, as it causes too many problems; I've witnessed multiple versions of classes hanging around. If you use model inheritance (i.e. STI), lazy loading and/or dynamic class loading will often break how model classes are resolved. You need to use require_dependency in your base and intermediate model classes to ensure downstream classes also get loaded.
My development settings mirror my production settings with respect to class handling, which is not convenient (it requires a server restart after a change) but more convenient than chasing non-existent bugs. The rerun gem can monitor file-system changes and restart your server for you, so you get reliable change handling in development, albeit slower than Rails' broken class reloading.
config/environments/development.rb:
# Rails class reloading is broken, anytime a class references another you get multiple
# class instances for the same named class and that breaks everything. This is especially
# important in Sequel as models resolve classes once.
# So always cache classes (true)
config.cache_classes = true
# Always eager load so that all model classes are known and STI works
config.eager_load = true
Q: How can class variables be made thread-safe?
A: No variables are thread-safe unless protected by synchronization (e.g. a Mutex).
From an architectural perspective, threading in Rails is a waste of time. The only way I have been able to get true parallel performance/concurrency is with multiple processes. It also avoids the locking and threading-related overheads that just don't exist with long-running processes. I tested parallel CPU-intensive code using threads with Ruby (MRI) 2.x and got no parallelism at all, since MRI's global VM lock prevents CPU-bound threads from running in parallel. With one Ruby process per core I got real parallelism.
I would seriously consider Thin with multiple processes and then decide if you want to use Thin+EventMachine to increase overall throughput per process.
I am trying to test how my class reacts when the BackgroundWorker fires the RunWorkerCompleted event.
I am using RhinoMocks (if there is another approach I am willing to try it as well) and the code is as follows:
// arrange
var bw1 = MockRepository.GenerateStub<BackgroundWorker>();
Action work1 = () => Thread.Sleep(1);
WorkQueueProcess processInQueue = new WorkQueueProcess(bw1) { Work = work1 };
var tested = new WorkQueue() { processInQueue };

// act
bw1.Raise(
    bw => bw.RunWorkerCompleted += null,
    bw1,
    new RunWorkerCompletedEventArgs(null, null, false)
);

// assert
Assert.AreEqual(false, tested.IsBusy);
I am getting an exception that says:
"Invalid call, the last call has been used or no call has been made (make sure that you are calling a virtual (C#) / Overridable (VB) method)."
What am I doing wrong? Is it because BackgroundWorker has no virtual methods? I thought I should be able to raise an event regardless, because events are hardly ever virtual.
Only the class that defines an event can raise that event. That is simply how the event system in .NET works.
The Raise method provided by RhinoMocks is intended to be used with interfaces that define events. In that case, any class implementing that interface owns its own event and is thus able to raise it. That is also true for the run-time-emitted types generated by RhinoMocks.
However, when it comes to classes, even sub-types can't raise events defined by their supertypes, which is why we have the OnEvent coding idiom.
BackgroundWorker does have the OnRunWorkerCompleted method that will raise the event. This method is virtual, so if you can get RhinoMocks to invoke this protected method, you should be able to raise the event.
I'm using Moq, which can't do that - I can't remember if RhinoMocks has the ability to invoke protected members.
Otherwise, you can derive a test-specific class from BackgroundWorker and add a public method that invokes OnRunWorkerCompleted.
As a closing remark, however, I would strongly recommend that you don't try to unit test multithreaded code - along that road lies only pain...