Suppose I have an entity that creates an SVN branch during its work. To perform functional testing I create multiple nearly identical methods (I use the Python unittest framework, but the question applies to any test framework):
class Tester(unittest.TestCase):
    def test_valid1_url(self):
        url = "valid1"
        BranchCreator().create_branch(url)
        self.assertUrlExists(url)  # assume I have this method implemented

    def test_valid2_url(self):
        url = "valid2"
        BranchCreator().create_branch(url)
        self.assertUrlExists(url)  # assume I have this method implemented

    def test_invalid_url(self):
        url = "invalid"
        self.assertRaises(ValueError, BranchCreator().create_branch, url)
After each test I want to remove the resulting branch, or do nothing if the test failed. Ideally I would use something like the following:
@teardown_params(url='valid1')
def test_valid1_url(self):
    ...

def tearDown(self, url):
    if url_exists(url): remove_branch(url)
But tearDown does not accept any parameters.
I see a few rather dirty solutions:
a) create a field "used_url" in Tester, set it in every method and use it in tearDown:
def test_valid1_url(self):
    self.used_url = "valid1"
    BranchCreator().create_branch(self.used_url)
    self.assertUrlExists(self.used_url)

...

def tearDown(self):
    if url_exists(self.used_url): remove_branch(self.used_url)
It should work because (at least in my environment) all tests run sequentially, so there would be no conflicts. But this solution violates the test-independence principle because of the shared variable, and if I ever ran the tests in parallel it would not work.
b) Use a separate method like cleanup(self, url) and call it from every test method.
Is there any other approach?
I think solution b) could work, even though it requires calling the helper method in every test, which feels like a form of duplication.
Another approach is to call the helper method inside the assertUrlExists function. That removes the duplication and avoids checking the existence of the URL a second time just for the cleanup: you already have the assertion result and can use it.
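For instance, a minimal sketch of that idea, reusing the hypothetical url_exists and remove_branch helpers from the question:

import unittest

class Tester(unittest.TestCase):
    def assertUrlExists(self, url):
        exists = url_exists(url)   # hypothetical helper from the question
        if exists:
            remove_branch(url)     # clean up immediately; we already know the result
        self.assertTrue(exists, "branch %r was not created" % url)

    def test_valid1_url(self):
        url = "valid1"
        BranchCreator().create_branch(url)
        self.assertUrlExists(url)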
Related
I'm trying to test my view given certain responses from AWS. To do this I want to patch a class I wrote to return certain things while running tests.
@patch.object(CognitoInterface, "get_user_tokens", return_value=mocked_get_user_tokens_return)
class TestLogin(TestCase):
    def test_login(self, mocked_get_user_tokens):
        print(CognitoInterface().get_user_tokens("blah", "blah"))  # Works, it prints the patched return value
        login_data = {"email": "whatever@example.com", "password": "password"}
        response = self.client.post(reverse("my_app:login"), data=login_data)
Inside the view from "my_app:login", I call...
CognitoInterface().get_user_tokens(email, password)
But this time, it uses the real method. I want it to use the patched return here as well.
It seems my patch only applies inside the test file. How can I make my patch apply to all code during the test?
Edit: Never figured out why @patch.object wasn't working. I just used @patch("path.to.file.from.project.root.ClassName.method_to_patch").
Also, see: http://bhfsteve.blogspot.com/2012/06/patching-tip-using-mocks-in-python-unit.html
Most likely CognitoInterface is imported in views.py earlier than the monkeypatching happens. You can check this by adding a print() in the views module where CognitoInterface is imported (or declared; it's not obvious from the amount of code here), and another print() next to the monkeypatching.
If the monkeypatching happens later, you have to either delay the import or, in the worst case, monkeypatch in both places.
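As a rough sketch, patching by the import path that the view actually uses might look like this; the dotted path "my_app.views" and the fake return value are assumptions, so adjust them to wherever the view imports CognitoInterface:

from unittest.mock import patch
from django.test import TestCase
from django.urls import reverse

# Patch the attribute where the view looks it up; the path and the
# fake return value below are assumptions, not the project's real names.
@patch("my_app.views.CognitoInterface.get_user_tokens",
       return_value={"access_token": "fake-token"})
class TestLogin(TestCase):
    def test_login(self, mocked_get_user_tokens):
        login_data = {"email": "whatever@example.com", "password": "password"}
        response = self.client.post(reverse("my_app:login"), data=login_data)
        # if the view calls CognitoInterface().get_user_tokens(...), the mock is hit
        self.assertTrue(mocked_get_user_tokens.called)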
I am using the factory_boy package and DjangoModelFactory to generate a factory for a model with muted signals:
@factory.django.mute_signals(signals.post_save)
class SomeModelTargetFactory(DjangoModelFactory):
    name = factory.Sequence(lambda x: "Name #{}".format(x))
    ...
I have a post_save signal connected to the model:
def send_notification(sender, instance, created, **kwargs):
    if created:
        send_email(...)

post_save.connect(send_notification, SomeModel)
How can I test that the signal works when I create an instance of the model using the factory class?
Some solutions for the direct question, followed by a caution.
A) Instead of turning off the signals, mock the side effects
# NOTE: in practice the patch target must be the full dotted path to wherever
# send_email is looked up (e.g. '<yourapp>.signals.send_email')
@mock.patch('send_email')
def test_mocking_signal_side_effects(self, mocked_send_email):
    my_obj = SomeModelTargetFactory()
    # mocked version of send_email was called
    self.assertEqual(mocked_send_email.call_count, 1)

    my_obj.foo = 'bar'
    my_obj.save()
    # didn't call send_email again
    self.assertEqual(mocked_send_email.call_count, 1)
Note: mock was a separate package before joining the standard library as unittest.mock in Python 3.3.
B) Use mute_signals as a context manager so you can selectively disable signals in individual tests
This leaves the signals on by default, but lets you disable them selectively:
def test_without_signals(self):
    with factory.django.mute_signals(signals.post_save):
        my_obj = SomeModelTargetFactory()
        # ... perform actions w/o signals and assert ...
C) Mute signals only in an extended version of the base factory
class SomeModelTargetFactory(DjangoModelFactory):
    name = factory.Sequence(lambda x: "Name #{}".format(x))
    # ...

@factory.django.mute_signals(signals.post_save)
class SomeModelTargetFactoryNoSignals(SomeModelTargetFactory):
    pass
I've never tried this, but it seems like it should work. Additionally, if you just need the objects for a quick unit test where persistence isn't required, FactoryBoy's build strategy may be a viable option.
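For example, a rough sketch of the build strategy (nothing is saved, so post_save never fires); the pk check is just one way to confirm that nothing was persisted:

def test_build_strategy_does_not_save(self):
    # build() returns an unsaved instance: no database write, no post_save signal
    obj = SomeModelTargetFactory.build()
    self.assertIsNone(obj.pk)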
Caution: muting signals, especially post_save, can hide nasty bugs
There are easily findable references about how using signals in your own code can create a false sense of decoupling (post_save, for example, is essentially the same as overriding and extending the save method). I'll let you research that to see if it applies to your use case.
I would definitely think twice about making it the default.
A safer approach is to "mute"/mock the receiver/side effect, not the sender.
The default Django model signals are used frequently by third-party packages. Muting those can hide hard-to-track-down bugs due to intra-package interaction.
Defining and calling (and then muting if needed) your own signals is better, but often it's just re-inventing a method call. Sentry is a good example of signals being used well in a large codebase.
Solution A is by far the most explicit and safe. Solutions B and C, without the addition of your own signal, require care and attention.
I won't say there are no use cases for muting post_save entirely, but it should be an exception, and a prompt to double-check the need in the first place.
How do we unit test logic in Promises.task?
task{service.method()}
I want to validate invocation of the service method inside the task.
Is this possible? If yes, how?
I read in the documentation that in unit testing async processes, one can use this:
Promises.promiseFactory = new SynchronousPromiseFactory()
I tried adding it in my setup, but it still does not work.
The long way
I've been struggling with this for a while too.
I tried those:
grails unit test + Thread
Verify Spock mock with specified timeout
Also tried the same solution from the docs as you:
Promises.promiseFactory = new SynchronousPromiseFactory()
None of them worked.
The solution
So I ended up using the metaClass.
In the test's setup method, I replaced the Promises.task closure so that it runs the given closure in the current thread, not in a new one:
def setup() {
    Promises.metaClass.static.task = { Closure c -> c() }
    // ...more stuff if needed...
}
Thanks to that, I can test the code as if it didn't use multithreading.
Even though I'm far from being 100% happy with this, I couldn't come up with anything better so far.
In recent versions of Grails (3.2.3, for instance), there is no need to mock, metaClass, or use a promise factory: promises in unit tests are executed synchronously. I found no documentation for this, but I empirically added a sleep inside a promise and noticed the test waited for the pause to complete.
For integration tests and functional tests, that's another story: you have to change the promise provider, for instance in BootStrap.groovy:
if (Environment.current == Environment.TEST) {
    Promises.promiseFactory = new SynchronousPromiseFactory()
}
Like Marcin suggested, the metaClass option is not satisfactory. Also bear in mind that previous (or future) versions of Grails are likely to work differently.
If you are stuck with Grails 2 like dinosaurs such as me, then you can just copy the classes SynchronousPromiseFactory and SynchronousPromise from Grails 3 to your project and then the following works:
Promises.promiseFactory = new Grails3SynchronousPromiseFactory()
(Class names are prefixed with Grails3 to make the hack more obvious)
I'd simply mock/override the Promises.task method to invoke the provided closure directly.
I don't understand how teardown in FactoryBoy + Django works.
I have a testcase like this:
class TestOptOutCountTestCase(TestCase):
    multi_db = True

    def setUp(self):
        TestCase.setUp(self)
        self.date = datetime.datetime.strptime('05Nov2014', '%d%b%Y')
        OptoutFactory.create(p_id=1, cdate=self.date, email='inv1@test.de', optin=1)

    def test_optouts2(self):
        report = ReportOptOutsView()
        result = report.get_optouts()
        self.assertEqual(len(result), 1)
        self.assertEqual(result[0][5], -1)
setUp is run once for all tests, correct? Now if I had a second test and needed a clean state before running it, how do I achieve this? Thanks
If I understand you correctly you don't need tearDown in this case, as resetting the database between each test is the default behaviour for a TestCase.
See:
At the start of each test case, before setUp() is run, Django will flush the database, returning the database to the state it was in directly after migrate was called.
...
This flush/load procedure is repeated for each test in the test case, so you can be certain that the outcome of a test will not be affected by another test, or by the order of test execution.
Or do you mean to limit the creation of instances via the OptoutFactory to certain tests?
Then you probably shouldn’t put the creation of instances into setUp.
Or you create two variants of your TestCase, one for all tests that rely on the factory and one for the ones that don't.
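A rough sketch of that two-class split, reusing the names from the question; the empty-state expectation in the second class is only an assumed example:

class OptOutWithDataTestCase(TestCase):
    """Tests that rely on the factory-created opt-out."""
    def setUp(self):
        self.date = datetime.datetime.strptime('05Nov2014', '%d%b%Y')
        OptoutFactory.create(p_id=1, cdate=self.date, email='inv1@test.de', optin=1)

    def test_optouts2(self):
        result = ReportOptOutsView().get_optouts()
        self.assertEqual(len(result), 1)


class OptOutCleanStateTestCase(TestCase):
    """Tests that need a clean state, so no factory objects in setUp."""
    def test_no_optouts(self):
        result = ReportOptOutsView().get_optouts()
        self.assertEqual(len(result), 0)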
Regarding the uses of tearDown check this answer: Django when to use teardown method
Ideally, a test class is written for every class in the production code. Within a test class, not all test methods require the same preconditions. How do we solve this problem?
Do we create separate test classes for these?
I suggest creating separate methods that wrap the necessary precondition setup. Don't confuse this approach with traditional test setup. As an example, assume you wrote tests for a receipt provider, which searches a repository and, depending on some validation steps, returns a receipt. We might end up with:
receipt doesn't exist in repository: return null
receipt exists, but doesn't match validator date: return null
receipt exists, matches validator date, but was not fully committed (i.e. was not processed by some external system): return null
We have several conditions here: the receipt exists/doesn't exist, the receipt has an invalid date, the receipt is not committed. Our happy path is the default setup (for example, done via traditional test setup). Then the happy-path test is as simple as (some C# pseudo-code):
[Test]
public void GetReceipt_ReturnsReceipt()
{
    receiptProvider.GetReceipt("701").IsNotNull();
}
Now, for the special-condition cases we simply write tiny, dedicated methods that arrange our test environment (e.g. set up dependencies) so that the conditions are met:
[Test]
public void GetReceipt_ReturnsNull_WhenReceiptDoesntExist()
{
    ReceiptDoesNotExistInRepository("701");
    receiptProvider.GetReceipt("701").IsNull();
}

[Test]
public void GetReceipt_ReturnsNull_WhenExistingReceiptHasInvalidDate()
{
    ReceiptHasInvalidDate("701");
    receiptProvider.GetReceipt("701").IsNull();
}
You'll end up with a couple of extra helper methods, but your tests will be much easier to read and understand. This is especially helpful when the logic is more complicated than a simple yes/no setup:
[Test]
public void GetReceipt_ThrowsException_WhenUncommittedReceiptHasInvalidDate()
{
    ReceiptHasInvalidDate("701");
    ReceiptIsUncommitted("701");
    receiptProvider.GetReceipt("701").Throws<Exception>();
}
It's an option to group tests with the same preconditions in the same classes; this also helps avoid test classes of over a thousand lines. You can also group the creation of the preconditions in separate methods and let each test call the applicable method. This works well when most of the methods have different preconditions; otherwise you can just use a setup method that is called before each test.
I like to use a setup method that is called before each test runs. In it I instantiate the class I want to test, giving it any dependencies it needs to be created. Then I set the specific details for the individual tests inside the test methods themselves. This moves any common initialization of the class out to the setup method and lets each test focus on what it needs to evaluate.
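For illustration, a minimal Python sketch of this pattern, borrowing the receipt-provider idea from the earlier answer; ReceiptProvider and its repository dependency are hypothetical names:

import unittest
from unittest.mock import Mock

class ReceiptProviderTests(unittest.TestCase):
    def setUp(self):
        # Common construction of the class under test and its dependencies.
        self.repository = Mock()
        self.provider = ReceiptProvider(self.repository)  # hypothetical class under test

    def test_returns_none_when_receipt_missing(self):
        # Test-specific detail is arranged inside the test itself.
        self.repository.find.return_value = None
        self.assertIsNone(self.provider.get_receipt("701"))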
You may find this link valuable, it discusses an approach to Test Setups:
In Defense of Test Setup Methods, by Erik Dietrich