Django unit tests failing when run with other test cases

I'm getting inconsistent behavior with Django unit tests. On my development machine using sqlite, if I run tests on my two apps separately the tests pass, but if I run manage.py test to test everything at once, I start getting unit test failures consistently on two tests.
On my staging server, which uses Postgres, I have a particular test that passes when run individually (e.g. manage.py test MyApp.tests.MyTestCase.testSomething), but fails when I run the entire test case (e.g. manage.py test MyApp.tests.MyTestCase).
Other related StackOverflow questions seem to offer two solutions:
Use Django's TestCase instead of the plain Python unittest equivalent.
Use TransactionTestCase to make sure the database is cleaned up properly after every test.
I've tried both to no avail. Out of frustration, I also tried using django-nose instead, but I was seeing the same errors. I'm on Django 1.6.

I just spent all day debugging a similar problem. In my case, the issue was as follows.
In one of my view functions I was using the Django send_mail() function. In my test, rather than having it send me an email every time I ran my tests, I patched send_mail in my test method:
from mock import patch
...
def test_stuff(self):
    ...
    with patch('django.core.mail.send_mail') as mocked_send_mail:
        ...
That way, after my view function is called, I can test that send_mail was called with:
self.assertTrue(mocked_send_mail.called)
This worked fine when running the test on its own, but failed when it ran with other tests in the suite. The reason it fails is that when the test runs as part of the suite, other views are called first, causing views.py to be loaded, which imports send_mail before I get the chance to patch it. So when send_mail is called in my view, it is the real send_mail, not my patched version. When I run the test alone, views.py has not been imported yet, so the patch is in place by the time the import happens, and the patched version is what gets bound when views.py loads. This situation is described in the "where to patch" section of the mock documentation, which I had read a few times before, but now understand quite well after learning the hard way...
The solution was simple: instead of patching django.core.mail.send_mail I just patched the version that had already been imported in my views.py - myapp.views.send_mail. In other words:
with patch('myapp.views.send_mail') as mocked_send_mail:
    ...
This took me a long time to debug, so I thought I would share my solution. I hope it works for you too. You may not be using mocks, in which case this probably won't help you, but I hope it will help someone.
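To make the pattern concrete, here is a minimal sketch of a complete test using the fix (ContactViewTests, the /contact/ URL, and the POST data are hypothetical names; the point is only that the patch target matches the module your view imported send_mail into):

from django.test import TestCase
from mock import patch  # on Python 3.3+: from unittest.mock import patch

class ContactViewTests(TestCase):
    def test_sends_mail(self):
        # Patch the name where it is looked up (myapp.views), not where
        # it is defined (django.core.mail).
        with patch('myapp.views.send_mail') as mocked_send_mail:
            self.client.post('/contact/', {'message': 'hello'})
        self.assertTrue(mocked_send_mail.called)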

Besides using TestCase for all your tests, you need to make sure you start and stop any patching that is done in your setUp methods:
def setUp(self):
    self.patcher = patch('my.app.module')
    self.mock_module = self.patcher.start()

def tearDown(self):
    self.patcher.stop()
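If addCleanup is available (Python 2.7+), you can also let the framework stop the patcher for you; unlike tearDown, cleanups registered this way still run if setUp fails partway through. A sketch using the same placeholder target as above:

def setUp(self):
    self.patcher = patch('my.app.module')  # placeholder target from above
    self.mock_module = self.patcher.start()
    self.addCleanup(self.patcher.stop)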

I had the same thing happen today with a series of tests. I had 23 regular django.test.TestCase tests and then one django.contrib.staticfiles.testing.StaticLiveServerTestCase test. That final test would always fail when run with the rest of them, but pass on its own.
Solution
For the 23 regular tests I had actually implemented a subclass of TestCase, so that I could provide the tests with some common functionality specific to my application. In that subclass's tearDown method I had failed to call the super method. Once I called the super method in tearDown, everything passed. So the lesson here: make sure any setUp or tearDown you override calls super, or Django's own cleanup will silently be skipped.
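A minimal sketch of the fix (MyAppTestCase and its cleanup are hypothetical; the point is only the super call):

from django.test import TestCase

class MyAppTestCase(TestCase):
    def tearDown(self):
        # ... application-specific cleanup ...
        # This was the missing piece: without it, Django's own per-test
        # cleanup never runs and state leaks into later tests.
        super(MyAppTestCase, self).tearDown()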

Related

What is the equivalent of autotest/guard for django

When I code in Ruby on Rails, I rely on Guard to listen for changes to the code base so when I'm writing tests, I don't need to manually run the tests in the file I'm working on each time.
https://github.com/guard/guard-rspec
What is the closest thing to this for Django, so I can enjoy the same workflow?
Specifically, I want something that will:
run tests based on the files I have changed, not the whole suite
know whether a test run is already in progress, and not kick off another one
work with existing tests written with unittest
work with something like factory boy, so I can use factories instead of fixtures
I've used nose before, and pytest, and I'm comfortable using both - but I haven't used many of pytest's extensive set of libraries.
What are my options for this?

Django test database not auto-flushing

I have a bunch of unit test files, all of which consist of django.test.TestCase classes.
I wrote myself a little shell script to uncomment/comment test file imports in my __init__.py file, so I can run tests from certain test files based on the command-line arguments I give it. I am also able to run all the tests of all the test files in one go (for regression-testing purposes).
I have this one test file that has some JSON fixtures and the first test checks that a certain model/table has 3 records in it (defined by the JSON fixture).
So here is the problem: when I run this test file on its own its tests pass with flying colours, but when I run the test with all other tests, that particular test case I mentioned, fails.
When I run all the tests, the database says there are 6 records in the table/model, but there should only be 3 (from the fixture), like when the test file is run by itself.
I also tried running that test file with a few other test files (not all of them), and it passes. So the only time it fails is when all the test files are run.
To me this seems like a bug in Django or PostgreSQL (DB I am using), because aren't Django TestCases supposed to auto-flush/reset the database between each test method, let alone test class?
This is likely due to the difference in how cleanup is done between TestCase and TransactionTestCase in Django. Before Django 1.5, TransactionTestCases needed to run after TestCases (and Django's test runner reordered them that way for you). This should be fixed in 1.5, though, so try running your tests again there...
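For reference, a minimal sketch of the pattern described above (the Record model and records.json fixture are hypothetical names). With django.test.TestCase, each test runs inside a transaction that is rolled back afterwards, so the fixture count should hold no matter which other TestCase-based tests ran before it:

from django.test import TestCase
from myapp.models import Record  # hypothetical model populated by the fixture

class RecordFixtureTests(TestCase):
    fixtures = ['records.json']  # hypothetical fixture with exactly 3 records

    def test_record_count(self):
        # Passes in isolation, and should pass in a full run too. If the
        # count comes back as 6 here, rows are leaking in from tests
        # outside this class (e.g. a TransactionTestCase that ran earlier
        # without flushing).
        self.assertEqual(Record.objects.count(), 3)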

Does NUnit create a new instance of the test fixture class for each contained test method nowadays?

As described in the fairly old book xUnit Test Patterns, NUnit 2.0 did not create a new test fixture instance for each test, so if tests manipulated fixture state, that state became shared and could cause various bad side effects.
Is this still the case? I tried to find the answer on the official site but failed, and I haven't used NUnit for a while.
The fixture is created once for all of the tests in that fixture.
For a given fixture class, a [TestFixtureSetUp] method is run once for all of the tests in the fixture, and a [SetUp] method is run once before each test. So any state that needs to be reset should be handled in a [SetUp] method (or in [TearDown], which runs at the end of each test).
Since NUnit 3.13 you can configure this with the [FixtureLifeCycle] attribute:
LifeCycle.SingleInstance - a single instance is created and shared by all test cases (the default)
LifeCycle.InstancePerTestCase - a new instance is created for each test case
https://docs.nunit.org/articles/nunit/writing-tests/attributes/fixturelifecycle.html
I found that this issue affected me too, and also found this link, which provides a bit of history on it:
https://blogs.msdn.microsoft.com/jamesnewkirk/2004/12/04/why-variables-in-nunit-testfixture-classes-should-be-static
"I think one of the biggest screw-ups that was made when we wrote NUnit V2.0 was to not create a new instance of the test fixture class for each contained test method."
I have not yet tested this in V3 to see if it's changed.

Grails Testing hiccups

I have two testing questions. Both are probably easily answered. The first is that I wrote this unit test in Grails:
void testCount() {
    mockDomain(UserAccount)
    new UserAccount(firstName: "Ken").save()
    new UserAccount(firstName: "Bob").save()
    new UserAccount(firstName: "Dave").save()
    assertEquals(3, UserAccount.count())
}
For some reason, I get 0 returned back. Did I forget to do something?
EDIT: Oh, I understand. The validation constraints were violated, so the objects never got stored. Is there any way to get some feedback here? That's a really crappy thing to have happen silently....
The second question is for those who use IDEA. What should I be running - IDEA's junit tests, or grails targets? I have two options.
Also, why does IDEA say that my tests pass and it provides a green light even though the test above actually fails? This will really drive me nuts if I have to check the test reports in html every time I run my tests.....
Help?
I always do object.save(failOnError: true) in tests to avoid silent failures like this. This causes an exception to be thrown if validation fails. Even without a real database in a unit test, most of the constraints will be checked, although I prefer to use integration tests if I want to test complex relationships between domain objects.
I personally haven't found IDEA's JUnit test runner to be particularly useful when working with Grails. It is likely fine for unit tests; for integration tests you might consider setting up an ant target in "debug" mode to run your tests.
Over time, running the tests starts to take so long that I tend to run them exclusively from the command line to avoid the additional overhead IntelliJ adds.
In regards to your unit test, I am pretty sure you would need to run an integration test to get a count that is not zero.
I'm not sure which unit test class you're using exactly, but since GORM is not bootstrapped in unit tests, I'm not sure the domain-object mocking supports counting.
Your test would likely pass as an integration test provided that your domain objects validate.
Add flush: true to your save calls:
new UserAccount(firstName: "Ken").save(flush: true)
...
Grails sets the flush mode of the Hibernate session to manual, so changes are not automatically persisted when the action returns, even though the session stays open while the view is rendered. This allows views to access lazy-loaded collections and relationships while preventing changes from being persisted automatically.

In Django (on Google App Engine), should I call main.py when running Unit Tests?

I have a Django application on Google App Engine and I would like to start writing unit tests, but I am not sure how to set up my tests.
When I run my tests, I get the following error:
EnvironmentError: Environment variable DJANGO_SETTINGS_MODULE is undefined.
ERROR: Module: tests could not be imported.
This seems pretty straightforward - my Django settings have not been initialized. Setup of the Django environment on Google App Engine happens in main.py (specified in app.yaml), but this obviously does not get called for unit tests. Should my unit tests start by calling main() in main.py? I am not sure.
You should probably just export the environment variable in the main entry point to your tests. Depending on your setup, you may be able to do that by importing your main.py file, but it's probably just as easy to add the os.environ['DJANGO_SETTINGS_MODULE'] line to the file you use to run your tests.
This might be a little hard depending on how you've got your tests set up. Are you using a test runner like nose or Django's test suite tools?
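Either way, a minimal sketch of the environment-variable approach (assuming your settings module is the usual settings.py at the project root; adjust the dotted path to match your project):

import os

# Must run before anything imports Django code that reads settings.
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'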
My ultimate solution to this issue was to add the os.environ['DJANGO_SETTINGS_MODULE'] line immediately before using the only real Django function I use, template.render_to_string().
I kept having issues with it getting unset when I set it at the top of a given .py file, so I realized that just setting it each time would ensure it is always right.
What a frustrating problem. I really wish there was a simple setting somewhere (perhaps in app.yaml) that would pick the Django version, and set this variable right.
Oh well.