django test - how to avoid the ForeignKeyViolation

I am struggling to create tests in Django for objects that have foreign keys. The tests pass when run individually, but when I run them all together I get a ForeignKeyViolation in the teardown phase (which I don't understand and can't find any info on).
When starting the test suite I am creating a bunch of dummy data like this:
def setUp(self) -> None:
    create_bulk_users(5)
    create_categories()
    create_tags()
    create_technologies()
    self.tags = Tag.objects.all()
    self.categories = Category.objects.all()
    self.technologies = Technology.objects.all()
The problems I need help figuring out are:
What exactly does the teardown phase do? Are there any detailed docs on it?
How should I structure my tests to avoid the ForeignKeyViolation issue?

What exactly does the teardown phase do? Are there any detailed docs on it?
I couldn't find any detailed docs about it; everything I learned was from reading the actual code. Teardown rolls back any operations done to the DB during the tests.
The way TestCases work (broadly) is this (a minimal sketch follows the list):
1. The setUp method runs **before** every test
2. The individual test method runs
3. The tearDown method runs **after** every test
4. The test database is flushed
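Here is a sketch of that lifecycle, assuming a trivial model called Thing (the model and app names are illustrative, not from the question):
from django.test import TestCase
from myapp.models import Thing  # hypothetical model, for illustration only

class LifecycleExample(TestCase):
    def setUp(self):
        # Runs before EACH test method below.
        self.thing = Thing.objects.create(name="demo")

    def tearDown(self):
        # Runs after EACH test method, before the DB changes are rolled back.
        pass

    def test_one(self):
        self.assertEqual(Thing.objects.count(), 1)

    def test_two(self):
        # Sees only the fresh object from setUp, not leftovers from test_one.
        self.assertEqual(Thing.objects.count(), 1)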
How should I structure my tests to avoid the ForeignKeyViolation issue?
My mistake was that I was importing a helper for creating dummy data, and the imported module ran some code at module level to populate primary data. That code was not re-run inside setUp for each test; it ran only once, when the file was imported.
# imported module
# runs once at import time: the tag rows get rolled back after the first
# test, but this module-level reference to the old tags persists
tags = FakeTags.create_bulk()

def create_bulk_articles():
    article = FakeArticle.create()
    article.tags.add(*tags)
I fixed it by moving the troublesome call inside the imported function:
# imported module
# tags are created on every call, i.e. fresh for each test's setUp
def create_bulk_articles():
    tags = FakeTags.create_bulk()
    article = FakeArticle.create()
    article.tags.add(*tags)
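For completeness, a sketch of how such a helper is then called from setUp so the data is recreated for every test (the module path and test names are illustrative, not from the original post):
# tests.py
from django.test import TestCase
from .factories import create_bulk_articles  # hypothetical module name

class ArticleTests(TestCase):
    def setUp(self):
        # Runs before every test, so articles and their tags are recreated
        # after each per-test rollback and no stale references survive.
        create_bulk_articles()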

Related

Django unit testing: Separating unit tests without querying the database multiple times

I have a pair of tests like this:
# Make sure a task's deletion status initializes as None
def test_initial_task_deletion_status_is_none(self):
    unfinished_task = Task.objects.get(tasked_with="Unfinished Test")
    self.assertIsNone(unfinished_task.delete_status)

# Make sure a task's deletion status changes appropriately
def test_unfinished_task_deletion_status_updates_appropriately(self):
    unfinished_task = Task.objects.get(tasked_with="Unfinished Test")
    unfinished_task.timed_delete(delta=.1)
    self.assertIs(unfinished_task.delete_status, "Marked for Deletion")
This will go on, but I'll have unfinished_task = Task.objects.get(tasked_with="Unfinished Test") at the beginning of every one. Is there a way to split these types of things into separate tests, but use the same query result?
Assuming you're using Django's testing framework, then you can do this using setUp().
There is more about unittest.TestCase.setUp() in the unittest documentation.
So your updated snippet would look like:
from django.test import TestCase

class MyTestCase(TestCase):
    def setUp(self):
        self.unfinished_task = Task.objects.get(tasked_with="Unfinished Test")

    def test_initial_task_deletion_status_is_none(self):
        self.assertIsNone(self.unfinished_task.delete_status)

    # Make sure a task's deletion status changes appropriately
    def test_unfinished_task_deletion_status_updates_appropriately(self):
        self.unfinished_task.timed_delete(delta=.1)
        self.assertIs(self.unfinished_task.delete_status, "Marked for Deletion")
You can place the repeated line in the setUp method, and that will make your code less repetitive, but as DanielRoseman pointed out, it will still be run for each test, so you won't be using the same query result.
You can place it in the setUpTestData method, and it will be run only once, before all the tests in MyTestCase, but then your unfinished_task object will be a class variable, shared across all the tests. In-memory modifications made to the object during one test will carry over into subsequent tests, and that is not what you want.
In read-only tests, using setUpTestData is a good way to cut out unnecessary queries, but if you're going to be modifying the objects, you'll want to start fresh each time.
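As a rough sketch of that read-only variant (assuming the same Task model as above, with its import omitted as in the snippets above):
from django.test import TestCase

class MyReadOnlyTestCase(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Runs once for the whole class; the queried object is shared by every test.
        cls.unfinished_task = Task.objects.get(tasked_with="Unfinished Test")

    def test_initial_task_deletion_status_is_none(self):
        # A read-only check, so sharing the object across tests is safe here.
        self.assertIsNone(self.unfinished_task.delete_status)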

Is there a standard way to mock Django models?

I have a model called Pdb:
class Pdb(models.Model):
    id = models.TextField(primary_key=True)
    title = models.TextField()
It is in a one-to-many relationship with the model Residue:
class Residue(models.Model):
    id = models.TextField(primary_key=True)
    name = models.TextField()
    pdb = models.ForeignKey(Pdb)
Unit testing Pdb is fine:
def test_can_create_pdb(self):
    pdb = Pdb(pk="1XXY", title="The PDB Title")
    pdb.save()
    self.assertEqual(Pdb.objects.all().count(), 1)
    retrieved_pdb = Pdb.objects.first()
    self.assertEqual(retrieved_pdb, pdb)
When I unit test Residue I just want to use a mock Pdb object:
def test_can_create_residue(self):
    mock_pdb = Mock(Pdb)
    residue = Residue(pk="1RRRA1", name="VAL", pdb=mock_pdb)
    residue.save()
But this fails because it needs some attribute called _state:
AttributeError: Mock object has no attribute '_state'
So I keep adding mock attributes to make it look like a real model, but eventually I get:
django.db.utils.ConnectionDoesNotExist: The connection db doesn't exist
I don't know how to mock the actual call to the database. Is there a standard way to do this? I really don't want to have to actually create a Pdb record in the test database because then the test won't be isolated.
Is there an established best practices way to do this?
Most of the Stack Overflow and Google results I get for this relate to mocking particular methods of a model. Any help would be appreciated.
You are not strictly unit testing here, as you are involving the database; I would call that integration testing, but that is another very heated debate!
My suggestion would be to have your wrapping test class inherit from django.test.TestCase. If you are that concerned about each individual test case being completely isolated then you can just create multiple classes with a test method per class.
It might also be worth reconsidering if these tests need writing at all, as they appear to just be validating that the framework is working.
Oh, I managed to solve this with a library called 'mixer'...
from mixer.backend.django import mixer

def test_can_create_residue(self):
    mock_pdb = mixer.blend(Pdb)
    residue = Residue(pk="1RRRA1", name="VAL", pdb=mock_pdb)
    residue.save()
Still think django should provide a native way to do this though. It already provides a lot of testing tools - this feels like a major part of proper unit testing.
I'm not sure exactly what you mean by mocking Django models. The simplest option for writing a test that requires some model objects is to use a test fixture. It's basically a JSON (or YAML/XML) file that gets loaded into a database table before your test runs.
In your answer you mentioned mixer, which looks like a library for randomly generating those test fixtures.
Those are fine tools, but they still require database access, and they're a lot slower than pure unit tests. If you want to completely mock out the database access, try Django mock queries. It completely mocks out the database access layer, so it's very fast, and you don't have to worry about foreign keys. I use it when I want to test some complex code that has simple database access. If the database access has some complicated query conditions, then I stick with the real database.
Full disclosure: I'm a minor contributor to the Django mock queries project.
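For illustration, a rough sketch of what a fully mocked query can look like with that library (this assumes the django_mock_queries package; MockSet and MockModel come from its documentation, the exact API may differ between versions, and the residue_names helper is invented for the example):
from django_mock_queries.query import MockSet, MockModel

def residue_names(residues):
    # Plain helper under test: it works on any queryset-like object.
    return [r.name for r in residues.all()]

def test_residue_names_without_db():
    # MockSet stands in for Residue.objects, so no database access happens.
    residues = MockSet(
        MockModel(name="VAL"),
        MockModel(name="GLY"),
    )
    assert residue_names(residues) == ["VAL", "GLY"]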

Django TestCase with fixtures causes IntegrityError due to duplicate keys

I'm having trouble moving away from django_nose.FastFixtureTestCase to django.test.TestCase (or even the more conservative django.test.TransactionTestCase). I'm using Django 1.7.11 and I'm testing against Postgres 9.2.
I have a Testcase class that loads three fixtures files. The class contains two tests. If I run each test individually as a single run (manage test test_file:TestClass.test_name), they each work. If I run them together, (manage test test_file:TestClass), I get
IntegrityError: Problem installing fixture '<path>/data.json': Could not load <app>.<Model>(pk=1): duplicate key value violates unique constraint "<app_model_field>_49810fc21046d2e2_uniq"
To me it looks like the db isn't actually getting flushed or rolled back between tests since it only happens when I run the tests in a single run.
I've stepped through the Django code and it looks like they are getting flushed or rolled back -- depending on whether I'm trying TestCase or TransactionTestCase.
(I'm moving away from FastFixtureTestCase because of https://github.com/django-nose/django-nose/issues/220)
What else should I be looking at? This seems like it should be a simple matter and is right within what django.test.TestCase and django.test.TransactionTestCase are designed for.
Edit:
The test class more or less looks like this:
class MyTest(django.test.TransactionTestCase):  # or django.test.TestCase
    fixtures = ['data1.json', 'data2.json', 'data3.json']

    def test1(self):
        return  # I simplified it to just this for now.

    def test2(self):
        return  # I simplified it to just this for now.
Update:
I've managed to reproduce this a couple of times with a single test, so I suspect something in the fixture loading code.
One of my basic assumptions was that my db was clean for every TestCase. Tracing into the django core code I found instances where an object (in one case django.contrib.auth.User) already existed.
I temporarily overrode _fixture_setup() to assert the db was clean prior to loading fixtures. The assertion failed.
I was able to narrow the problem down to code that was in a TestCase.setUpClass() instead of TestCase.setUp(), and so the object was leaking out of the test and conflicting with other TestCase fixtures.
What I don't understand completely is I thought that the db was dropped and recreated between TestCases -- but perhaps that is not correct.
Update: Recent versions of Django include setUpTestData(), which should be used instead of setUpClass() for creating per-class test data; a brief sketch follows.
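A minimal sketch of that change (using django.contrib.auth.User, the model found leaking above; the other names are illustrative): data created in setUpTestData() is rolled back with the class-level transaction, so it cannot leak into later TestCases the way objects created directly in setUpClass() did.
from django.contrib.auth.models import User
from django.test import TestCase

class MyTest(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Created once per class and rolled back after this TestCase finishes,
        # unlike objects created directly in setUpClass().
        cls.user = User.objects.create(username="shared")

    def test_user_exists(self):
        self.assertTrue(User.objects.filter(username="shared").exists())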

Keep table data during Django tests

I have some tables containing a lot of data (imported from geonames using django-cities: https://github.com/coderholic/django-cities) that I want to preserve in tests (since loading them via fixtures will be very slow)… how can I keep those tables and their data during tests?
I think I have to write a custom TestRunner, but I have no idea where to start :P
It's possible. Here is a way:
1) Define your own test runner (the Django docs on defining a test runner show how).
2) For your custom test runner, look at the default test runner: you can copy and paste its code and comment out the line connection.creation.destroy_test_db(old_name, verbosity), which is responsible for destroying the test database. I also think you should put the connection.creation.create_test_db(..) line in a try/except, something like this maybe:
try:
    # Create the database the first time.
    connection.creation.create_test_db(verbosity, autoclobber=not interactive)
except ...:  # look at the error raised when creating a database that already exists
    # Test database already created.
    pass
3) Point TEST_RUNNER in settings.py at your test runner.
4) Now run your tests as usual: ./manage.py test (a modern DiscoverRunner-based sketch of the same idea follows).
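On newer Django versions, a hedged sketch of the same approach built on DiscoverRunner (the module path is illustrative, and this is an adaptation rather than the original answer's code):
# myapp/test_runner.py
from django.test.runner import DiscoverRunner

class KeepDBTestRunner(DiscoverRunner):
    """Test runner that never destroys the test database between runs."""

    def teardown_databases(self, old_config, **kwargs):
        # Intentionally skip the destroy step so the imported geonames data
        # is still there the next time the tests run.
        pass

# settings.py
# TEST_RUNNER = "myapp.test_runner.KeepDBTestRunner"
Note that recent Django also supports ./manage.py test --keepdb, which preserves the test database between runs without a custom runner.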

django changes in tests.py don't reflect in models.py

I have a recurring problem when testing my app: whenever I change or create() any object from within tests.py, those changes can't be seen from code in models.py, and that happens within the same test.
pseudocode:
tests.py:
def test_something(self):
    ...
    Norm.objects.create(...)
    self.player_a.print_all_norms()
    ...
models.py:
def print_all_norms(self):
    a = Norm.objects.all()
    print a
    # prints [], the Norm object created in tests.py wasn't found
    return
EDIT:
Clarification - I can't find the object within the test that created it.
A Norm object is created inside test_something(), which calls a function inside models.py.
When the function tries to find the previously created object using Norm.objects.all(), it fails, the test resumes, and then the test fails as well.
Testing uses a temporary database, as documented in the test database docs, so after the test run is complete you won't be able to find those objects through the model manager.
Is it not finding the object within the test or when you try to find it after executing the test?
If it's not finding it in the test, try making sure you have the proper permissions (as mentioned in test db docs)
If you want to load predetermined values into the database on some sort of consistent basis, outside of testing, you may want to look at using fixtures.
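As a minimal sketch of that last suggestion (the fixture file name and import path are illustrative; Norm is the model from the question): a fixture lets predetermined rows be loaded into the test database before every test in the class.
# tests.py -- assumes a fixtures/norms.json file exists in the app
from django.test import TestCase
from myapp.models import Norm  # hypothetical import path

class NormFixtureTests(TestCase):
    fixtures = ["norms.json"]

    def test_norms_are_preloaded(self):
        # The fixture rows are loaded into the test database before each test.
        self.assertTrue(Norm.objects.exists())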