Django unit testing: Separating unit tests without querying the database multiple times - django

I have a pair of tests like this:
# Make sure a task's deletion status initializes as None
def test_initial_task_deletion_status_is_none(self):
    unfinished_task = Task.objects.get(tasked_with="Unfinished Test")
    self.assertIsNone(unfinished_task.delete_status)

# Make sure a task's deletion status changes appropriately
def test_unfinished_task_deletion_status_updates_appropriately(self):
    unfinished_task = Task.objects.get(tasked_with="Unfinished Test")
    unfinished_task.timed_delete(delta=.1)
    self.assertIs(unfinished_task.delete_status, "Marked for Deletion")
This will go on, but I'll have unfinished_task = Task.objects.get(tasked_with="Unfinished Test") at the beginning of every one. Is there a way to split these types of things into separate tests, but use the same query result?

Assuming you're using Django's testing framework, you can do this using setUp().
There's more about unittest.TestCase.setUp() in the unittest documentation.
So your updated snippet would look like:
from django.test import TestCase

class MyTestCase(TestCase):
    def setUp(self):
        self.unfinished_task = Task.objects.get(tasked_with="Unfinished Test")

    def test_initial_task_deletion_status_is_none(self):
        self.assertIsNone(self.unfinished_task.delete_status)

    # Make sure a task's deletion status changes appropriately
    def test_unfinished_task_deletion_status_updates_appropriately(self):
        self.unfinished_task.timed_delete(delta=.1)
        self.assertIs(self.unfinished_task.delete_status, "Marked for Deletion")

You can place the repeated line in the setUp method, and that will make your code less repetitive, but as DanielRoseman pointed out, it will still be run for each test, so you won't be using the same query result.
You can place it in the setUpTestData method, and it will be run only once, before all the tests in MyTestCase, but then your unfinished_task object will be a class attribute, shared across all the tests. On older Django versions, in-memory modifications made to the object during one test would carry over into subsequent tests, which is not what you want (since Django 3.2, objects assigned in setUpTestData are isolated between test methods via per-test deep copies).
In read-only tests, using setUpTestData is a good way to cut out unnecessary queries, but if you're going to be modifying the objects, you'll want to start fresh each time.
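For the read-only case, a minimal sketch of what setUpTestData might look like, assuming the Task model from the question can be created with just tasked_with:

from django.test import TestCase

# from myapp.models import Task  # wherever Task lives in your project


class TaskReadOnlyTests(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Runs once for the whole class, so the database is hit a single time.
        cls.unfinished_task = Task.objects.create(tasked_with="Unfinished Test")

    def test_initial_task_deletion_status_is_none(self):
        # Read-only check, so sharing the class-level object is safe.
        self.assertIsNone(self.unfinished_task.delete_status)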

Related

django test - how to avoid the ForeignKeyViolation

I am struggling to create tests in Django for objects that have foreign keys. The tests run with no errors when run individually, but when running them all I get a ForeignKeyViolation in the teardown phase (which I don't understand and can't find any info on).
When starting the test suite I create a bunch of dummy data like this:
def setUp(self) -> None:
    create_bulk_users(5)
    create_categories()
    create_tags()
    create_technologies()

    self.tags = Tag.objects.all()
    self.categories = Category.objects.all()
    self.technologies = Technology.objects.all()
The problems I need help figuring out are:
What exactly does the teardown phase do? Are there any detailed docs on it?
How should I structure my tests to avoid the ForeignKeyViolation issue?
What exactly does the teardown phase do? Are there any detailed docs on it?
I couldn't find any detailed docs about it. All I found out was from reading the actual code. Teardown cancels any operations done to the DB during the tests.
The way TestCase classes work (broadly) is this, sketched in code below:
1. The setUp method runs **before** every test
2. The individual test method is run
3. The tearDown method runs **after** every test
4. The test database is flushed
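A rough illustration of that order (the class and prints are purely illustrative):

from django.test import TestCase


class LifecycleDemo(TestCase):
    def setUp(self):
        print("1. setUp: runs before every test method")

    def test_something(self):
        print("2. the individual test body runs")

    def tearDown(self):
        print("3. tearDown: runs after every test method")
        # 4. Django then undoes database changes: a per-test transaction
        #    rollback for TestCase, or a flush for TransactionTestCase.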
How should I structure my tests to avoid the ForeignKeyViolation issue?
My mistake was that I was importing a function for creating dummy data.
The imported module ran a function at module level to populate some primary data. The problem was that this code was not rerun inside setUp for each test; it ran just once, when the file was imported.
# imported module
# will delete all tags after running the first test, but
# the reference to the old tags will persist
tags = FakeTags.create_bulk()

def create_bulk_articles():
    article = FakeArticle.create()
    article.tags.add(tags)
I fixed it by moving the troublesome call inside the imported function.
# imported module
# tags are created each time before running the tests
def create_bulk_articles():
    tags = FakeTags.create_bulk()
    article = FakeArticle.create()
    article.tags.add(tags)

Is there a way to unitary test the arguments passed to Model or Q object?

In my unit tests, I understand how I can mock objects per context, to avoid interacting with any kind of persistent datastore.
I can even mock the Q object to test how many times it has been called, which is really useful.
But I'm still uncomfortable with the fact that while I'm mocking my interaction with the datastores, I'm still assuming that my code works©, that the datastore (or the ORM in this case) is receiving the data correctly, through the "proper channels" so to speak.
Case in point:
# code to test
def related_stuff():
    return Stuff.objects.filter(
        parent__user__city_name="Las Vegas"
    )

# more code...
# testing the above
@mock.patch(f"{path_to}.Stuff.objects")
def test_related_stuff(stuff_mock):
    stuff_mock.filter.return_value = stuff_mock
    related_stuff()  # exercise the code under test before asserting
    stuff_mock.filter.assert_called_once_with(parent__user__city_name="Las Vegas")
How can I actually test that the parent__user__city_name lookup pattern is correct and won't result in an error? I'm assuming there's no way to test this without touching the datastore, but any opinions are appreciated.
You could either ensure the database connection(s) point to e.g. an in-memory sqlite instance, or maybe write a Django database adapter that outright errors (or always returns an empty dataset) when a query is attempted.
With an adapter that always returns nothing, you can at least test that a query would work.
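Another option that doesn't touch the datastore at all: Django resolves field lookups as soon as filter() is called and raises FieldError for an invalid path, so a SimpleTestCase (which forbids database queries) is enough to catch a typo in the lookup. A minimal sketch, assuming related_stuff is importable from your project:

from django.core.exceptions import FieldError
from django.test import SimpleTestCase

# from myapp.queries import related_stuff  # hypothetical import path


class RelatedStuffLookupTest(SimpleTestCase):
    def test_lookup_path_resolves(self):
        try:
            related_stuff()  # builds the queryset; no SQL is executed yet
        except FieldError as exc:
            self.fail(f"related_stuff() uses an invalid lookup: {exc}")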

Django DRF APITestCase chain test cases

For example, I want to write several test cases like this:
class Test(APITestCase):
    def setUp(self):
        ....some payloads

    def test_create_user(self):
        ....create the object using payload from setUp

    def test_update_user(self):
        ....update the object created in above test case
In the example above, test_update_user fails because, let's say, it cannot find the user object. Therefore, for that test case to work, I have to create the user again inside test_update_user.
One possible solution I found is to create the user in setUp. However, I would like to know if there is a way to chain test cases so they run one after another without deleting the object created by the previous test case.
Rest framework tests include helper classes that extend Django's existing test framework and improve support for making API requests.
Therefore all tests for DRF calls are executed with Django's built-in test framework.
An important principle of unit-testing is that each test should be independent of all others. If in your case the code in test_create_user must come before test_update_user, then you could combine both into one test:
def test_create_and_update_user(self):
    ....create and update user
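A rough sketch of what that combined test might look like with APITestCase; the /users/ endpoint, payload fields, and response shape here are assumptions for illustration, not taken from the question:

from rest_framework import status
from rest_framework.test import APITestCase


class UserTests(APITestCase):
    def setUp(self):
        # Payloads only; no objects are created here.
        self.create_payload = {"username": "alice", "password": "s3cret"}
        self.update_payload = {"username": "alice-renamed"}

    def test_create_and_update_user(self):
        # Create first, then update the object created in the same test,
        # so the test does not depend on any other test having run.
        created = self.client.post("/users/", self.create_payload, format="json")
        self.assertEqual(created.status_code, status.HTTP_201_CREATED)

        url = f"/users/{created.data['id']}/"  # assumes the response includes an id
        updated = self.client.patch(url, self.update_payload, format="json")
        self.assertEqual(updated.status_code, status.HTTP_200_OK)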
Tests in Django can be executed in parallel to minimize the time it takes to run the whole suite, so you shouldn't rely on one test running before another.
As you said above, if you want to share setup code between tests, it has to go in the setUp method:
def setUp(self):
    pass

py.test and Django DB access except for one test class

It's clear to me how to give database access to all my tests without the django_db marker.
But I would prefer (or even need) to have several test classes without DB access.
How can I exclude classes or methods when enable_db_access_for_all_tests is active for all tests?
Is there a decorator like @pytest.mark.no_django_db, or some other possible solution?
The most flexible solution for marking your tests would be to use the pytest_collection_modifyitems hook in your conftest.py and selectively add a marker for those tests where you need db access. This is an example that traverses all the collected tests and adds a marker to them:
def pytest_collection_modifyitems(config, items):
    # Do some filtering to items
    for item in items:
        item.add_marker('django_db')
It's safe to drop an import pdb; pdb.set_trace() (or any other debugging tool at your disposal) into the hook to check what each item looks like.
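To get at the "exclude some classes or methods" part of the question, here is a hedged variation of the same hook that skips anything carrying a custom marker; no_db is just a name chosen for illustration, not a built-in pytest-django marker:

import pytest


def pytest_collection_modifyitems(config, items):
    for item in items:
        # Leave tests that explicitly opted out with @pytest.mark.no_db untouched.
        if item.get_closest_marker("no_db"):
            continue
        item.add_marker(pytest.mark.django_db)

You would also register no_db under the markers setting in pytest.ini (or the equivalent pyproject/setup.cfg section) so pytest doesn't warn about an unknown marker.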

django changes in tests.py don't reflect in models.py

I have a recurring problem when testing my app: whenever I change or create() any object from within tests.py, these changes can't be found in models.py, and that happens within the same test.
pseudocode:
tests.py:
def test_something(self):
    ...
    Norm.objects.create(...)
    self.player_a.print_all_norms()
    ...
models.py:
def print_all_norms(self):
    a = Norm.objects.all()
    print(a)
    # prints [], the Norm object created in tests.py wasn't found
    return
EDIT:
Clarification - I can't find the object within the test that created it.
A Norm object is created inside test_something(), which calls a function inside models.py.
When the function tries to find the previously created object using Norm.objects.all(), it fails, the test resumes, and then the test fails as well.
Testing uses a temporary database, as documented in the test database docs, so after the test run is complete you won't be able to find those objects through the model manager.
Is it not finding the object within the test or when you try to find it after executing the test?
If it's not finding it in the test, try making sure you have the proper permissions (as mentioned in the test db docs).
If you want to load predetermined values into the database on a consistent basis, outside of the test code itself, you may want to look at using fixtures.
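For the fixtures route, a minimal sketch, assuming the Norm model from the question and a hypothetical norms.json file in the app's fixtures/ directory:

from django.test import TestCase

# from myapp.models import Norm  # wherever Norm lives in your project


class NormFixtureTests(TestCase):
    # Loaded into the temporary test database before each test runs.
    fixtures = ["norms.json"]

    def test_fixture_norms_are_visible(self):
        self.assertTrue(Norm.objects.exists())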