Django DRF APITestCase chain test cases

For example, I want to write several test cases like this:
class Test(APITestCase):
    def setUp(self):
        # ...some payloads

    def test_create_user(self):
        # ...create the object using the payload from setUp

    def test_update_user(self):
        # ...update the object created in the test above
In the example above, test_update_user fails because, say, it cannot find the user object. Therefore, for that test case to work, I have to create the user again inside test_update_user.
One possible solution I found is to create the user in setUp. However, I would like to know if there is a way to chain test cases so they run one after another without deleting the objects created by the previous test case.

REST framework's test helpers extend Django's existing test framework and improve support for making API requests.
Therefore all tests for DRF calls are executed with Django's built-in test framework.
An important principle of unit testing is that each test should be independent of all the others. If in your case the code in test_create_user must come before test_update_user, then you could combine both into one test:
def test_create_and_update_user(self):
    # ...create and update the user
Django can also run tests in parallel (with the --parallel option) to minimize the time it takes to run the whole suite, which is another reason tests must not rely on execution order.
As you said, if you want to share setup code between tests, it has to go in the setUp method:
def setUp(self):
    pass
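The combine-into-one-test advice above can be sketched with a self-contained example. Since a runnable snippet needs no real database, the UserStore class below is a hypothetical in-memory stand-in for the API endpoints; in a real APITestCase you would issue self.client.post(...) calls instead:

```python
import unittest

class UserStore:
    """Hypothetical in-memory stand-in for the user API endpoints."""
    def __init__(self):
        self.users = {}

    def create(self, username):
        self.users[username] = {"username": username}
        return self.users[username]

    def update(self, username, **fields):
        self.users[username].update(fields)
        return self.users[username]

class UserFlowTest(unittest.TestCase):
    def setUp(self):
        # payloads, as in the question
        self.store = UserStore()
        self.payload = {"username": "alice"}

    def test_create_and_update_user(self):
        # Both steps live in one test, so the update step never depends
        # on state left behind by a different test method.
        user = self.store.create(self.payload["username"])
        self.assertEqual(user["username"], "alice")
        updated = self.store.update("alice", email="alice@example.com")
        self.assertEqual(updated["email"], "alice@example.com")
```

Because create and update share one test method, the ordering is explicit and no test relies on another's leftover state.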

Related

Django unit testing: Separating unit tests without querying the database multiple times

I have a pair of tests like this:
# Make sure a task's deletion status initializes as None
def test_initial_task_deletion_status_is_none(self):
    unfinished_task = Task.objects.get(tasked_with="Unfinished Test")
    self.assertIsNone(unfinished_task.delete_status)

# Make sure a task's deletion status changes appropriately
def test_unfinished_task_deletion_status_updates_appropriately(self):
    unfinished_task = Task.objects.get(tasked_with="Unfinished Test")
    unfinished_task.timed_delete(delta=.1)
    self.assertIs(unfinished_task.delete_status, "Marked for Deletion")
This will go on, but I'll have unfinished_task = Task.objects.get(tasked_with="Unfinished Test") at the beginning of every one. Is there a way to split these types of things into separate tests, but use the same query result?
Assuming you're using Django's testing framework, then you can do this using setUp().
See the unittest.TestCase.setUp() documentation for more details.
So your updated snippet would look like:
from django.test import TestCase

class MyTestCase(TestCase):
    def setUp(self):
        self.unfinished_task = Task.objects.get(tasked_with="Unfinished Test")

    # Make sure a task's deletion status initializes as None
    def test_initial_task_deletion_status_is_none(self):
        self.assertIsNone(self.unfinished_task.delete_status)

    # Make sure a task's deletion status changes appropriately
    def test_unfinished_task_deletion_status_updates_appropriately(self):
        self.unfinished_task.timed_delete(delta=.1)
        self.assertIs(self.unfinished_task.delete_status, "Marked for Deletion")
You can place the repeated line in the setUp method, and that will make your code less repetitive, but as DanielRoseman pointed out, it will still be run for each test, so you won't be using the same query result.
You can place it in the setUpTestData method instead, and it will be run only once, before all the tests in MyTestCase, but then your unfinished_task object is class-level state shared across all the tests. In-memory modifications made to the object during one test can carry over into subsequent tests, and that is not what you want. (Note that since Django 3.2, attributes assigned in setUpTestData are deep-copied for each test, which isolates such in-memory modifications.)
For read-only tests, setUpTestData is a good way to cut out unnecessary queries, but if you're going to modify the objects, you'll want to start fresh each time.

py.test and Django DB access except for one test class

It's clear to me "How can I give database access to all my tests without the django_db marker?"
But I would prefer/need to have several test classes without DB access.
How can I exclude classes or methods when enable_db_access_for_all_tests is active for all tests?
Is there a decorator like @pytest.mark.no_django_db, or some other possible solution?
Thanks!
The most flexible solution for marking your tests is to use the pytest_collection_modifyitems hook in your conftest.py and selectively add a marker to those tests that need DB access. This example traverses all the collected tests and adds a marker to each of them.
def pytest_collection_modifyitems(config, items):
    # Do some filtering on items here
    for item in items:
        item.add_marker('django_db')
It's safe to use import pdb; pdb.set_trace() or any other debugging tool at your disposal inside the hook to check what each item looks like.
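Here is a sketch of how the hook might do the filtering the comment hints at. The no_db naming convention used to opt tests out is an assumption for illustration, not a pytest-django feature; note that add_marker() also accepts a marker name as a string:

```python
def wants_db(test_name):
    # Assumed convention: tests with "no_db" in their name opt out
    # of database access; everything else gets the marker.
    return "no_db" not in test_name

def pytest_collection_modifyitems(config, items):
    for item in items:
        if wants_db(item.name):
            # A string marker name is accepted here,
            # equivalent to pytest.mark.django_db.
            item.add_marker("django_db")
```

With this in conftest.py you no longer need the autouse enable_db_access_for_all_tests fixture, and the opted-out tests keep running without DB access.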

independence of individual test methods in APITest class in Django Rest Framework

I followed the testing tutorial in the APITestCase documentation on the DRF site, but I couldn't find answers to some of my doubts in the DRF documentation.
I have an APITestCase subclass like this:
class GroupTest(APITestCase):
    def setUp(self):
        ...

    def tearDown(self):
        ...

    def test_case_A(self):
        # I create a group here,
        # but I don't delete the group object in case A
        ...

    def test_case_B(self):
        # Will the group object from case A exist in case B?
        # Are the different test methods in an APITestCase independent?
        ...
If I have two test cases in the GroupTest class, are they independent? Will a group object created in case A affect case B?
No, each test runs on a clean database. If you need some entities in the DB, add them in setUp (they will be available across all test cases in the class), or directly in the test case.
After each test case's execution, all changes are rolled back. If you have other changes that need undoing (for example, you create some files), do that in tearDown.
Tests are a good place for experiments. It's easy and fun to write some temporary tests to check your assumptions.
For example, to get the answer to your question, you can write two simple test cases, each of which creates an instance and checks whether the instance created in the other test exists (use print() calls to see what's going on).
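That experiment can also be sketched without a database: the setUp method below plays the role of Django's per-test rollback by giving every test fresh state. The class and test names are made up; with APITestCase you would create a Group in one test and check Group.objects.exists() in the other:

```python
import unittest

class IsolationProbe(unittest.TestCase):
    def setUp(self):
        # Fresh state for every test, playing the role of
        # Django's per-test database rollback.
        self.groups = []

    def test_a_creates_group(self):
        self.groups.append("group-from-A")
        self.assertEqual(len(self.groups), 1)

    def test_b_sees_clean_state(self):
        # Nothing created in test A is visible here.
        self.assertEqual(self.groups, [])
```

Both tests pass regardless of execution order, which is exactly the independence the answer describes.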

Google test framework - Dependency between test cases

I am new to the Google Test framework and still going through a lot of material to use it to its full extent.
Is there any way I can specify a relation between test cases so that they are executed conditionally? Say I have two tests; can I run the second test only if the first succeeds? I am not really sure this falls under the original rule of testing 'units', but I was just wondering if it's possible.
There is no way to do it in the source. A possible solution is to use shell scripts and run the tests using filters.
Python example:
import sys
from subprocess import call

def runTest(pattern):
    return call(['./test', '--gtest_filter=%s' % pattern])

if runTest('FirstPriorityTestPattern') == 0:
    sys.exit(runTest('SecondPriorityTestPattern'))
sys.exit(1)

Django tests reliant on other pages/behaviour

I've started writing some tests for my Django app and I'm unsure how best to structure the code.
Say I have a register page and a page for logged in users only.
My first plan was to have an earlier test method perform the registration and a later method use that login to test the page:
def test_register_page(self):
    # send a request to the register page and check the user was registered correctly

def test_restricted_page(self):
    c = Client()
    c.login(username="someUser", password="pass")
    c.post("/someRestrictedPage/")
    # Test the response
However, this means that one of my tests now relies on the other.
The alternative I see is calling register in setUp(), but this still means the restricted-page test relies on the register page working.
I could try creating a new user manually in setUp, which I also don't like, because then I'm not testing a user created by the system.
What is the usual pattern for testing this kind of situation?
You are trying to mix a lot of different functionality together in one test case. A clean design would have one test case
for user registration and
one for the view.
Having them depend on each other will introduce a lot of dependencies between them, and if a test fails, the error will be even harder to debug. The success of the registration test should be determined by the correct creation of the user instance (so check the necessary attributes etc. of the user), not by being able to log in on a certain page. Therefore you will need to set up a "correct" user instance for the view test case. This may seem a bit more complicated than necessary, but it will make future maintenance a lot easier.
What you are trying to do is more like an integration test, which tests a whole system; but before that you should split up your system into functional units and write unit tests for those units!
The smaller and better-defined the individual tests are, the easier their maintenance and debugging will be.
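The split suggested above can be sketched as two fully independent test cases. The Auth class here is a hypothetical stand-in for the register view and auth backend; in Django you would instead create the user directly with User.objects.create_user() in setUp rather than going through the register page:

```python
import unittest

class Auth:
    """Hypothetical stand-in for the register view / auth backend."""
    def __init__(self):
        self.users = {}

    def register(self, username, password):
        self.users[username] = password
        return username in self.users

    def login(self, username, password):
        return self.users.get(username) == password

class RegisterPageTest(unittest.TestCase):
    def test_register_creates_user(self):
        # Success is judged by the created user record itself,
        # not by logging in somewhere afterwards.
        auth = Auth()
        self.assertTrue(auth.register("someUser", "pass"))

class RestrictedPageTest(unittest.TestCase):
    def setUp(self):
        # Create the user directly rather than via the register page,
        # so this test no longer depends on registration working.
        self.auth = Auth()
        self.auth.users["someUser"] = "pass"

    def test_login_grants_access(self):
        self.assertTrue(self.auth.login("someUser", "pass"))
```

Either test case can now fail or be deleted without affecting the other, which is the maintenance benefit the answer describes.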