pytest.mark.parametrize with django.test.SimpleTestCase - django

I am using pytest 3.2.2 and Django 1.11.5 on Python 3.6.2 on Windows.
The following code
import django.test
import pytest

class ParametrizeTest:
    @pytest.mark.parametrize("param", ["a", "b"])
    def test_pytest(self, param):
        print(param)
        assert False
works as expected:
scratch_test.py::ParametrizeTest::test_pytest[a] FAILED
scratch_test.py::ParametrizeTest::test_pytest[b] FAILED
But as soon as I change it to use Django's SimpleTestCase,
like this:
class ParametrizeTest(django.test.SimpleTestCase):
...
it fails with
TypeError: test_pytest() missing 1 required positional argument: 'param'
Can anybody explain why? And what to do against it?
(I actually even need to use django.test.TestCase and access the database.)
I have the following pytest plugins installed:
plugins: random-0.2, mock-1.6.2, django-3.1.2, cov-2.5.1
but turning any one of them (or all of them) off via -p no:random etc. does not help.

The Django test class is a unittest.TestCase subclass.
Parametrization is unsupported, and this is documented in the pytest docs under the section "pytest features in unittest.TestCase subclasses":
The following pytest features do not work, and probably never will due to different design philosophies:
Fixtures (except for autouse fixtures)
Parametrization
Custom hooks
If you need parametrized tests and pytest runner, your best bet is to abandon the unittest style - this means move the setup/teardown into fixtures (pytest-django plugin has already implemented the hard parts for you), and use module level functions for your tests.
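Concretely, the recommendation above looks something like this (a minimal sketch; the fixture name and its body are made up for illustration, standing in for whatever setup/teardown your unittest class had):

```python
import pytest

@pytest.fixture
def setup_data():
    # hypothetical replacement for unittest's setUp(): prepare state here
    data = {"ready": True}
    yield data
    # teardown code would go after the yield

@pytest.mark.parametrize("param", ["a", "b"])
def test_pytest(setup_data, param):
    # module-level function: parametrization and fixtures both work here
    assert setup_data["ready"]
    assert param in ("a", "b")
```

With pytest-django, database access would then come from the `django_db` marker rather than from inheriting `TestCase`.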

Use @pytest.mark.django_db
Thanks, wim, for that helpful answer. RTFM, once again.
For clarity, here is the formulation that will work (equivalent to a test inheriting from TestCase, not just SimpleTestCase).
Make sure you have pytest-django installed and then do:
import pytest

@pytest.mark.django_db
class ParametrizeTest:
    @pytest.mark.parametrize("param", ["a", "b"])
    def test_pytest(self, param):
        print(param)
        assert False
(BTW: Funnily, one reason why I originally decided to use pytest was that
the idea of using plain test functions instead of test methods appealed to me;
I like lightweight approaches.
But now I almost exclusively use test classes and methods anyway,
because I prefer the explicit grouping of tests they provide.)

Related

assertHTMLEqual() via pytest

I prefer pytest-django to the Django way of testing.
It works fine, except that I don't know how to assertHTMLEqual() via pytest.
How can I assert that two HTML snippets are almost equal?
Per the docs here it says:
All of Django’s TestCase Assertions are available in pytest_django.asserts
Taking a quick look at the source code here we can see that it imports everything from SimpleTestCase which assertHTMLEqual is a part of.

Pytest-Variables: how to use them without funcarg?

I want to pass a JSON file with a testbed definition to pytest.
My test cases are implemented inside a unittest class and need to use the JSON file passed via the pytest CLI.
I tried to use pytest-variables to pass the JSON to pytest; I then want to use it as a dictionary inside my tests.
To be clearer, my test is launched with this command:
pytest -s --variables ../testbeds/testbed_SQA_252.json TC_1418.py
I know unittest cannot accept external arguments, but a technique to work around this constraint would be very useful.
CASE 1 -- test implemented as functions ---> OK
def test_variables(variables):
    print(variables)
In this case the output is correct and the JSON is printed to the CLI.
CASE 2 -- test implemented as a unittest class ---> KO
class TC_1418(unittest.TestCase):
    def setUp(self, variables):
        print(variables)
    # ...other methods
I obtain the following error:
TypeError: setUp() missing 1 required positional argument: 'variables'
Any Idea?
Your issue comes from mixing up concepts of pytest (e.g. injection of fixtures like variables) with concepts of unittest.TestCase. While pytest supports running tests based on unittest, I'm afraid that injection of plugins' fixtures into test methods is not supported.
There is a workaround though that takes advantage of fixtures being injected into other fixtures and making custom fixtures available in unittest.TestCase with #pytest.mark.usefixtures decorator:
# TC_1418.py
import pytest
import unittest

@pytest.fixture(scope="class")
def variables_injector(request, variables):
    request.cls.variables = variables

@pytest.mark.usefixtures("variables_injector")
class Test1418(unittest.TestCase):
    def test_something(self):
        print(self.variables)
Notice that the name of the class starts with Test so as to follow conventions for test discovery.
If you don't want to go into this travesty, I propose you rather fully embrace pytest and make your life easier with the simple test functions you have already discovered, or with properly structured test classes:
# TC_1418.py
class Test1418:
    def test_something(self, variables):
        print(variables)

Cause test failure from pytest autouse fixture

pytest allows the creation of fixtures that are automatically applied to every test in a test suite (via the autouse keyword argument). This is useful for implementing setup and teardown actions that affect every test case. More details can be found in the pytest documentation.
In theory, the same infrastructure would also be very useful for verifying post-conditions that are expected to exist after each test runs. For example, maybe a log file is created every time a test runs, and I want to make sure it exists when the test ends.
Don't get hung up on the details, but I hope you get the basic idea. The point is that it would be tedious and repetitive to add this code to each test function, especially when autouse fixtures already provide infrastructure for applying this action to every test. Furthermore, fixtures can be packaged into plugins, so my check could be used by other packages.
The problem is that it doesn't seem to be possible to cause a test failure from a fixture. Consider the following example:
import pytest

@pytest.fixture(autouse=True)
def check_log_file():
    # Yielding here runs the test itself
    yield
    # Now check whether the log file exists (as expected)
    if not log_file_exists():
        pytest.fail("Log file could not be found")
In the case where the log file does not exist, I don't get a test failure. Instead, I get a pytest error. If there are 10 tests in my test suite, and all of them pass, but 5 of them are missing a log file, I will get 10 passes and 5 errors. My goal is to get 5 passes and 5 failures.
So the first question is: is this possible? Am I just missing something? This answer suggests to me that it is probably not possible. If that's the case, the second question is: is there another way? If the answer to that question is also "no": why not? Is it a fundamental limitation of pytest infrastructure? If not, then are there any plans to support this kind of functionality?
In pytest, a yield-ing fixture has the first half of its definition executed during setup and the latter half executed during teardown. Further, setup and teardown aren't considered part of any individual test and thus don't contribute to its failure. This is why you see your exception reported as an additional error rather than a test failure.
On a philosophical note, as (cleverly) convenient as your attempted approach might be, I would argue that it violates the spirit of test setup and teardown and thus even if you could do it, you shouldn't. The setup and teardown stages exist to support the execution of the test—not to supplement its assertions of system behavior. If the behavior is important enough to assert, the assertions are important enough to reside in the body of one or more dedicated tests.
If you're simply trying to minimize the duplication of code, I'd recommend encapsulating the assertions in a helper method, e.g., assert_log_file_cleaned_up(), which can be called from the body of the appropriate tests. This will allow the test bodies to retain their descriptive power as specifications of system behavior.
AFAIK it isn't possible to tell pytest to treat errors in a particular fixture as test failures.
I also have a case where I would like to use a fixture to minimize test code duplication, but in your case pytest-dependency may be the way to go.
Moreover, test dependencies aren't necessarily bad for non-unit tests. Be careful with autouse, though, because it makes tests harder to read and debug; explicit fixtures in the test function header at least give you some directions for finding the executed code.
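For reference, pytest-dependency lets one test declare that it only makes sense if another test passed (a sketch assuming the plugin is installed; the test names and bodies are made up):

```python
import pytest

@pytest.mark.dependency()
def test_creates_log_file():
    # the "setup" behavior, expressed as a test in its own right
    assert True

@pytest.mark.dependency(depends=["test_creates_log_file"])
def test_log_file_contents():
    # automatically skipped by the plugin if the test above failed
    assert True
```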
I prefer using context managers for this purpose:
from contextlib import contextmanager

@contextmanager
def directory_that_must_be_clean_after_use():
    directory = set()
    yield directory
    assert not directory

def test_foo():
    with directory_that_must_be_clean_after_use() as directory:
        directory.add("file")
If you absolutely can't afford to add this one line to every test, it's easy enough to write this as a plugin.
Put this in your conftest.py:
import pytest

directory = set()

# register the marker so that pytest doesn't warn you about unknown markers
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "directory_must_be_clean_after_test: the name says it all")

# this is going to be run on every test
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    directory.clear()
    yield
    if item.get_closest_marker("directory_must_be_clean_after_test"):
        assert not directory
And add the according marker to your tests:
# test.py
import pytest
from conftest import directory

def test_foo():
    directory.add("foo file")

@pytest.mark.directory_must_be_clean_after_test
def test_bar():
    directory.add("bar file")
Running this will give you:
fail.py::test_foo PASSED
fail.py::test_bar FAILED
...
> assert not directory
E AssertionError: assert not {'bar file'}
conftest.py:13: AssertionError
You don't have to use markers, of course, but these allow controlling the scope of the plugin. You can have the markers per-class or per-module as well.

Testing backend code in Django

I am writing an authentication backend in Django to log in only a few users.
It is in a folder called restrictedauthentification/ at the root of my Django project. (I wrote it for this specific project.)
It has two files in it: backend.py and tests.py.
In the latter file, I have written some tests for it.
But I can't run them with the command ./manage.py test because the folder isn't an installed app.
Any ideas how I could run them?
Okay, I found a solution that keeps me from turning my backend into a module.
Something that I didn't understand and that could help some beginners: in Python, a test cannot run itself. It needs to be executed by a TestRunner.
Now, one could use the TextTestRunner bundled with Python, which executes the tests and shows the results on the standard output, but when testing with Django one needs to do one thing before and one after the test: call the functions setup_test_environment() and teardown_test_environment().
So I just created a class that inherits from TextTestRunner and redefines its method run() so that it executes the two functions provided by Django.
Here it is :
from restrictedauthentification.tests import TestRestrictedAuthentification
from django.test.utils import setup_test_environment, teardown_test_environment
from unittest import TextTestRunner

class DeadSimpleDjangoTestRunner(TextTestRunner):
    def run(self, test):
        setup_test_environment()
        result = super().run(test)
        teardown_test_environment()
        return result

Custom test suite for django app

I have a pretty complex Django app with the following structure.
/myapp
/myapp/obj1/..
/myapp/obj1/views.py
/myapp/obj1/forms.py
/myapp/obj2/..
/myapp/obj2/views.py
/myapp/obj2/forms.py
/myapp/tests/..
/myapp/tests/__init__.py
/myapp/tests/test_obj1.py
/myapp/tests/test_obj2.py
I have a lot more objects. In /myapp/tests/__init__.py I import the TestCase subclasses from test_obj1.py and test_obj2.py, and that is enough to run all available tests.
What I'm trying to do is to create a custom test suite. According to the documentation:
There is a second way to define the test suite for a module: if you
define a function called suite() in either models.py or tests.py, the
Django test runner will use that function to construct the test suite
for that module. This follows the suggested organization for unit
tests. See the Python documentation for more details on how to
construct a complex test suite.
So, I've created this function like this:
def suite():
    suite = unittest.TestSuite()
    suite.addTest(TestObj1Form())
    suite.addTest(TestObj2Form())
    return suite
However, when I run the tests I get this error: ValueError: no such test method in <class 'myproject.myapp.tests.test_obj1.TestObj1Form'>: runTest. Of course I can define this method, but then running the tests will invoke only that method and ignore all of the test* methods.
Any suggestions how to create a custom test suite for django app properly? I've googled and I found nothing about that.
You should add all your tests with a special function:
suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestObj1Form))
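Putting it together, suite() built with a TestLoader picks up every test* method automatically (a sketch with a stand-in TestCase in place of the real form tests):

```python
import unittest

class TestObj1Form(unittest.TestCase):
    # stand-in for the real form tests, just to show the loader at work
    def test_valid(self):
        self.assertTrue(True)

    def test_invalid(self):
        self.assertFalse(False)

def suite():
    loader = unittest.TestLoader()
    s = unittest.TestSuite()
    # loadTestsFromTestCase builds one test per test* method,
    # which is what addTest(TestObj1Form()) failed to do
    s.addTests(loader.loadTestsFromTestCase(TestObj1Form))
    return s
```

Here suite().countTestCases() reports 2, one per test* method, instead of raising the runTest error.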