HTML report for django tests - django

I have a Django project containing an API (created with rest framework, if that counts for anything). I have added some tests for the API, but in order to have an overall view of the tests, whether passing, failing, or missing, I need to create an HTML report.
When the tests are finished, an HTML table report should be generated which shows the endpoints and HTTP responses covered during the tests, the results of those tests, plus the combinations which are missing tests.
Unfortunately I cannot understand how I should do that. I know that coverage can give me a detailed HTML report, but that's not what I need. I need something like this:
| Endpoint description | 200 | 400 | 403 | 404 |
| --- | --- | --- | --- | --- |
| GET /endpoint1 | PASS | PASS | PASS | N/A |
| POST /endpoint1 | PASS | FAIL | MISSING | N/A |
Does anybody have any idea about that? Maybe some libs that could help out with that, or what strategy should I use?
Thank you in advance

Late to the party, but this is my solution for outputting an HTML test report for Django tests (based on "HtmlTestRunner cannot be directly used with Django DiscoverRunner").
The following classes if placed in tests/html_test_reporter.py can be used as a DiscoverRunner which is patched to use HTMLTestRunner.
from django.test.runner import DiscoverRunner
from HtmlTestRunner import HTMLTestRunner

class MyHTMLTestRunner(HTMLTestRunner):
    def __init__(self, **kwargs):
        # Pass any required options to HTMLTestRunner
        super().__init__(combine_reports=True, report_name='all_tests', add_timestamp=False, **kwargs)

class HtmlTestReporter(DiscoverRunner):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Patch over the test_runner in the superclass.
        self.test_runner = MyHTMLTestRunner
Then this is run with:
python manage.py test -v 2 --testrunner tests.html_test_reporter.HtmlTestReporter
By default, Django projects use django.test.runner.DiscoverRunner to discover tests and then run them with unittest's TextTestRunner. HTMLTestRunner is a replacement for TextTestRunner that outputs an HTML test report, but DiscoverRunner doesn't expose an option to swap it in directly, hence the patching above.
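If you want this reporter to be the default, so you don't have to pass --testrunner on every invocation, you should also be able to point Django's TEST_RUNNER setting at it:

# settings.py
TEST_RUNNER = 'tests.html_test_reporter.HtmlTestReporter'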
Hope this helps.

As Django uses Python's standard unittest library, you'll have to tweak some of its parts.
First, you'll need some way to specify which tests actually test which endpoint. A custom decorator is handy for that:
from functools import wraps

def endpoint(path, code):
    """
    Mark a test as one which tests a specific endpoint.
    """
    def inner(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        # Store the metadata on the wrapper that replaces the test method,
        # so it can be found on the method later.
        wrapper._endpoint_path = path
        wrapper._endpoint_code = code
        return wrapper
    return inner
class MyTestCase(TestCase):

    @endpoint(path='/path/one', code=200)
    def test_my_path_is_ok(self):
        response = self.client.get('/path/one?foo=bar')
        self.assertEqual(response.status_code, 200)

    @endpoint(path='/path/one', code=404)
    def test_my_path_expected_errors(self):
        response = self.client.get('/path/one?foo=qux')
        self.assertEqual(response.status_code, 404)

    def test_some_other_stuff(self):
        # this one will not be included in our results grid.
        pass
You could use a "magical" approach (e.g. special methods' names to guess the endpoint they are testing) instead, but explicit is better than implicit, right?
Then you need a way to collect the results of your tests - specifically, of those which test the endpoints. Here we make a (very rough) subclass of unittest's TextTestResult to handle it (TextTestRunner expects its result class to know how to print errors, so we extend TextTestResult rather than the bare TestResult):
from unittest import TextTestResult

class EndpointsTestResult(TextTestResult):
    def __init__(self, *args, **kwargs):
        super(EndpointsTestResult, self).__init__(*args, **kwargs)
        self.endpoint_results = {}

    def _record(self, test, outcome):
        # The decorator stores its attributes on the test method, so look
        # the bound method up by name on the test case instance.
        method = getattr(test, test._testMethodName, None)
        if hasattr(method, '_endpoint_path'):
            branch = self.endpoint_results.setdefault(method._endpoint_path, {})
            branch[method._endpoint_code] = outcome

    def addError(self, test, err):
        super(EndpointsTestResult, self).addError(test, err)
        self._record(test, 'FAIL')  # an error still means the combination didn't pass

    def addFailure(self, test, err):
        super(EndpointsTestResult, self).addFailure(test, err)
        self._record(test, 'FAIL')

    def addSuccess(self, test):
        super(EndpointsTestResult, self).addSuccess(test)
        self._record(test, 'PASS')
Finally, it's time to actually output our results. Let's make a subclass of unittest.TextTestRunner and specify it in our custom Django runner:
import django.test.runner
from unittest import TextTestRunner

class EndpointsTestRunner(TextTestRunner):
    resultclass = EndpointsTestResult

    def run(self, test):
        result = super(EndpointsTestRunner, self).run(test)
        # After running the tests, print out the table
        generate_a_nifty_table(result.endpoint_results)
        return result

class EndpointsDjangoRunner(django.test.runner.DiscoverRunner):
    test_runner = EndpointsTestRunner
Now we have our custom EndpointsDjangoRunner, and we should specify it in settings.py:
TEST_RUNNER = 'path.to.the.EndpointsDjangoRunner'
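generate_a_nifty_table is left undefined above; a minimal sketch of such a helper (a hypothetical name, writing the grid as a bare HTML table) could look like this:

def generate_a_nifty_table(endpoint_results, outfile='endpoint_report.html'):
    # Header row: every status code seen across all endpoints.
    codes = sorted({code for results in endpoint_results.values() for code in results})
    rows = ['<tr><th>Endpoint</th>%s</tr>' % ''.join('<th>%s</th>' % c for c in codes)]
    for path, results in sorted(endpoint_results.items()):
        cells = ''.join('<td>%s</td>' % results.get(code, 'N/A') for code in codes)
        rows.append('<tr><td>%s</td>%s</tr>' % (path, cells))
    with open(outfile, 'w') as f:
        f.write('<table>\n%s\n</table>\n' % '\n'.join(rows))

Note that untested codes simply show up as N/A here; to distinguish MISSING from N/A you'd need to declare the full matrix of endpoint/code combinations you expect and compare it against what was actually recorded.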
That's it. Please let me know if you spot any awkward errors in the code.

Related

pytest + django giving me a database error when fixture scope is 'module'

I have the following inside conftest.py
@pytest.mark.django_db
@pytest.fixture(scope='module')
def thing():
    print('sleeping')  # represents a very expensive function that I want to run only once per module
    Thing.objects.create(thing='hello')
    Thing.objects.create(thing='hello')
    Thing.objects.create(thing='hello')
Inside tests.py
@pytest.mark.django_db
def test_thing(thing):
    assert models.Thing.objects.count() > 1

@pytest.mark.django_db
def test_thing2(thing):
    assert models.Thing.objects.count() > 1

@pytest.mark.django_db
@pytest.mark.usefixtures('thing')
def test_thing3():
    assert models.Thing.objects.count() > 1
All three tests throw the same error: RuntimeError: Database access not allowed, use the "django_db" mark, or the "db" or "transactional_db" fixtures to enable it.
I've tried using scope='session' / scope='class' / scope='package' / scope='module' -- the only one that works is scope='function', which defeats the purpose of what I'm trying to accomplish. I want to be able to create all these items ONCE per module, not once per test.
Note: I ran into this issue with a large code base and created a new django project with a single app to test and see if the problem was the existing test code, and it failed on a standalone test also. Tested it with both postgres and sqlite; doesn't seem like a database issue.
Not that it matters, but here's models.py:
class Thing(models.Model):
    thing = models.CharField(max_length=100)
OK, turns out this is a known limitation, and it's somewhat documented here. If you want to work around it:
@pytest.fixture(scope='module')
def thing(django_db_setup, django_db_blocker):
    del django_db_setup  # requested only so the database gets set up; usefixtures(...) won't work for this
    with django_db_blocker.unblock():
        print('sleeping')
        Thing.objects.create(thing='hello')
        Thing.objects.create(thing='hello')
        Thing.objects.create(thing='hello')
        Thing.objects.create(thing='hello')
    yield
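One caveat: rows created under django_db_blocker.unblock() are committed outside the per-test transaction, so they won't be rolled back automatically. A sketch of module-level cleanup after the yield (the model import path is an assumption; adjust to your project):

import pytest
from myapp.models import Thing  # assumed import path

@pytest.fixture(scope='module')
def thing(django_db_setup, django_db_blocker):
    del django_db_setup
    with django_db_blocker.unblock():
        things = [Thing.objects.create(thing='hello') for _ in range(3)]
    yield things
    # Clean up once per module, again outside the blocked state.
    with django_db_blocker.unblock():
        for t in things:
            t.delete()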

Why does mock patch only work when running specific test and not whole test suite?

I'm using Django, with pytest to run the test suite, and I'm trying to test that a specific form shows up with the expected data when a user hits the site (an integration test).
This particular view uses a stored procedure, which I am mocking since the test would never have access to that.
My test code looks like this:
# test_integrations.py
from my_app.tests.data_setup import setup_data, setup_sb7_data
from unittest.mock import patch
...

# Setup to use a non-headless browser so we can see what's happening for debugging
@pytest.mark.usefixtures("standard_browser")
class SeniorPageTestCase(StaticLiveServerTestCase):
    """
    These tests surround the senior form
    """

    @classmethod
    def setUpClass(cls):
        cls.host = socket.gethostbyname(socket.gethostname())
        super(SeniorPageTestCase, cls).setUpClass()

    def setUp(self):
        # setup the dummy data - this works fine
        basic_setup(self)
        # setup the 'results'
        self.sb7_mock_data = setup_sb7_data(self)

    @patch("my_app.utils.get_employee_sb7_data")
    def test_senior_form_displays(self, mock_sb7_get):
        # login the dummy user we created
        login_user(self, "futureuser")
        # setup the results
        mock_sb7_get.return_value = self.sb7_mock_data
        # hit the page for the form
        self.browser.get(self.live_server_url + "/my_app/senior")
        form_id = "SeniorForm"
        # assert that the form displays on the page
        self.assertTrue(self.browser.find_element_by_id(form_id))
# utils.py
from django.conf import settings
from django.db import connections

def get_employee_sb7_data(db_name, user_number, window):
    """
    Executes the stored procedure for getting employee data
    Args:
        db_name (str): the alias of the DB connection to use
        user_number: the user_number to look up
        window: object whose senior_close is passed to the procedure
    Returns:
        A list of dicts, one per row of the result set.
    """
    cursor = connections[db_name].cursor()
    cursor.execute(
        'exec sp_sb7 %s, "%s"' % (user_number, window.senior_close)
    )
    columns = [col[0] for col in cursor.description]
    results = [dict(zip(columns, row)) for row in cursor.fetchall()]
    return results
# views.py
from my_app.utils import (
    get_employee_sb7_data,
)
...

###### Senior ######
@login_required
@group_required("user_senior")
def senior(request):
    # Additional logic / getting other models here
    # Execute stored procedure to get data for user
    user_number = request.user.user_no
    results = get_employee_sb7_data("production_db", user_number, window)
    if not results:
        return render(request, "users/senior_not_required.html")
    # Additional view stuff
    return render(
        request,
        "users/senior.html",
        {
            "data": data,
            "form": form,
            "results": results,
        },
    )
If I run this test by itself with:
pytest my_app/tests/test_integrations.py::SeniorPageTestCase
The tests pass without issue. The browser shows up - the form shows up with the dummy data as we would expect and it all works.
However, if I run:
pytest my_app
All other tests run and pass - but all the tests in this class fail because it's not patching the function.
It tries to call the actual stored procedure (which fails because it's not on the production server yet) and it fails.
Why would it patch correctly when I call that TestCase specifically - but not patch correctly when I just run pytest on the app or project level?
I'm at a loss and not sure how to debug this very well. Any help is appreciated
So what's happening is that your views are imported before you patch.
Let's first look at the working case:

1. pytest imports the test_integrations file.
2. The test is executed and the patch decorator's inner function is run.
3. The utils module has not been imported yet, so patch imports it and replaces the function.
4. The test body is executed, which passes a URL to the test client.
5. The test client imports the resolver, which in turn imports the views, which import the utils. Since the utils are already patched, everything works fine.

If another test case that also imports the same views runs first, that import wins: the views module has already bound its own reference to the real function, and patching my_app.utils doesn't rebind it.
Your solution is to patch the symbol the views actually reference. So in test_integrations.py:
#patch("myapp.views.get_employee_sb7_data")

Use pytest fixture in a function decorator

I want to build a decorator for my test functions which has several uses. One of them is helping to add properties to the generated junitxml.
I know there's a built-in pytest fixture for this called record_property that does exactly that. How can I use this fixture inside my decorator?
def my_decorator(arg1):
    def test_decorator(func):
        def func_wrapper():
            # hopefully somehow use record_property with arg1 here
            # do some other logic here
            return func()
        return func_wrapper
    return test_decorator

@my_decorator('some_argument')
def test_this():
    pass  # do actual assertions etc.
I know I can pass the fixture directly into every test function and use it in the tests, but I have a lot of tests and it seems extremely redundant to do this.
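For reference, the direct use being avoided looks like this; record_property(name, value) attaches a property to the test's junitxml entry:

def test_this(record_property):
    record_property("example_key", 1)
    # do actual assertions etc.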
Also, I know I can use conftest.py and create a custom marker and call it in the decorator, but I have a lot of conftest.py files and I don't manage all of them alone so I can't enforce it.
Lastly, trying to import the fixture directly into my decorator module and then using it results in an error - so that's a no-go also.
Thanks for the help
It's a bit late, but I came across the same problem in our code base. I found a solution, but it is rather hacky, so I can't guarantee that it works with older versions or will keep working in the future.
Hence I asked if there is a better solution. You can check it out here: How to use pytest fixtures in a decorator without having it as argument on the decorated function
The idea is basically to register the decorated test functions and then trick pytest into thinking they require the fixture in their argument list:
import functools
from inspect import signature
from typing import Any, List

import pytest
from _pytest.config import Config
from _pytest.nodes import Item

class RegisterTestData:
    # global testdata registry
    testdata_identifier_map = {}  # Dict[str, List[str]]

    def __init__(self, testdata_identifier, direct_import=True):
        self.testdata_identifier = testdata_identifier
        self.direct_import = direct_import
        self._always_pass_my_import_fixture = False

    def __call__(self, func):
        if func.__name__ in RegisterTestData.testdata_identifier_map:
            RegisterTestData.testdata_identifier_map[func.__name__].append(self.testdata_identifier)
        else:
            RegisterTestData.testdata_identifier_map[func.__name__] = [self.testdata_identifier]

        # We need to know if we decorate the original function, or if it was already
        # decorated with another RegisterTestData decorator. This is necessary to
        # determine if the direct_import fixture needs to be passed down or not
        if getattr(func, "_decorated_with_register_testdata", False):
            self._always_pass_my_import_fixture = True
        setattr(func, "_decorated_with_register_testdata", True)

        @functools.wraps(func)
        @pytest.mark.usefixtures("my_import_fixture")  # register the fixture to the test in case it doesn't have it as argument
        def wrapper(*args: Any, my_import_fixture, **kwargs: Any):
            # Because of the signature of the wrapper, my_import_fixture is not part
            # of the kwargs which is passed to the decorated function. In case the
            # decorated function has my_import_fixture in the signature we need to pack
            # it back into the **kwargs. This is always and especially true for the
            # wrapper itself even if the decorated function does not have
            # my_import_fixture in its signature
            if self._always_pass_my_import_fixture or any(
                "my_import_fixture" in p.name for p in signature(func).parameters.values()
            ):
                kwargs["my_import_fixture"] = my_import_fixture
            if self.direct_import:
                my_import_fixture.import_all()
            return func(*args, **kwargs)
        return wrapper

def pytest_collection_modifyitems(config: Config, items: List[Item]) -> None:
    for item in items:
        if item.name in RegisterTestData.testdata_identifier_map and "my_import_fixture" not in item._fixtureinfo.argnames:
            # Hack to trick pytest into thinking my_import_fixture is part of the argument list of the original function
            # Only works because of @pytest.mark.usefixtures("my_import_fixture") in the decorator
            item._fixtureinfo.argnames = item._fixtureinfo.argnames + ("my_import_fixture",)
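Usage then looks something like this (my_import_fixture and the identifier strings are placeholders carried over from the snippet above):

@RegisterTestData("some/testdata/identifier")
def test_with_fixture(my_import_fixture):
    # fixture named explicitly, so the wrapper packs it back into kwargs
    assert my_import_fixture is not None

@RegisterTestData("other/testdata/identifier", direct_import=False)
def test_without_fixture():
    # my_import_fixture is consumed by the wrapper alone; the
    # pytest_collection_modifyitems hook makes pytest supply it anyway
    pass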

Django unittest with legacy database connection

I have a Django project that pulls data from a legacy database (read-only connection) into its own database, and when I run integration tests, it tries to read from test_account on the legacy connection.
(1049, "Unknown database 'test_account'")
Is there a way to tell Django to leave the legacy connection alone for reading from the test database?
I actually wrote something that lets you create integration tests in djenga (available on PyPI) if you want to take a look at how to create a separate integration test framework.
Here is the test runner I use when using the django unit test framework:
from django.test.runner import DiscoverRunner
from django.apps import apps
import sys

class UnManagedModelTestRunner(DiscoverRunner):
    """
    Test runner that uses a legacy database connection for the duration of the test run.
    Many thanks to the Caktus Group: https://www.caktusgroup.com/blog/2013/10/02/skipping-test-db-creation/
    """
    def __init__(self, *args, **kwargs):
        super(UnManagedModelTestRunner, self).__init__(*args, **kwargs)
        self.unmanaged_models = None
        self.test_connection = None
        self.live_connection = None
        self.old_names = None

    def setup_databases(self, **kwargs):
        # override keepdb so that we don't accidentally overwrite our existing legacy database
        self.keepdb = True
        # set the test DB name to the current DB name, which makes this more of an
        # integration test, but HEY, at least it's a start
        # (DATABASES comes from the settings module this runner lives in)
        DATABASES['legacy']['TEST'] = {'NAME': DATABASES['legacy']['NAME']}
        result = super(UnManagedModelTestRunner, self).setup_databases(**kwargs)
        return result
# Set Django's test runner to the custom class defined above
TEST_RUNNER = 'config.settings.test_settings.UnManagedModelTestRunner'
TEST_NON_SERIALIZED_APPS = ['legacy_app']
from django.test import TestCase, override_settings

@override_settings(LOGIN_URL='/other/login/')
class LoginTestCase(TestCase):
    def test_login(self):
        response = self.client.get('/sekrit/')
        self.assertRedirects(response, '/other/login/?next=/sekrit/')
https://docs.djangoproject.com/en/1.10/topics/testing/tools/
You should theoretically be able to use override_settings here, as in the example above, to switch to a different configuration for the test run.

Unit testing twisted.web.client.Agent's without the network

I've not done any Twisted for a couple of years now and have started using the newer Agent style of client HTTP calls. Using Agent has been OK, but testing is confusing me (it's Twisted, after all).
I've been through the https://twistedmatrix.com/documents/current/core/howto/trial.html docs and the APIs on trial tools and Agent itself. Also numerous searches.
I've gone with faking out Agent, as I don't need to test that. But because of the steps needed to handle the processing and response of an Agent request, my test code has got nasty, implementing the nested layers of the Agent, protocol, etc. Where should I draw the line here, and are there some utils I haven't found?
Here's a minimal example (naive implementation of SUT):
from twisted.web.client import Agent, readBody
from twisted.internet import reactor
import json

class SystemUnderTest(object):
    def __init__(self, url):
        self.url = url

    def action(self):
        d = self._makeAgent().request("GET", self.url)
        d.addCallback(self._cbSuccess)
        return d

    def _makeAgent(self):
        ''' Its own method so it can be overridden in tests '''
        return Agent(reactor)

    def _cbSuccess(self, response):
        d = readBody(response)
        d.addCallback(self._cbParse)
        return d

    def _cbParse(self, data):
        self.result = json.loads(data)
        print self.result
with the test module:
from twisted.trial import unittest
from sut import SystemUnderTest
from twisted.internet import defer
from twisted.test import proto_helpers

class Test(unittest.TestCase):
    def test1(self):
        s_u_t = ExtendedSystemUnderTest(None)
        d = s_u_t.action()
        d.addCallback(self._checks, s_u_t)
        return d

    def _checks(self, result, s_u_t):
        print result
        self.assertEqual({'one': 1}, s_u_t.result)

class ExtendedSystemUnderTest(SystemUnderTest):
    def _makeAgent(self):
        return FakeSuccessfulAgent('{"one": 1}')  # valid JSON, so json.loads can parse it

## Getting ridiculous below here...

class FakeReason(object):
    def check(self, _):
        return False

    def __str__(self):
        return "It's my reason"

class FakeResponse(object):
    ''' Implementation of IResponse '''
    def __init__(self, content):
        self.content = content
        self.prot = proto_helpers.StringTransport()
        self.code = 200
        self.phrase = ''

    def deliverBody(self, prot):
        prot.makeConnection(self.prot)
        prot.dataReceived(self.content)
        # reason = FakeReason()
        # prot.connectionLost(reason)

class FakeSuccessfulAgent(object):
    ''' Implementation of IAgent '''
    def __init__(self, response):
        self.response = response

    def request(self, method, url):
        return defer.succeed(FakeResponse(self.response))
but testing is confusing me (it's twisted after all).
Hilarious.
class ExtendedSystemUnderTest(SystemUnderTest):
    def _makeAgent(self):
        return FakeSuccessfulAgent('{"one": 1}')
I suggest you make the agent a normal parameter. This is more convenient than a private method like _makeAgent. Composition is great. Inheritance is meh.
class FakeReason(object):
    ...
There's no reason to make a fake of this. Just use twisted.python.failure.Failure. You don't have to fake every object in the test. Just the ones that get in your way if you don't fake them.
class FakeResponse(object):
    ...
This is likely good and necessary.
class FakeSuccessfulAgent(object):
    ...
This is most likely necessary as well. You should make it actually be more like an IAgent implementation, though: declare that it implements the interface, use zope.interface.verify.verify{Class,Object} to make sure you get the implementation right, etc. (e.g. request has the wrong signature now).
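A minimal sketch of that suggestion, reusing the FakeResponse above (verifyObject will flag the fake if request still has the wrong signature):

from zope.interface import implementer
from zope.interface.verify import verifyObject
from twisted.web.iweb import IAgent
from twisted.internet import defer

@implementer(IAgent)
class FakeSuccessfulAgent(object):
    def __init__(self, response):
        self.response = response

    # IAgent's actual signature, not just (method, url)
    def request(self, method, uri, headers=None, bodyProducer=None):
        return defer.succeed(FakeResponse(self.response))

# in a test:
# verifyObject(IAgent, FakeSuccessfulAgent('{"one": 1}'))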
There's actually a ticket for adding all of these testing tools to Twisted itself - https://twistedmatrix.com/trac/ticket/4024. So I don't think you're actually confused, you're basically on the same track as the project itself. You're just suffering from the fact that Twisted hasn't already done all of this work for you.
Also, note that instead of:
class Test(unittest.TestCase):
    def test1(self):
        s_u_t = ExtendedSystemUnderTest(None)
        d = s_u_t.action()
        d.addCallback(self._checks, s_u_t)
        return d
You can write something like this instead (and it is preferable):
class Test(unittest.TestCase):
    def test1(self):
        s_u_t = ExtendedSystemUnderTest(None)
        d = s_u_t.action()
        self._checks(self.successResultOf(d), s_u_t)
This is because your fake implementation of IAgent is synchronous. You know it is synchronous: by the time request returns, the Deferred it returns already has a result. Writing the test this way means you can simplify your code a bit (i.e., you can ignore the asynchronousness to some extent, because there isn't any), and it avoids running the global reactor, which is what returning a Deferred from a test method in trial does.