pytest + django giving me a database error when fixture scope is 'module' - django

I have the following inside conftest.py:
import pytest

@pytest.mark.django_db
@pytest.fixture(scope='module')
def thing():
    print('sleeping')  # represents a very expensive function that I want to run only once per module
    Thing.objects.create(thing='hello')
    Thing.objects.create(thing='hello')
    Thing.objects.create(thing='hello')
Inside tests.py
import pytest

@pytest.mark.django_db
def test_thing(thing):
    assert models.Thing.objects.count() > 1

@pytest.mark.django_db
def test_thing2(thing):
    assert models.Thing.objects.count() > 1

@pytest.mark.django_db
@pytest.mark.usefixtures('thing')
def test_thing3():
    assert models.Thing.objects.count() > 1
All three tests throw the same error: RuntimeError: Database access not allowed, use the "django_db" mark, or the "db" or "transactional_db" fixtures to enable it.
I've tried scope='session', scope='class', scope='package', and scope='module' -- the only one that works is scope='function', which defeats the purpose of what I'm trying to accomplish. I want to be able to create all these items ONCE per module, not once per test.
Note: I ran into this issue in a large code base, so I created a new Django project with a single app to check whether the problem was the existing test code, and it failed on a standalone test as well. I tested with both Postgres and SQLite; it doesn't seem to be a database issue.
Not that it matters, but here is models.py:
from django.db import models

class Thing(models.Model):
    thing = models.CharField(max_length=100)

OK, it turns out this is a known limitation, and it's somewhat documented in the pytest-django documentation. If you want to solve this issue and work around the limitation:
import pytest

@pytest.mark.django_db
@pytest.fixture(scope='module')
def thing(django_db_setup, django_db_blocker):
    del django_db_setup  # requested only to force test-database setup; usefixtures(...) can't be applied to a fixture, so it won't work here
    with django_db_blocker.unblock():
        print('sleeping')
        Thing.objects.create(thing='hello')
        Thing.objects.create(thing='hello')
        Thing.objects.create(thing='hello')
        Thing.objects.create(thing='hello')
    yield
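With this fixture in place, the tests from the question stay exactly as they are; each test still needs the django_db mark, for example:
import pytest

@pytest.mark.django_db
def test_thing(thing):
    assert models.Thing.objects.count() > 1
One caveat worth noting: because the rows are created while the blocker is lifted, they are committed outside the per-test transaction, so they persist for the rest of the test session instead of being rolled back after each test.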

Related

Problem with Django Tests and Trigram Similarity

I have a Django application that executes a full-text search on a database. The view that executes this query is my search_view (I'm omitting some parts for the sake of simplicity). It just retrieves the results of the search on my Post model and sends them to the template:
def search_view(request):
    posts = m.Post.objects.all()
    query = request.GET.get('q')
    search_query = SearchQuery(query, config='english')
    qs = Post.objects.annotate(
        rank=SearchRank(F('vector_column'), search_query) + TrigramSimilarity('post_title', query)
    ).filter(rank__gte=0.15).order_by('-rank')
    context = {
        'results': qs,
    }
    return render(request, 'core/search.html', context)
The application is working just fine. The problem is with a test I created. Here is my tests.py:
class SearchViewTests(TestCase):
    def test_search_without_results(self):
        """
        If the user's query did not retrieve anything,
        show a message informing them of that.
        """
        response = self.client.get(reverse('core:search') + '?q=eksjeispowjskdjies')
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, "We didn't find anything on our database. We're sorry")
This test raises a ProgrammingError exception:
django.db.utils.ProgrammingError: function similarity(character varying, unknown) does not exist
LINE 1: ...plainto_tsquery('english'::regconfig, 'eksjeispowjskdjies')) + SIMILARITY...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I understand this exception very well, because I've run into it before. The SIMILARITY function in Postgres accepts two arguments, and both need to be of type TEXT. The exception is raised because the second argument (my query term) is of type UNKNOWN, so the function won't work and Django raises the exception. What I don't understand is why, because the actual search works! Even in the shell it works perfectly:
In [1]: from django.test import Client
In [2]: c = Client()
In [3]: response = c.get(reverse('core:search') + '?page=1&q=eksjeispowjskdjies')
In [4]: response
Out[4]: <HttpResponse status_code=200, "text/html; charset=utf-8">
Any ideas why the test doesn't work, while the actual execution of the app works and the console test works too?
I had the same problem, and this is how I solved it in my case:
First of all, the problem was that when Django creates the test database it is going to use for the tests, it does not actually run all of your migrations; it simply creates the tables based on your models.
This means that migrations that create some extension in your database, like pg_trgm, do not run when creating the test database.
One way to overcome this is to use a fixture in your conftest.py file which will create said extensions before any tests run.
So, in your conftest.py file add the following:
# the following fixture is used to add the pg_trgm extension to the test database
import pytest
from django.db import connection

@pytest.fixture(scope="session", autouse=True)
def django_db_setup(django_db_setup, django_db_blocker):
    """Test session DB setup."""
    with django_db_blocker.unblock():
        with connection.cursor() as cursor:
            cursor.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm;")
You can of course replace pg_trgm with any other extension you require.
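Since the question itself uses Django's TestCase rather than pytest, another option (my addition, not from the original answer) is to create the extension in a regular migration with Django's built-in TrigramExtension operation, so it runs against both the development and the test database; the app label and dependency below are assumptions you would adjust:
from django.contrib.postgres.operations import TrigramExtension
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
        ('core', '0001_initial'),  # assumed app label / previous migration
    ]

    operations = [
        TrigramExtension(),  # issues CREATE EXTENSION IF NOT EXISTS pg_trgm
    ]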
PS: You must make sure the extension you are trying to use works for the test database you have chosen. In order to change the database used by Django you can change the value of
DATABASES = {'default': env.db('your_database_connection_uri')}
in your application's settings.py.

How to test unmanaged models using pytest-django

In my Django project I have 5 applications with 15 models in total, and all of them are unmanaged. I've written some tests with pytest-django, and when I run them they fail because the tables cannot be found.
How can I create database entries for all these models so that the tests don't fail?
I was trying to get this to work on Django==4.0.4 and pytest-django==4.5.2, and none of the solutions I could find out there worked for me. This is what I could figure out:
# add this to conftest.py
import pytest

@pytest.hookimpl(tryfirst=True)
def pytest_runtestloop():
    from django.apps import apps

    unmanaged_models = []
    for app in apps.get_app_configs():
        unmanaged_models += [m for m in app.get_models() if not m._meta.managed]
    for m in unmanaged_models:
        # flip the models to managed so the test database creates their tables
        m._meta.managed = True
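With that hook in place, tests against unmanaged models work like any other pytest-django test (a minimal sketch; LegacyEntry is a hypothetical stand-in for one of your unmanaged models):
import pytest

from myapp.models import LegacyEntry  # hypothetical unmanaged model

@pytest.mark.django_db
def test_legacy_entry_table_exists():
    LegacyEntry.objects.create(name='example')
    assert LegacyEntry.objects.count() == 1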
It seems intuitive to think that we can achieve what we need by overriding django_db_setup, and that is the solution provided in other answers on SO. However, I was not able to get that to work; it might have something to do with changes to the order of execution of these fixtures over the years, but I am not sure.
Excerpt from the current pytest-django docs:
pytest-django calls django.setup() automatically. If you want to do anything before this, you have to create a pytest plugin and use the pytest_load_initial_conftests() hook
You can override the django_db_setup fixture in your conftest.py file:
import pytest
from django.apps import apps
from django.db import connection

@pytest.fixture(scope="session")
def django_db_setup(django_db_blocker):
    with django_db_blocker.unblock():
        models_list = apps.get_models()
        for model in models_list:
            with connection.schema_editor() as schema_editor:
                schema_editor.create_model(model)
            if model._meta.db_table not in connection.introspection.table_names():
                raise ValueError(
                    "Table `{table_name}` is missing in test database.".format(
                        table_name=model._meta.db_table
                    )
                )
        yield
        for model in models_list:
            with connection.schema_editor() as schema_editor:
                schema_editor.delete_model(model)
This will create the tables for the unmanaged models before running the tests, and delete those tables after the tests.
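If you also want Django's normal test-database setup (migrations and so on) to run first, the override can additionally request the built-in fixture of the same name, just like the pg_trgm fixture earlier on this page. A sketch building on the code above, not part of the original answer:
import pytest
from django.apps import apps
from django.db import connection

@pytest.fixture(scope="session")
def django_db_setup(django_db_setup, django_db_blocker):
    # by this point the built-in django_db_setup has created the test
    # database and applied migrations for the managed models
    with django_db_blocker.unblock():
        unmanaged = [m for m in apps.get_models() if not m._meta.managed]
        for model in unmanaged:
            with connection.schema_editor() as schema_editor:
                schema_editor.create_model(model)
        yield
        for model in unmanaged:
            with connection.schema_editor() as schema_editor:
                schema_editor.delete_model(model)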

HTML report for django tests

I have a Django project containing an API (created with Django REST framework, if that counts anywhere). I have added some tests for the API, but in order to have an overall view of the tests -- passing, failing, or missing -- I need to create an HTML report.
When the tests are finished, an HTML table report should be generated which shows the endpoints and HTTP response codes covered during the tests, the results of those tests, plus the combinations which are missing tests.
Unfortunately I cannot work out how I should do that. I know that coverage can give me a detailed HTML report, but that's not what I need; I need something like this:
| Endpoint description | 200  | 400  | 403     | 404 |
| GET /endpoint1       | PASS | PASS | PASS    | N/A |
| POST /endpoint1      | PASS | FAIL | MISSING | N/A |
Does anybody have any idea about that? Maybe some libs that could help with it, or what strategy I should use? Thank you in advance.
Late to the party, but this is my solution for outputting an HTML test report for Django tests (based on "HtmlTestRunner cannot be directly used with Django DiscoverRunner").
The following classes, if placed in tests/html_test_reporter.py, can be used as a DiscoverRunner which is patched to use HTMLTestRunner:
from django.test.runner import DiscoverRunner
from HtmlTestRunner import HTMLTestRunner

class MyHTMLTestRunner(HTMLTestRunner):
    def __init__(self, **kwargs):
        # Pass any required options to HTMLTestRunner
        super().__init__(combine_reports=True, report_name='all_tests', add_timestamp=False, **kwargs)

class HtmlTestReporter(DiscoverRunner):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Patch over the test_runner in the super class.
        self.test_runner = MyHTMLTestRunner
Then this is run with:
python manage.py test -v 2 --testrunner tests.html_test_reporter.HtmlTestReporter
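Alternatively, the runner can be set once in settings.py so it does not have to be passed on every invocation (my addition, using the same module path as the command above):
TEST_RUNNER = 'tests.html_test_reporter.HtmlTestReporter'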
By default, Django projects use django.test.runner.DiscoverRunner to discover tests and Python's built-in unittest machinery to run them. HTMLTestRunner can be used on its own with unittest to output an HTML test report, but there is no built-in way to plug it into DiscoverRunner, hence the patched runner above.
Hope this helps.
As Django uses Python's standard unittest library, you'll have to tweak some of its parts.
First, you'll need some way to specify which tests actually test which endpoint. A custom decorator is handy for that:
from functools import wraps

def endpoint(path, code):
    """
    Mark a test as one which tests a specific endpoint.
    """
    def inner(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        # store the metadata on the decorated test method so the
        # test result class can find it later
        wrapper._endpoint_path = path
        wrapper._endpoint_code = code
        return wrapper
    return inner
class MyTestCase(TestCase):
    @endpoint(path='/path/one', code=200)
    def test_my_path_is_ok(self):
        response = self.client.get('/path/one?foo=bar')
        self.assertEqual(response.status_code, 200)

    @endpoint(path='/path/one', code=404)
    def test_my_path_expected_errors(self):
        response = self.client.get('/path/one?foo=qux')
        self.assertEqual(response.status_code, 404)

    def test_some_other_stuff(self):
        # this one will not be included in our results grid.
        pass
You could use a "magical" approach instead (e.g. special method names from which the tested endpoint is guessed), but explicit is better than implicit, right?
Then, you need a way to collect the results of your tests - specifically, of those which test the endpoints. Here we make a (very rough) subclass of unittest.TestResult to handle it:
from unittest import TestResult

class EndpointsTestResult(TestResult):
    def __init__(self):
        super(EndpointsTestResult, self).__init__()
        self.endpoint_results = {}

    def addError(self, test, err):
        super(EndpointsTestResult, self).addError(test, err)
        # the decorator stored the path/code on the test method itself
        method = getattr(test, test._testMethodName)
        if hasattr(method, '_endpoint_path'):
            branch = self.endpoint_results.setdefault(method._endpoint_path, {})
            branch[method._endpoint_code] = 'MISSING'

    def addFailure(self, test, err):
        pass  # similar to addError()

    def addSuccess(self, test):
        pass  # similar to addError()
Finally, it's time to actually output our results. Let's make a subclass of unittest.TextTestRunner and specify it in our custom runner:
from unittest import TextTestRunner
from django.test.runner import DiscoverRunner

class EndpointsTestRunner(TextTestRunner):
    def _makeResult(self):
        self._result = EndpointsTestResult()
        return self._result

    def run(self, test):
        super(EndpointsTestRunner, self).run(test)
        # After running a test, print out the table
        generate_a_nifty_table(self._result.endpoint_results)

class EndpointsDjangoRunner(DiscoverRunner):
    test_runner = EndpointsTestRunner
Now we have our custom EndpointsDjangoRunner, and we should specify it in the settings.py:
TEST_RUNNER = 'path.to.the.EndpointsDjangoRunner'
That's it. Please let me know if you spot any awkward errors in the code.
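For completeness, generate_a_nifty_table is left undefined in the answer; a minimal sketch of what it could look like (my addition, printing a plain-text grid like the one in the question):
def generate_a_nifty_table(endpoint_results):
    """Print a plain-text grid of endpoint/status-code results."""
    codes = [200, 400, 403, 404]
    print('| {:<22} |'.format('Endpoint description') +
          ''.join(' {:^7} |'.format(c) for c in codes))
    for path, results in sorted(endpoint_results.items()):
        print('| {:<22} |'.format(path) +
              ''.join(' {:^7} |'.format(results.get(c, 'N/A')) for c in codes))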

Django testing of neo4j database

I'm using Django with neo4j as the database and neomodel as the OGM. How do I test it?
When I run python3 manage.py test, all the changes my tests make are left behind.
Also, how do I set up two databases, one for testing and another for production, and specify which one to use when?
I assume the reason all of your changes are being retained is that you are using the same neo4j database for testing as you use in development. Since neomodel isn't tightly integrated with Django, it doesn't act the same way Django's ORM does when testing. Django does some helpful things when you run tests using its ORM, such as creating a test database that is destroyed upon completion.
With neo4j and neomodel I'd recommend doing the following:
Create a Custom Test Runner
Django enables you to define a custom test runner by setting the TEST_RUNNER settings variable. An extremely simple version of this to get you going would be:
from time import sleep
from subprocess import call
from django.test.runner import DiscoverRunner

class MyTestRunner(DiscoverRunner):
    def setup_databases(self, *args, **kwargs):
        # Stop your development instance
        call("sudo service neo4j-service stop", shell=True)
        # Sleep to ensure the service has completely stopped
        sleep(1)
        # Start your test instance (see section below for more details)
        success = call("/path/to/test/db/neo4j-community-2.2.2/bin/neo4j"
                       " start-no-wait", shell=True)
        # Need to sleep to wait for the test instance to completely come up
        sleep(10)
        if success != 0:
            return False
        try:
            # For neo4j 2.2.x you'll need to set a password or deactivate auth
            # Nigel Small's py2neo gives us an easy way to accomplish this
            call("source /path/to/virtualenv/bin/activate && "
                 "/path/to/virtualenv/bin/neoauth "
                 "neo4j neo4j my-p4ssword")
        except OSError:
            pass
        # Don't import neomodel until we get here because we need to wait
        # for the new db to be spawned
        from neomodel import db
        # Delete all previous entries in the db prior to running tests
        query = "match (n)-[r]-() delete n,r"
        db.cypher_query(query)
        super(MyTestRunner, self).__init__(*args, **kwargs)

    def teardown_databases(self, old_config, **kwargs):
        from neomodel import db
        # Delete all previous entries in the db after running tests
        query = "match (n)-[r]-() delete n,r"
        db.cypher_query(query)
        sleep(1)
        # Shut down test neo4j instance
        success = call("/path/to/test/db/neo4j-community-2.2.2/bin/neo4j"
                       " stop", shell=True)
        if success != 0:
            return False
        sleep(1)
        # start back up development instance
        call("sudo service neo4j-service start", shell=True)
Add a secondary neo4j database
This can be done in a couple of ways, but to follow along with the test runner above you can download a community distribution from neo4j's website. With this secondary instance you can now swap between databases using the command-line statements in the calls within the test runner.
Wrap Up
This solution assumes you're on a Linux box, but it should be portable to a different OS with minor modifications. I'd also recommend checking out Django's Test Runner docs to expand upon what the test runner can do.
There currently isn't a mechanism for working with test databases in neomodel, as neo4j only has one schema per instance.
However, you can override the NEO4J_REST_URL environment variable when running the tests, like so:
NEO4J_REST_URL=http://localhost:7473/db/data python3 manage.py test
The way I went about this was to give in and use the existing database, but mark all test-related nodes and detach/delete them when finished. It's obviously not ideal: all your node classes must inherit from NodeBase or risk polluting the db with test data, and if you have unique constraints, those will still be enforced across both live and test data. But it works for my purposes, and I thought I'd share in case it helps someone else.
in myproject/base.py:
from django.conf import settings
from neomodel import StructuredNode
from neomodel.properties import Property, validator

class TestModeProperty(Property):
    """
    Boolean property that is only set during unit testing.
    """
    @validator
    def inflate(self, value):
        return bool(value)

    @validator
    def deflate(self, value):
        return bool(value)

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.default = True
        self.has_default = settings.UNIT_TESTING

class NodeBase(StructuredNode):
    __abstract_node__ = True
    test_mode = TestModeProperty()
in myproject/test_runner.py:
from django.test.runner import DiscoverRunner
from neomodel import db

class NeoDiscoverRunner(DiscoverRunner):
    def teardown_databases(self, old_config, **kwargs):
        db.cypher_query(
            """
            MATCH (node {test_mode: true})
            DETACH DELETE node
            """
        )
        return super().teardown_databases(old_config, **kwargs)
in settings.py:
import sys

UNIT_TESTING = sys.argv[1:2] == ["test"]
TEST_RUNNER = "myproject.test_runner.NeoDiscoverRunner"

Django test script to pre-populate DB

I'm trying to pre-populate the database with some test data for my Django project. Is there some easy way to do this with a script that's "outside" of Django?
Let's say I want to do this very simple task, creating 5 test users using the following code,
N = 10
i = 0
while i < N:
    c = 'user' + str(i) + '@gmail.com'
    u = lancer.models.CustomUser.objects.create_user(email=c, password="12345")
    i = i + 1
The questions are,
WHERE do I put this test script file?
WHAT IMPORTS / COMMANDS do I need to put at the beginning of the file so it has access to all the Django environment & resources as if I were writing this inside the app?
I'm thinking you'd have to import and set up the settings file, and import the app's models, etc... but all my attempts have failed one way or another, so would appreciate some help =)
Thanks!
Providing another answer
The responses below are excellent answers. I fiddled around and found an alternative way. I added the following to the top of the test data script:
from django.core.management import setup_environ
from project_lancer import settings
setup_environ(settings)
import lancer.models
Now my code above works.
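Note that setup_environ was removed in later Django versions; the modern equivalent (a sketch, using the settings module from the snippet above) is:
import os

import django

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project_lancer.settings')
django.setup()

import lancer.models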
I recommend using fixtures for these purposes:
https://docs.djangoproject.com/en/dev/howto/initial-data/
If you still want to use this initial code, then read on:
If you use South, you can create a migration and put this code there:
python manage.py schemamigration --empty my_data_migration
class Migration(SchemaMigration):
    no_dry_run = False

    def forwards(self, orm):
        # more pythonic; you can also use bulk_create here
        for i in xrange(10):
            email = "user{}@gmail.com".format(i)
            u = orm.CustomUser.objects.create_user(email=email, password='12345')
You can put it in the setUp method of your TestCase:
class MyTestCase(TestCase):
    def setUp(self):
        # more pythonic; you can also use bulk_create here
        for i in xrange(10):
            email = "user{}@gmail.com".format(i)
            u = lancer.models.CustomUser.objects.create_user(email=email,
                                                             password='12345')

    def test_foo(self):
        pass
You can also define a BaseTestCase in which you override the setUp method, and then create TestCase classes that inherit from BaseTestCase:
class BaseTestCase(TestCase):
    def setUp(self):
        'your initial logic here'

class MyFirstTestCase(BaseTestCase):
    pass

class MySecondTestCase(BaseTestCase):
    pass
But I think that fixtures are the best way:
class BaseTestCase(TestCase):
    fixtures = ['users_for_test.json']

class MyFirstTestCase(BaseTestCase):
    pass

class MySecondTestCase(BaseTestCase):
    fixtures = ['special_users_for_only_this_test_case.json']
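If you go the fixtures route, one way to produce users_for_test.json (my addition; the app and model names come from the question, and this assumes the default lancer/fixtures/ directory) is to create the users once against a development database and dump them:
python manage.py dumpdata lancer.CustomUser --indent 2 > lancer/fixtures/users_for_test.json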
Updated: if you need a hashed password value (for example, to put in a fixture), you can generate one in the shell:
python manage.py shell
from django.contrib.auth.hashers import make_password
make_password('12312312')
'pbkdf2_sha256$10000$9KQ15rVsxZ0t$xMEKUicxtRjfxHobZ7I9Lh56B6Pkw7K8cO0ow2qCKdc='
You can also use something like this or this to auto-populate your models for testing purposes.