Is there a way to write Django unit tests without setting up a database? I want to test business logic that doesn't require the database. And while it is fast to set up a database, I really don't need it in some situations.
You can subclass DjangoTestSuiteRunner and override the setup_databases and teardown_databases methods so that they do nothing.
Create a new settings file and set TEST_RUNNER to the class you just created. Then, when running your tests, point at the new settings file with the --settings flag.
Here is what I did:
Create a custom test suite runner similar to this:
from django.test.simple import DjangoTestSuiteRunner

class NoDbTestRunner(DjangoTestSuiteRunner):
    """ A test runner to test without database creation """

    def setup_databases(self, **kwargs):
        """ Override the database creation defined in parent class """
        pass

    def teardown_databases(self, old_config, **kwargs):
        """ Override the database teardown defined in parent class """
        pass
Create a custom settings file:
from mysite.settings import *
# Test runner with no database creation
TEST_RUNNER = 'mysite.scripts.testrunner.NoDbTestRunner'
When running your tests, set the --settings flag to your new settings file:
python manage.py test myapp --settings='no_db_settings'
UPDATE: April 2018
Since Django 1.8, django.test.simple.DjangoTestSuiteRunner has been replaced by django.test.runner.DiscoverRunner.
For more info, check the official docs section about custom test runners.
Generally, tests in an application can be classified into two categories:
Unit tests, which test individual snippets of code in isolation and do not need to touch the database.
Integration tests, which actually hit the database and test the fully integrated logic.
Django supports both unit and integration tests.
Unit tests do not require setting up and tearing down the database; these should inherit from SimpleTestCase.
from django.test import SimpleTestCase

class ExampleUnitTest(SimpleTestCase):
    def test_something_works(self):
        self.assertTrue(True)
For integration tests, inherit from TestCase, which in turn inherits from TransactionTestCase; this sets up and resets the database around each test.
from django.test import TestCase

class ExampleIntegrationTest(TestCase):
    def test_something_works(self):
        # do something with the database
        self.assertTrue(True)
This strategy ensures that the database is created and destroyed only for the test cases that access the database, and therefore the tests will be more efficient.
From django.test.simple:

warnings.warn(
    "The django.test.simple module and DjangoTestSuiteRunner are deprecated; "
    "use django.test.runner.DiscoverRunner instead.",
    RemovedInDjango18Warning)
So override DiscoverRunner instead of DjangoTestSuiteRunner.
from django.test.runner import DiscoverRunner

class NoDbTestRunner(DiscoverRunner):
    """ A test runner to test without database creation/deletion """

    def setup_databases(self, **kwargs):
        pass

    def teardown_databases(self, old_config, **kwargs):
        pass
Use it like this:
python manage.py test --testrunner=app.filename.NoDbTestRunner app
I chose to inherit from django.test.runner.DiscoverRunner and make a couple of additions to the run_tests method.
My first addition checks whether setting up a database is necessary and only lets the normal setup_databases functionality kick in if it is. My second addition runs the normal teardown_databases only if setup_databases was allowed to run.
My code assumes that any TestCase that inherits from django.test.TransactionTestCase (and thus django.test.TestCase) requires a database to be setup. I made this assumption because the Django docs say:
If you need any of the other more complex and heavyweight Django-specific features like ... Testing or using the ORM ... then you should use TransactionTestCase or TestCase instead.
https://docs.djangoproject.com/en/1.6/topics/testing/tools/#django.test.SimpleTestCase
mysite/scripts/settings.py
from django.test import TransactionTestCase
from django.test.runner import DiscoverRunner

class MyDiscoverRunner(DiscoverRunner):
    def run_tests(self, test_labels, extra_tests=None, **kwargs):
        """
        Run the unit tests for all the test labels in the provided list.

        Test labels should be dotted Python paths to test modules, test
        classes, or test methods.

        A list of 'extra' tests may also be provided; these tests
        will be added to the test suite.

        If any of the tests in the test suite inherit from
        ``django.test.TransactionTestCase``, databases will be set up.
        Otherwise, databases will not be set up.

        Returns the number of tests that failed.
        """
        self.setup_test_environment()
        suite = self.build_suite(test_labels, extra_tests)
        # ----------------- First Addition --------------
        need_databases = any(isinstance(test_case, TransactionTestCase)
                             for test_case in suite)
        old_config = None
        if need_databases:
            # --------------- End First Addition ------------
            old_config = self.setup_databases()
        result = self.run_suite(suite)
        # ----------------- Second Addition -------------
        if need_databases:
            # --------------- End Second Addition -----------
            self.teardown_databases(old_config)
        self.teardown_test_environment()
        return self.suite_result(suite, result)
Finally, I added the following line to my project's settings.py file.
mysite/settings.py
TEST_RUNNER = 'mysite.scripts.settings.MyDiscoverRunner'
Now, when running only non-db-dependent tests, my test suite runs an order of magnitude faster! :)
Updated: also see this answer for using a third-party tool pytest.
@Cesar is right. After accidentally running ./manage.py test --settings=no_db_settings, without specifying an app name, my development database was wiped out.
For a safer manner, use the same NoDbTestRunner, but in conjunction with the following mysite/no_db_settings.py:
from mysite.settings import *
# Test runner with no database creation
TEST_RUNNER = 'mysite.scripts.testrunner.NoDbTestRunner'
# Use an alternative database as a safeguard against accidents
DATABASES['default']['NAME'] = '_test_mysite_db'
You need to create a database called _test_mysite_db using an external database tool. Then run the following command to create the corresponding tables:
./manage.py syncdb --settings=mysite.no_db_settings
If you're using South, also run the following command:
./manage.py migrate --settings=mysite.no_db_settings
OK!
You can now run unit tests blazingly fast (and safe) by:
./manage.py test myapp --settings=mysite.no_db_settings
As an alternative to modifying your settings to make NoDbTestRunner "safe", here's a modified version of NoDbTestRunner that closes the current database connection and removes the connection information from settings and the connection object. Works for me, test it in your environment before relying on it :)
from django.test.simple import DjangoTestSuiteRunner

class NoDbTestRunner(DjangoTestSuiteRunner):
    """ A test runner to test without database creation """

    def __init__(self, *args, **kwargs):
        # Hide/disconnect databases to prevent tests that *do* require a
        # database, and accidentally get run, from altering your data.
        from django.db import connections
        from django.conf import settings
        connections.databases = settings.DATABASES = {}
        connections._connections['default'].close()
        del connections._connections['default']
        super(NoDbTestRunner, self).__init__(*args, **kwargs)

    def setup_databases(self, **kwargs):
        """ Override the database creation defined in parent class """
        pass

    def teardown_databases(self, old_config, **kwargs):
        """ Override the database teardown defined in parent class """
        pass
Another solution would be to have your test class simply inherit from unittest.TestCase instead of any of Django's test classes. The Django docs (https://docs.djangoproject.com/en/2.0/topics/testing/overview/#writing-tests) contain the following warning about this:
Using unittest.TestCase avoids the cost of running each test in a transaction and flushing the database, but if your tests interact with the database their behavior will vary based on the order that the test runner executes them. This can lead to unit tests that pass when run in isolation but fail when run in a suite.
However, if your test doesn't use the database, this warning needn't concern you and you can reap the benefits of not having to run each test case in a transaction.
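For example, a minimal sketch of such a test (the compute_discount helper is hypothetical, standing in for whatever pure business logic you want to cover):

import unittest

def compute_discount(price, rate):
    # Hypothetical business-logic helper with no database access
    return price * (1 - rate)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertAlmostEqual(compute_discount(100, 0.1), 90.0)

Because nothing here touches the ORM, no database setup, transaction, or flush happens at all.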
The above solutions are fine too. But the following will also reduce the database creation time if there are many migrations.
During unit testing, running syncdb instead of all the South migrations is much faster.
SOUTH_TESTS_MIGRATE = False  # To disable migrations and use syncdb instead
My web host only allows creating and dropping databases from their Web GUI, so I was getting a "Got an error creating the test database: Permission denied" error when trying to run python manage.py test.
I'd hoped to use the --keepdb option of django-admin.py, but it isn't supported as of Django 1.7 (it was only added in Django 1.8).
What I ended up doing was modifying the Django code in .../django/db/backends/creation.py, specifically the _create_test_db and _destroy_test_db functions.
For _create_test_db I commented out the cursor.execute("CREATE DATABASE ... line and replaced it with pass so the try block wouldn't be empty.
For _destroy_test_db I just commented out cursor.execute("DROP DATABASE - I didn't need to replace it with anything because there was already another command in the block (time.sleep(1)).
After that my tests ran fine - though I did set up a test_ version of my regular database separately.
This isn't a great solution of course, because it will break if Django is upgraded, but I had a local copy of Django due to using virtualenv so at least I have control over when/if I upgrade to a newer version.
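For reference, newer Django versions make this patching unnecessary: Django 1.8 added a --keepdb flag to manage.py test that reuses an existing test database instead of creating and dropping it on every run, so upgrading is the cleaner fix if your host allows it.

python manage.py test --keepdb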
Another solution not mentioned: this was easy for me to implement because I already have multiple settings files (for local/staging/production) that inherit from base.py. So, unlike other people, I did not have to overwrite DATABASES['default'], as DATABASES isn't set in base.py.
SimpleTestCase still tried to connect to my test database and run migrations. When I made a config/settings/test.py file that didn't set DATABASES at all, my unit tests ran without it. It allowed me to use models with foreign key and unique constraint fields. (Reverse foreign key lookup, which requires a database query, fails.)
(Django 2.0.6)
P.S. Code snippets:
PROJECT_ROOT_DIR/config/settings/test.py:
from .base import *

# other test settings

# DATABASES = {
#     'default': {
#         'ENGINE': 'django.db.backends.sqlite3',
#         'NAME': 'PROJECT_ROOT_DIR/db.sqlite3',
#     }
# }
CLI, run from PROJECT_ROOT_DIR:
./manage.py test path.to.app.test --settings config.settings.test
path/to/app/test.py:
from django.test import SimpleTestCase
from .models import *
# ^ assume models.py imports User and defines Classified and UpgradePrice

class TestCaseWorkingTest(SimpleTestCase):
    def test_case_working(self):
        self.assertTrue(True)

    def test_models_ok(self):
        obj = UpgradePrice(title='test', price=1.00)
        self.assertEqual(obj.title, 'test')

    def test_more_complex_model(self):
        user = User(username='testuser', email='hi@hey.com')
        self.assertEqual(user.username, 'testuser')

    def test_foreign_key(self):
        user = User(username='testuser', email='hi@hey.com')
        ad = Classified(user=user, headline='headline', body='body')
        self.assertEqual(ad.user.username, 'testuser')

    # fails with error:
    def test_reverse_foreign_key(self):
        user = User(username='testuser', email='hi@hey.com')
        ad = Classified(user=user, headline='headline', body='body')
        print(user.classified_set.first())
        self.assertTrue(True)  # throws exception and never gets here
When using the nose test runner (django-nose), you can do something like this:
my_project/lib/nodb_test_runner.py:
from django_nose import NoseTestSuiteRunner

class NoDbTestRunner(NoseTestSuiteRunner):
    """
    A test runner to test without database creation/deletion.
    Used for integration tests.
    """

    def setup_databases(self, **kwargs):
        pass

    def teardown_databases(self, old_config, **kwargs):
        pass
In your settings.py you can specify the test runner, i.e.
TEST_RUNNER = 'lib.nodb_test_runner.NoDbTestRunner'  # Was 'django_nose.NoseTestSuiteRunner'
OR
I wanted it for running specific tests only, so I run it like so:
python manage.py test integration_tests/integration_* --noinput --testrunner=lib.nodb_test_runner.NoDbTestRunner
You can set databases to an empty list inside the normal TestCase from django.test.
from django.test import TestCase

class NoDbTestCase(TestCase):
    databases = []
Related
We use django-haystack as our search index. It's generally great, but during tests it adds overhead to every model object creation and save, and for most tests it is not required, so I would like to avoid it. I thought I'd use override_settings to use a dummy backend that does nothing. But I've now tried both the BaseSignalProcessor and the SimpleEngine and I can still see our search index (Elasticsearch) getting hit a lot.
The two versions I have tried are:
First, using the SimpleEngine, which does no data preparation:
from django.test import TestCase
from django.test.utils import override_settings

HAYSTACK_DUMMY_INDEX = {
    'default': {
        'ENGINE': 'haystack.backends.simple_backend.SimpleEngine',
    }
}

@override_settings(HAYSTACK_CONNECTIONS=HAYSTACK_DUMMY_INDEX)
class TestAllTheThings(TestCase):
    # ...
and then using the BaseSignalProcessor which should mean that the signals to save are not hooked up:
from django.test import TestCase
from django.test.utils import override_settings

@override_settings(HAYSTACK_SIGNAL_PROCESSOR='haystack.signals.BaseSignalProcessor')
class TestAllTheThings(TestCase):
    # ...
I am using pytest as the test runner in case that matters.
Any idea if there is something I am missing?
The settings are only accessed once, so overriding them after the fact won't change anything.
Instead, you can subclass the signal processor and stick in some logic to conditionally disable it like so:
from django.conf import settings
from haystack.signals import BaseSignalProcessor

class TogglableSignalProcessor(BaseSignalProcessor):
    settings_key = 'HAYSTACK_DISABLE'

    def handle_save(self, sender, instance, **kwargs):
        if not getattr(settings, self.settings_key, False):
            super().handle_save(sender, instance, **kwargs)

    def handle_delete(self, sender, instance, **kwargs):
        if not getattr(settings, self.settings_key, False):
            super().handle_delete(sender, instance, **kwargs)
Now if you configure that as your signal processor then you can easily disable it in tests. The settings key can be set with an environment variable if you're just using manage.py test and not a custom runner. Otherwise you should know where to stick it.
import os
HAYSTACK_DISABLE = 'IS_TEST' in os.environ
And run it with
IS_TEST=1 python manage.py test
And for the few tests where you want it enabled, use override_settings() like you have already tried:
class MyTest(TestCase):
    @override_settings(HAYSTACK_DISABLE=False)
    def that_one_test_where_its_needed(self):
        pass
Of course you can go even further and set the signal processor class conditionally in your settings, so that on a busy live site my conditional checks don't slow anything down.
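A rough sketch of what that conditional settings block could look like (the IS_TEST environment variable and the myapp.search module path are made-up names, not fixed conventions):

# settings.py (hypothetical layout)
import os

if 'IS_TEST' in os.environ:
    # Tests: skip index updates unless a test explicitly re-enables them
    HAYSTACK_SIGNAL_PROCESSOR = 'myapp.search.TogglableSignalProcessor'
else:
    # Live site: index on every save/delete as usual, with no extra check
    HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor'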
When I run a unit test, Django 1.6 doesn't seem to be creating a blank database to test from and I don't understand why. The Django docs say that Django doesn't use your production database but instead creates a separate, blank database for testing. However, when I debug my test 'test_get_user_ids' and run the command 'UserProxy.objects.all()', I see all the users in my production database. Now I understand that this particular test will fail due to the fact that I'm not saving each UserProxy instance to the database and am therefore not generating ids to test for. But the fact remains that when I query UserProxy, I can still see all the users in my production database which I would expect to be empty. Why is this happening?
BTW, I'm running the test using nosetest: "nosetests -s apps.profile.tests.model_tests.py:UserProxyUT"
Thanks.
# settings.py
DATABASES = {
    'default': {
        # Enable PostGIS extensions
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': 'myapp',
        'USER': 'myappuser',
        'PASSWORD': 'myapppw',
        'HOST': 'localhost',
        'PORT': '',
    }
}
# apps/profile/models.py
import logging

from django.contrib.auth.models import User
from django.core.exceptions import ObjectDoesNotExist

logger = logging.getLogger(__name__)

class UserProxy(User):
    """Proxy for the auth.models User class."""

    class Meta:
        proxy = True

    @staticmethod
    def get_user_ids(usernames):
        """Return the user ID of each username in a list."""
        user_ids = []
        for name in usernames:
            try:
                u = User.objects.get(username__exact=name)
                user_ids.append(u.id)
            except ObjectDoesNotExist:
                logger.error("We were unable to find '%s' in a list of usernames." % name)
        return user_ids
# apps/profile/tests/model_tests.py
from django.contrib.auth.models import User
from django.test import TestCase

from apps.profile.models import UserProxy

class UserProxyUT(TestCase):
    def test_get_user_ids(self):
        debug()
        # UserProxy.objects.all() shows usernames from my production database!
        u1 = UserProxy(username='user1')
        u2 = UserProxy(username='user2')
        u3 = UserProxy(username='user3')
        usernames = [u1, u2, u3]
        expected = [u1.id, u2.id, u3.id]
        actual = UserProxy.get_user_ids(usernames)
        self.assertEqual(expected, actual)
I'm going to take a stab and say that it's because you are using nosetests instead of the Django test runner. Because you are using nosetests, Django's setup_test_environment isn't being called, which means the code doesn't know to use the test database correctly.
Here are the relevant parts of the Django documentation that should help:
Finding data from your production database when running tests?
If your code attempts to access the database when its modules are compiled, this will occur before the test database is set up, with potentially unexpected results. For example, if you have a database query in module-level code and a real database exists, production data could pollute your tests. It is a bad idea to have such import-time database queries in your code anyway - rewrite your code so that it doesn’t do this.
And:
Running tests outside the test runner
If you want to run tests outside of ./manage.py test – for example, from a shell prompt – you will need to set up the test environment first. Django provides a convenience method to do this:
>>> from django.test.utils import setup_test_environment
>>> setup_test_environment()
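If you do need to drive tests from outside manage.py test, a rough sketch of the full bootstrap looks like this (the DiscoverRunner API shown is the Django 1.8+ spelling; on 1.6 the equivalent methods live on django.test.simple.DjangoTestSuiteRunner):

import django
from django.test.runner import DiscoverRunner

django.setup()  # Django >= 1.7
runner = DiscoverRunner()
runner.setup_test_environment()
old_config = runner.setup_databases()  # creates and switches to the blank test database
try:
    pass  # ... run or debug your tests here ...
finally:
    runner.teardown_databases(old_config)
    runner.teardown_test_environment()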
I have some external services. My Django app is built on top of their APIs. In order to talk to an external service, I have to pass in an auth cookie, which I can get by reading the User (that cookie != Django's cookies).
Using test tools like WebTest and requests, I have trouble writing my tests.
class MyTestCase(WebTest):
    def test_my_view(self):
        # client = Client()
        # response = client.get(reverse('create')).form
        form = self.app.get(reverse('create'), user='dummy').form
        print form.fields.values()
        form['name'] = 'omghell0'
        print form
        response = form.submit()
I need to submit a form, which creates, say, a user on my external service. But to do that, I normally would pass in request.user (in order to authenticate my privilege to external service). But I don't have request.user.
What options do I have for this kind of stuff?
Thanks...
Suppose this is my tests.py
import unittest

from django.test.client import Client
from django.core.urlresolvers import reverse
from django_webtest import WebTest
from django.contrib.auth.models import User

class SimpleTest(unittest.TestCase):
    def setUp(self):
        self.usr = User.objects.get(username='dummy')
        print self.usr
    .......
I get
Traceback (most recent call last):
  File "/var/lib/graphyte-webclient/webclient/apps/codebundles/tests.py", line 10, in setUp
    self.usr = User.objects.get(username='dummy')
  File "/var/lib/graphyte-webclient/graphyte-webenv/lib/python2.6/site-packages/django/db/models/manager.py", line 132, in get
    return self.get_query_set().get(*args, **kwargs)
  File "/var/lib/graphyte-webclient/graphyte-webenv/lib/python2.6/site-packages/django/db/models/query.py", line 341, in get
    % self.model._meta.object_name)
DoesNotExist: User matching query does not exist
But if I query User.objects in my views, it works fine.
You need to use the setUp() method to create test users for testing - testing never uses live data, but creates a temporary test database to run your unit tests. Read this for more information: https://docs.djangoproject.com/en/dev/topics/testing/?from=olddocs#writing-unit-tests
EDIT:
Here's an example:
from django.utils import unittest
from django.contrib.auth.models import User
from myapp.models import ThisModel, ThatModel

class ModelTest(unittest.TestCase):
    def setUp(self):
        # Create some users
        self.user_1 = User.objects.create_user('Chevy Chase', 'chevy@chase.com', 'chevyspassword')
        self.user_2 = User.objects.create_user('Jim Carrey', 'jim@carrey.com', 'jimspassword')
        self.user_3 = User.objects.create_user('Dennis Leary', 'dennis@leary.com', 'denisspassword')
Also note that, if you are going to use more than one method to test different functionality, you should use the tearDown method to destroy objects before reinstantiating them for the next test. This is something that took me a while to finally figure out, so I'll save you the trouble.
    def tearDown(self):
        # Clean up after each test
        self.user_1.delete()
        self.user_2.delete()
        self.user_3.delete()
Django recommends using either unit tests or doc tests, as described here. You can put these tests into tests.py in each app's directory, and they will run when the command `python manage.py test` is used.
Django provides very helpful classes and functions for unit testing, as described here. In particular, the class django.test.Client is very convenient, and lets you control things like users.
https://docs.djangoproject.com/en/1.4/topics/testing/#module-django.test.client
Use the django test client to simulate requests. If you need to test the behavior of the returned result then use Selenium.
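As a small illustration (the URL and credentials are made up), the test client logs in and issues requests entirely against the test database:

from django.contrib.auth.models import User
from django.test import TestCase

class AskQuestionViewTest(TestCase):
    def setUp(self):
        # Created in the test database, never in your development data
        User.objects.create_user('bryan', 'bryan@example.com', 'secret')

    def test_ask_page_loads_for_logged_in_user(self):
        self.client.login(username='bryan', password='secret')
        response = self.client.get('/questions/ask/')
        self.assertEqual(response.status_code, 200)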
I like to run tests before I commit.
I'm new to Selenium and I don't understand how to run the tests and not change the database.
My local database has dozens of identical posted questions.
Is there any way that I can have these tests run and have the database restored to its original state on tearDown?
from selenium import webdriver
from django.utils import unittest
from selenium.webdriver.support.ui import WebDriverWait

class TestAuthentication(unittest.TestCase):
    scheme = 'http'
    host = 'localhost'
    port = '4444'

    def setUp(self):
        self._driver = webdriver.Firefox()
        self._driver.implicitly_wait(5)

    def login_as_Bryan(self):
        self._driver.get('http://localhost:8000/account/signin/')
        user = self._driver.find_element_by_id('id_username')
        user.send_keys("Bryan")
        password = self._driver.find_element_by_id('id_password')
        password.send_keys('***************')
        submit = self._driver.find_element_by_id('blogin')
        submit.click()

    def test_user_should_be_able_to_login_manually(self):
        self.login_as_Bryan()
        message = self._driver.find_element_by_class_name('darkred')
        self.assertEqual("Welcome back Bryan, you are now logged in", message.text)

    def test_Bryan_can_post_question(self):
        self.login_as_Bryan()
        self._driver.find_element_by_link_text("ask a question").click()
        self._driver.find_element_by_id('id_title').send_keys("Question should succeed")
        self._driver.find_element_by_id('editor').send_keys("This is the body text.")
        self._driver.find_element_by_id('id_tags').send_keys("test")
        self._driver.find_element_by_class_name("submit").click()
        self.assertTrue(self._driver.find_element_by_link_text("Question should succeed"))

    def tearDown(self):
        self._driver.quit()
The issue is not so much Selenium as it is your execution environment. It depends on how you fire up your application.
In general, you need to bootstrap your application launch so that it points to a temporary database for use only during that test. After the test execution, you should delete that database.
Alternately, you can provide a UI mechanism in your actual website to clear / refresh the test database. In that case, you still need a test database, but you don't need to delete/recreate it with every test execution.
You can use django-selenium; it runs tests against the test database.
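Alternatively, Django itself (1.4+) ships LiveServerTestCase, which starts a live server backed by the test database, so Selenium never touches your development data. A minimal sketch, reusing the signin URL from the question:

from django.test import LiveServerTestCase
from selenium import webdriver

class SigninPageTest(LiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        super(SigninPageTest, cls).setUpClass()
        cls.driver = webdriver.Firefox()

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()
        super(SigninPageTest, cls).tearDownClass()

    def test_signin_page_loads(self):
        # live_server_url points at the temporary test server, not localhost:8000
        self.driver.get(self.live_server_url + '/account/signin/')
        self.assertIn('id_username', self.driver.page_source)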
In unit tests I need to load fixtures, as below:
class TestQuestionBankViews(TestCase):
    # Load fixtures
    fixtures = ['qbank']

    def setUp(self):
        login = self.client.login(email="mail@gmail.com", password="welcome")

    def test_starting_an_exam_view(self):
        candidate = Candidate.objects.get(email="mail@gmail.com")
        .......etc

    def test_review_view(self):
        self.assertTrue(True)
        .........

    def test_review_view2(self):
        self.assertTrue(True)
        .........
Problem:
These fixtures are loading for every test, i.e. before test_review_view, test_review_view2, etc., as Django flushes the database after every test.
This behaviour is causing tests to take a long time to complete.
How can I prevent this redundant fixture loading?
Is there a way to load fixtures in setUp and flush them when the test class is finished, instead of flushing between every test?
Using django-nose and a bit of code, you can do exactly what you asked for. With django-nose, you can have per-package, per-module and per-class setup and teardown functions. That allows you to load your fixtures in one of the higher-up setup functions and disable the django.test.TestCase's resetting of the fixtures between tests.
Here is an example test file:
from django.test import TestCase
from django.core import management

def setup():
    management.call_command('loaddata', 'MyFixture.json', verbosity=0)

def teardown():
    management.call_command('flush', verbosity=0, interactive=False)

class MyTestCase(TestCase):
    def _fixture_setup(self):
        pass

    def test_something(self):
        self.assertEqual(1, 1)
Notice that setup and teardown are outside of the class. The setup will be run before all the test classes in this file, and the teardown will be run after all test classes.
Inside the class, you will notice the def _fixture_setup(self) method. This overrides the function that resets the database in between each test.
Keep in mind that if your tests write anything to the database, this could invalidate your tests. So any other tests that need fixtures reloaded for each test should be put in a different test file.
Or use setUpModule:
import unittest

def setUpModule():
    print 'Module setup...'

def tearDownModule():
    print 'Module teardown...'

class Test(unittest.TestCase):
    def setUp(self):
        print 'Class setup...'

    def tearDown(self):
        print 'Class teardown...'

    def test_one(self):
        print 'One'

    def test_two(self):
        print 'Two'
prints:
Creating test database for alias 'default'...
Module setup...
Class setup...
One
Class teardown...
Class setup...
Two
Class teardown...
Module teardown...
For what it's worth, and since there's no accepted answer, Django 1.8 now provides this functionality out of the box - provided you're using a database backend that supports transactions.
It also adds the TestCase.setUpTestData() method for the manual creation of test data once per TestCase class.
See the Django 1.8 release notes.
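A small sketch of that API, reusing the qbank fixture and Candidate model from the question (the import path is hypothetical; adjust it to your app):

from django.test import TestCase
from apps.exam.models import Candidate  # hypothetical import path

class QuestionBankTests(TestCase):
    fixtures = ['qbank']  # on Django 1.8+ this is loaded once per class, not once per test

    @classmethod
    def setUpTestData(cls):
        # Runs a single time for the whole class; each test method sees this
        # data through a transaction that is rolled back after the test.
        cls.candidate = Candidate.objects.get(email='mail@gmail.com')

    def test_candidate_loaded(self):
        self.assertEqual(self.candidate.email, 'mail@gmail.com')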
If you don't feel like installing a new package just for this purpose, you can combine Tom Wainwright's solution and mhost's solution.
In your test file, add these functions outside of any classes:
from django.core.management import call_command

def setUpModule():
    call_command(
        'loaddata',
        'path_to_fixture.json',
        verbosity=0
    )

def tearDownModule():
    call_command('flush', interactive=False, verbosity=0)
If you don't want these fixtures loaded into the database for all test cases, split the tests into multiple files: create a new directory in the app called tests, add an empty __init__.py file to tell Python that this is a package, and add your test files with names that begin with test, since the runner looks for files matching the pattern test*.py.
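For example, a layout along these lines (file names are arbitrary) keeps fixture-heavy tests in one module with its own setUpModule/tearDownModule and pure unit tests in another:

myapp/
    tests/
        __init__.py
        test_with_fixtures.py   # uses setUpModule to loaddata, tearDownModule to flush
        test_pure_logic.py      # no fixtures, no database access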
I've run into the same problem. In general, there isn't a really good way to do this using Django's test runner. You might be interested in this thread.
That being said, if all the test cases use the same fixture, and they don't modify the data in any way, then using initial_data would work.
I had a similar problem once and ended up writing my own test runner. In my case initial_data was not the right place, as initial_data would be loaded during syncdb, which I did not want. I overrode the setup_test_environment and teardown_test_environment methods to load my custom fixture before the test suite ran and to remove it once done.
django-nose provides a readymade solution to this problem: simply subclass django_nose.FastFixtureTestCase.
Additionally, django-nose supports fixture bundling, which can speed up your test runs even more by only loading each unique set of fixtures once per test run. After having subclassed FastFixtureTestCase where appropriate, run the django-nose test runner using the --with-fixture-bundling option.
See django-nose on pypi for more information.
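A minimal sketch of that subclassing, reusing the qbank fixture from the earlier question:

from django_nose import FastFixtureTestCase

class QuestionBankFastTests(FastFixtureTestCase):
    fixtures = ['qbank']  # loaded once and shared by the tests in this class

    def test_review_view(self):
        self.assertTrue(True)

Then, with django-nose configured as your TEST_RUNNER, run python manage.py test --with-fixture-bundling to also share identical fixture sets across test classes.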