We use django-haystack as our search index. It's generally great, but during tests it adds overhead to every model object creation and save, and for most tests it is not required, so I would like to avoid it. I thought I'd use override_settings to swap in a dummy that does nothing, but I've now tried both the BaseSignalProcessor and the SimpleEngine and I can still see our search index (Elasticsearch) getting hit a lot.
The two versions I have tried are:
First, using the SimpleEngine, which does no data preparation:
from django.test import TestCase
from django.test.utils import override_settings

HAYSTACK_DUMMY_INDEX = {
    'default': {
        'ENGINE': 'haystack.backends.simple_backend.SimpleEngine',
    }
}

@override_settings(HAYSTACK_CONNECTIONS=HAYSTACK_DUMMY_INDEX)
class TestAllTheThings(TestCase):
    # ...
and then using the BaseSignalProcessor, which should mean that the save signals are never hooked up:
from django.test import TestCase
from django.test.utils import override_settings

@override_settings(HAYSTACK_SIGNAL_PROCESSOR='haystack.signals.BaseSignalProcessor')
class TestAllTheThings(TestCase):
    # ...
I am using pytest as the test runner, in case that matters.
Any idea if there is something I am missing?
The settings are only accessed once, so overriding them after the fact won't change anything.
Instead, you can subclass the signal processor and stick in some logic to conditionally disable it like so:
from django.conf import settings
# Subclass RealtimeSignalProcessor (which actually connects the
# post_save/post_delete signals); BaseSignalProcessor never hooks
# them up, so there would be nothing to toggle.
from haystack.signals import RealtimeSignalProcessor

class TogglableSignalProcessor(RealtimeSignalProcessor):
    settings_key = 'HAYSTACK_DISABLE'

    def handle_save(self, sender, instance, **kwargs):
        if not getattr(settings, self.settings_key, False):
            super().handle_save(sender, instance, **kwargs)

    def handle_delete(self, sender, instance, **kwargs):
        if not getattr(settings, self.settings_key, False):
            super().handle_delete(sender, instance, **kwargs)
Now if you configure that as your signal processor, you can easily disable it in tests. The settings key can be set from an environment variable if you're just using manage.py test and not a custom runner. Otherwise you should know where to stick it.
import os
HAYSTACK_DISABLE = 'IS_TEST' in os.environ
And run it with
IS_TEST=1 python manage.py test
And for the few tests where you want it enabled, use override_settings() as you have already tried:
class MyTest(TestCase):
    @override_settings(HAYSTACK_DISABLE=False)
    def that_one_test_where_its_needed(self):
        pass
Of course you can go even further and choose the signal processor class itself conditionally in settings, so that on a busy live site the extra settings check doesn't slow anything down.
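For example, a minimal sketch of that last idea, assuming the TogglableSignalProcessor above lives in a (hypothetical) myapp/signal_processors.py and reusing the IS_TEST environment variable:

# settings.py -- a sketch; the module path is an assumption
import os

IS_TEST = 'IS_TEST' in os.environ
HAYSTACK_DISABLE = IS_TEST

if IS_TEST:
    # Togglable processor: off by default during tests, re-enabled
    # per test with override_settings(HAYSTACK_DISABLE=False).
    HAYSTACK_SIGNAL_PROCESSOR = 'myapp.signal_processors.TogglableSignalProcessor'
else:
    # Live site: the stock real-time processor, with no extra
    # settings lookup on every save/delete.
    HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor'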
I'm writing tests for Django views. Some of the views make external HTTP requests, and while running the tests I don't want to execute these requests, since the data used during tests is dummy data and the HTTP requests will not behave as expected.
What could be the possible options for this?
You could override settings in your tests and then check for that setting in your view. Here are the docs for overriding settings.
from django.conf import settings

if not getattr(settings, 'TEST_API', False):
    # make the real API call here
    pass
Then your test would look something like this:
from django.test import TestCase, override_settings

class LoginTestCase(TestCase):
    @override_settings(TEST_API=True)
    def test_api_func(self):
        # Do test here
        pass
Since it would be fairly messy to have those checks all over the place, I would recommend creating a mixin that would look something like this:
class SensitiveAPIMixin(object):
    def api_request(self, url, *args, **kwargs):
        from django.conf import settings
        if not getattr(settings, 'TEST_API', False):
            # Do the API request in here
            request = api_call(url)  # placeholder for your HTTP client call
            return request
Then, through the power of multiple inheritance, views that need to make a request to this API can do something similar to this:
class View(SensitiveAPIMixin, generic.ListView):
    def get(self, request, *args, **kwargs):
        data = self.api_request('http://example.com/api1')
        # ...
This is where mocking comes in. In your tests, you can use libraries to patch parts of the code under test so that they return the results you expect, bypassing what that code actually does.
You can read a good blog post about mocking in Python here.
If you are on Python 3.3 or later, the mock library is included in the standard library as unittest.mock. If not, you can download it from PyPI.
The exact details of how to mock the calls you're making will depend on what exactly your view code looks like.
Ben is right on, but here's some pseudo-ish code that might help. The patch here assumes you're using requests, but change the path as necessary to mock out what you need.
from unittest import mock

from django.test import TestCase
from django.core.urlresolvers import reverse

class MyTestCase(TestCase):
    @mock.patch('requests.post')  # this is all you need to stop the API call
    def test_my_view_that_posts_to_an_api(self, mock_post):
        response = self.client.get(reverse('my-view-name'))
        self.assertEqual('my-value', response.data['my-key'])
        # other assertions as necessary
I realize there are many other questions related to custom Django signals that don't work, and believe me, I have read all of them several times with no luck getting my particular situation to work.
Here's the deal: I'm using django-rq to manage a lengthy background process that is set off by a particular HTTP request. When that background process is done, I want it to fire off a custom Django signal so that django-rq can be checked for any job failures/exceptions.
Two applications, both on the INSTALLED_APPS list, are at the same level. Inside app1 there is a file:
signals.py
import django.dispatch
file_added = django.dispatch.Signal(providing_args=["issueKey", "file"])
fm_job_done = django.dispatch.Signal(providing_args=["jobId"])
and also a file jobs.py
from app1 import signals
from django.conf import settings

jobId = 23
issueKey = "fake"
fileObj = "alsoFake"

try:
    pass
finally:
    signals.file_added.send(sender=settings.SIGNAL_SENDER, issueKey=issueKey, fileName=fileObj)
    signals.fm_job_done.send(sender=settings.SIGNAL_SENDER, jobId=jobId)
then inside app2, in views.py:
import logging

from app1.signals import file_added, fm_job_done
from django.conf import settings

# Set up signal handlers
def fm_job_done_callback(sender, **kwargs):
    print("hellooooooooooooooooooooooooooooooooooo")
    logging.info("file manager job done signal fired")

def file_added_callback(sender, **kwargs):
    print("hellooooooooooooooooooooooooooooooooooo")
    logging.info("file added signal fired")

file_added.connect(file_added_callback, sender=settings.SIGNAL_SENDER, weak=False)
fm_job_done.connect(fm_job_done_callback, sender=settings.SIGNAL_SENDER, weak=False)
I don't get any feedback whatsoever, though, and am at a total loss. I know for a fact that jobs.py is executing, and therefore also that the block of code that should be firing the signals is executing, since it is in a finally block. (No, the try is not actually empty; I just put pass there for simplicity.) Please feel free to ask for more information and I'll respond ASAP.
Here is the solution for Django 2.0+.
settings.py:
Change the name of your app in INSTALLED_APPS from 'app2' to 'app2.apps.App2Config'.
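That is, something like:

INSTALLED_APPS = [
    # ...
    'app1',
    'app2.apps.App2Config',  # instead of plain 'app2'
]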
app2/apps.py:
from django.apps import AppConfig
from app1.signals import file_added, fm_job_done

class App2Config(AppConfig):
    name = 'app2'

    def ready(self):
        from .views import fm_job_done_callback, file_added_callback
        file_added.connect(file_added_callback)
        fm_job_done.connect(fm_job_done_callback)
Use the Django receiver decorator:
from django.dispatch import receiver
from app1.signals import file_added, fm_job_done

@receiver(fm_job_done)
def fm_job_done_callback(sender, **kwargs):
    print("helloooooooooooooo")

@receiver(file_added)
def file_added_callback(sender, **kwargs):
    print("helloooooooooooooo")
Also, I prefer to handle signals in models.py
Is there a way to write Django unit tests without setting up a database? I want to test business logic that doesn't require the database. And while it is fast to set up a database, I really don't need it in some situations.
You can subclass DjangoTestSuiteRunner and override the setup_databases and teardown_databases methods so they do nothing.
Create a new settings file and set TEST_RUNNER to the new class you just created. Then, when running your tests, specify the new settings file with the --settings flag.
Here is what I did:
Create a custom test suite runner similar to this:
from django.test.simple import DjangoTestSuiteRunner

class NoDbTestRunner(DjangoTestSuiteRunner):
    """ A test runner to test without database creation """

    def setup_databases(self, **kwargs):
        """ Override the database creation defined in parent class """
        pass

    def teardown_databases(self, old_config, **kwargs):
        """ Override the database teardown defined in parent class """
        pass
Create a custom settings file:
from mysite.settings import *
# Test runner with no database creation
TEST_RUNNER = 'mysite.scripts.testrunner.NoDbTestRunner'
When running your tests, use the --settings flag set to your new settings file:
python manage.py test myapp --settings='no_db_settings'
UPDATE: April 2018
Since Django 1.8, django.test.simple.DjangoTestSuiteRunner was moved to django.test.runner.DiscoverRunner.
For more info, check the official docs section about custom test runners.
Generally, tests in an application can be classified into two categories:
Unit tests, which test individual snippets of code in isolation and do not need to touch the database.
Integration tests, which actually go to the database and test the fully integrated logic.
Django supports both unit and integration tests.
Unit tests do not require database setup and teardown, and should inherit from SimpleTestCase.
from django.test import SimpleTestCase

class ExampleUnitTest(SimpleTestCase):
    def test_something_works(self):
        self.assertTrue(True)
Integration test cases should inherit from TestCase, which in turn inherits from TransactionTestCase; these set up the test database and reset it between tests.
from django.test import TestCase

class ExampleIntegrationTest(TestCase):
    def test_something_works(self):
        # do something with the database
        self.assertTrue(True)
This strategy ensures that a database is created and destroyed only for the test cases that access the database, making your tests more efficient.
From django.test.simple
warnings.warn(
    "The django.test.simple module and DjangoTestSuiteRunner are deprecated; "
    "use django.test.runner.DiscoverRunner instead.",
    RemovedInDjango18Warning)
So subclass DiscoverRunner instead of DjangoTestSuiteRunner.
from django.test.runner import DiscoverRunner

class NoDbTestRunner(DiscoverRunner):
    """ A test runner to test without database creation/deletion """

    def setup_databases(self, **kwargs):
        pass

    def teardown_databases(self, old_config, **kwargs):
        pass
Use it like this:
python manage.py test --testrunner=app.filename.NoDbTestRunner app
I chose to inherit from django.test.runner.DiscoverRunner and make a couple of additions to the run_tests method.
My first addition checks whether setting up a database is necessary, and only allows the normal setup_databases functionality to kick in if it is. My second addition allows the normal teardown_databases to run only if the setup_databases method was allowed to run.
My code assumes that any TestCase that inherits from django.test.TransactionTestCase (and thus django.test.TestCase) requires a database to be set up. I made this assumption because the Django docs say:
If you need any of the other more complex and heavyweight Django-specific features like ... Testing or using the ORM ... then you should use TransactionTestCase or TestCase instead.
https://docs.djangoproject.com/en/1.6/topics/testing/tools/#django.test.SimpleTestCase
mysite/scripts/settings.py
from django.test import TransactionTestCase
from django.test.runner import DiscoverRunner

class MyDiscoverRunner(DiscoverRunner):
    def run_tests(self, test_labels, extra_tests=None, **kwargs):
        """
        Run the unit tests for all the test labels in the provided list.

        Test labels should be dotted Python paths to test modules, test
        classes, or test methods.

        A list of 'extra' tests may also be provided; these tests
        will be added to the test suite.

        If any of the tests in the test suite inherit from
        ``django.test.TransactionTestCase``, databases will be set up.
        Otherwise, databases will not be set up.

        Returns the number of tests that failed.
        """
        self.setup_test_environment()
        suite = self.build_suite(test_labels, extra_tests)
        # ----------------- First Addition --------------
        need_databases = any(isinstance(test_case, TransactionTestCase)
                             for test_case in suite)
        old_config = None
        if need_databases:
            # --------------- End First Addition ------------
            old_config = self.setup_databases()
        result = self.run_suite(suite)
        # ----------------- Second Addition -------------
        if need_databases:
            # --------------- End Second Addition -----------
            self.teardown_databases(old_config)
        self.teardown_test_environment()
        return self.suite_result(suite, result)
Finally, I added the following line to my project's settings.py file.
mysite/settings.py
TEST_RUNNER = 'mysite.scripts.settings.MyDiscoverRunner'
Now, when running only non-db-dependent tests, my test suite runs an order of magnitude faster! :)
Updated: also see this answer about using the third-party tool pytest.
@Cesar is right. After accidentally running ./manage.py test --settings=no_db_settings without specifying an app name, my development database was wiped out.
To be safer, use the same NoDbTestRunner, but in conjunction with the following mysite/no_db_settings.py:
from mysite.settings import *
# Test runner with no database creation
TEST_RUNNER = 'mysite.scripts.testrunner.NoDbTestRunner'
# Use an alternative database as a safeguard against accidents
DATABASES['default']['NAME'] = '_test_mysite_db'
You need to create a database called _test_mysite_db using an external database tool. Then run the following command to create the corresponding tables:
./manage.py syncdb --settings=mysite.no_db_settings
If you're using South, also run the following command:
./manage.py migrate --settings=mysite.no_db_settings
OK!
You can now run unit tests blazingly fast (and safely) with:
./manage.py test myapp --settings=mysite.no_db_settings
As an alternative to modifying your settings to make NoDbTestRunner "safe", here's a modified version of NoDbTestRunner that closes the current database connection and removes the connection information from settings and the connection object. It works for me; test it in your environment before relying on it :)
class NoDbTestRunner(DjangoTestSuiteRunner):
    """ A test runner to test without database creation """

    def __init__(self, *args, **kwargs):
        # Hide/disconnect databases to prevent tests that *do*
        # require a database, and accidentally get run, from
        # altering your data.
        from django.db import connections
        from django.conf import settings
        connections.databases = settings.DATABASES = {}
        connections._connections['default'].close()
        del connections._connections['default']
        super(NoDbTestRunner, self).__init__(*args, **kwargs)

    def setup_databases(self, **kwargs):
        """ Override the database creation defined in parent class """
        pass

    def teardown_databases(self, old_config, **kwargs):
        """ Override the database teardown defined in parent class """
        pass
Another solution would be to have your test class simply inherit from unittest.TestCase instead of any of Django's test classes. The Django docs (https://docs.djangoproject.com/en/2.0/topics/testing/overview/#writing-tests) contain the following warning about this:
Using unittest.TestCase avoids the cost of running each test in a transaction and flushing the database, but if your tests interact with the database their behavior will vary based on the order that the test runner executes them. This can lead to unit tests that pass when run in isolation but fail when run in a suite.
However, if your test doesn't use the database, this warning needn't concern you and you can reap the benefits of not having to run each test case in a transaction.
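For instance, a minimal sketch, where compute_discount stands in for any pure function of yours with no ORM access:

import unittest

from myapp.pricing import compute_discount  # hypothetical helper

class DiscountTest(unittest.TestCase):
    # Plain unittest.TestCase: no per-test transaction, no database flush.
    def test_ten_percent_off(self):
        self.assertEqual(compute_discount(100.0, 0.10), 90.0)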
The above solutions are fine too, but the following will also reduce database creation time when there are many migrations.
During unit testing, running syncdb instead of running all the South migrations is much faster.
SOUTH_TESTS_MIGRATE = False  # To disable migrations and use syncdb instead
My web host only allows creating and dropping databases from their Web GUI, so I was getting a "Got an error creating the test database: Permission denied" error when trying to run python manage.py test.
I'd hoped to use the --keepdb option to django-admin.py, but it isn't available in Django 1.7 (it was only added in Django 1.8).
What I ended up doing was modifying the Django code in .../django/db/backends/creation.py, specifically the _create_test_db and _destroy_test_db functions.
For _create_test_db I commented out the cursor.execute("CREATE DATABASE ... line and replaced it with pass so the try block wouldn't be empty.
For _destroy_test_db I just commented out cursor.execute("DROP DATABASE - I didn't need to replace it with anything because there was already another command in the block (time.sleep(1)).
After that my tests ran fine - though I did set up a test_ version of my regular database separately.
This isn't a great solution of course, because it will break if Django is upgraded, but I had a local copy of Django due to using virtualenv so at least I have control over when/if I upgrade to a newer version.
Another solution not mentioned: this was easy for me to implement because I already have multiple settings files (for local/staging/production) that inherit from base.py. So, unlike other people, I did not have to override DATABASES['default'], since DATABASES isn't set in base.py.
SimpleTestCase still tried to connect to my test database and run migrations. When I made a config/settings/test.py file that didn't set DATABASES to anything, my unit tests ran without it. This allowed me to use models that have foreign key and unique constraint fields. (Reverse foreign key lookups, which require a database query, fail.)
(Django 2.0.6)
P.S. Code snippets:
PROJECT_ROOT_DIR/config/settings/test.py:
from .base import *

# other test settings

#DATABASES = {
#    'default': {
#        'ENGINE': 'django.db.backends.sqlite3',
#        'NAME': 'PROJECT_ROOT_DIR/db.sqlite3',
#    }
#}
CLI, run from PROJECT_ROOT_DIR:
./manage.py test path.to.app.test --settings config.settings.test
path/to/app/test.py:
from django.test import SimpleTestCase
from .models import *
# ^ assume models.py imports User and defines Classified and UpgradePrice

class TestCaseWorkingTest(SimpleTestCase):
    def test_case_working(self):
        self.assertTrue(True)

    def test_models_ok(self):
        obj = UpgradePrice(title='test', price=1.00)
        self.assertEqual(obj.title, 'test')

    def test_more_complex_model(self):
        user = User(username='testuser', email='hi@hey.com')
        self.assertEqual(user.username, 'testuser')

    def test_foreign_key(self):
        user = User(username='testuser', email='hi@hey.com')
        ad = Classified(user=user, headline='headline', body='body')
        self.assertEqual(ad.user.username, 'testuser')

    # fails with error:
    def test_reverse_foreign_key(self):
        user = User(username='testuser', email='hi@hey.com')
        ad = Classified(user=user, headline='headline', body='body')
        print(user.classified_set.first())
        self.assertTrue(True)  # throws exception and never gets here
When using the nose test runner (django-nose), you can do something like this:
my_project/lib/nodb_test_runner.py:
from django_nose import NoseTestSuiteRunner

class NoDbTestRunner(NoseTestSuiteRunner):
    """
    A test runner to test without database creation/deletion.
    Used for integration tests.
    """

    def setup_databases(self, **kwargs):
        pass

    def teardown_databases(self, old_config, **kwargs):
        pass
You can specify the test runner in your settings.py, i.e.:
TEST_RUNNER = 'lib.nodb_test_runner.NoDbTestRunner'  # was 'django_nose.NoseTestSuiteRunner'
OR
I wanted it for running specific tests only, so I run it like so:
python manage.py test integration_tests/integration_* --noinput --testrunner=lib.nodb_test_runner.NoDbTestRunner
You can set databases to an empty list inside the normal TestCase from django.test.
from django.test import TestCase

class NoDbTestCase(TestCase):
    databases = []
In unit tests I need to load fixtures, as below:
class TestQuestionBankViews(TestCase):
    # Load fixtures
    fixtures = ['qbank']

    def setUp(self):
        login = self.client.login(email="mail@gmail.com", password="welcome")

    def test_starting_an_exam_view(self):
        candidate = Candidate.objects.get(email="mail@gmail.com")
        # ...etc

    def test_review_view(self):
        self.assertTrue(True)
        # ...

    def test_review_view2(self):
        self.assertTrue(True)
        # ...
Problem:
These fixtures are loaded before every test, i.e. before test_review_view, test_review_view2, etc., because Django flushes the database after every test.
This behaviour causes the tests to take a long time to complete.
How can I prevent this redundant fixture loading?
Is there a way to load fixtures in setUp and flush them when the test class is finished, instead of flushing between every test?
Using django-nose and a bit of code, you can do exactly what you asked for. With django-nose you can have per-package, per-module, and per-class setup and teardown functions. That allows you to load your fixtures in one of the higher-level setup functions and disable django.test.TestCase's resetting of the fixtures between tests.
Here is an example test file:
from django.test import TestCase
from django.core import management

def setup():
    management.call_command('loaddata', 'MyFixture.json', verbosity=0)

def teardown():
    management.call_command('flush', verbosity=0, interactive=False)

class MyTestCase(TestCase):
    def _fixture_setup(self):
        pass

    def test_something(self):
        self.assertEqual(1, 1)
Notice that setup and teardown are outside of the class. The setup will be run before all the test classes in this file, and the teardown will be run after all test classes.
Inside the class, you will notice the _fixture_setup() method. This overrides the function that resets the database between each test.
Keep in mind that if your tests write anything to the database, this could invalidate your tests. So any other tests that need fixtures reloaded for each test should be put in a different test file.
Or use setUpModule:
import unittest

def setUpModule():
    print('Module setup...')

def tearDownModule():
    print('Module teardown...')

class Test(unittest.TestCase):
    def setUp(self):
        print('Class setup...')

    def tearDown(self):
        print('Class teardown...')

    def test_one(self):
        print('One')

    def test_two(self):
        print('Two')
prints:
Creating test database for alias 'default'...
Module setup...
Class setup...
One
Class teardown...
Class setup...
Two
Class teardown...
Module teardown...
For what it's worth, and since there's no accepted answer: Django 1.8 now provides this functionality out of the box, provided you're using a database backend that supports transactions.
It also adds the TestCase.setUpTestData() method for manually creating test data once per TestCase class.
See the Django 1.8 release notes.
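For illustration, a short sketch of setUpTestData() (the Author model here is a stand-in for one of your own):

from django.test import TestCase

from myapp.models import Author  # hypothetical model

class AuthorTests(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Runs once per class; each test then runs in its own
        # transaction that is rolled back, so this row survives.
        cls.author = Author.objects.create(name='Jane')

    def test_name(self):
        self.assertEqual(self.author.name, 'Jane')

    def test_data_not_recreated_per_test(self):
        self.assertEqual(Author.objects.count(), 1)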
If you don't feel like installing a new package just for this purpose, you can combine Tom Wainwright's solution and mhost's solution.
In your test file, add these functions outside of any classes:
from django.core.management import call_command

def setUpModule():
    call_command(
        'loaddata',
        'path_to_fixture.json',
        verbosity=0
    )

def tearDownModule():
    call_command('flush', interactive=False, verbosity=0)
If you don't want these fixtures loaded into the database for all test cases, split the tests into multiple files: create a new directory in the app called tests, add an empty __init__.py file to tell Python that it is a package, and add your test files with names that begin with test, since the runner looks for files matching the pattern test*.py.
I've run into the same problem. In general, there isn't a really good way to do this using Django's test runner. You might be interested in this thread.
With that being said, if all the test cases use the same fixture, and they don't modify the data in any way, then using initial_data would work.
I had a similar problem once and ended up writing my own test runner. In my case initial_data was not the right place, since initial_data is loaded during syncdb, which I did not want. I overrode the setup_ and teardown_test_environment methods to load my custom fixture before the test suite was run and to remove it once done.
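A rough sketch of that idea against the modern DiscoverRunner (the fixture name is made up, and the fixture is loaded right after the test databases are created, since loaddata needs them to exist):

from django.core.management import call_command
from django.test.runner import DiscoverRunner

class FixtureOnceRunner(DiscoverRunner):
    """Load a suite-wide fixture exactly once per test run."""

    def setup_databases(self, **kwargs):
        old_config = super().setup_databases(**kwargs)
        # django.test.TestCase subclasses roll their per-test
        # transactions back, so this data stays in place for the run.
        call_command('loaddata', 'suite_fixture.json', verbosity=0)
        return old_config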
django-nose provides a ready-made solution to this problem: simply subclass django_nose.FastFixtureTestCase.
Additionally, django-nose supports fixture bundling, which can speed up your test runs even more by only loading each unique set of fixtures once per test run. After having subclassed FastFixtureTestCase where appropriate, run the django-nose test runner using the --with-fixture-bundling option.
See django-nose on pypi for more information.
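Applied to the question's test class, that would look something like this (assuming django-nose is installed and configured as TEST_RUNNER):

from django_nose import FastFixtureTestCase

class TestQuestionBankViews(FastFixtureTestCase):
    fixtures = ['qbank']  # loaded once for the class instead of once per test

    def test_review_view(self):
        self.assertTrue(True)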