Is it possible to disable django haystack for some tests? - django

We use django-haystack as our search index. It's generally great, but during tests it adds overhead to every model object creation and save, and for most tests it is not required, so I would like to avoid it. I thought I'd use override_settings to point Haystack at a dummy backend that does nothing, but I've now tried both the BaseSignalProcessor and the SimpleEngine and I can still see our search index (Elasticsearch) getting hit a lot.
The two versions I have tried are:
First, using the SimpleEngine, which does no data preparation:
from django.test import TestCase
from django.test.utils import override_settings

HAYSTACK_DUMMY_INDEX = {
    'default': {
        'ENGINE': 'haystack.backends.simple_backend.SimpleEngine',
    }
}

@override_settings(HAYSTACK_CONNECTIONS=HAYSTACK_DUMMY_INDEX)
class TestAllTheThings(TestCase):
    # ...
and then using the BaseSignalProcessor, which should mean that the save signals are not hooked up:
from django.test import TestCase
from django.test.utils import override_settings

@override_settings(HAYSTACK_SIGNAL_PROCESSOR='haystack.signals.BaseSignalProcessor')
class TestAllTheThings(TestCase):
    # ...
I am using pytest as the test runner in case that matters.
Any idea if there is something I am missing?

The setting is only accessed once, so overriding it after the fact won't change anything.
Instead, you can subclass the signal processor and stick in some logic to conditionally disable it like so:
from django.conf import settings
from haystack.signals import RealtimeSignalProcessor

class TogglableSignalProcessor(RealtimeSignalProcessor):
    # Builds on RealtimeSignalProcessor so the save/delete signals are
    # still connected when indexing is not disabled.
    settings_key = 'HAYSTACK_DISABLE'

    def handle_save(self, sender, instance, **kwargs):
        if not getattr(settings, self.settings_key, False):
            super().handle_save(sender, instance, **kwargs)

    def handle_delete(self, sender, instance, **kwargs):
        if not getattr(settings, self.settings_key, False):
            super().handle_delete(sender, instance, **kwargs)
Now, if you configure that class as your HAYSTACK_SIGNAL_PROCESSOR, you can easily disable indexing in tests. The settings key can be set from an environment variable if you're just using manage.py test and not a custom runner; otherwise you should know where to stick it.
import os
HAYSTACK_DISABLE = 'IS_TEST' in os.environ
And run it with
IS_TEST=1 python manage.py test
And for the few tests where you do want indexing enabled, use override_settings() like you have already tried, flipping the flag back off:
class MyTest(TestCase):
    @override_settings(HAYSTACK_DISABLE=False)
    def that_one_test_where_its_needed(self):
        pass
Of course you can go even further and make the signal processor class itself conditional in settings, so that on a busy live site the extra getattr() check never runs at all.
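A minimal sketch of that last idea, assuming the TogglableSignalProcessor above lives in a hypothetical search/signal_processors.py module (adjust the path to wherever you put it):

# settings.py -- sketch only; 'search.signal_processors' is a made-up module path
import os

if 'IS_TEST' in os.environ:
    # Tests get the togglable processor plus the flag that silences it.
    HAYSTACK_SIGNAL_PROCESSOR = 'search.signal_processors.TogglableSignalProcessor'
    HAYSTACK_DISABLE = True
else:
    # Production keeps the plain realtime processor, so the extra
    # getattr() check never runs on a live request.
    HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor'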

Related

How to check for `--keepdb` flag in Django

I've created an AppConfig that uses the post_migrate signal to run some extra SQL every time I do a migration. In my tests sometimes I use --keepdb to speed up running the tests but it's still triggering the post_migrate signal. How can I check to see if the --keepdb flag was used so I can skip running my extra SQL commands? I've looked in the Django documentation and the source code and I can't seem to find any way to do that.
I was able to solve this by creating a TestRunner that stores the --keepdb flag on the settings like this:
from django.conf import settings
from django.test.runner import DiscoverRunner

class KeepDBTestRunner(DiscoverRunner):
    def __init__(self, *args, **kwargs):
        settings.KEEP_DB = kwargs.get('keepdb', False)
        super().__init__(*args, **kwargs)
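To tie it together, here is a sketch of the wiring: point TEST_RUNNER at the runner above and check the flag inside the post_migrate handler. The module paths, app name, and handler name are made up for illustration.

# settings.py
TEST_RUNNER = 'myproject.test_runner.KeepDBTestRunner'  # path is an assumption

# myapp/apps.py -- hypothetical AppConfig showing the check
from django.apps import AppConfig
from django.conf import settings
from django.db.models.signals import post_migrate

def run_extra_sql(sender, **kwargs):
    if getattr(settings, 'KEEP_DB', False):
        return  # --keepdb was passed, so skip the extra SQL
    # ... run the extra SQL here ...

class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        post_migrate.connect(run_extra_sql, sender=self)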

Django: Avoid HTTP API calls while testing from django views

I'm writing tests for Django views, and some of the views make external HTTP requests. While running the tests I don't want to execute those HTTP requests, since the data used during tests is dummy data and the requests will not behave as expected.
What are the possible options for this?
You could override settings in your tests and then check for that setting in your view. Here are the docs to override settings.
from django.conf import settings

if not settings.TEST_API:
    # api call here
Then your test would look something like this:
from django.test import TestCase, override_settings

class LoginTestCase(TestCase):
    @override_settings(TEST_API=True)
    def test_api_func(self):
        # Do test here
        pass
Since it would be fairly messy to have those checks all over the place, I would recommend creating a mixin that looks something like this:
class SensitiveAPIMixin(object):
    def api_request(self, url, *args, **kwargs):
        from django.conf import settings
        if not settings.TEST_API:
            request = api_call(url)
            # Do api request in here
            return request
Then, through the power of multiple inheritance, the views that need to call this API can do something similar to this:
from django.views import generic

class View(generic.ListView, SensitiveAPIMixin):
    def get(self, request, *args, **kwargs):
        data = self.api_request('http://example.com/api1')
This is where mocking comes in. In your tests, you can use libraries to patch the parts of the code you are testing to return the results you expect for the test, bypassing what that code actually does.
You can read a good blog post about mocking in Python here.
If you are on Python 3.3 or later, mock is included in the standard library as unittest.mock. If not, you can download it from PyPI.
The exact details of how to mock the calls you're making will depend on what exactly your view code looks like.
Ben is right on, but here's some pseudo-ish code that might help. The patch here assumes you're using requests, but change the path as necessary to mock out what you need.
from unittest import mock

from django.test import TestCase
from django.core.urlresolvers import reverse

class MyTestCase(TestCase):
    @mock.patch('requests.post')  # this is all you need to stop the API call
    def test_my_view_that_posts_to_an_api(self, mock_post):
        response = self.client.get(reverse('my-view-name'))
        self.assertEqual('my-value', response.data['my-key'])
        # other assertions as necessary
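If the view actually uses what requests.post returns, you will probably also want to give the mock a canned response. A sketch under the same assumptions as above (the JSON payload and the single-call assertion are illustrative, not taken from the original answer):

from unittest import mock
from django.test import TestCase
from django.urls import reverse  # use django.core.urlresolvers on older Django

class MyTestCaseWithCannedResponse(TestCase):
    @mock.patch('requests.post')
    def test_my_view_with_a_canned_response(self, mock_post):
        # Tell the mocked call what to return to the view code.
        mock_post.return_value.status_code = 200
        mock_post.return_value.json.return_value = {'my-key': 'my-value'}

        response = self.client.get(reverse('my-view-name'))

        mock_post.assert_called_once()  # assuming the view posts to the API exactly once
        self.assertEqual(response.status_code, 200)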

Custom Django Signals Not Working

I realize there are many other questions related to custom django signals that don't work, and believe me, I have read all of them several times with no luck for getting my personal situation to work.
Here's the deal: I'm using django-rq to manage a lengthy background process that is set off by a particular http request. When that background process is done, I want it to fire off a custom Django signal so that the django-rq can be checked for any job failure/exceptions.
Two applications, both on the INSTALLED_APPS list, are at the same level. Inside of app1 there is a file:
signals.py
import django.dispatch
file_added = django.dispatch.Signal(providing_args=["issueKey", "file"])
fm_job_done = django.dispatch.Signal(providing_args=["jobId"])
and also a file jobs.py
from app1 import signals
from django.conf import settings

jobId = 23
issueKey = "fake"
fileObj = "alsoFake"

try:
    pass
finally:
    signals.file_added.send(sender=settings.SIGNAL_SENDER, issueKey=issueKey, fileName=fileObj)
    signals.fm_job_done.send(sender=settings.SIGNAL_SENDER, jobId=jobId)
then inside of app2, in views.py
import logging

from app1.signals import file_added, fm_job_done
from django.conf import settings

# Setup signal handlers
def fm_job_done_callback(sender, **kwargs):
    print "hellooooooooooooooooooooooooooooooooooo"
    logging.info("file manager job done signal fired")

def file_added_callback(sender, **kwargs):
    print "hellooooooooooooooooooooooooooooooooooo"
    logging.info("file added signal fired")

file_added.connect(file_added_callback, sender=settings.SIGNAL_SENDER, weak=False)
fm_job_done.connect(fm_job_done_callback, sender=settings.SIGNAL_SENDER, weak=False)
I don't get any feedback whatsoever, though, and am at a total loss. I know for a fact that jobs.py is executing, and therefore that the block of code that should be firing the signals is executing as well, since it is in a finally block (no, the try is not actually empty; I just put pass there for simplicity). Please feel free to ask for more information and I'll respond asap.
Here is the solution for Django > 2.0.
settings.py: change the entry in INSTALLED_APPS from 'app2' to 'app2.apps.App2Config'.
app2/apps.py:
from django.apps import AppConfig

from app1.signals import file_added, fm_job_done

class App2Config(AppConfig):
    name = 'app2'

    def ready(self):
        from .views import fm_job_done_callback, file_added_callback
        file_added.connect(file_added_callback)
        fm_job_done.connect(fm_job_done_callback)
Use the Django receiver decorator:
from django.dispatch import receiver
from app1.signals import file_added, fm_job_done

@receiver(fm_job_done)
def fm_job_done_callback(sender, **kwargs):
    print("helloooooooooooooo")

@receiver(file_added)
def file_added_callback(sender, **kwargs):
    print("helloooooooooooooo")
Also, I prefer to handle signals in models.py

django unit tests without a db

Is it possible to write Django unit tests without setting up a database? I want to test business logic that doesn't require the database at all. And while it is fast to set up a database, I really don't need it in some situations.
You can subclass DjangoTestSuiteRunner and override setup_databases and teardown_databases methods to pass.
Create a new settings file and set TEST_RUNNER to the new class you just created. Then when you're running your tests, specify your new settings file with the --settings flag.
Here is what I did:
Create a custom test suite runner similar to this:
from django.test.simple import DjangoTestSuiteRunner

class NoDbTestRunner(DjangoTestSuiteRunner):
    """ A test runner to test without database creation """

    def setup_databases(self, **kwargs):
        """ Override the database creation defined in parent class """
        pass

    def teardown_databases(self, old_config, **kwargs):
        """ Override the database teardown defined in parent class """
        pass
Create a custom settings file:
from mysite.settings import *
# Test runner with no database creation
TEST_RUNNER = 'mysite.scripts.testrunner.NoDbTestRunner'
When you're running your tests, run them like the following, with the --settings flag set to your new settings file:
python manage.py test myapp --settings='no_db_settings'
UPDATE (April 2018): since Django 1.8, django.test.simple.DjangoTestSuiteRunner has been replaced by django.test.runner.DiscoverRunner.
For more info, check the official docs section on custom test runners.
Generally, tests in an application can be classified into two categories:
Unit tests: these test individual snippets of code in isolation and do not need to touch the database.
Integration tests: these actually go to the database and test the fully integrated logic.
Django supports both unit and integration tests.
Unit tests do not require setting up and tearing down the database; these should inherit from SimpleTestCase.
from django.test import SimpleTestCase

class ExampleUnitTest(SimpleTestCase):
    def test_something_works(self):
        self.assertTrue(True)
Integration test cases should inherit from TestCase, which in turn inherits from TransactionTestCase; it will set up and tear down the database around each test.
from django.test import TestCase

class ExampleIntegrationTest(TestCase):
    def test_something_works(self):
        # do something with the database
        self.assertTrue(True)
This strategy ensures that the database is created and destroyed only for the test cases that access it, and therefore the tests run more efficiently.
From django.test.simple:

warnings.warn(
    "The django.test.simple module and DjangoTestSuiteRunner are deprecated; "
    "use django.test.runner.DiscoverRunner instead.",
    RemovedInDjango18Warning)
So override DiscoverRunner instead of DjangoTestSuiteRunner.
from django.test.runner import DiscoverRunner

class NoDbTestRunner(DiscoverRunner):
    """ A test runner to test without database creation/deletion """

    def setup_databases(self, **kwargs):
        pass

    def teardown_databases(self, old_config, **kwargs):
        pass
Use it like this:
python manage.py test --testrunner=app.filename.NoDbTestRunner app
I chose to inherit from django.test.runner.DiscoverRunner and make a couple of additions to the run_tests method.
My first addition checks whether setting up a database is necessary, and only lets the normal setup_databases functionality kick in if it is. My second addition allows the normal teardown_databases to run only if setup_databases was allowed to run.
My code assumes that any TestCase that inherits from django.test.TransactionTestCase (and thus django.test.TestCase) requires a database to be set up. I made this assumption because the Django docs say:
If you need any of the other more complex and heavyweight Django-specific features like ... Testing or using the ORM ... then you should use TransactionTestCase or TestCase instead.
https://docs.djangoproject.com/en/1.6/topics/testing/tools/#django.test.SimpleTestCase
mysite/scripts/settings.py
from django.test import TransactionTestCase
from django.test.runner import DiscoverRunner

class MyDiscoverRunner(DiscoverRunner):
    def run_tests(self, test_labels, extra_tests=None, **kwargs):
        """
        Run the unit tests for all the test labels in the provided list.

        Test labels should be dotted Python paths to test modules, test
        classes, or test methods.

        A list of 'extra' tests may also be provided; these tests
        will be added to the test suite.

        If any of the tests in the test suite inherit from
        ``django.test.TransactionTestCase``, databases will be set up.
        Otherwise, databases will not be set up.

        Returns the number of tests that failed.
        """
        self.setup_test_environment()
        suite = self.build_suite(test_labels, extra_tests)

        # ----------------- First Addition --------------
        need_databases = any(isinstance(test_case, TransactionTestCase)
                             for test_case in suite)
        old_config = None
        if need_databases:
            # --------------- End First Addition ------------
            old_config = self.setup_databases()

        result = self.run_suite(suite)

        # ----------------- Second Addition -------------
        if need_databases:
            # --------------- End Second Addition -----------
            self.teardown_databases(old_config)

        self.teardown_test_environment()
        return self.suite_result(suite, result)
Finally, I added the following line to my project's settings.py file.
mysite/settings.py
TEST_RUNNER = 'mysite.scripts.settings.MyDiscoverRunner'
Now, when running only non-db-dependent tests, my test suite runs an order of magnitude faster! :)
Update: also see this answer about using the third-party tool pytest.
@Cesar is right. After accidentally running ./manage.py test --settings=no_db_settings, without specifying an app name, my development database was wiped out.
For a safer manner, use the same NoDbTestRunner, but in conjunction with the following mysite/no_db_settings.py:
from mysite.settings import *
# Test runner with no database creation
TEST_RUNNER = 'mysite.scripts.testrunner.NoDbTestRunner'
# Use an alternative database as a safeguard against accidents
DATABASES['default']['NAME'] = '_test_mysite_db'
You need to create a database called _test_mysite_db using an external database tool. Then run the following command to create the corresponding tables:
./manage.py syncdb --settings=mysite.no_db_settings
If you're using South, also run the following command:
./manage.py migrate --settings=mysite.no_db_settings
OK!
You can now run unit tests blazingly fast (and safe) by:
./manage.py test myapp --settings=mysite.no_db_settings
As an alternative to modifying your settings to make NoDbTestRunner "safe", here's a modified version of NoDbTestRunner that closes the current database connection and removes the connection information from settings and the connection object. Works for me, test it in your environment before relying on it :)
from django.test.simple import DjangoTestSuiteRunner

class NoDbTestRunner(DjangoTestSuiteRunner):
    """ A test runner to test without database creation """

    def __init__(self, *args, **kwargs):
        # Hide/disconnect databases to prevent tests that *do* require
        # a database, and that accidentally get run, from altering your data.
        from django.db import connections
        from django.conf import settings
        connections.databases = settings.DATABASES = {}
        connections._connections['default'].close()
        del connections._connections['default']
        super(NoDbTestRunner, self).__init__(*args, **kwargs)

    def setup_databases(self, **kwargs):
        """ Override the database creation defined in parent class """
        pass

    def teardown_databases(self, old_config, **kwargs):
        """ Override the database teardown defined in parent class """
        pass
Another solution would be to have your test class simply inherit from unittest.TestCase instead of any of Django's test classes. The Django docs (https://docs.djangoproject.com/en/2.0/topics/testing/overview/#writing-tests) contain the following warning about this:
Using unittest.TestCase avoids the cost of running each test in a transaction and flushing the database, but if your tests interact with the database their behavior will vary based on the order that the test runner executes them. This can lead to unit tests that pass when run in isolation but fail when run in a suite.
However, if your test doesn't use the database, this warning needn't concern you and you can reap the benefits of not having to run each test case in a transaction.
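A minimal sketch of that approach; the business-logic function here is made up purely for illustration:

import unittest

def calculate_discount(price, percent):
    # Stand-in for whatever database-free business logic you want to test.
    return price - (price * percent / 100)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(calculate_discount(200, 10), 180)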
The above solutions are fine too, but the following will also reduce the database creation time if there are a large number of migrations. During unit testing, running syncdb instead of running all the South migrations will be much faster.

SOUTH_TESTS_MIGRATE = False  # To disable migrations and use syncdb instead
My web host only allows creating and dropping databases from their Web GUI, so I was getting a "Got an error creating the test database: Permission denied" error when trying to run python manage.py test.
I'd hoped to use the --keepdb option to django-admin.py, but it isn't supported on Django 1.7 (it was added in Django 1.8).
What I ended up doing was modifying the Django code in .../django/db/backends/creation.py, specifically the _create_test_db and _destroy_test_db functions.
For _create_test_db I commented out the cursor.execute("CREATE DATABASE ... line and replaced it with pass so the try block wouldn't be empty.
For _destroy_test_db I just commented out cursor.execute("DROP DATABASE - I didn't need to replace it with anything because there was already another command in the block (time.sleep(1)).
After that my tests ran fine - though I did set up a test_ version of my regular database separately.
This isn't a great solution of course, because it will break if Django is upgraded, but I had a local copy of Django due to using virtualenv so at least I have control over when/if I upgrade to a newer version.
Another solution not mentioned: this was easy for me to implement because I already have multiple settings files (for local / staging / production) that inherit from base.py. So unlike other people, I did not have to override DATABASES['default'], as DATABASES isn't set in base.py.
SimpleTestCase still tried to connect to my test database and run migrations. When I made a config/settings/test.py file that didn't set DATABASES at all, my unit tests ran without it. It allowed me to use models that have foreign key and unique constraint fields. (Reverse foreign key lookup, which requires a database lookup, fails.)
(Django 2.0.6)
PS: code snippets
PROJECT_ROOT_DIR/config/settings/test.py:
from .base import *

# other test settings

# DATABASES = {
#     'default': {
#         'ENGINE': 'django.db.backends.sqlite3',
#         'NAME': 'PROJECT_ROOT_DIR/db.sqlite3',
#     }
# }
cli, run from PROJECT_ROOT_DIR:
./manage.py test path.to.app.test --settings config.settings.test
path/to/app/test.py:
from django.test import SimpleTestCase
from .models import *
# ^ assume models.py imports User and defines Classified and UpgradePrice

class TestCaseWorkingTest(SimpleTestCase):
    def test_case_working(self):
        self.assertTrue(True)

    def test_models_ok(self):
        obj = UpgradePrice(title='test', price=1.00)
        self.assertEqual(obj.title, 'test')

    def test_more_complex_model(self):
        user = User(username='testuser', email='hi@hey.com')
        self.assertEqual(user.username, 'testuser')

    def test_foreign_key(self):
        user = User(username='testuser', email='hi@hey.com')
        ad = Classified(user=user, headline='headline', body='body')
        self.assertEqual(ad.user.username, 'testuser')

    # fails with error:
    def test_reverse_foreign_key(self):
        user = User(username='testuser', email='hi@hey.com')
        ad = Classified(user=user, headline='headline', body='body')
        print(user.classified_set.first())
        self.assertTrue(True)  # throws exception and never gets here
When using the nose test runner (django-nose), you can do something like this:
my_project/lib/nodb_test_runner.py:
from django_nose import NoseTestSuiteRunner

class NoDbTestRunner(NoseTestSuiteRunner):
    """
    A test runner to test without database creation/deletion.
    Used for integration tests.
    """

    def setup_databases(self, **kwargs):
        pass

    def teardown_databases(self, old_config, **kwargs):
        pass
In your settings.py you can specify the test runner, i.e.:
TEST_RUNNER = 'lib.nodb_test_runner.NoDbTestRunner'  # was 'django_nose.NoseTestSuiteRunner'
OR
I wanted it for running specific tests only, so I run it like so:
python manage.py test integration_tests/integration_* --noinput --testrunner=lib.nodb_test_runner.NoDbTestRunner
You can set databases to an empty list inside the normal TestCase from django.test.
from django.test import TestCase

class NoDbTestCase(TestCase):
    databases = []
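For example, a pure business-logic test on such a class might look like the sketch below (the arithmetic is just a stand-in). With databases left empty, any query the test accidentally issues should raise an error rather than quietly hitting a test database:

from django.test import TestCase

class PriceLogicTest(TestCase):
    databases = []  # this test case promises not to touch any database

    def test_vat_calculation(self):
        # Pure-Python logic only; an ORM call such as SomeModel.objects.count()
        # would raise an error here because database access is not allowed.
        self.assertEqual(round(100 * 1.2, 2), 120.0)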

Django UrlResolver, adding urls at runtime for testing

I'm looking to do some tests and I'm not really familiar with the URLResolver quite yet but I'd like to solve this issue quickly.
In a TestCase, I'd like to add a URL to the resolver so that I can then use Client.get('/url/') and keep it separate from urls.py.
Since Django 1.8, use of django.test.TestCase.urls is deprecated. You can use django.test.utils.override_settings instead:
from django.test import TestCase
from django.test.utils import override_settings

urlpatterns = [
    # custom urlconf
]

@override_settings(ROOT_URLCONF=__name__)
class MyTestCase(TestCase):
    pass
override_settings can be applied either to a whole class or to a particular method.
https://docs.djangoproject.com/en/2.1/topics/testing/tools/#urlconf-configuration
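For a more concrete version of the sketch above, here is a self-contained test module with a throwaway view defined right next to the urlconf; the view and URL name are made up for illustration:

from django.http import HttpResponse
from django.test import TestCase
from django.test.utils import override_settings
from django.urls import path

def dummy_view(request):
    # Throwaway view that only exists for this test module.
    return HttpResponse('ok')

urlpatterns = [
    path('test-only/', dummy_view, name='test-only'),
]

@override_settings(ROOT_URLCONF=__name__)
class MyTestCase(TestCase):
    def test_dummy_view(self):
        response = self.client.get('/test-only/')
        self.assertEqual(response.status_code, 200)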
In your test:
class TestMyViews(TestCase):
    urls = 'myapp.test_urls'
This will use myapp/test_urls.py as the ROOT_URLCONF.
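myapp/test_urls.py is then just a normal urlconf module. A sketch for the older Django versions where the urls attribute still works; the view import and URL name are hypothetical:

# myapp/test_urls.py -- adjust the view import to your project
from django.conf.urls import patterns, url
from myapp import views

urlpatterns = patterns(
    '',
    url(r'^only-for-tests/$', views.my_view, name='my-view'),
)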
I know this was asked a while ago, but I thought I'd answer it again to offer something more complete and up-to-date.
You have two options to solve this. One is to provide your own urls file, as suggested by SystemParadox's answer:
class MyTestCase(TestCase):
    urls = 'my_app.test_urls'
The other is to monkey patch your urls. This is NOT the recommended way to deal with overriding urls, but you might get into a situation where you still need it. To do this for a single test case without affecting the rest, you should do it in your setUp() method and then clean up in your tearDown() method.
import my_app.urls
from django.conf.urls import patterns

class MyTestCase(TestCase):
    urls = 'my_app.urls'

    def setUp(self):
        super(MyTestCase, self).setUp()
        self.original_urls = my_app.urls.urlpatterns
        my_app.urls.urlpatterns += patterns(
            '',
            (r'^my/test/url/pattern$', my_view),
        )

    def tearDown(self):
        super(MyTestCase, self).tearDown()
        my_app.urls.urlpatterns = self.original_urls
Please note that this will not work if you omit the urls class attribute. This is because the urls will otherwise be cached and your monkey patching will not take effect if you run your test together with other test cases.
I couldn't get it running with the answers above, not even with override_settings.
I found a solution that works for me. My use case was writing integration tests for PUT/POST methods where I needed the URLs from my app.
The main trick here is to use the set_urlconf function from django.urls instead of overriding the attribute on the class or using override_settings.
from django.test import TestCase
from django.urls import reverse, set_urlconf

class MyTests(TestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        set_urlconf('yourapp.urls')  # yourapp is the folder where you define your root urlconf

    def test_url_resolving_with_app_urlconf(self):
        response = self.client.put(
            path=reverse('namespace:to:your:view-name'), data=test_data
        )