When I run tests in parallel, I get random failures because one test interferes with the cache of another test.
I can work around the problem with
@override_settings(
    CACHES={
        "default": {
            "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
            "LOCATION": "[random_string]",
        }
    },
)
To make that shorter, I created an @isolate_cache decorator that is a thin wrapper around override_settings.
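Roughly like this (a sketch; the uuid4-based LOCATION is just one way to get a unique cache name per decorated test case):
import uuid

from django.test import override_settings

def isolate_cache(decorated):
    # Give the decorated test case (or test method) its own throwaway
    # LocMemCache by using a random LOCATION string.
    return override_settings(
        CACHES={
            "default": {
                "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
                "LOCATION": str(uuid.uuid4()),
            }
        },
    )(decorated)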
But I still need to go and decorate a large number of test cases. This is error-prone because, as I said, the failures are random: I might run the test suite 100 times without error and think that everything is OK, yet still have forgotten to decorate a test case that will fail randomly at some point.
I've also thought about creating my own TestCase subclass and use only that for all my test cases. This presents a similar problem: at some point someone would inherit from django.test.TestCase out of habit, and it might not fail for a long time. Besides, some of my tests inherit from rest_framework.test.APITestCase (or other classes), so there isn't a single test case subclass.
Is there any way to tell Django to run each test case in an isolated section of the cache once and for all?
You don't need "an isolated section of the cache", just to clear the cache between tests.
Here are a few ways.
1. Subclass TestCase
The question mentions this is not desired, but it is still worth mentioning as the proper way.
from django.core.cache import cache
from django.test import TestCase

class CacheClearTestCase(TestCase):
    def tearDown(self):
        # super().tearDown()
        cache.clear()
2. Patch TestCase.tearDown
Assuming subclasses that override tearDown call super().tearDown(), you could do this.
Add this in manage.py before execute_from_command_line(sys.argv):
if sys.argv[1] == 'test':
    from django.test import TestCase
    from django.core.cache import cache

    TestCase.tearDown = cache.clear
3. Subclass TestSuite
You can clear the cache after each test by subclassing TestSuite to override _removeTestAtIndex and setting DiscoverRunner.test_suite to that subclass.
Add this in manage.py before execute_from_command_line(sys.argv):
if sys.argv[1] == 'test':
    from unittest import TestSuite
    from django.core.cache import cache
    from django.test.runner import DiscoverRunner

    class CacheClearTestSuite(TestSuite):
        def _removeTestAtIndex(self, index):
            super()._removeTestAtIndex(index)
            cache.clear()

    DiscoverRunner.test_suite = CacheClearTestSuite
Why you don't need an isolated section of the cache
To be clear, this is not a problem caused by running tests in parallel.
From https://docs.djangoproject.com/en/4.0/ref/django-admin/#cmdoption-test-parallel:
--parallel [N]
Runs tests in separate parallel processes.
From https://docs.djangoproject.com/en/4.0/topics/cache/#local-memory-caching-1:
Note that each process will have its own private cache instance, which means no cross-process caching is possible.
The easiest solution would be to have a separate settings file for tests that you can load in manage.py. This can also import all of your default settings.
manage.py
settings = 'my_project.test_settings' if 'test' in sys.argv else 'my_project.settings'
os.environ.setdefault("DJANGO_SETTINGS_MODULE", settings)
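For context, a full manage.py with this change might look roughly like this (based on the standard Django project template; my_project is the placeholder used above):
#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    # Use the test settings module whenever the 'test' subcommand is run.
    settings = 'my_project.test_settings' if 'test' in sys.argv else 'my_project.settings'
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", settings)

    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)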
test_settings.py
from .settings import *  # import default settings

# setting overrides here
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
        "LOCATION": "[random_string]",
    }
}
If you need to do more settings overrides, especially for multiple environments, I would recommend using something like django-configurations.
I am facing an issue when I run the tests of my Django app with the command
python manage.py test app_name
or
python manage.py test
All the test cases where I fetch data by calling a GET API seem to fail because there is no data in the response, in spite of it being present in the test data. The structure I have followed in my test suite is: there is a base class (a subclass of Django REST framework's APITestCase) with a set_up method that creates test objects of the different models used in the APIs, and I inherit this class in my app's test_views classes for any particular API, such as:
class BaseTest(APITestCase):
    def set_up(self):
        '''
        Create the test objects which can be accessed by the main test
        class.
        '''
        self.person1 = Person.objects.create(.......)

class SomeViewTestCase(BaseTest):
    def setUp(self):
        self.set_up()

    def test_some_api(self):
        url = '/xyz/'
        self.client.login(username='testusername3', password='testpassword3')
        response = self.client.get(url, {'person_id': self.person3.id})
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(len(response.data), 6)
So whenever I run the test as
./manage.py test abc.tests.test_views.SomeViewTestCase
it works fine, but when I run as
./manage.py test abc
the test above fails: response.data has 0 entries, and similarly, in the other tests within the same class the data is just not fetched, so all the asserts fail.
How can I ensure the tests pass when they are run as a whole, since during deployment they have to go through CI?
The versions of the packages and system configuration are as follows:
Django version - 1.6
Django REST Framework - 3.1.1
Python - 2.7
Operating system - macOS (Sierra)
Appreciate the help. Thanks.
Your test methods are executed in arbitrary order. After each test, a tearDown phase takes care of rolling back to the initial state, so you have isolation between test executions.
The only part that is shared among them is your setUp() method, which is invoked each time a test runs.
This means that if the runner starts from the second test method and the data you assert on in response.data is only created in your first test, all the tests are going to fail apart from that one.
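To illustrate with a sketch (the Person model here is hypothetical, not the asker's code): data created inside a single test method is rolled back when that test finishes, while data created in setUp() exists for every test, whatever the order:
from django.test import TestCase
from myapp.models import Person  # hypothetical model for illustration

class ExampleTestCase(TestCase):
    def setUp(self):
        # Runs before *every* test method, so every test can rely on this object.
        self.person = Person.objects.create(name="shared")

    def test_first(self):
        # Anything created here is rolled back once this test finishes...
        Person.objects.create(name="only visible inside this test")
        self.assertEqual(Person.objects.count(), 2)

    def test_second(self):
        # ...so this test only sees what setUp() created, regardless of the
        # order in which the tests run.
        self.assertEqual(Person.objects.count(), 1)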
Hope it helps...
I set the config data like db_name in setUp():
def setUp(self):
    config = tools.config
    config['db_name'] = 'test'
    config['db_user'] = 'admin'
    ...
and got an "AttributeError: environments" for "return cls._local.environments" in the setUp() of the superclass.
I must say I haven't really used odoo to know what it needs configured, but apparently there's a pytest plugin for it:
https://pypi.python.org/pypi/pytest-odoo
So, my suggestion would be to try to use pytest instead of unittest.TestCase along with that plugin (which should take care of making the proper setup) -- the only thing to do in PyDev in this case is to ask it to use the pytest runner (see http://www.pydev.org/manual_adv_pyunit.html for details on how to configure that).
I want to create a script to automatically run the tests in Django. So I want to perform the equivalent of python manage.py test myapp within Python and store whether the tests failed or not as a variable.
So if the test works:
variable = True
if the test doesn't work:
variable = False
You can do this with your own test runner using suite_result. Related code here
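For example, a minimal sketch (my own, not the linked code) that uses DiscoverRunner.run_tests, whose return value is computed via suite_result; 'myproject.settings' and 'myapp' are placeholders, and django.setup() assumes Django 1.7+:
import os
import django
from django.conf import settings
from django.test.utils import get_runner

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
django.setup()

TestRunner = get_runner(settings)             # honours the TEST_RUNNER setting
failures = TestRunner().run_tests(['myapp'])  # number of failed/errored tests
variable = (failures == 0)                    # True if the whole suite passed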
I have problems unit testing views of an application that uses django-pipeline. Whenever I perform a client.get() on any URL, it produces the following exception:
ValueError: The file 'css/bootstrap.css' could not be found with <pipeline.storage.PipelineCachedStorage object at 0x10d544950>.
The fact that it is bootstrap.css is of course not important, but that I'm unable to execute view rendering due to this exception.
Any guide / tips are welcome!
I ran into a similar problem. However, setting
STATICFILES_STORAGE='pipeline.storage.NonPackagingPipelineStorage'
when running the tests only partly solved my issue. I also had to disable the pipeline completely to run LiveServerTestCase tests without having to call 'collectstatic' before running the tests:
PIPELINE_ENABLED=False
Since django 1.4 it's fairly easy to modify settings for tests - there is a handy decorator that works for methods or TestCase classes:
https://docs.djangoproject.com/en/1.6/topics/testing/tools/#overriding-settings
e.g.
from django.test.utils import override_settings
@override_settings(STATICFILES_STORAGE='pipeline.storage.NonPackagingPipelineStorage',
                   PIPELINE_ENABLED=False)
class BaseTestCase(LiveServerTestCase):
    """
    A base test case for Selenium
    """
    def setUp(self):
        ...
However, this produced inconsistent results, as @jrothenbuhler describes in his answer. Regardless, it is less than ideal if you are running integration tests - you should mimic production as much as possible to catch any potential issues. It appears Django 1.7 has a solution for this in the form of a new test case, "StaticLiveServerTestCase". From the docs:
https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#django.contrib.staticfiles.testing.StaticLiveServerCase
This unittest TestCase subclass extends
django.test.LiveServerTestCase.
Just like its parent, you can use it to write tests that involve
running the code under test and consuming it with testing tools
through HTTP (e.g. Selenium, PhantomJS, etc.), because of which it’s
needed that the static assets are also published.
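For illustration, an (untested) sketch of how it could be used; the test body is just a placeholder of mine:
from django.contrib.staticfiles.testing import StaticLiveServerTestCase

class SeleniumSmokeTest(StaticLiveServerTestCase):
    def test_live_server_is_up(self):
        # The transient test server also publishes static assets found by the
        # staticfiles finders, so no prior collectstatic run is needed.
        self.assertTrue(self.live_server_url.startswith('http://'))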
I haven't tested this myself, but it sounds promising. For now I'm doing what @jrothenbuhler does in his solution, using a custom test runner, which doesn't require you to run collectstatic. If you really, really wanted it to run collectstatic, you could do something like this:
from django.conf import settings
from django.test.simple import DjangoTestSuiteRunner
from django.core.management import call_command

class CustomTestRunner(DjangoTestSuiteRunner):
    """
    Custom test runner to get around pipeline and static file issues
    """
    def setup_test_environment(self):
        super(CustomTestRunner, self).setup_test_environment()
        settings.STATICFILES_STORAGE = 'pipeline.storage.NonPackagingPipelineStorage'
        call_command('collectstatic', interactive=False)
In settings.py
TEST_RUNNER = 'path.to.CustomTestRunner'
I've been running into the same problem. I tackled it using a custom test runner:
from django.conf import settings
from django.test.simple import DjangoTestSuiteRunner
from pipeline.conf import settings as pipeline_settings

class PipelineOverrideRunner(DjangoTestSuiteRunner):
    def setup_test_environment(self):
        '''Override STATICFILES_STORAGE and pipeline DEBUG.'''
        super(PipelineOverrideRunner, self).setup_test_environment()
        settings.STATICFILES_STORAGE = 'pipeline.storage.PipelineFinderStorage'
        pipeline_settings.DEBUG = True
Then in your settings.py:
TEST_RUNNER = 'path.to.PipelineOverrideRunner'
Setting pipeline's DEBUG setting to True ensures that the static files are not packaged. This prevents the need to run collectstatic before running the tests. Note that it's pipeline's DEBUG setting, not Django's, which is overridden here. The reason for this is that you want Django's DEBUG to be False when testing to best simulate the production environment.
Setting STATICFILES_STORAGE to PipelineFinderStorage makes it so that the static files are found when Django's DEBUG setting is set to False, as it is when running tests.
The reason I decided to override these settings in a custom test runner instead of in a custom TestCase is because certain things, such as the django.contrib.staticfiles.storage.staticfiles_storage object, get set up once based on these and other settings. When using a custom TestCase, I was running into problems where tests would pass and fail inconsistently depending on whether the override happened to be in effect when modules such as django.contrib.staticfiles.storage were loaded.
I ran into the same problem. I managed to get around it by using a different STATICFILES_STORAGE when I'm testing:
STATICFILES_STORAGE = 'pipeline.storage.NonPackagingPipelineStorage'
I have separate settings files for production and testing, so I just put it in my test version, but if you don't, you could probably wrap it in an if DEBUG check.
--EDIT
It took a little more effort, because this should only be present during unit testing. To address that, I used the snippet at http://djangosnippets.org/snippets/1011/ and created a UITestCase class:
class UITestCase(SettingsTestCase):
    '''
    UITestCase handles setting the Pipeline settings correctly.
    '''
    def __init__(self, *args, **kwargs):
        super(UITestCase, self).__init__(*args, **kwargs)

    def setUp(self):
        self.settings_manager.set(
            STATICFILES_STORAGE='pipeline.storage.NonPackagingPipelineStorage')
Now all of my tests that need to render UI that includes compressed_css tags use UITestCase instead of django.test.TestCase.
I ran into the same problem, and it turned out to be because I had
TEST_RUNNER = 'djcelery.contrib.test_runner.CeleryTestSuiteRunner'
I don't understand how, but it must somehow have interacted with Pipeline. Once I removed that setting, the problem went away.
I still needed to force Celery to be eager during testing, so I used override_settings for the tests that needed it:
from django.test.utils import override_settings
…

class RegisterTestCase(TestCase):
    @override_settings(CELERY_EAGER_PROPAGATES_EXCEPTIONS=True,
                       CELERY_ALWAYS_EAGER=True,
                       BROKER_BACKEND='memory')
    def test_new(self):
        …
Same here. It refers to this issue: https://github.com/cyberdelia/django-pipeline/issues/277
As I use py.test, I put this in conftest.py as a workaround:
import pytest
from django.conf import settings

def pytest_configure():
    # workaround to avoid the django-pipeline issue
    # refers to https://github.com/cyberdelia/django-pipeline/issues/277
    settings.STATICFILES_STORAGE = 'pipeline.storage.PipelineStorage'
I've tried @jrothenbuhler's workaround and it helps at first,
but then for some reason it starts failing again with the same error.
After hours of debugging, I figured out that the only thing that helps is to set
STATICFILES_STORAGE = 'pipeline.storage.NonPackagingPipelineStorage'
directly in settings.
I don't know why, but it works.