When I run tests in parallel, I get random failures because one test interferes with the cache of another test.
I can work around the problem with
@override_settings(
    CACHES={
        "default": {
            "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
            "LOCATION": "[random_string]",
        }
    },
)
Actually, to make that smaller, I created an @isolate_cache decorator that is a wrapper around override_settings.
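Something like this sketch (the uuid-based LOCATION is just an illustration of one way to get a unique string per decorated case):

import uuid

from django.test import override_settings

def isolate_cache(obj):
    # Wrap override_settings so each decorated test case or method
    # gets its own private LocMemCache, keyed by a fresh random LOCATION.
    return override_settings(
        CACHES={
            "default": {
                "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
                "LOCATION": str(uuid.uuid4()),
            }
        },
    )(obj)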
But I still need to go and decorate a large number of test cases. This is error prone because, as I said, the failures are random: I might run the test suite 100 times without error and think everything is OK, yet still have forgotten to decorate a test case that will fail randomly at some point.
I've also thought about creating my own TestCase subclass and using only that for all my test cases. This presents a similar problem: at some point someone will inherit from django.test.TestCase out of habit, and it might not fail for a long time. Besides, some of my tests inherit from rest_framework.test.APITestCase (or other classes), so there isn't a single test case subclass.
Is there any way to tell Django to run each test case in an isolated section of the cache once and for all?
You don't need "an isolated section of the cache", just to clear cache between tests.
Here are a few ways.
1. Subclass TestCase
The question mentions this is not desired, but it is still worth showing as the proper way.
from django.core.cache import cache
from django.test import TestCase

class CacheClearTestCase(TestCase):
    def tearDown(self):
        super().tearDown()
        cache.clear()
2. Patch TestCase.tearDown
Assuming subclasses that override tearDown call super().tearDown(), you could do this.
Add this in manage.py before execute_from_command_line(sys.argv):
if len(sys.argv) > 1 and sys.argv[1] == 'test':
    from django.core.cache import cache
    from django.test import TestCase

    TestCase.tearDown = cache.clear
3. Subclass TestSuite
You can clear the cache after each test by subclassing TestSuite to override _removeTestAtIndex and setting DiscoverRunner.test_suite to that subclass.
Add this in manage.py before execute_from_command_line(sys.argv):
if len(sys.argv) > 1 and sys.argv[1] == 'test':
    from unittest import TestSuite

    from django.core.cache import cache
    from django.test.runner import DiscoverRunner

    class CacheClearTestSuite(TestSuite):
        def _removeTestAtIndex(self, index):
            super()._removeTestAtIndex(index)
            cache.clear()

    DiscoverRunner.test_suite = CacheClearTestSuite
Why you don't need an isolated section of the cache
To be clear, this is not a problem caused by running tests in parallel.
From https://docs.djangoproject.com/en/4.0/ref/django-admin/#cmdoption-test-parallel:
--parallel [N]
Runs tests in separate parallel processes.
From https://docs.djangoproject.com/en/4.0/topics/cache/#local-memory-caching-1:
Note that each process will have its own private cache instance, which means no cross-process caching is possible.
The easiest solution would be to have a separate settings file for tests that you can load in manage.py. This can also import all of your default settings.
manage.py
settings = 'my_project.test_settings' if 'test' in sys.argv else 'my_project.settings'
os.environ.setdefault("DJANGO_SETTINGS_MODULE", settings)
test_settings.py
from .settings import *  # import default settings

# setting overrides here
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
        "LOCATION": "[random_string]",
    }
}
If you need to do more settings overrides, especially for multiple environments, I would recommend using something like django-configurations.
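For illustration, a minimal sketch of class-based settings with django-configurations (the Base/Test class names are placeholders; manage.py also has to be switched to django-configurations' own execute_from_command_line, and the active class is selected via the DJANGO_CONFIGURATION environment variable):

# settings.py -- sketch assuming django-configurations;
# run tests with DJANGO_CONFIGURATION=Test.
from configurations import Configuration

class Base(Configuration):
    DEBUG = False
    # ...shared defaults...

class Test(Base):
    CACHES = {
        "default": {
            "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
            "LOCATION": "test-cache",
        }
    }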
Related
I have a pattern that comes up fairly often in a Django app with some task management lib (say celery or dramatiq) that looks as follows:
my_app/
    executors/
        my_executor.py
    tasks/
        my_task.py
The circular dependency comes from the fact that the executor may schedule the task, while the task is instantiating an executor.
Executor module:
# my_executor.py
import my_app.tasks.my_task

class MyExecutor:
    def some_method(self):
        my_app.tasks.my_task.some_actual_task.send()
Task module:
# my_task.py
import my_app.executors.my_executor

@decorating_as_task
def some_actual_task():
    executor = my_app.executors.my_executor.MyExecutor()
    executor.execute()
There is no issue actually running the app (or tests), but pylint complains about an R0401: Cyclic import between the modules.
It is not clear to me whether this type of dependency is acceptable (Python does not seem to complain) and whether there are ways of actually improving the code structure (having a higher-level module my_app.task_management import both executors and tasks?) while pleasing pylint on this.
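For concreteness, that higher-level-module idea might look like this sketch (the task_to_schedule attribute and the wiring step are hypothetical; the executor would look its task up on the class instead of importing the task module):

# my_app/task_management.py -- hypothetical sketch: a module above both
# packages does the wiring, so executors and tasks never import each other.
from my_app.executors.my_executor import MyExecutor
from my_app.tasks.my_task import some_actual_task

# The executor would reference self.task_to_schedule instead of importing
# the task module directly; the attribute is assigned here at wiring time.
MyExecutor.task_to_schedule = some_actual_task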
I have seen many similar questions but could not find a satisfactory solution from them.
In the end, through inheritance, I isolated the circular code as follows:
my_app/
    abstract_executors/
        my_abstract_executor.py
    tasks/
        my_task.py
# my_abstract_executor.py
class MyAbstractExecutor:
    def some_method(self):
        self._task_to_schedule.send()

    @property
    def _task_to_schedule(self):
        raise NotImplementedError('_task_to_schedule')
# my_task.py
from my_app.abstract_executors.my_abstract_executor import MyAbstractExecutor

@decorating_as_task
def some_actual_task():
    executor = MyExecutor()
    executor.execute()

class MyExecutor(MyAbstractExecutor):
    @property
    def _task_to_schedule(self):
        return some_actual_task
This avoids circular imports while allowing most of the execution logic to be defined in its own module. Maybe there is something better out there, but this fits the bill (avoids circular imports, pleases pylint, keeps things isolated).
Is there any way to make running tests compulsory before running the server in Django? I have a project that many people will be working on, so I want to make testing compulsory: all tests must pass before the server runs. So basically, lock the runserver command until all the tests pass successfully. This would only be in place temporarily, not for long.
Add the line execute_from_command_line([sys.argv[0], 'test']) before execute_from_command_line(sys.argv) in the main() function of manage.py. That can solve your problem. main() will look like this:
def main():
    # ... settings setup; import execute_from_command_line in a try/except block ...
    # Run only once (the autoreloader sets RUN_MAIN) and only for
    # 'manage.py runserver', not for other commands.
    if os.environ.get('RUN_MAIN') != 'true' and sys.argv[1:2] == ['runserver']:
        execute_from_command_line([sys.argv[0], 'test'])  # run ALL the tests first
    execute_from_command_line(sys.argv)
or you can specify the module for testing: execute_from_command_line([sys.argv[0], 'test', 'specific_module'])
or with file pattern:
execute_from_command_line([sys.argv[0], 'test', '--pattern=tests*.py'])
I agree with @LFDMR that this is probably a bad idea and will make your development process really inefficient. Even with test-driven development, it is perfectly sensible to use the development server, for example, to figure out why your tests don't pass. I think you would be better served with a Git pre-commit or pre-push hook, or the equivalent in your version control system.
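A pre-push hook can itself be a short Python script; here is a minimal sketch (it assumes the hook file is made executable and relies on git aborting the push when the hook exits non-zero):

#!/usr/bin/env python
# .git/hooks/pre-push -- sketch: refuse the push if the test suite fails.
import subprocess
import sys

# Propagate manage.py test's exit code; git aborts the push on non-zero.
sys.exit(subprocess.call([sys.executable, "manage.py", "test"]))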
That being said, here is how you can achieve what you are after:
You can override an existing management command by adding a management command of the same name to one of your apps.
So you have to create the file management/commands/runserver.py in one of your apps, looking like this:
from django.core.management import call_command
from django.core.management.commands.runserver import Command as BaseRunserverCommand

class Command(BaseRunserverCommand):
    def handle(self, *args, **kwargs):
        call_command('test')  # calls `sys.exit(1)` on test failure
        super().handle(*args, **kwargs)
If I were a developer in your team, the first thing I would do is deleting this file ;)
In my experience it would be a terrible idea.
What you should really look into is Continuous Integration: whenever someone pushes something, all tests run, and an email is sent to the user who pushed if something fails.
I have problems unit testing the views of an application that uses django-pipeline. Whenever I perform a client.get() on any URL, it produces the following exception:
ValueError: The file 'css/bootstrap.css' could not be found with <pipeline.storage.PipelineCachedStorage object at 0x10d544950>.
The fact that it is bootstrap.css is of course not important; the point is that I'm unable to exercise view rendering because of this exception.
Any guide / tips are welcome!
I ran into a similar problem. However, setting
STATICFILES_STORAGE = 'pipeline.storage.NonPackagingPipelineStorage'
when running the tests only partly solved my issue. I also had to disable the pipeline completely to run LiveServerTestCase tests without having to call 'collectstatic' before running them:
PIPELINE_ENABLED = False
Since Django 1.4 it's fairly easy to modify settings for tests; there is a handy decorator that works on methods or TestCase classes:
https://docs.djangoproject.com/en/1.6/topics/testing/tools/#overriding-settings
e.g.
from django.test.utils import override_settings

@override_settings(STATICFILES_STORAGE='pipeline.storage.NonPackagingPipelineStorage',
                   PIPELINE_ENABLED=False)
class BaseTestCase(LiveServerTestCase):
    """
    A base test case for Selenium
    """
    def setUp(self):
        ...
However, this produced inconsistent results, as @jrothenbuhler describes in his answer. Regardless, it is less than ideal if you are running integration tests: you should mimic production as much as possible to catch any potential issues. It appears Django 1.7 has a solution for this in the form of a new test case, StaticLiveServerTestCase. From the docs:
https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#django.contrib.staticfiles.testing.StaticLiveServerCase
This unittest TestCase subclass extends
django.test.LiveServerTestCase.
Just like its parent, you can use it to write tests that involve
running the code under test and consuming it with testing tools
through HTTP (e.g. Selenium, PhantomJS, etc.), because of which it’s
needed that the static assets are also published.
I haven't tested this, but it sounds promising. For now I'm doing what @jrothenbuhler does in his solution, using a custom test runner, which doesn't require you to run collectstatic. If you really, really wanted it to run collectstatic, you could do something like this:
from django.conf import settings
from django.core.management import call_command
from django.test.simple import DjangoTestSuiteRunner

class CustomTestRunner(DjangoTestSuiteRunner):
    """
    Custom test runner to get around pipeline and static file issues
    """
    def setup_test_environment(self):
        super(CustomTestRunner, self).setup_test_environment()
        settings.STATICFILES_STORAGE = 'pipeline.storage.NonPackagingPipelineStorage'
        call_command('collectstatic', interactive=False)
In settings.py
TEST_RUNNER = 'path.to.CustomTestRunner'
I've been running into the same problem. I tackled it using a custom test runner:
from django.conf import settings
from django.test.simple import DjangoTestSuiteRunner
from pipeline.conf import settings as pipeline_settings

class PipelineOverrideRunner(DjangoTestSuiteRunner):
    def setup_test_environment(self):
        '''Override STATICFILES_STORAGE and pipeline DEBUG.'''
        super(PipelineOverrideRunner, self).setup_test_environment()
        settings.STATICFILES_STORAGE = 'pipeline.storage.PipelineFinderStorage'
        pipeline_settings.DEBUG = True
Then in your settings.py:
TEST_RUNNER = 'path.to.PipelineOverrideRunner'
Setting pipeline's DEBUG setting to True ensures that the static files are not packaged. This prevents the need to run collectstatic before running the tests. Note that it's pipeline's DEBUG setting, not Django's, which is overridden here. The reason for this is that you want Django's DEBUG to be False when testing to best simulate the production environment.
Setting STATICFILES_STORAGE to PipelineFinderStorage makes it so that the static files are found when Django's DEBUG setting is set to False, as it is when running tests.
The reason I decided to override these settings in a custom test runner instead of in a custom TestCase is because certain things, such as the django.contrib.staticfiles.storage.staticfiles_storage object, get set up once based on these and other settings. When using a custom TestCase, I was running into problems where tests would pass and fail inconsistently depending on whether the override happened to be in effect when modules such as django.contrib.staticfiles.storage were loaded.
I ran into the same problem. I managed to get around it by using a different STATICFILES_STORAGE when testing:
STATICFILES_STORAGE = 'pipeline.storage.NonPackagingPipelineStorage'
I have separate settings files for production and testing, so I just put it in my test version, but if you don't, you could probably wrap it in if DEBUG.
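Taken literally, that fallback would be a sketch like this (assuming your non-production settings run with DEBUG = True):

# settings.py -- sketch of the "wrap it in if DEBUG" idea from above.
if DEBUG:
    STATICFILES_STORAGE = 'pipeline.storage.NonPackagingPipelineStorage'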
--EDIT
It took a little more effort, because this should only be in effect during unit testing. To address that, I used the snippet at http://djangosnippets.org/snippets/1011/ and created a UITestCase class:
class UITestCase(SettingsTestCase):
    '''
    UITestCase handles setting the Pipeline settings correctly.
    '''
    def __init__(self, *args, **kwargs):
        super(UITestCase, self).__init__(*args, **kwargs)

    def setUp(self):
        self.settings_manager.set(
            STATICFILES_STORAGE='pipeline.storage.NonPackagingPipelineStorage')
Now all of my tests that need to render UI including compressed_css tags use UITestCase instead of django.test.TestCase.
I ran into the same problem, and it turned out to be because I had
TEST_RUNNER = 'djcelery.contrib.test_runner.CeleryTestSuiteRunner'
I don't understand how, but it must somehow have interacted with Pipeline. Once I removed that setting, the problem went away.
I still needed to force Celery to be eager during testing, so I used override_settings for the tests that needed it:
from django.test.utils import override_settings

…

class RegisterTestCase(TestCase):
    @override_settings(CELERY_EAGER_PROPAGATES_EXCEPTIONS=True,
                       CELERY_ALWAYS_EAGER=True,
                       BROKER_BACKEND='memory')
    def test_new(self):
        …
Same here. It relates to this issue: https://github.com/cyberdelia/django-pipeline/issues/277
As I use py.test, I put this in conftest.py as a workaround:
from django.conf import settings

def pytest_configure():
    # workaround to avoid the django-pipeline issue referenced above
    # (https://github.com/cyberdelia/django-pipeline/issues/277)
    settings.STATICFILES_STORAGE = 'pipeline.storage.PipelineStorage'
I've tried @jrothenbuhler's workaround and it helped at first,
but then for some reason it started failing again with the same error.
After hours of debugging I figured out that the only thing that helps is to set
STATICFILES_STORAGE = 'pipeline.storage.NonPackagingPipelineStorage'
directly in settings.
I don't know why, but it works.
Django-celery expects me to set
TEST_RUNNER = 'djcelery.contrib.test_runner.CeleryTestSuiteRunner'
and django-selenium expects me to set
TEST_RUNNER = 'django_selenium.selenium_runner.SeleniumTestRunner'.
How can I have both, i.e., both tests that run celery tasks locally and tests that use selenium to control a browser?
You could probably define your own test runner that inherits from them both (looking at the source for the two, the Celery one actually just sets some settings).
So make a file, e.g. myapp/test_runner.py, with:
from djcelery.contrib.test_runner import CeleryTestSuiteRunner
from django_selenium.selenium_runner import SeleniumTestRunner

class MyRunner(CeleryTestSuiteRunner, SeleniumTestRunner):
    pass
and then set
TEST_RUNNER = 'myproject.myapp.test_runner.MyRunner'
I want to perform some exhaustive testing against one of my test cases (say, creating a document, to debug some weird things I am encountering).
My brute-force approach was to fire python manage.py test myapp in a loop using Popen or os.system, but now I want to do it the pure unittest way:
class SimpleTest(unittest.TestCase):
    def setUp(self):
        ...

    def test_01(self):
        ...

    def tearDown(self):
        ...

def suite():
    suite = unittest.TestCase()
    suite.add(SimpleTest("setUp"))
    suite.add(SimpleTest("test_01"))
    suite.add(SimpleTest("tearDown"))
    return suite

def main():
    for i in range(n):
        suite().run("runTest")
I ran python manage.py test myapp and I got
File "/var/lib/system-webclient/webclient/apps/myapps/tests.py", line 46, in suite
suite = unittest.TestCase()
File "/usr/lib/python2.6/unittest.py", line 216, in __init__
(self.__class__, methodName)
ValueError: no such test method in <class 'unittest.TestCase'>: runTest
I've googled the error, but I'm still clueless (I was told to add an empty runTest method, but that doesn't sound right at all...).
Well, according to python's unittest.TestCase:
The simplest TestCase subclass will simply override the runTest()
method in order to perform specific testing code
As you can see, my whole goal is to run my SimpleTest N times. I need to keep track of passes and failures across the N runs.
What option do I have?
Thanks.
Tracking race conditions via unit tests is tricky. Sometimes you're better off hitting your frontend with an automated testing tool like Selenium: unlike a unit test, the environment is the same, and there's no need for extra work to ensure concurrency. Here's one way to run concurrent code in tests when there's no better option: http://www.caktusgroup.com/blog/2009/05/26/testing-django-views-for-concurrency-issues/
Just keep in mind that a concurrent test is no definite proof that you're free from race conditions: there's no guarantee it will recreate all possible combinations of execution order among processes.
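If the goal is simply to repeat one TestCase N times in-process and tally passes and failures, a sketch like this sidesteps the suite()/runTest problem entirely (SimpleTest is the class from the question; n is a placeholder):

import unittest

def main(n=100):
    passed = failed = 0
    for _ in range(n):
        # Build a fresh suite each iteration; the loader collects all
        # test_* methods, so no manual add() calls are needed.
        suite = unittest.TestLoader().loadTestsFromTestCase(SimpleTest)
        result = unittest.TextTestRunner(verbosity=0).run(suite)
        if result.wasSuccessful():
            passed += 1
        else:
            failed += 1
    print("passed: %d, failed: %d" % (passed, failed))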