Is there any way to make running tests compulsory before running the server in Django? I have a project on which many people will be working, so I want to make testing compulsory, and all tests must pass before the server runs. So basically, lock the runserver command until all the tests pass successfully. This will only be a temporary measure, not a permanent one.
Add the line execute_from_command_line([sys.argv[0], 'test']) before execute_from_command_line(sys.argv) in the main() function of manage.py. That solves your problem; main() will look like this:
def main():
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')  # your settings module here
    from django.core.management import execute_from_command_line  # normally wrapped in a try/except block
    # Only run for `manage.py runserver`, and only once: the autoreloader's
    # child process sets RUN_MAIN='true', so the reload pass is skipped.
    if os.environ.get('RUN_MAIN') != 'true' and len(sys.argv) > 1 and sys.argv[1] == 'runserver':
        execute_from_command_line([sys.argv[0], 'test'])  # run ALL the tests first; exits if any fail
    execute_from_command_line(sys.argv)
Alternatively, you can specify a module to test:
execute_from_command_line([sys.argv[0], 'test', 'specific_module'])
or use a file pattern:
execute_from_command_line([sys.argv[0], 'test', '--pattern=tests*.py'])
I agree with @LFDMR that this is probably a bad idea and will make your development process really inefficient. Even with test-driven development, it is perfectly sensible to use the development server, for example, to figure out why your tests don't pass. I think you would be better served with a Git pre-commit or pre-push hook, or the equivalent in your version control system.
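For example, a rough sketch of a pre-push hook (save it as .git/hooks/pre-push and make it executable; the manage.py path is an assumption about your repository layout):
#!/usr/bin/env python
# .git/hooks/pre-push -- a sketch, not battle-tested; chmod +x to enable.
# Runs the Django test suite and aborts the push if any test fails.
import subprocess
import sys

# Git runs hooks from the repository root, so a relative path works here.
sys.exit(subprocess.call([sys.executable, 'manage.py', 'test']))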
That being said, here is how you can achieve what you are after:
You can override an existing management command by adding a management command of the same name to one of your apps.
So you have to create the file management/commands/runserver.py in one of your apps, looking like this:
from django.core.management import call_command
from django.core.management.commands.runserver import Command as RunserverCommand

class Command(RunserverCommand):
    def handle(self, *args, **options):
        call_command('test')  # the test command calls `sys.exit(1)` on test failure
        super().handle(*args, **options)
If I were a developer on your team, the first thing I would do is delete this file ;)
In my experience this would be a terrible idea.
What you should really look into is Continuous Integration.
The idea is that whenever someone pushes, all the tests run, and an email is sent to the author of the push if something fails.
Related
In my Django files, I make some logging entries pretty simply this way:
# myapp/view.py
import logging
logger = logging.getLogger(__name__)
...
# somewhere in a method
logger.warning("display some warning")
Then, suppose I want to test that this warning is logged. In a test suite, I would normally do:
# myapp/test_view.py
...
# somewhere in a test class
def test_logger(self):
with self.assertLogs("myapp.view") as logger:
# call the view
self.assertListEqual(logger.output, [
"WARNING:myapp.view:display some warning"
])
This way, the output of the logger is silenced, and I can test it. This works fine when I run tests for this view only with:
./manage.py test myapp.test_view
but not when I run all tests:
./manage.py test
where I get this error:
Traceback (most recent call last):
File "/home/neraste/myproject/myapp/test_view.py", line 34, in test_logger
# the call of the view
AssertionError: no logs of level INFO or higher triggered on myapp.view
So, what should I do? I could use unittest.mock.patch to mock the calls to the logger, but I find that ugly, especially when you pass arguments to the logger. Moreover, assertLogs is designed precisely for this, so I wonder what is wrong.
The problem was on my side. In my other test files, I explicitly shut down logging (filtering everything below CRITICAL), which is why no logs were recorded when running all tests together.
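For reference, a minimal sketch of the pitfall (assuming the other file used logging.disable, the usual way to silence logging globally):
import logging

# In some other test file: this silences every logger at CRITICAL and below
# for the remainder of the process, not just for that file's tests...
logging.disable(logging.CRITICAL)

# ...so assertLogs in later tests captures nothing. Undo it afterwards,
# e.g. in tearDown() or tearDownModule():
logging.disable(logging.NOTSET)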
I set the config data, like db_name, in setUp():
def setUp(self):
    config = tools.config  # tools is odoo.tools (openerp.tools on older versions)
    config['db_name'] = 'test'
    config['db_user'] = 'admin'
    ...
and got a "AttributeError: environments" for "return cls._local.environments" in setUp() super
I must say I haven't really used Odoo enough to know what it needs configured, but apparently there's a pytest plugin for it:
https://pypi.python.org/pypi/pytest-odoo
So, my suggestion would be to try pytest instead of unittest.TestCase, along with that plugin (which should take care of doing the proper setup). The only thing to do in PyDev in this case is to ask it to use the pytest runner (see http://www.pydev.org/manual_adv_pyunit.html for details on how to configure that).
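For what it's worth, a minimal sketch of a test case that both Odoo's own runner and pytest-odoo should be able to collect (the file name, model, and values are illustrative):
# my_module/tests/test_example.py
from openerp.tests import common  # `from odoo.tests import common` on Odoo 10+

class TestExample(common.TransactionCase):
    def test_create_partner(self):
        partner = self.env['res.partner'].create({'name': 'Test Partner'})
        self.assertEqual(partner.name, 'Test Partner')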
I'm trying to run the unit tests I've created by following Odoo's documentation.
I've built my module like this:
module_test/
    __init__.py
    __openerp__.py
    ...
    tests/
        __init__.py
        test_1.py
Inside module_test/tests/__init__.py, I have "import test_1".
Inside module_test/tests/test_1.py, I have "import tests" plus a test scenario I've written.
Then I launch the command line to run the server, and I add:
'-u module_test --log-level=test --test-enable' to update the module and activate the test run.
The shell returns : "All post-tested in 0.00s, 0 queries".
So in fact, no tests are run.
I then added a syntax error so the file couldn't be compiled by the server, but the shell returned the same sentence. It looks like the file is ignored and the server is not even trying to compile it... I do not understand why.
I've checked some of Odoo's source modules, the 'sale' one for example.
I tried to run the sale tests, and the shell returned the same value as before.
I added a syntax error inside the sale tests, and the shell returned the same value again, and again.
Does anyone have an idea about this unexpected behavior ?
You should try using the post_install decorator on the test class:
Example:
from openerp.tests import common

@common.post_install(True)
class TestPost(common.TransactionCase):
    def test_post_method(self):
        response = self.env['my_module.my_model'].create_post('hello')
        self.assertEqual(response['success'], True)
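If the class still gets skipped, pairing post_install(True) with at_install(False) is a common pattern; a hedged sketch (both decorators live in openerp.tests.common on Odoo 8+):
from openerp.tests import common

@common.at_install(False)   # don't run during module installation
@common.post_install(True)  # run after all modules are installed
class TestPost(common.TransactionCase):
    def test_post_method(self):
        ...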
To make the tests run faster without updating your module, you should be able to run the tests without
-u module_test
if you use
--load=module_test
I have to admit that Odoo's testing documentation is really bad. It took me a week to figure out how to make unit testing work in Odoo.
I'm having problems unit testing the views of an application that uses django-pipeline. Whenever I perform a client.get() on any URL, it raises the following exception:
ValueError: The file 'css/bootstrap.css' could not be found with <pipeline.storage.PipelineCachedStorage object at 0x10d544950>.
The fact that it is bootstrap.css is of course not important; the point is that I'm unable to render any views because of this exception.
Any guide / tips are welcome!
I ran into a similar problem. However, setting
STATICFILES_STORAGE='pipeline.storage.NonPackagingPipelineStorage'
when running the tests only partly solved my issue. I also had to disable the pipeline completely to run LiveServerTestCase tests without having to call 'collectstatic' before running the tests:
PIPELINE_ENABLED=False
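Putting both together, a minimal sketch of a test-only settings module (the file and module names are assumptions about your project layout):
# settings_test.py -- hypothetical test settings
from settings import *  # noqa: import your base settings module here

STATICFILES_STORAGE = 'pipeline.storage.NonPackagingPipelineStorage'
PIPELINE_ENABLED = False
Then run the tests with ./manage.py test --settings=settings_test.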
Since Django 1.4 it's fairly easy to modify settings for tests: there is a handy decorator that works on methods or TestCase classes:
https://docs.djangoproject.com/en/1.6/topics/testing/tools/#overriding-settings
e.g.
from django.test import LiveServerTestCase
from django.test.utils import override_settings

@override_settings(STATICFILES_STORAGE='pipeline.storage.NonPackagingPipelineStorage',
                   PIPELINE_ENABLED=False)
class BaseTestCase(LiveServerTestCase):
    """
    A base test case for Selenium
    """
    def setUp(self):
        ...
However, this produced inconsistent results, as @jrothenbuhler describes in his answer. Regardless, it is less than ideal if you are running integration tests: you should mimic production as much as possible to catch any potential issues. It appears Django 1.7 has a solution for this in the form of a new test case, StaticLiveServerTestCase. From the docs:
https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#django.contrib.staticfiles.testing.StaticLiveServerTestCase
This unittest TestCase subclass extends
django.test.LiveServerTestCase.
Just like its parent, you can use it to write tests that involve
running the code under test and consuming it with testing tools
through HTTP (e.g. Selenium, PhantomJS, etc.), because of which it’s
needed that the static assets are also published.
I haven't tested this, but it sounds promising. For now I'm doing what @jrothenbuhler does in his solution, using a custom test runner, which doesn't require you to run collectstatic. If you really, really wanted it to run collectstatic, you could do something like this:
from django.conf import settings
from django.test.simple import DjangoTestSuiteRunner
from django.core.management import call_command
class CustomTestRunner(DjangoTestSuiteRunner):
"""
Custom test runner to get around pipeline and static file issues
"""
def setup_test_environment(self):
super(CustomTestRunner, self).setup_test_environment()
settings.STATICFILES_STORAGE = 'pipeline.storage.NonPackagingPipelineStorage'
call_command('collectstatic', interactive=False)
In settings.py
TEST_RUNNER = 'path.to.CustomTestRunner'
I've been running into the same problem. I tackled it using a custom test runner:
from django.conf import settings
from django.test.simple import DjangoTestSuiteRunner
from pipeline.conf import settings as pipeline_settings
class PipelineOverrideRunner(DjangoTestSuiteRunner):
def setup_test_environment(self):
'''Override STATICFILES_STORAGE and pipeline DEBUG.'''
super(PipelineOverrideRunner, self).setup_test_environment()
settings.STATICFILES_STORAGE = 'pipeline.storage.PipelineFinderStorage'
pipeline_settings.DEBUG = True
Then in your settings.py:
TEST_RUNNER = 'path.to.PipelineOverrideRunner'
Setting pipeline's DEBUG setting to True ensures that the static files are not packaged. This prevents the need to run collectstatic before running the tests. Note that it's pipeline's DEBUG setting, not Django's, which is overridden here. The reason for this is that you want Django's DEBUG to be False when testing to best simulate the production environment.
Setting STATICFILES_STORAGE to PipelineFinderStorage makes it so that the static files are found when Django's DEBUG setting is set to False, as it is when running tests.
The reason I decided to override these settings in a custom test runner instead of in a custom TestCase is because certain things, such as the django.contrib.staticfiles.storage.staticfiles_storage object, get set up once based on these and other settings. When using a custom TestCase, I was running into problems where tests would pass and fail inconsistently depending on whether the override happened to be in effect when modules such as django.contrib.staticfiles.storage were loaded.
I ran into the same problem. I managed to get around it by using a different STATICFILES_STORAGE when testing:
STATICFILES_STORAGE = 'pipeline.storage.NonPackagingPipelineStorage'
I have separate settings files for production and testing, so I just put it in my test version, but if you don't, you could probably wrap it in if DEBUG.
EDIT:
It took a little more effort, because this should only be in effect during unit testing. To address that, I used the snippet at http://djangosnippets.org/snippets/1011/ and created a UITestCase class:
class UITestCase(SettingsTestCase):
    '''
    UITestCase handles setting the pipeline settings correctly.
    (SettingsTestCase comes from the snippet linked above.)
    '''
    def setUp(self):
        self.settings_manager.set(
            STATICFILES_STORAGE='pipeline.storage.NonPackagingPipelineStorage')
Now all of my tests that render UI containing compressed_css tags use UITestCase instead of django.test.TestCase.
I ran into the same problem, and it turned out to be because I had
TEST_RUNNER = 'djcelery.contrib.test_runner.CeleryTestSuiteRunner'
I don't understand how, but it must somehow have interacted with Pipeline. Once I removed that setting, the problem went away.
I still needed to force Celery to be eager during testing, so I used override_settings for the tests that needed it:
from django.test.utils import override_settings
…
class RegisterTestCase(TestCase):
    @override_settings(CELERY_EAGER_PROPAGATES_EXCEPTIONS=True,
                       CELERY_ALWAYS_EAGER=True,
                       BROKER_BACKEND='memory')
    def test_new(self):
…
Same here. It refers to this issue: https://github.com/cyberdelia/django-pipeline/issues/277
As I use py.test, I put this in conftest.py as a workaround:
from django.conf import settings

def pytest_configure():
    # workaround to avoid the django-pipeline issue; refers to
    # https://github.com/cyberdelia/django-pipeline/issues/277
    settings.STATICFILES_STORAGE = 'pipeline.storage.PipelineStorage'
I've tried @jrothenbuhler's workaround and it helped at first,
but then, for some reason, it started failing again with the same error.
After hours of debugging I figured out that the only thing that helps is to set
STATICFILES_STORAGE = 'pipeline.storage.NonPackagingPipelineStorage'
directly in settings.
I don't know why, but it works.
I want to perform some exhaustive testing against one of my test cases (say, creating a document, to debug some weird things I am encountering).
My brute-force approach was to fire python manage.py test myapp in a loop using Popen or os.system, but now I want a pure unittest way:
class SimpleTest(unittest.TestCase):
def setUp(self):
def test_01(self):
def tearDown(self):
def suite():
suite = unittest.TestCase()
suite.add(SimpleTest("setUp"))
suite.add(SimpleTest("test_01"))
suite.add(SimpleTest("tearDown"))
return suite
def main():
for i in range(n):
suite().run("runTest")
I ran python manage.py test myapp and I got
File "/var/lib/system-webclient/webclient/apps/myapps/tests.py", line 46, in suite
suite = unittest.TestCase()
File "/usr/lib/python2.6/unittest.py", line 216, in __init__
(self.__class__, methodName)
ValueError: no such test method in <class 'unittest.TestCase'>: runTest
I've googled the error, but I'm still clueless. (I was told to add an empty runTest method, but that doesn't sound right at all...)
Well, according to Python's unittest.TestCase documentation:
The simplest TestCase subclass will simply override the runTest()
method in order to perform specific testing code
As you can see, my whole goal is to run my SimpleTest N times and keep track of passes and failures across those N runs.
What option do I have?
Thanks.
Tracking race conditions via unit tests is tricky. Sometimes you're better off hitting your frontend with an automated testing tool like Selenium: unlike a unit test, the environment is the same, and there's no need for extra work to ensure concurrency. Here's one way to run concurrent code in tests when there's no better option: http://www.caktusgroup.com/blog/2009/05/26/testing-django-views-for-concurrency-issues/
Just keep in mind that a concurrent test is no definite proof that you're free from race conditions: there's no guarantee it will recreate all possible combinations of execution order among processes.
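As for the narrower question of repeating one test N times, a minimal sketch using unittest.TestSuite (names are illustrative); note that setUp and tearDown run automatically around every test, so they must not be added to the suite themselves:
import unittest

class SimpleTest(unittest.TestCase):
    def test_01(self):
        pass  # the scenario under test goes here

def suite(n):
    # Add the same test method n times; each run gets a fresh
    # SimpleTest instance with its own setUp/tearDown.
    s = unittest.TestSuite()
    for _ in range(n):
        s.addTest(SimpleTest('test_01'))
    return s

if __name__ == '__main__':
    result = unittest.TextTestRunner().run(suite(100))
    print('run: %d, failures: %d, errors: %d'
          % (result.testsRun, len(result.failures), len(result.errors)))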