Efficient way to test logs with Django

In my Django files, I make some logging entries pretty simply this way:
# myapp/view.py
import logging
logger = logging.getLogger(__name__)
...
# somewhere in a method
logger.warning("display some warning")
Then, suppose I want to test that this warning is logged. In a test suite, I would normally do:
# myapp/test_view.py
...
# somewhere in a test class
def test_logger(self):
    with self.assertLogs("myapp.view") as logger:
        ...  # call the view
    self.assertListEqual(logger.output, [
        "WARNING:myapp.view:display some warning"
    ])
This way, the output of the logger is silenced, and I can test it. This works fine when I run tests for this view only with:
./manage.py test myapp.test_view
but not when I run all tests:
./manage.py test
where I get this error:
Traceback (most recent call last):
File "/home/neraste/myproject/myapp/test_view.py", line 34, in test_logger
# the call of the view
AssertionError: no logs of level INFO or higher triggered on myapp.view
So, what should I do? I could use unittest.mock.patch to mock the calls to the logger, but I find that approach ugly, especially when arguments are passed to the logger. Moreover, assertLogs is designed precisely for this, so I wonder what is wrong.
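Something like this (a sketch; the view call is elided as above):
from unittest.mock import patch

def test_logger_with_mock(self):
    with patch("myapp.view.logger") as mock_logger:
        ...  # call the view
    mock_logger.warning.assert_called_once_with("display some warning")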

The problem was on my side: in my other test files, I explicitly shut down logging (filtering at the critical level), which is why no logs were recorded when running all the tests.
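Concretely, the culprit is that logging.disable sets a process-wide threshold shared by every logger, so one test module can silence all the others. A sketch of the problem and the fix:
import logging

# in some other test module -- this silences ALL loggers for the whole
# test process, not just this module's tests:
logging.disable(logging.CRITICAL)

# the fix: lift the threshold again once those tests are done,
# e.g. in that module's tearDown:
logging.disable(logging.NOTSET)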

Related

Make running tests mandatory before runserver in django

Is there any way to make running tests compulsory before running the server in Django? I have a project on which many people will be working, so I want to make testing compulsory, and all tests must pass before the server runs. So basically, lock the runserver command until all the tests pass successfully. This would only be a temporary measure, not a permanent one.
Add the line execute_from_command_line([sys.argv[0], 'test']) before execute_from_command_line(sys.argv) in the main() function of the manage.py module. That can solve your problem; main() will then look like this:
def main():
    # the settings setup and the 'try/except' import of
    # execute_from_command_line from the standard manage.py template go here
    ...
    # run only once, in the initial process (the autoreloader re-executes
    # manage.py with RUN_MAIN=true), and only for the 'runserver' command
    if os.environ.get('RUN_MAIN') != 'true' and len(sys.argv) > 1 and sys.argv[1] == 'runserver':
        execute_from_command_line([sys.argv[0], 'test'])  # run ALL the tests first
    execute_from_command_line(sys.argv)
or you can specify the module to test: execute_from_command_line([sys.argv[0], 'test', 'specific_module'])
or use a file pattern:
execute_from_command_line([sys.argv[0], 'test', '--pattern=tests*.py'])
I agree with @LFDMR that this is probably a bad idea and will make your development process really inefficient. Even with test-driven development, it is perfectly sensible to use the development server, for example, to figure out why your tests don't pass. I think you would be better served with a Git pre-commit or pre-push hook, or the equivalent in your version control system.
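For example, a minimal pre-push hook sketch (saved as .git/hooks/pre-push and made executable; it assumes manage.py sits at the repository root):
#!/usr/bin/env python
# .git/hooks/pre-push -- git runs hooks from the top of the working tree,
# so "manage.py" resolves if it lives at the repository root
import subprocess
import sys

# a non-zero exit status aborts the push
sys.exit(subprocess.call([sys.executable, "manage.py", "test"]))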
That being said, here is how you can achieve what you are after:
You can override an existing management command by adding a management command of the same name to one of your apps.
So you have to create the file management/commands/runserver.py in one of your apps, which looks like this:
from django.core.management import call_command
from django.core.management.commands.runserver import Command as BaseRunserverCommand

class Command(BaseRunserverCommand):
    def handle(self, *args, **kwargs):
        call_command('test')  # the test command calls `sys.exit(1)` on test failure
        super().handle(*args, **kwargs)
If I were a developer on your team, the first thing I would do is delete this file ;)
In my experience, it would be a terrible idea.
What you should really look into is continuous integration: whenever someone pushes, all the tests run, and an email is sent to the author of the push if anything fails.

Django unit tests fail when run as a whole and there is a GET call to the API

I am facing an issue when I run the tests of my Django app with the command
python manage.py test app_name OR
python manage.py test
All the test cases that fetch data via a GET API call seem to fail because there is no data in the response, despite it being present in the test data. The structure I've followed in my test suite is: a base class deriving from Django REST framework's APITestCase, with a set_up method that creates test objects of the different models used in the APIs; I inherit this class in my app's test_views classes for each particular API,
such as
class BaseTest(APITestCase):
    def set_up(self):
        '''
        create the test objects which can be accessed by the main test
        class.
        '''
        self.person1 = Person.objects.create(.......)

class SomeViewTestCase(BaseTest):
    def setUp(self):
        self.set_up()

    def test_some_api(self):
        url = '/xyz/'
        self.client.login(username='testusername3', password='testpassword3')
        response = self.client.get(url, {'person_id': self.person3.id})
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(len(response.data), 6)
So whenever I run the tests as
./manage.py test abc.tests.test_views.SomeViewTestCase
it works fine, but when I run them as
./manage.py test abc
response.data in the test above has 0 entries; similarly, in the other tests within the same class, the data is just not fetched, and hence all the asserts fail.
How can I ensure the tests run successfully when they are run as a whole? During deployment they have to go through CI.
The versions of the packages and the system configuration are as follows:
Django - 1.6
Django REST Framework - 3.1.1
Python - 2.7
Operating System - Mac OS (Sierra)
Appreciate the help. Thanks.
Your test methods are executed in arbitrary order, and after each test a tearDown() method takes care of rolling back to the initial state, so you have isolation between test executions.
The only part shared among them is your setUp() method, which is invoked each time a test runs.
This means that if the runner starts from the second test method, and you only set up your response data in your first test, all the tests apart from the posted one are going to fail.
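Concretely (a sketch reusing the names from the question): every assertion should depend only on what setUp created, or on what the test itself requested, never on another test method having run first.
class SomeViewTestCase(BaseTest):
    def setUp(self):
        # invoked before EVERY test method, on a fresh TestCase instance
        self.set_up()

    def test_some_api(self):
        # the GET happens inside the test itself; state set by other
        # test methods is never visible here
        self.client.login(username='testusername3', password='testpassword3')
        response = self.client.get('/xyz/', {'person_id': self.person3.id})
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(len(response.data), 6)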
Hope it helps...

ODOO [V8] Unit Tests

I'm currently trying to run the unit tests I've created by following Odoo's documentation.
I've built my module like this:
module_test
- __init__.py
- __openerp__.py
- ...
- tests
  - __init__.py
  - test_1.py
Inside 'module_test/tests/__init__.py', I have "import test_1".
Inside 'module_test/tests/test_1.py', I have "import tests" plus a test scenario I've written.
Then I launch the server from the command line, adding:
'-u module_test --log-level=test --test-enable' to update the module and activate the test run.
The shell returns: "All post-tested in 0.00s, 0 queries".
So, in fact, no tests are run.
I then added a syntax error, so the file couldn't be compiled by the server, but the shell returned the same sentence. It looks like the file is ignored and the server is not even trying to compile it... I do not understand why.
I've checked some Odoo source modules, the 'sale' one for example.
I tried to run the sale tests; the shell returned the same value as before.
I added a syntax error inside the sale tests; the shell returned the same value again and again.
Does anyone have an idea about this unexpected behavior?
You should try using the post_install decorator on the test class:
Example:
from openerp.tests import common

@common.post_install(True)
class TestPost(common.TransactionCase):
    def test_post_method(self):
        response = self.env['my_module.my_model'].create_post('hello')
        self.assertEqual(response['success'], True)
To make the tests perform faster without updating your module, you should be able to run tests without
-u module_test
if you use
--load=module_test
I have to admit that Odoo's testing documentation is really bad. It took me a week to figure out how to make unit testing work in Odoo.
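For reference, a minimal layout that Odoo 8 should discover (a sketch; as far as I can tell, v8 only picks up test files whose names start with test_ and which are imported from tests/__init__.py):
# module_test/tests/__init__.py
from . import test_1

# module_test/tests/test_1.py
from openerp.tests import common

class TestModuleTest(common.TransactionCase):
    def test_create_partner(self):
        # each test runs in a transaction that is rolled back afterwards
        partner = self.env['res.partner'].create({'name': 'Test Partner'})
        self.assertEqual(partner.name, 'Test Partner')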

Django: How to hide Traceback in unit tests for readability?

I find it a bit irritating to get so much detail for a simple failed unit test. Is it possible to suppress everything but the actual defined assert message?
Creating test database for alias 'default'...
.F
======================================================================
FAIL: test_get_sales_item_for_company (my_app.tests.SalesItemModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/kave/projects/my/my_app/tests.py", line 61, in test_get_sales_item_for_company
self.assertEqual(sales_items.count(), 1, 'Expected one sales item for this company, but got %s' % sales_items.count())
AssertionError: Expected one sales item for this company, but got 2
----------------------------------------------------------------------
Ran 2 tests in 0.313s
FAILED (failures=1)
Destroying test database for alias 'default'...
I find this bit unnecessary. I need to know the name of the test (method) that failed and the assert message; there's no real need for the traceback:
Traceback (most recent call last):
File "/home/kave/projects/my/my_app/tests.py", line 61, in test_get_sales_item_for_company
self.assertEqual(sales_items.count(), 1, 'Expected one sales item for this company, but got %s' % sales_items.count())
Monkey patching to the rescue. You can get rid of the traceback for failures without touching your Django installation by subclassing Django's TestCase as follows:
import types
from django.utils.unittest.result import failfast
from django.test import TestCase

@failfast
def addFailureSansTraceback(self, test, err):
    err_sans_tb = (err[0], err[1], None)
    self.failures.append((test, self._exc_info_to_string(err_sans_tb, test)))
    self._mirrorOutput = True

class NoTraceTestCase(TestCase):
    def run(self, result=None):
        result.addFailure = types.MethodType(addFailureSansTraceback, result)
        super(NoTraceTestCase, self).run(result)
Now just make your test cases subclasses of NoTraceTestCase instead of TestCase and you are good to go. No more tracebacks for failures. (Note that exceptions will still print tracebacks. You could monkey-patch those away similarly if you wanted to.)
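For example (assuming the subclass lives in a hypothetical myproject/testutils.py):
# in your tests.py
from myproject.testutils import NoTraceTestCase  # hypothetical location of the subclass above

class SalesItemModelTest(NoTraceTestCase):  # instead of django.test.TestCase
    def test_get_sales_item_for_company(self):
        ...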
Here's how it works (with thanks to Jason Pratt for the quick lesson on monkey patching):
Django's test runner calls TestCase's run method for each test run. The result parameter is an instance of the django.utils.unittest.result.TestResult class, which handles showing test results to the user. Whenever a test fails, run makes the following call: result.addFailure(self, sys.exc_info()). That's where the traceback comes from -- as the third item in the tuple returned by sys.exc_info().
Now, simply overriding run with a copy of the original code and tweaking it as needed would work. But the run method is a good 75 lines long, all that needs to change is that one line, and in any case, why pass up the chance for some fun with monkey-patching?
The result.addFailure assignment changes the addFailure method in the result object that is passed to NoTraceTestCase's run method to the newly defined addFailureSansTraceback function -- which is first transformed into a result-object compatible method with types.MethodType.
The super call invokes Django's existing TestCase run. Now, when the existing code runs, the call to addFailure will actually call the new version, i.e. addFailureSansTraceback.
addFailureSansTraceback does what the original version of addFailure does -- copying over two lines of code -- except it adds a line that replaces the traceback with None (the assignment to err_sans_tb, which is used instead of err on the next line). That's it.
Note the original addFailure has a failfast decorator, so that is imported and used. To be honest, I haven't looked at what it does!
Disclaimer: I haven't studied Django's test code thoroughly. This is just a quick patch to get it to work in the common case. Use at your own risk!

How do I run my Django testcase multiple times?

I want to perform some exhaustive testing against one of my test cases (say, creating a document, to debug some weird things I am encountering).
My brute-force approach was to fire python manage.py test myapp in a loop using Popen or os.system, but now I am looking for a pure-Python way:
class SimpleTest(unittest.TestCase):
    def setUp(self):
        ...
    def test_01(self):
        ...
    def tearDown(self):
        ...

def suite():
    suite = unittest.TestCase()
    suite.add(SimpleTest("setUp"))
    suite.add(SimpleTest("test_01"))
    suite.add(SimpleTest("tearDown"))
    return suite

def main():
    for i in range(n):
        suite().run("runTest")
I ran python manage.py test myapp and I got
File "/var/lib/system-webclient/webclient/apps/myapps/tests.py", line 46, in suite
suite = unittest.TestCase()
File "/usr/lib/python2.6/unittest.py", line 216, in __init__
(self.__class__, methodName)
ValueError: no such test method in <class 'unittest.TestCase'>: runTest
I've googled the error, but I'm still clueless. (I was told to add an empty runTest method, but that doesn't sound right at all...)
Well, according to Python's unittest.TestCase documentation:
The simplest TestCase subclass will simply override the runTest()
method in order to perform specific testing code
As you can see, my whole goal is to run my SimpleTest N times. I need to keep track of passes and failures across the N runs.
What options do I have?
Thanks.
Tracking race conditions via unit tests is tricky. Sometimes you're better off hitting your frontend with an automated testing tool like Selenium -- unlike a unit test, the environment is the same and there's no need for extra work to ensure concurrency. Here's one way to run concurrent code in tests when there's no better option: http://www.caktusgroup.com/blog/2009/05/26/testing-django-views-for-concurrency-issues/
Just keep in mind that a concurrent test is no definitive proof that you're free from race conditions -- there's no guarantee it will recreate all possible combinations of execution order among processes.
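As for the literal question of running SimpleTest N times: setUp and tearDown should not be added to the suite as if they were tests; they run automatically around each test method. A minimal sketch of an N-times runner along those lines (tallying failures across the runs):
import unittest

def suite():
    s = unittest.TestSuite()
    s.addTest(SimpleTest("test_01"))  # setUp/tearDown wrap it automatically
    return s

def main(n):
    runner = unittest.TextTestRunner(verbosity=0)
    failed = 0
    for i in range(n):
        result = runner.run(suite())
        if not result.wasSuccessful():
            failed += 1
    print("%d runs, %d failed" % (n, failed))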