Django: How to hide Traceback in unit tests for readability?

I find it a bit irritating getting so much detail for a simple failed unit test. Is it possible to suppress everything but the assert message I defined?
Creating test database for alias 'default'...
.F
======================================================================
FAIL: test_get_sales_item_for_company (my_app.tests.SalesItemModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/kave/projects/my/my_app/tests.py", line 61, in test_get_sales_item_for_company
    self.assertEqual(sales_items.count(), 1, 'Expected one sales item for this company, but got %s' % sales_items.count())
AssertionError: Expected one sales item for this company, but got 2
----------------------------------------------------------------------
Ran 2 tests in 0.313s
FAILED (failures=1)
Destroying test database for alias 'default'...
I find this bit unnecessary. I need to know the test name (method) that failed and the assert message; no need for the traceback, really:
Traceback (most recent call last):
  File "/home/kave/projects/my/my_app/tests.py", line 61, in test_get_sales_item_for_company
    self.assertEqual(sales_items.count(), 1, 'Expected one sales item for this company, but got %s' % sales_items.count())

Monkey patching to the rescue. You can get rid of the traceback for failures without touching your Django installation by subclassing Django's TestCase as follows:
import types
from django.utils.unittest.result import failfast
from django.test import TestCase

@failfast
def addFailureSansTraceback(self, test, err):
    # err is (exc_type, exc_value, traceback); drop the traceback
    err_sans_tb = (err[0], err[1], None)
    self.failures.append((test, self._exc_info_to_string(err_sans_tb, test)))
    self._mirrorOutput = True

class NoTraceTestCase(TestCase):
    def run(self, result=None):
        # swap the result object's addFailure for the traceback-free version
        result.addFailure = types.MethodType(addFailureSansTraceback, result)
        super(NoTraceTestCase, self).run(result)
Now just make your test cases subclasses of NoTraceTestCase instead of TestCase and you are good to go. No more tracebacks for failures. (Note exceptions will still print tracebacks. You could monkey-patch those away similarly if you wanted to.)
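For illustration, a minimal (hypothetical) test case using it; the deliberately failing assertion below would now print only the one-line failure message, with no traceback:

class ExampleTest(NoTraceTestCase):
    def test_sales_item_count(self):
        # fails on purpose, to show the trimmed output
        self.assertEqual(2, 1, 'Expected one sales item for this company, but got 2')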
Here's how it works (with thanks to Jason Pratt for the quick lesson on monkey patching):
Django's test runner calls TestCase's run method for each test run. The result parameter is an instance of the django.utils.unittest.result.TestResult class, which handles showing test results to the user. Whenever a test fails, run makes the following call: result.addFailure(self, sys.exc_info()). That's where the traceback comes from -- as the third item in the tuple returned by sys.exc_info().
Now, simply overriding run with a copy of the original code and tweaking it as needed would work. But the run method is a good 75 lines long, only that one line needs to change, and in any case, why pass up the chance for some fun with monkey-patching?
The result.addFailure assignment changes the addFailure method in the result object that is passed to NoTraceTestCase's run method to the newly defined addFailureSansTraceback function -- which is first transformed into a result-object compatible method with types.MethodType.
The super call invokes Django's existing TestCase run. Now, when the existing code runs, the call to addFailure will actually call the new version, i.e. addFailureSansTraceback.
addFailureSansTraceback does what the original version of addFailure does -- the two lines of code are copied over -- except that it adds a line replacing the traceback with None (the assignment to err_sans_tb, which is used instead of err on the next line). That's it.
Note the original addFailure has a failfast decorator, so that is imported and used. To be honest, I haven't looked at what it does!
Disclaimer: I haven't studied Django's test code thoroughly. This is just a quick patch to get it to work in the common case. Use at your own risk!

Related

Efficient way to test logs with Django

In my Django files, I make some logging entries pretty simply this way:
# myapp/view.py
import logging
logger = logging.getLogger(__name__)
...
# somewhere in a method
logger.warning("display some warning")
Then, suppose I want to test that this warning is logged. In a test suite, I would normally do:
# myapp/test_view.py
...
# somewhere in a test class
def test_logger(self):
    with self.assertLogs("myapp.view") as logger:
        # call the view
        ...
    self.assertListEqual(logger.output, [
        "WARNING:myapp.view:display some warning"
    ])
This way, the output of the logger is silenced, and I can test it. This works fine when I run tests for this view only with:
./manage.py test myapp.test_view
but not when I run all tests:
./manage.py test
where I get this error:
Traceback (most recent call last):
  File "/home/neraste/myproject/myapp/test_view.py", line 34, in test_logger
    # the call of the view
AssertionError: no logs of level INFO or higher triggered on myapp.view
So, what should I do? I could use unittest.mock.patch to mock the calls to the logger, but I find that approach ugly, especially if you pass arguments to your logger. Moreover, assertLogs is designed exactly for this, so I wonder what is wrong.
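For reference, the mock-based alternative mentioned above might look like this (a sketch; the patch target assumes the module-level logger lives in myapp/view.py as shown earlier):

from unittest.mock import patch

def test_logger_with_mock(self):
    # patch the logger where the view looks it up
    with patch("myapp.view.logger") as mock_logger:
        # call the view
        ...
    mock_logger.warning.assert_called_once_with("display some warning")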
The problem was on my side. In my other test files I explicitly shut down logging (filtering everything below critical level), which is why no logs were recorded when running all the tests together.
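For illustration, the fix is to scope the silencing to the test class instead of leaving it active at module level, so it doesn't leak into other test modules (a sketch, with hypothetical names):

import logging
from django.test import TestCase

class OtherViewTest(TestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        logging.disable(logging.CRITICAL)  # silence logs for this class only

    @classmethod
    def tearDownClass(cls):
        logging.disable(logging.NOTSET)  # restore logging for later tests
        super().tearDownClass()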

Make running tests mandatory before runserver in django

Is there any way to make running tests compulsory before running the server in Django? I have a project that many people will be working on, so I want to make testing compulsory: all tests must pass before the server will run. So, basically, lock the runserver command until all the tests pass. This arrangement will only be in place for a short time.
Add the line execute_from_command_line([sys.argv[0], 'test']) before execute_from_command_line(sys.argv) in the main() function of manage.py. That solves your problem; main() will look like this:
def main():
    # ... DJANGO_SETTINGS_MODULE setup and the usual try/except import of
    # execute_from_command_line go here
    if os.environ.get('RUN_MAIN') != 'true' and len(sys.argv) > 1 and sys.argv[1] == 'runserver':
        # run only once per `manage.py runserver` invocation (the RUN_MAIN check
        # skips the autoreloader's second pass) and not for other commands
        execute_from_command_line([sys.argv[0], 'test'])  # run ALL the tests first
    execute_from_command_line(sys.argv)
or you can specify the module for testing: execute_from_command_line([sys.argv[0], 'test', 'specific_module'])
or with file pattern:
execute_from_command_line([sys.argv[0], 'test', '--pattern=tests*.py'])
I agree with @LFDMR that this is probably a bad idea and will make your development process really inefficient. Even with test-driven development, it is perfectly sensible to use the development server, for example, to figure out why your tests don't pass. I think you would be better served with a Git pre-commit or pre-push hook, or the equivalent in your version control system; see the sketch below.
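For example, a pre-push hook along these lines blocks the push instead of the server (a sketch; the hook file must be made executable, and the choice of hook is up to your team):

#!/usr/bin/env python
# .git/hooks/pre-push -- remember to chmod +x this file
import subprocess
import sys

# run the test suite; a non-zero exit status aborts the push
sys.exit(subprocess.call([sys.executable, "manage.py", "test"]))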
That being said, here is how you can achieve what you are after:
You can override an existing management command by adding a management command of the same name to one of your apps.
So you have to create the file management/commands/runserver.py in one of your apps, looking like this:
from django.core.management import call_command
from django.core.management.commands.runserver import BaseRunserverCommand

class Command(BaseRunserverCommand):
    def handle(self, *args, **kwargs):
        call_command('test')  # calls `sys.exit(1)` on test failure
        super().handle(*args, **kwargs)
If I were a developer on your team, the first thing I would do is delete this file ;)
In my experience this would be a terrible idea.
What you should really look into is continuous integration: whenever someone pushes something, all the tests run, and an email is sent to the author of the push if something fails.

Mock.patch not reset between django test runs

I have 2 tests which test a view that makes a call to an external module. I've mocked it with mock.patch. I'm calling the view using Django's test client.
The first test (a test for 404 being returned) completes successfully and the correct mock is called.
When the second test runs, everything runs as normal, but the mock that the code-under-test has access to is the mock from the previous test.
You can see in this example https://dpaste.de/7zT8 that the ids in the test output are incorrect (around line 91).
Where is this getting cached? My initial thought was that the import of the main module is somehow cached between test runs due to urlconf stuff. Tracing through the source code, I couldn't find that to be the case.
Expected: Both tests pass.
Actual: Second test fails due to stale mocked import.
If I comment out the 404 test, the other test passes.
The view is registered in the url conf as the string-y version 'repos.views.github_webhook'.
I do not fully understand what causes the exact behaviour you are seeing, especially not why the mock seemingly works correctly in the first test. But according to the mock docs, you should patch in the namespace under test, i.e. patch("views.tasks"):
http://www.voidspace.org.uk/python/mock/patch.html#where-to-patch
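Applied here, that would mean something like the following (a sketch; the dotted path and URL are assumptions based on the question):

from unittest import mock
from django.test import TestCase

class GithubWebhookTest(TestCase):
    # patch the name as the view module sees it, not where it is defined
    @mock.patch("repos.views.tasks")
    def test_webhook(self, mock_tasks):
        self.client.post("/webhook/github/")  # hypothetical URL
        self.assertTrue(mock_tasks.method_calls)  # the view used this test's mock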

How do I run my Django testcase multiple times?

I want to perform some exhaustive testing against one of my test cases (say, creating a document, to debug some weird things I am encountering...).
My brute-force approach was to fire python manage.py test myapp in a loop using Popen or os.system, but now I would rather do it the pure unittest way:
class SimpleTest(unittest.TestCase):
    def setUp(self):
        ...
    def test_01(self):
        ...
    def tearDown(self):
        ...

def suite():
    suite = unittest.TestCase()
    suite.add(SimpleTest("setUp"))
    suite.add(SimpleTest("test_01"))
    suite.add(SimpleTest("tearDown"))
    return suite

def main():
    for i in range(n):
        suite().run("runTest")
I ran python manage.py test myapp and I got
File "/var/lib/system-webclient/webclient/apps/myapps/tests.py", line 46, in suite
suite = unittest.TestCase()
File "/usr/lib/python2.6/unittest.py", line 216, in __init__
(self.__class__, methodName)
ValueError: no such test method in <class 'unittest.TestCase'>: runTest
I've googled the error, but I'm still clueless (I was told to add an empty runTest method, but that doesn't sound right at all...).
Well, according to Python's unittest.TestCase documentation:
"The simplest TestCase subclass will simply override the runTest() method in order to perform specific testing code."
As you can see, my whole goal is to run my SimpleTest N times, and I need to keep track of passes and failures across those N runs.
What option do I have?
Thanks.
Tracking race conditions via unit tests is tricky. Sometimes you're better off hitting your frontend with an automated testing tool like Selenium -- unlike a unit test, the environment is the same and there's no need for extra work to set up concurrency. Here's one way to run concurrent code in tests when there's no better option: http://www.caktusgroup.com/blog/2009/05/26/testing-django-views-for-concurrency-issues/
Just keep in mind that a concurrent test is no definitive proof that you're free from race conditions -- there's no guarantee it will recreate all possible combinations of execution order among processes.
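As for the error itself: the suite should be a TestSuite, tests are added with addTest, and setUp/tearDown run automatically around each test method, so there is no need to add them separately or to define runTest. A sketch of running the asker's SimpleTest N times while counting failures:

import unittest

def suite():
    s = unittest.TestSuite()
    s.addTest(SimpleTest("test_01"))  # pass the test method name
    return s

def main(n):
    runner = unittest.TextTestRunner()
    failed = 0
    for i in range(n):
        result = runner.run(suite())
        failed += len(result.failures) + len(result.errors)
    print("%d of %d runs failed" % (failed, n))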

Django testing - InternalError: current transaction is aborted, commands ignored until end of transaction block

In my tests I check not only the perfect case but especially edge cases and error conditions, so I wanted to ensure some uniqueness constraints work.
While my tests and test fixtures are pretty complicated, I was able to track the problem down to the following example, which does not use any custom models. To reproduce the behaviour, just save the code into tests.py and run the Django test runner.
from django.contrib.auth.models import User
from django.db import IntegrityError
from django.test import TransactionTestCase

class TransProblemTest(TransactionTestCase):
    def test_uniqueness1(self):
        User.objects.create_user(username='user1', email='user1@example.com', password='secret')
        self.assertRaises(IntegrityError, lambda:
            User.objects.create_user(username='user1', email='user1@example.com', password='secret'))

    def test_uniqueness2(self):
        User.objects.create_user(username='user1', email='user1@example.com', password='secret')
        self.assertRaises(IntegrityError, lambda:
            User.objects.create_user(username='user1', email='user1@example.com', password='secret'))
A test class with a single test method works, but it fails with two identical method implementations.
The first test's exception breaks the Django testing environment and makes all the following tests fail.
I am using Django 1.1 with Ubuntu 10.04, Postgres 8.4 and psycopg2.
Does the problem still exist in Django 1.2?
Is it a known bug or am I missing something?
Django has two flavors of TestCase: "plain" TestCase and TransactionTestCase. The documentation has the following to say about the difference between the two of them:
TransactionTestCase and TestCase are identical except for the manner in which the database is reset to a known state and the ability for test code to test the effects of commit and rollback. A TransactionTestCase resets the database before the test runs by truncating all tables and reloading initial data. A TransactionTestCase may call commit and rollback and observe the effects of these calls on the database.
A TestCase, on the other hand, does not truncate tables and reload initial data at the beginning of a test. Instead, it encloses the test code in a database transaction that is rolled back at the end of the test.
You are using TransactionTestCase to execute these tests. Switch to plain TestCase and you will see the problem vanish without changing the test code itself.
Why does this happen? TestCase executes test methods inside a transaction block, which means each test method in your test class runs inside its own separate transaction rather than sharing one. When the assertion (or rather the lambda inside it) raises an error, the error dies with that transaction. The next test method is executed in a new transaction, and therefore you don't see the error you've been getting.
However if you were to add another identical assertion in the same test method you would see the error again:
class TransProblemTest(django.test.TestCase):
    def test_uniqueness1(self):
        User.objects.create_user(username='user1', email='user1@example.com', password='secret')
        self.assertRaises(IntegrityError, lambda:
            User.objects.create_user(username='user1', email='user1@example.com', password='secret'))
        # Repeat the test condition.
        self.assertRaises(IntegrityError, lambda:
            User.objects.create_user(username='user1', email='user1@example.com', password='secret'))
This is triggered because the first assertion creates an error that causes the transaction to abort, so the second cannot execute. Since both assertions happen inside the same test method, no new transaction has been started, unlike in the previous case.
Hope this helps.
I'm assuming when you say "a single test method works", you mean that it fails, raises the exception, but doesn't break the testing environment.
That said, you are running with autocommit turned off. In that mode, everything on the shared database connection is part of a single transaction by default, and a failure requires the transaction to be aborted via ROLLBACK before a new one can be started. I recommend turning autocommit on if possible -- unless you need to wrap multiple write operations into a single unit, having it off is overkill.
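If memory serves, with Django 1.1's single-database settings the PostgreSQL backend accepts an autocommit option, along these lines (a sketch; verify against the 1.1 docs before relying on it):

# settings.py (Django 1.1-era, pre-DATABASES dict)
DATABASE_ENGINE = 'postgresql_psycopg2'
DATABASE_OPTIONS = {'autocommit': True}  # database-level autocommit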