I am learning test-driven development... I wrote a test that should fail, but it doesn't...
(env)glitch:ipals nathann$ ./manage.py test npage/
Creating test database for alias 'default'...
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Destroying test database for alias 'default'...
In npage/ I have tests.py:
from django.test import TestCase
from npage.models import Tip

import datetime


# Example
class TipTester(TestCase):
    def setUp(self):
        print dir(self)
        Tip.objects.create(pk=1,
                           text='Testing',
                           es_text='Probando')

    def tips_in_spanish(self):
        my_tip = Tip.objects.get(pk=1)
        my_tip.set_language('es')
        self.assertEqual(my_tip.text, 'this does not just say \'Probando\'')
What am I doing wrong? I've read this but I still can't figure out what is going wrong here.
Your test functions need to start with test:
def test_tips_in_spanish(self):
Docs here
"When you run your tests, the default behavior of the test utility is to find all the test cases (that is, subclasses of unittest.TestCase) in any file whose name begins with test, automatically build a test suite out of those test cases, and run that suite."
I'm checking how Django's settings module is built and how the override_settings decorator deals with the settings when testing, and I just can't see how the implementation of this decorator avoids problems when running tests in parallel.
I see that in the enable() method it assigns to the settings' _wrapped attribute the settings values with the changes applied, and that it stores a copy of the previous values, which is then restored in the disable() method. This works fine for me when executing sequentially. But when running tests in parallel, I can't see how this works without affecting other tests that also use the decorator, say to override the same value. What I see is that the value set by the latest executed test would be returned everywhere when accessing settings.OVERRIDDEN_SETTING. In fact, this settings override should also affect the values returned in other tests, even if they are not decorated.
I mean, if we have these tests:
@override_settings(SETTING=1)
def test_1(self):
    ...
    ...
    print(settings.SETTING)

@override_settings(SETTING=2)
def test_2(self):
    ...
    ...
    print(settings.SETTING)

def test_3(self):
    ...
    ...
    print(settings.SETTING)
If they are run in parallel, and let's say test_1 starts executing its code and test_2 is called before the print statement in test_1 has been executed, then, going by the decorator implementation, I would expect both of them to print 2. And depending on when it gets executed, test_3 would print the original value, 1, or 2, if it's also run in parallel.
There must be something that I'm not taking into account, because I don't think this code would be prone to such a race condition after being around for so long.
Any help to understand this would be appreciated.
Parallel tests are run in separate processes, each of which accesses its own copy of the settings.
Therefore, Django's override_settings does not need to handle parallel tests specially.
We can empirically verify that with unsafe direct modification instead of override_settings (note the sleep to ensure that test_3 runs after test_1 and test_2 modify the value):
from time import sleep

from django.conf import settings
from django.test import TestCase


class TestOverrideSettings1(TestCase):
    # @override_settings(SETTING=1)
    def test_1(self):
        settings.SETTING = 1
        print(settings.SETTING)


class TestOverrideSettings2(TestCase):
    # @override_settings(SETTING=2)
    def test_2(self):
        settings.SETTING = 2
        print(settings.SETTING)


class TestOverrideSettings3(TestCase):
    def test_3(self):
        sleep(1)
        print(settings.SETTING)
Running tests:
$ python manage.py test
1
.2
.2
.
Running tests in parallel:
$ python manage.py test --parallel
1
2
..0
.
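For completeness, the same save/restore mechanism in enable()/disable() also backs the context-manager form of override_settings, which scopes the change to a block within a single test (a minimal sketch; SETTING is the placeholder name from the question):

from django.conf import settings
from django.test import TestCase, override_settings


class TestScopedOverride(TestCase):
    def test_scoped_override(self):
        with override_settings(SETTING=1):
            # enable() has swapped the value in, for this process only
            self.assertEqual(settings.SETTING, 1)
        # leaving the block calls disable(), restoring whatever was there before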
I have a class-based view in a Django app that looks something like this:
class CBView(View):
    def get(self, request, client, *args, **kwargs):
        output1 = self.method1(argument1)
        output2 = self.method2(argument2)
        # Rest of the method implementation
        ...
        return response

    def method1(self, argument1):
        # Implementation
        ...
        return output1

    def method2(self, argument2):
        # Implementation
        ...
        return output2
And I'm trying to write unit tests for the 'easy' class methods, namely method1 and method2. The tests look like this:
class TestCBView(TestCase):
    def setUp(self):
        self.view = CBView()

    def test_method1(self):
        # Testing that output1 is as expected
        ...
        output1 = self.view.method1(argument1)
        ...
        self.assertEquals(output1, expected_output1)

    def test_method2(self):
        # Testing that output2 is as expected
        ...
        output2 = self.view.method2(argument2)
        ...
        self.assertEquals(output2, expected_output2)
After that, I run:
coverage run ./manage.py test django_app.tests.test_cbview
which runs all the tests successfully. Then I try to run:
coverage report -m django_app/views.py
And I get:
Name Stmts Miss Cover Missing
-------------------------------------
No data to report.
Am I doing something wrong?
I'm using Coverage.py version 4.0.3, Django 1.8.15 and Python 2.7.13.
I just had the same problem. It turned out I had some unmigrated data, so after I ran in the terminal
python manage.py makemigrations
python manage.py migrate
and tried again with
coverage run manage.py test django_app.tests
coverage html --include=django_app/views.py
everything was fine.
When running tests with coverage, it generates the .coverage file, which is in a private format and not intended to be read directly. This file contains the raw data used to produce the report for you, either on the console or in HTML format with coverage html. So normally, if the tests were run, the .coverage file should be in the directory from which you launched coverage run ./manage.py test django_app.tests.test_cbview; then, from that same directory, you can run coverage report ... and it should work.
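In short, the usual sequence is to run the tests under coverage first and only then ask for the report, from the same directory (a sketch reusing the commands from the question and the answer above):

$ coverage run ./manage.py test django_app.tests.test_cbview
$ coverage report -m django_app/views.py
$ coverage html --include=django_app/views.py   # optional HTML report, written to htmlcov/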
I've got Django 1.4. In my test.py, I've got the requisite TestCase import:
from django.test import TestCase
To isolate the issue, I've added the line:
fixtures = ['westeros']
to the default example test case, i.e.
class SimpleTest(TestCase):
    fixtures = ['westeros']

    def test_basic_addition(self):
        """
        Tests that 1 + 1 always equals 2.
        """
        self.assertEqual(1 + 1, 2)
Using django-admin.py dumpdata, I created a fixture file called "westeros" in my customers/fixtures directory, where "customers" is an app that is listed in settings.INSTALLED_APPS.
When I run the test, at any verbosity, Django simply ignores the fixture and passes the test_basic_addition test. No error, no fixture loading. It's as if the TestCase import isn't there. Any ideas on what could be wrong or how to debug this?
It's ok to omit the extension when defining fixtures as you have done, i.e.
fixtures = ['westeros']
However, the fixture file itself must have the extension that corresponds to its serializer, e.g. westeros.json, westeros.json.zip or westeros.xml for JSON, zipped JSON or XML respectively.
Where is your westeros file located?
It needs to either be in a fixtures directory inside an app or in the dir specified by FIXTURE_DIRS in your settings.py file
You can run the tests with verbosity=2 to get full output.
https://docs.djangoproject.com/en/1.0/ref/django-admin/#test
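If you go the FIXTURE_DIRS route, a minimal sketch of the setting (the path is illustrative):

# settings.py
FIXTURE_DIRS = (
    '/path/to/project/fixtures',   # e.g. the directory containing westeros.json
)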
Is your fixtures file named westeros, or does it have a file extension?
Django will fail silently on fixture loads, as you see (at least up to 1.3; I haven't used fixtures in the new 1.4 version yet). But you are not actually testing whether the fixtures are loading.
Throw in a self.assertGreater(YourModel.objects.count(), 0) or something similar to verify that the objects are actually there, or drop into a debugger and start querying some of your models.
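Put together, a quick check that the fixture really loads might look like this (a sketch; Character is a hypothetical model standing in for whatever the customers app defines):

from django.test import TestCase

from customers.models import Character   # hypothetical model, for illustration only


class WesterosFixtureTest(TestCase):
    fixtures = ['westeros']

    def test_fixture_loaded(self):
        # if Django silently skipped the fixture, the count is 0 and this fails loudly
        self.assertGreater(Character.objects.count(), 0)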
I am trying to run tests on Django with coverage. It works fine, but it doesn't detect class definitions, because they are defined before coverage is started. I have the following test runner, which I use when I compute coverage:
import sys
import os
import logging

from django.conf import settings

MAIN_TEST_RUNNER = 'django.test.simple.run_tests'

if settings.COMPUTE_COVERAGE:
    try:
        import coverage
    except ImportError:
        print "Warning: coverage module not found: test code coverage will not be computed"
    else:
        coverage.exclude('def __unicode__')
        coverage.exclude('if DEBUG')
        coverage.exclude('if settings.DEBUG')
        coverage.exclude('raise')
        coverage.erase()
        coverage.start()
        MAIN_TEST_RUNNER = 'django-test-coverage.runner.run_tests'


def run_tests(test_labels, verbosity=1, interactive=True, extra_tests=[]):
    # start coverage - if we already enable it here and disable it in django-test-coverage,
    # we get correctly computed coverage for statements executed at module import time
    test_path = MAIN_TEST_RUNNER.split('.')
    # Allow for Python 2.5 relative paths
    if len(test_path) > 1:
        test_module_name = '.'.join(test_path[:-1])
    else:
        test_module_name = '.'
    test_module = __import__(test_module_name, {}, {}, test_path[-1])
    test_runner = getattr(test_module, test_path[-1])
    failures = test_runner(test_labels, verbosity=verbosity, interactive=interactive)
    if failures:
        sys.exit(failures)
What can I do to have classes also included in the coverage? Otherwise I get quite low coverage and can't easily spot the places that really need to be covered.
The simplest thing to do is to use coverage to execute the test runner. If your runner is called "runner.py", then use:
coverage run runner.py
You can put your four exclusions into a .coveragerc file, and you'll get all of the benefits of your coverage code without having to keep any of it in the runner.
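For instance, the four coverage.exclude() calls in the runner above translate to something like this in a .coveragerc file (the entries are regular expressions; pragma: no cover is re-added because defining exclude_lines replaces the default list):

# .coveragerc -- roughly equivalent to the coverage.exclude() calls above
[report]
exclude_lines =
    pragma: no cover
    def __unicode__
    if DEBUG
    if settings\.DEBUG
    raise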
I created custom django-admin commands, but I don't know how to test them in standard Django tests.
If you're using a coverage tool, it's better to invoke the command from your test code rather than from the shell, for example:
from django.core.management import call_command
from django.test import TestCase


class CommandsTestCase(TestCase):
    def test_mycommand(self):
        "Test my custom command."
        args = []
        opts = {}
        call_command('mycommand', *args, **opts)

        # Some asserts.
From the official documentation:
"Management commands can be tested with the call_command() function. The output can be redirected into a StringIO instance."
You should make your actual command script the minimum possible, so that it just calls a function elsewhere. The function can then be tested via unit tests or doctests as normal.
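A minimal sketch of that pattern (module, command and function names here are illustrative, not taken from the question):

# myapp/utils.py
def send_reminders():
    # all of the real work lives in a plain function, so it can be unit-tested directly
    return 'sent 3 reminders'

# myapp/management/commands/send_reminders.py
from django.core.management.base import BaseCommand

from myapp.utils import send_reminders


class Command(BaseCommand):
    help = 'Thin wrapper that just delegates to myapp.utils.send_reminders.'

    def handle(self, *args, **options):
        self.stdout.write(send_reminders())

Your unit tests can then import and exercise send_reminders() directly, and a single call_command('send_reminders') test is enough to cover the wrapper.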
You can see an example on github.com; see here:
def test_command_style(self):
    out = StringIO()
    management.call_command('dance', style='Jive', stdout=out)
    self.assertEquals(out.getvalue(),
                      "I don't feel like dancing Jive.")
To add to what has already been posted here: if your django-admin command takes a file as a parameter, you could do something like this:
from django.test import TestCase
from django.core.management import call_command
from io import StringIO
import os


class CommandTestCase(TestCase):
    def test_command_import(self):
        out = StringIO()
        call_command(
            'my_command', os.path.join('path/to/file', 'my_file.txt'),
            stdout=out
        )
        self.assertIn(
            'Expected Value',
            out.getvalue()
        )
This works when your django-admin command is used like this:
$ python manage.py my_command my_file.txt
A simple alternative to parsing stdout is to make your management command exit with an error code if it doesn't run successfully, for example using sys.exit(1).
You can catch this in a test with:
with self.assertRaises(SystemExit):
    call_command('mycommand')
I agree with Daniel that the actual command script should do the minimum possible, but you can also test it directly in a Django unit test using os.popen4.
From within your unit test you can have a command like
fin, fout = os.popen4('python manage.py yourcommand')
result = fout.read()
You can then analyze the contents of result to test whether your Django command was successful.
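On Python 3, where os.popen4 no longer exists, the same idea can be written with the subprocess module (a sketch; 'yourcommand' is the placeholder name from above):

import subprocess

proc = subprocess.run(
    ['python', 'manage.py', 'yourcommand'],
    capture_output=True, text=True,
)
result = proc.stdout   # inspect this just like the popen4 output above

Note that, unlike call_command(), this spawns a separate process, so coverage tools will not see the command's code unless they are configured to measure subprocesses.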