I want to perform some exhaustive testing against one of my test cases (say, creating a document) to debug some weird behaviour I am encountering.
My brute-force approach was to fire python manage.py test myapp in a loop using Popen or os.system, but now I am trying to do it the pure unittest way:
class SimpleTest(unittest.TestCase):
    def setUp(self):
        ...
    def test_01(self):
        ...
    def tearDown(self):
        ...

def suite():
    suite = unittest.TestCase()
    suite.add(SimpleTest("setUp"))
    suite.add(SimpleTest("test_01"))
    suite.add(SimpleTest("tearDown"))
    return suite

def main():
    for i in range(n):
        suite().run("runTest")
I ran python manage.py test myapp and I got:
  File "/var/lib/system-webclient/webclient/apps/myapps/tests.py", line 46, in suite
    suite = unittest.TestCase()
  File "/usr/lib/python2.6/unittest.py", line 216, in __init__
    (self.__class__, methodName)
ValueError: no such test method in <class 'unittest.TestCase'>: runTest
I've googled the error, but I'm still clueless (I was told to add an empty runTest method, but that doesn't sound right at all...)
Well, according to Python's unittest.TestCase documentation:
The simplest TestCase subclass will simply override the runTest()
method in order to perform specific testing code
As you can see, my whole goal is to run my SimpleTest N times and keep track of passes and failures out of N.
What options do I have?
Thanks.
Tracking race conditions via unit tests is tricky. Sometimes you're better off hitting your frontend with an automated testing tool like Selenium -- unlike a unit test, the environment is the same and there's no need for extra work to ensure concurrency. Here's one way to run concurrent code in tests when there's no better option: http://www.caktusgroup.com/blog/2009/05/26/testing-django-views-for-concurrency-issues/
Just keep in mind that a concurrent test is no definite proof that you're free from race conditions -- there's no guarantee it will recreate all possible combinations of execution order among processes.
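As for the ValueError itself: unittest.TestSuite (not unittest.TestCase) is what collects tests, and setUp/tearDown run automatically around each test rather than being added as tests themselves. A rough sketch of repeating the case N times and tallying results (the test body here is just a placeholder):

import unittest

class SimpleTest(unittest.TestCase):
    def test_01(self):
        self.assertTrue(True)  # placeholder for the real checks

def suite():
    s = unittest.TestSuite()          # TestSuite, not TestCase
    s.addTest(SimpleTest("test_01"))  # add test methods by name; setUp/tearDown run automatically
    return s

def main(n=100):
    passed = failed = 0
    for _ in range(n):
        result = unittest.TestResult()
        suite().run(result)
        if result.wasSuccessful():
            passed += 1
        else:
            failed += 1
    print("passed: %d, failed: %d out of %d runs" % (passed, failed, n))

if __name__ == "__main__":
    main()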
While writing test cases in Odoo 10 custom modules, I'm using self.env['test.model'].create({..}), but this is actually committing the records to the DB instead of them being rolled back after the test cases complete. I have used both SingleTransactionCase and TransactionCase to write my test cases.
Can someone suggest to me the cleanup code I can add to prevent this from happening?
Below is my sample code:
class TestModel(SingleTransactionCase):
    def setUp(self):
        super(TestModel, self).setUp()
        self.testMod = self.env['test.model'].create({..})

    def test_form_submit(self):
        self.testMod.form_submit()
        self.assertEquals(...)
I have a test case that uses setUp and looks like this:
from django.test import TestCase, Client, TransactionTestCase
...

class TestPancakeView(TestCase):
    def setUp(self):
        self.client = Client()

        # The Year objects are empty (none here - expected)
        print(Year.objects.all())

        # When I run this test by itself, these are empty (expected).
        # When I run the whole file, this prints out a number of
        # Pancake objects, causing these tests to fail - these objects appear
        # to be persisting from previous tests
        print(Pancake.objects.all())

        # This sets up my mock/test data
        data_setup.basic_setup(self)

        # This now prints out the expected years that data_setup creates
        print(Year.objects.all())

        # This prints all the objects that I should have.
        # But if the whole file is run - it's printing objects
        # that were created in other tests
        print(Pancake.objects.all())

    [...tests and such...]
data_setup.py is a file that just creates all the appropriate test data for my tests when they need it. I'm using Factory Boy for creating the test data.
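For context, a simplified sketch of what such a Factory Boy setup typically looks like (illustrative only, not the actual file; the import path and field values are made up):

import factory
from myapp import models  # illustrative import path, not the real one

class YearFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.Year
    value = 2020  # made-up field/value

class PancakeFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.Pancake
    flavor = 'plain'
    intensity = 'low'

def basic_setup(test_case):
    # attach freshly created objects to the running test case instance
    test_case.year = YearFactory()
    test_case.pancakes = PancakeFactory.create_batch(2, flavor='blueberry', intensity='high')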
When I run TestPancakeView by itself - my tests pass as expected.
When I run my entire test_views.py file, my TestPancakeView tests fail.
If I change it to TransactionTestCase - it still fails when I run the whole file.
I'm creating the Pancake test data like this:
f.PancakeFactory.create_batch(
2,
flavor=blueberry,
intensity='high'
)
No other tests exhibit this behavior; they all act the same way whether I run them individually or as part of the entire file.
So I found the reason for this: my Pancake class came from a different application in the Django project (tied to a different DB table); because of that, when I was creating those objects initially, they lived in a different test DB and were persisting.
Solution:
Use the tearDown functionality for each test.
Each test now has the following:
def tearDown(self):
    food.Pancake.objects.all().delete()
This ensures that my Pancake objects are removed after each test - so my tests against Pancake specifically pass regardless of being run on their own or with the rest of the file.
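An equivalent option (a sketch on my part, reusing the same imports as above) is to register the cleanup with unittest's addCleanup from inside setUp, since registered cleanups run after each test even if setUp fails partway through:

class TestPancakeView(TestCase):
    def setUp(self):
        # cleanups registered here run after every test (and even if setUp
        # raises later on), so the Pancake table is always emptied
        self.addCleanup(lambda: food.Pancake.objects.all().delete())
        self.client = Client()
        data_setup.basic_setup(self)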
Is there any way to make running tests compulsory before running the server in Django? I have a project on which many people will be working, so I want to make testing compulsory, and all tests must pass before the server runs. So basically, lock the runserver command until all the tests pass successfully. This will only be in place temporarily, not for the long term.
Add the line execute_from_command_line([sys.argv[0], 'test']) before execute_from_command_line(sys.argv) in the main() function of manage.py. That can solve your problem. main() will then look like this:
def main():
    # set DJANGO_SETTINGS_MODULE and import execute_from_command_line
    # in a 'try/except' block, as in the default manage.py
    if os.environ.get('RUN_MAIN') != 'true' and len(sys.argv) > 1 and sys.argv[1] == 'runserver':
        # only for 'manage.py runserver', and only once (not again in the autoreloader child)
        execute_from_command_line([sys.argv[0], 'test'])  # run ALL the tests first
    execute_from_command_line(sys.argv)
Or you can specify the module for testing: execute_from_command_line([sys.argv[0], 'test', 'specific_module'])
Or with a file pattern:
execute_from_command_line([sys.argv[0], 'test', '--pattern=tests*.py'])
I agree with @LFDMR that this is probably a bad idea and will make your development process really inefficient. Even with test-driven development, it is perfectly sensible to use the development server, for example to figure out why your tests don't pass. I think you would be better served with a Git pre-commit or pre-push hook, or the equivalent in your version control system.
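For example, a pre-push hook can be as small as this (a sketch; it assumes manage.py sits at the repository root, and the file goes in .git/hooks/pre-push and must be executable):

#!/usr/bin/env python
# abort the push whenever the test suite exits with a non-zero status
import subprocess
import sys

sys.exit(subprocess.call([sys.executable, "manage.py", "test"]))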
That being said, here is how you can achieve what you are after:
You can override an existing management command by adding a management command of the same name to one of your apps.
So you have to create a file management/commands/runserver.py in one of your apps, which looks like this:
from django.core.management import call_command
from django.core.management.commands.runserver import Command as BaseRunserverCommand

class Command(BaseRunserverCommand):
    def handle(self, *args, **kwargs):
        call_command('test')  # calls `sys.exit(1)` on test failure
        super().handle(*args, **kwargs)
If I were a developer on your team, the first thing I would do is delete this file ;)
In my experience this would be a terrible idea.
What you should really look into is continuous integration.
With CI, whenever someone pushes something, all the tests run automatically and an email is sent to the person who pushed if anything fails.
I am facing an issue when I run the tests of my Django app with the command
python manage.py test app_name OR
python manage.py test
All the test cases that fetch data by calling a GET API seem to fail because there is no data in the response, despite it being present in the test data. The structure I have followed in my test suite is: a base class deriving from Django REST framework's APITestCase with a set_up method that creates test objects of the different models used in the APIs, and I inherit this class in my app's test_views classes for any particular API,
such as:
class BaseTest(APITestCase):
    def set_up(self):
        '''
        Create the test objects which can be accessed by the main test
        class.
        '''
        self.person1 = Person.objects.create(.......)

class SomeViewTestCase(BaseTest):
    def setUp(self):
        self.set_up()

    def test_some_api(self):
        url = '/xyz/'
        self.client.login(username='testusername3', password='testpassword3')
        response = self.client.get(url, {'person_id': self.person3.id})
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(len(response.data), 6)
So whenever I run the test as
./manage.py test abc.tests.test_views.SomeViewTestCase
it works fine, but when I run as
./manage.py test abc
In the test above, response.data has 0 entries, and similarly with the other tests within the same class the data is just not fetched, hence all the asserts fail.
How can I ensure the tests run successfully when they are run as a whole? During deployment they have to go through CI.
The versions of the packages and system configuration are as follows:
Django Version -1.6
Django Rest Framework - 3.1.1
Python -2.7
Operating System - Mac OS(Sierra)
Appreciate the help. Thanks.
Your test methods are executed in arbitrary order... and after each test, the tearDown() method takes care of rolling back to the initial state, so you have isolation between test executions.
The only part that is shared among them is your setUp() method, which is invoked each time a test runs.
This means that if the runner starts from the second test method and you only create the data you need in your first test, all the tests are going to fail apart from the posted one.
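In other words, anything every test relies on should be created in setUp(), so each test gets its own fresh copy regardless of execution order -- roughly like this (a sketch; the model, field, and import names are illustrative):

from rest_framework.test import APITestCase
from myapp.models import Person  # illustrative import path

class BaseTest(APITestCase):
    def setUp(self):
        # runs before *every* test method, so every test sees the same fixtures
        self.person1 = Person.objects.create(name='Alice')
        self.person3 = Person.objects.create(name='Carol')

class SomeViewTestCase(BaseTest):
    def test_some_api(self):
        # self.person3 exists here no matter which test runs first
        self.assertIsNotNone(self.person3.id)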
Hope it helps...
I don't understand how teardown in FactoryBoy + Django works.
I have a testcase like this:
class TestOptOutCountTestCase(TestCase):
    multi_db = True

    def setUp(self):
        TestCase.setUp(self)
        self.date = datetime.datetime.strptime('05Nov2014', '%d%b%Y')
        OptoutFactory.create(p_id=1, cdate=self.date, email='inv1#test.de', optin=1)

    def test_optouts2(self):
        report = ReportOptOutsView()
        result = report.get_optouts()
        self.assertEqual(len(result), 1)
        self.assertEqual(result[0][5], -1)
setUp is run once for all tests, correct? Now if I had a second test and needed a clean state before running it, how would I achieve this? Thanks.
If I understand you correctly, you don't need tearDown in this case, as resetting the database between tests is the default behaviour for a TestCase.
See:
At the start of each test case, before setUp() is run, Django will flush the database, returning the database to the state it was in directly after migrate was called.
...
This flush/load procedure is repeated for each test in the test case, so you can be certain that the outcome of a test will not be affected by another test, or by the order of test execution.
Or do you mean to limit the creation of instances via the OptoutFactory to certain tests?
Then you probably shouldn’t put the creation of instances into setUp.
Or you create two variants of your TestCase, one for all tests that rely on the factory and one for the ones that don't.
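For instance, a rough sketch of the two-variant idea, reusing the names from the question (the import paths are assumptions):

import datetime
from django.test import TestCase
from myapp.factories import OptoutFactory   # assumed location
from myapp.views import ReportOptOutsView   # assumed location

class OptOutWithDataTestCase(TestCase):
    """Tests that rely on the factory-created opt-out."""
    def setUp(self):
        super(OptOutWithDataTestCase, self).setUp()
        self.date = datetime.datetime.strptime('05Nov2014', '%d%b%Y')
        OptoutFactory.create(p_id=1, cdate=self.date, email='inv1#test.de', optin=1)

    def test_optouts(self):
        self.assertEqual(len(ReportOptOutsView().get_optouts()), 1)

class OptOutWithoutDataTestCase(TestCase):
    """Tests that need a clean state -- nothing is created in setUp."""
    def test_empty_state(self):
        self.assertEqual(len(ReportOptOutsView().get_optouts()), 0)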
Regarding the uses of tearDown, check this answer: Django when to use teardown method