Why does a failing test with mox fail other tests as well? - unit-testing

My issue is quite simple: I have a bunch of unit tests using pymox. When I add a new test that fails, most of the time a whole lot of others fail as well. How can I prevent that from happening?
For example, I have a simple script for which I have two unit tests:
def test_main_returnsUnknown_ifCalculator_returnsMinus1(self):
    m = mox.Mox()
    m.StubOutWithMock(check_es_insert, "getArgs")
    check_es_insert.getArgs(
        'Nagios plugin for checking the total number of documents stored in Elasticsearch'
    ).AndReturn({'critical': 7, 'warning': 5, 'address': 'myhost:1234', 'file': '/tmp/bla'})
    ################
    # some other mocking here, not relevant, I think
    ################
    m.ReplayAll()
    # now let's test
    check_es_docs.main()
    # verify and clean up
    m.UnsetStubs()
    m.VerifyAll()
    m.ResetAll()

def test_main_doesWhatPrintAndExitSays_inNormalConditions(self):
    m = mox.Mox()
    m.StubOutWithMock(check_es_insert, "getArgs")
    check_es_insert.getArgs(
        'Nagios plugin for checking the total number of documents stored in Elasticsearch'
    ).AndReturn({'critical': 7, 'warning': 5, 'address': 'myhost:1234', 'file': '/tmp/bla'})
    ################
    # some other mocking here, not relevant, I think
    ################
    m.ReplayAll()
    # now let's test
    check_es_docs.main()
    # verify and clean up
    m.UnsetStubs()
    m.VerifyAll()
    m.ResetAll()
Normally, both tests pass, but if I sneak a typo into my second test, I get this output when running the tests:
$ ./check_es_docs.test.py
FE
======================================================================
ERROR: test_main_returnsUnknown_ifCalculator_returnsMinus1 (__main__.Main)
If it can't get the current value from ES, print an error message and exit 3
----------------------------------------------------------------------
Traceback (most recent call last):
File "./check_es_docs.test.py", line 13, in test_main_returnsUnknown_ifCalculator_returnsMinus1
m.StubOutWithMock(check_es_insert,"getArgs")
File "/usr/local/lib/python2.7/dist-packages/mox-0.5.3-py2.7.egg/mox.py", line 312, in StubOutWithMock
raise TypeError('Cannot mock a MockAnything! Did you remember to '
TypeError: Cannot mock a MockAnything! Did you remember to call UnsetStubs in your previous test?
======================================================================
FAIL: test_main_doesWhatPrintAndExitSays_inNormalConditions (__main__.Main)
If getCurrent returns a positive value, main() should print the text and exit with the code Calculator.printandexit() says
----------------------------------------------------------------------
Traceback (most recent call last):
File "./check_es_docs.test.py", line 69, in test_main_doesWhatPrintAndExitSays_inNormalConditions
check_es_docs.main()
File "/home/radu/check_es_docs.py", line 25, in main
check_es_insert.printer("Total number of documents in Elasticsearch is %d | 'es_docs'=%d;%d;%d;;" % (result,result,cmdline['warning'],cmdline['critical']))
File "/usr/local/lib/python2.7/dist-packages/mox-0.5.3-py2.7.egg/mox.py", line 765, in __call__
return mock_method(*params, **named_params)
File "/usr/local/lib/python2.7/dist-packages/mox-0.5.3-py2.7.egg/mox.py", line 1002, in __call__
expected_method = self._VerifyMethodCall()
File "/usr/local/lib/python2.7/dist-packages/mox-0.5.3-py2.7.egg/mox.py", line 1060, in _VerifyMethodCall
raise UnexpectedMethodCallError(self, expected)
UnexpectedMethodCallError: Unexpected method call. unexpected:- expected:+
- printer.__call__("Total number of documents in Elasticsearch is 3 | 'es_docs'=3;5;7;;") -> None
? -
+ printer.__call__("Total nuber of documents in Elasticsearch is 3 | 'es_docs'=3;5;7;;") -> None
----------------------------------------------------------------------
Ran 2 tests in 0.002s
FAILED (failures=1, errors=1)
The first test should have passed with no error, since it wasn't changed at all. check_es_insert.getArgs() shouldn't be a MockAnything instance, and I didn't forget to call UnsetStubs. I've searched quite a lot and haven't found other people with the same problem, so I guess I'm missing something pretty obvious...
Additional info:
check_es_docs is the script I'm testing
check_es_insert is another script from which I'm importing a lot of stuff
I've tried putting UnsetStubs() after VerifyAll() with the same results
I've tried initializing the mox.Mox() object from the setUp method, and also putting the cleanup stuff in tearDown, with the same results

I would recommend putting all of your tests into test classes that extend TestCase and then adding an UnsetStubs() call in the tearDown method:
from unittest import TestCase
import mox

class MyTestCase(TestCase):
    def __init__(self, testCaseName):
        self.m = mox.Mox()
        TestCase.__init__(self, testCaseName)

    def tearDown(self):
        self.m.UnsetStubs()

    def test_main_returnsUnknown_ifCalculator_returnsMinus1(self):
        self.m.StubOutWithMock(check_es_insert, "getArgs")
        check_es_insert.getArgs(
            'Nagios plugin for checking the total number of documents stored in Elasticsearch'
        ).AndReturn({'critical': 7, 'warning': 5, 'address': 'myhost:1234', 'file': '/tmp/bla'})
        ################
        # some other mocking here, not relevant, I think
        ################
        self.m.ReplayAll()
        # now let's test
        check_es_docs.main()
        # verify and clean up
        self.m.VerifyAll()

    def test_main_doesWhatPrintAndExitSays_inNormalConditions(self):
        self.m.StubOutWithMock(check_es_insert, "getArgs")
        check_es_insert.getArgs(
            'Nagios plugin for checking the total number of documents stored in Elasticsearch'
        ).AndReturn({'critical': 7, 'warning': 5, 'address': 'myhost:1234', 'file': '/tmp/bla'})
        ################
        # some other mocking here, not relevant, I think
        ################
        self.m.ReplayAll()
        # now let's test
        check_es_docs.main()
        # verify and clean up
        self.m.VerifyAll()
        self.m.ResetAll()

You can also use mox.MoxTestBase, which sets up self.mox for you and automatically unsets stubs and calls VerifyAll() in tearDown.
class ClassTestTest(mox.MoxTestBase):
    def test(self):
        m = self.mox.CreateMockAnything()
        m.something()
        self.mox.ReplayAll()
        m.something()  # If this line is removed, the test will fail
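Applied to the tests from the question, a minimal sketch of the MoxTestBase approach might look like the following. The class name Main is taken from the test output, check_es_insert and check_es_docs are the question's own modules, and the assumption is that MoxTestBase unsets stubs and verifies expectations after every test, even a failing one:
import mox
import check_es_insert
import check_es_docs

class Main(mox.MoxTestBase):
    def test_main_returnsUnknown_ifCalculator_returnsMinus1(self):
        # self.mox is created for us in MoxTestBase.setUp()
        self.mox.StubOutWithMock(check_es_insert, "getArgs")
        check_es_insert.getArgs(
            'Nagios plugin for checking the total number of documents stored in Elasticsearch'
        ).AndReturn({'critical': 7, 'warning': 5, 'address': 'myhost:1234', 'file': '/tmp/bla'})
        self.mox.ReplayAll()
        check_es_docs.main()
        # No explicit UnsetStubs()/VerifyAll() here: the base class runs them
        # after each test, so one failing test can no longer leave
        # check_es_insert.getArgs stubbed out for the tests that follow.
Because the cleanup happens in tearDown even when a test body raises, a typo in one test no longer cascades into spurious failures in the others.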

Related

Why isn't call_command( ... stdout=f ) intercepting stdout? At my wits end

Help! I am unable to get testing for my management command to work. The command works fine when tested manually:
$ ./manage.py import_stock stock/tests/header_only.csv
Descriptions: 0 found, 0 not found, 0 not unique
StockLines: 0 found, 0 not found, 0 not unique
but not in a test. It's outputting to stdout despite call_command specifying stdout=f (f is a StringIO()). Running the test, I get
$ ./manage.py test stock/tests --keepdb
Using existing test database for alias 'default'...
System check identified no issues (0 silenced).
Descriptions: 0 found, 0 not found, 0 not unique
StockLines: 0 found, 0 not found, 0 not unique
Returned
""
F
======================================================================
FAIL: test_001_invocation (test_import_stock_mgmt_cmd.Test_010_import_stock)
make sure I've got the basic testing framework right!
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/nigel/django/sarah/silsondb/stock/tests/test_import_stock_mgmt_cmd.py",line 32, in test_001_invocation
self.assertIn('Descriptions: 0', text) # do-nothing
AssertionError: 'Descriptions: 0' not found in ''
----------------------------------------------------------------------
Ran 1 test in 0.006s
FAILED (failures=1)
Preserving test database for alias 'default'...
The test code which generates this is as follows. print(f'Returned\n"{text}"') shows that I'm getting an empty string back from do_command (which creates the StringIO() and invokes call_command). What I'm trying to intercept is written to the console instead, just as when I invoke the command directly.
import csv
import io
from django.core.management import call_command
from django.core.management.base import CommandError
from django.test import TestCase

class Test_010_import_stock(TestCase):
    def do_command(self, *args, **kwargs):
        with io.StringIO() as f:
            call_command(*args, stdout=f)
            return f.getvalue()

    def test_001_invocation(self):
        """ make sure I've got the basic testing framework right! """
        text = self.do_command('import_stock', 'stock/tests/header_only.csv')
        print(f'Returned\n"{text}"')
        print()
        self.assertIn('Descriptions: 0', text)  # do-nothing
        self.assertIn('Stocklines: 0', text)
Answering my own question: it was a silly bit of confusion in the management command itself.
I knew you shouldn't use print in a management command but should use self.stdout.write() instead.
A braino resulted in sys.stdout.write, and by sheer bad luck this particular command was importing sys, so the mistake ran without an error and wrote straight to the console. It's been one of those mornings.
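For reference, a minimal sketch of the corrected command (the command name and argument are illustrative, not the actual import_stock code): writing through self.stdout is what lets call_command(..., stdout=f) capture the output.
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Import stock records from a CSV file (illustrative sketch)."

    def add_arguments(self, parser):
        parser.add_argument('csvfile')

    def handle(self, *args, **options):
        # ... do the actual import work here ...
        # self.stdout is redirected when call_command(..., stdout=f) is used,
        # whereas sys.stdout.write() always goes straight to the console.
        self.stdout.write("Descriptions: 0 found, 0 not found, 0 not unique")
        self.stdout.write("StockLines: 0 found, 0 not found, 0 not unique")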

Testing Django Rest Framework POST returns 500 despite call working

Update
This issue was caused by me not including a token in the APIClient's header. This is resolved.
I have a standard ModelViewSet at /test-endpoint. I am trying to use APIClient to test the endpoint.
from rest_framework.test import APIClient
... # During this process, a file is uploaded to S3. Could this cause the issue? Again, no errors are thrown. I just get a 500.
self.client = APIClient()
...
sample_call = {
    "name": "test_document",
    "description": "test_document_description"
}
response = self.client.post('/test-endpoint', sample_call, format='json')
self.assertEqual(response.status_code, 201)
This call works with the parameters I set in sample_call and returns a 201. When I run the test, however, I get a 500. How can I modify the test so it gets the expected 201?
I run the tests with python src/manage.py test modulename
To rule out the obvious, I copy-pasted the sample call into Postman and ran it without issue. I believe the 500 status code comes from the fact that I'm testing the call rather than using it in a live environment.
No error messages are being thrown beyond the AssertionError:
AssertionError: 500 != 201
Full Output of testing
/home/bryant/.virtualenvs/REDACTED/lib/python3.4/site-packages/django_boto/s3/shortcuts.py:28: RemovedInDjango110Warning: Backwards compatibility for storage backends without support for the `max_length` argument in Storage.get_available_name() will be removed in Django 1.10.
s3.save(full_path, fl)
F
======================================================================
FAIL: test_create (sample.tests.SampleTestCase)
Test CREATE Document
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/bryant/api/redacted/src/sample/tests.py", line 31, in test_create
self.assertEqual(response.status_code, 201)
AssertionError: 500 != 201
----------------------------------------------------------------------
Ran 1 test in 2.673s
FAILED (failures=1)
Destroying test database for alias 'default'...
The S3 warning is expected. Otherwise, all appears normal.
To debug a failing test case in Django/DRF:
Put import pdb; pdb.set_trace() just before the assertion and inspect response.content as suggested by @Igonato, or just add a print(response.content); there is no shame in it.
Increase verbosity of your tests by adding -v 3
Use dot notation to investigate the specific test case: python src/manage.py test modulename.tests.<TestCase>.<function>
I hope these are useful to keep in mind.
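Given the update above (the 500 came from a missing auth token), here is a minimal sketch of authenticating the APIClient before the POST. It assumes TokenAuthentication with rest_framework.authtoken installed; the class name and test user are illustrative.
from django.contrib.auth.models import User
from rest_framework.authtoken.models import Token
from rest_framework.test import APIClient, APITestCase

class DocumentEndpointTest(APITestCase):
    def setUp(self):
        self.user = User.objects.create_user('tester', password='secret')
        self.token = Token.objects.create(user=self.user)
        self.client = APIClient()
        # Attach the token to every request made by this client.
        self.client.credentials(HTTP_AUTHORIZATION='Token ' + self.token.key)

    def test_create(self):
        sample_call = {
            "name": "test_document",
            "description": "test_document_description"
        }
        response = self.client.post('/test-endpoint', sample_call, format='json')
        self.assertEqual(response.status_code, 201)
If you only want to exercise the view logic, self.client.force_authenticate(user=self.user) skips token handling entirely.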

AttributeError: TestSwitch instance has no attribute 'assertTrue'

I have the following pyunit test case code where I am collecting the result of the function (True or False) and using it to drive my assertion. However, I am getting the "no attribute" error for assertTrue. What is missing here?
I am using Python 2.7.8 and PyUnit version PyUnit-1.4.1-py2.7.
The same code, when run from Eclipse (PyDev plugin) on my Mac, works fine. Only when I take it to my Linux box does it throw the error below. So to me it looks like some package incompatibility problem.
import json
import unittest

class TestSwitch(unittest.TestCase):
    def testFunction(self):
        self.assertTrue(True, "test case failed")
Below is the test suite script.
import unittest
from mysample import TestSwitch
# Create an instance of each test case.
testCase = TestSwitch('testFunction')
# Add test cases to the test suite.
testSuite = unittest.TestSuite()
testSuite.addTest(testCase)
# Execute the test suite.
testRunner = unittest.TextTestRunner(verbosity=2)
testRunner.run(testSuite)
It throws the error below.
bash-3.2$ python mysuite.py
testFunction (mysample.TestSwitch) ... ERROR
======================================================================
ERROR: testFunction (mysample.TestSwitch)
----------------------------------------------------------------------
Traceback (most recent call last):
File "workspace/pyunit/mysample.py", line 7, in testFunction
self.assertTrue(True, "test case failed")
AttributeError: TestSwitch instance has no attribute 'assertTrue'
----------------------------------------------------------------------
Ran 1 tests in 0.000s
FAILED (errors=1)
bash-3.2$
For now I've figured out a workaround for this problem: using 'assertEqual' to compare against a boolean value, and it works. I am not sure why 'assertTrue' (and, for that matter, 'assertFalse') has a problem. I did not change any package versions or anything.
The workaround code is as below.
def testFunction(self):
    res = True
    self.assertEqual(res, True, 'test case failed')
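Since the error message ("TestSwitch instance has no attribute 'assertTrue'") suggests an old standalone PyUnit is being imported instead of Python 2.7's standard-library unittest, a quick diagnostic on the Linux box might look like the sketch below. This is only a guess at the cause, not a confirmed fix.
import unittest

# Check which unittest implementation is actually imported and whether it
# provides assertTrue. If __file__ points into the PyUnit-1.4.1 installation
# rather than /usr/lib/python2.7, the standalone package is shadowing the
# standard-library module.
print(unittest.__file__)
print(hasattr(unittest.TestCase, 'assertTrue'))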

2 arguments missing but method works

Could you help me understand what is going on here? The question is about the error in the traceback; the failure itself is shown only as an illustration, and what I would like to demonstrate is that the method works.
Well, I was told that 2 positional arguments, 'view_instance' and 'address', are missing.
But the method really does receive those 2 positional arguments and runs happily to its logical end. In the interactive session below I show that I can inspect the arguments that were passed in.
Why does the error appear? Thank you in advance for your help.
ADDED LATER:
Well, this seems to be because the function name begins with 'test_'.
Without "test" it works (def anonymous_user_redirected_to_login_page(self, view_instance, address):).
/photoarchive/general/tests.py
class GeneralTest(TestCase):
    def test_anonymous_user_redirected_to_login_page(self, view_instance, address):
        pdb.set_trace()
        request = RequestFactory().get(address)
        request.user = AnonymousUser()
        response = view_instance(request)
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response['location'], '/accounts/login/')

    def test_anonymous_user_from_home_page_redirected_to_login_page(self):
        view_instance = HomePageView.as_view()
        address = '/'
        self.test_anonymous_user_redirected_to_login_page(view_instance, address)
Traceback
(photoarchive) michael@michael:~/workspace/photoarchive/photoarchive$ python manage.py test general
Creating test database for alias 'default'...
FE
======================================================================
ERROR: test_anonymous_user_redirected_to_login_page (general.tests.GeneralTest)
----------------------------------------------------------------------
TypeError: test_anonymous_user_redirected_to_login_page() missing 2 required positional arguments: 'view_instance' and 'address'
======================================================================
FAIL: test_anonymous_user_from_home_page_redirected_to_login_page (general.tests.GeneralTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/michael/workspace/photoarchive/photoarchive/general/tests.py", line 29, in test_anonymous_user_from_home_page_redirected_to_login_page
self.test_anonymous_user_redirected_to_login_page(view_instance, address)
File "/home/michael/workspace/photoarchive/photoarchive/general/tests.py", line 23, in test_anonymous_user_redirected_to_login_page
self.assertEqual(response.status_code, 302)
AssertionError: 200 != 302
----------------------------------------------------------------------
Ran 2 tests in 0.002s
FAILED (failures=1, errors=1)
Destroying test database for alias 'default'...
Interactive session:
(photoarchive) michael@michael:~/workspace/photoarchive/photoarchive$ python manage.py test general
Creating test database for alias 'default'...
> /home/michael/workspace/photoarchive/photoarchive/general/tests.py(20)test_anonymous_user_redirected_to_login_page()
-> request = RequestFactory().get(address)
(Pdb) view_instance
<function HomePageView at 0x7faa0f76fea0>
(Pdb) address
'/'
(Pdb)
The test_anonymous_user_redirected_to_login_page() method is treated by the unittest framework as a test method, because its name starts with test. The framework tries to execute it, but it does not pass any arguments to it (test methods don't normally take any arguments). However, the method requires them, hence the error.
If this method is only a helper method to be called from the other method, name it so that it doesn't start with test, e.g. _test_anonymous_user_redirected_to_login_page().
Note that the traceback is not related to this problem. The traceback simply shows where the other test method failed at an assertion. That is, the other test method runs correctly (both in unittest run and in your interactive session).
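A minimal sketch of that rename (imports taken from the question's code; the HomePageView import path is assumed):
from django.contrib.auth.models import AnonymousUser
from django.test import RequestFactory, TestCase

from general.views import HomePageView  # import path assumed

class GeneralTest(TestCase):
    # Helper: the name no longer starts with "test", so unittest will not
    # try to run it on its own without arguments.
    def _test_anonymous_user_redirected_to_login_page(self, view_instance, address):
        request = RequestFactory().get(address)
        request.user = AnonymousUser()
        response = view_instance(request)
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response['location'], '/accounts/login/')

    def test_anonymous_user_from_home_page_redirected_to_login_page(self):
        self._test_anonymous_user_redirected_to_login_page(HomePageView.as_view(), '/')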

Testing Motor calls with IOLoop

I'm making assertions in the callbacks for Motor database calls, and I'm successfully catching AssertionErrors and having them surface when running nosetests, but the AssertionErrors are surfacing in the wrong test: the tracebacks point to different files.
My unittests look generally like this:
def test_create(self):
    @self.callback
    def create_callback(result, error):
        self.assertIs(error, None)
        self.assertIsNot(result, None)

    question_db.create(QUESTION, create_callback)
    self.wait()
And the unittest.TestCase class I'm using looks like this:
class MotorTest(unittest.TestCase):
    bucket = Queue.Queue()

    # Ensure IOLoop stops to prevent blocking tests
    def callback(self, func):
        def wrapper(*args, **kwargs):
            try:
                func(*args, **kwargs)
            except Exception as e:
                self.bucket.put(traceback.format_exc())
            IOLoop.current().stop()
        return wrapper

    def wait(self):
        IOLoop.current().start()
        try:
            raise AssertionError(self.bucket.get(block=False))
        except Queue.Empty:
            pass
The errors I'm seeing:
======================================================================
FAIL: test_sync_user (app.tests.db.test_user_db.UserDBTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/----/Documents/app/app-Server/app/tests/db/test_user_db.py", line 39, in test_sync_user
self.wait()
File "/Users/----/Documents/app/app-Server/app/tests/testutils/mongo.py", line 25, in wait
raise AssertionError(self.bucket.get(block = False))
AssertionError: Traceback (most recent call last):
File "/Users/----/Documents/app/app-Server/app/tests/testutils/mongo.py", line 16, in wrapper
func(*args, **kwargs)
File "/Users/----/Documents/app/app-Server/app/tests/db/test_question_db.py", line 32, in update_callback
self.assertEqual(result["question"], "updated question?")
TypeError: 'NoneType' object has no attribute '__getitem__'
The error is reported as being in UserDBTest, but it clearly comes from test_question_db.py (which is QuestionsDbTest).
I'm having issues with nosetests and asynchronous tests in general, so if anyone has any advice on that, it'd be greatly appreciated as well.
I can't fully understand your code without an SSCCE, but I'd say you're taking an unwise approach to async testing in general.
The particular problem you face is that you don't wait for your test to complete (asynchronously) before leaving the test function, so there's work still pending in the IOLoop when you resume the loop in your next test. Use Tornado's own "testing" module -- it provides convenient methods for starting and stopping the loop, and it recreates the loop between tests so you don't experience interference like what you're reporting. Finally, it has extremely convenient means of testing coroutines.
For example:
import unittest

from tornado.testing import AsyncTestCase, gen_test
import motor

# AsyncTestCase creates a new loop for each test, avoiding interference
# between tests.
class Test(AsyncTestCase):
    def callback(self, result, error):
        # Translate from Motor callbacks' (result, error) convention to the
        # single arg expected by "stop".
        self.stop((result, error))

    def test_with_a_callback(self):
        client = motor.MotorClient()
        collection = client.test.collection
        collection.remove(callback=self.callback)
        # AsyncTestCase starts the loop, runs until "remove" calls "stop".
        self.wait()
        collection.insert({'_id': 123}, callback=self.callback)
        # Arguments passed to self.stop appear as return value of "self.wait".
        _id, error = self.wait()
        self.assertIsNone(error)
        self.assertEqual(123, _id)
        collection.count(callback=self.callback)
        cnt, error = self.wait()
        self.assertIsNone(error)
        self.assertEqual(1, cnt)

    @gen_test
    def test_with_a_coroutine(self):
        client = motor.MotorClient()
        collection = client.test.collection
        yield collection.remove()
        _id = yield collection.insert({'_id': 123})
        self.assertEqual(123, _id)
        cnt = yield collection.count()
        self.assertEqual(1, cnt)

if __name__ == '__main__':
    unittest.main()
(In this example I create a new MotorClient for each test, which is a good idea when testing applications that use Motor. Your actual application must not create a new MotorClient for each operation. For decent performance you must create one MotorClient when your application begins, and use that same client throughout the process's lifetime.)
Take a look at the testing module, and particularly the gen_test decorator:
http://tornado.readthedocs.org/en/latest/testing.html
These test conveniences take care of many details related to unittesting Tornado applications.
I gave a talk and wrote an article about testing in Tornado; there's more info here:
http://emptysqua.re/blog/eventually-correct-links/