Cause test failure from pytest autouse fixture - unit-testing

pytest allows the creation of fixtures that are automatically applied to every test in a test suite (via the autouse keyword argument). This is useful for implementing setup and teardown actions that affect every test case. More details can be found in the pytest documentation.
In theory, the same infrastructure would also be very useful for verifying post-conditions that are expected to exist after each test runs. For example, maybe a log file is created every time a test runs, and I want to make sure it exists when the test ends.
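Roughly, the check I would otherwise have to repeat at the end of every test looks like this (log_file_exists() is just a placeholder for the real verification):

def test_something():
    run_the_code_under_test()  # stand-in for the actual test body
    # post-condition I want verified after every test
    assert log_file_exists(), "Log file could not be found"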
Don't get hung up on the details, but I hope you get the basic idea. The point is that it would be tedious and repetitive to add this code to each test function, especially when autouse fixtures already provide infrastructure for applying this action to every test. Furthermore, fixtures can be packaged into plugins, so my check could be used by other packages.
The problem is that it doesn't seem to be possible to cause a test failure from a fixture. Consider the following example:
import pytest

@pytest.fixture(autouse=True)
def check_log_file():
    # Yielding here runs the test itself
    yield
    # Now check whether the log file exists (as expected)
    if not log_file_exists():
        pytest.fail("Log file could not be found")
In the case where the log file does not exist, I don't get a test failure. Instead, I get a pytest error. If there are 10 tests in my test suite, and all of them pass, but 5 of them are missing a log file, I will get 10 passes and 5 errors. My goal is to get 5 passes and 5 failures.
So the first question is: is this possible? Am I just missing something? This answer suggests to me that it is probably not possible. If that's the case, the second question is: is there another way? If the answer to that question is also "no": why not? Is it a fundamental limitation of pytest infrastructure? If not, then are there any plans to support this kind of functionality?

In pytest, a yield-ing fixture has the first half of its definition executed during setup and the latter half executed during teardown. Further, setup and teardown aren't considered part of any individual test and thus don't contribute to its failure. This is why you see your exception reported as an additional error rather than a test failure.
On a philosophical note, as (cleverly) convenient as your attempted approach might be, I would argue that it violates the spirit of test setup and teardown and thus even if you could do it, you shouldn't. The setup and teardown stages exist to support the execution of the test—not to supplement its assertions of system behavior. If the behavior is important enough to assert, the assertions are important enough to reside in the body of one or more dedicated tests.
If you're simply trying to minimize the duplication of code, I'd recommend encapsulating the assertions in a helper method, e.g., assert_log_file_cleaned_up(), which can be called from the body of the appropriate tests. This will allow the test bodies to retain their descriptive power as specifications of system behavior.
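A minimal sketch of that approach (log_file_exists() and the test body are placeholders):

def assert_log_file_cleaned_up():
    # shared post-condition check, invoked explicitly by the tests that need it
    assert log_file_exists(), "Log file could not be found"

def test_something():
    run_the_code_under_test()  # placeholder for the real test body
    assert_log_file_cleaned_up()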

AFAIK it isn't possible to tell pytest to treat errors in a particular fixture as test failures.
I also have a case where I would like to use a fixture to minimize test code duplication, but in your case pytest-dependency may be the way to go.
Moreover, test dependencies aren't a bad thing for non-unit tests. Also, be careful with autouse, because it makes tests harder to read and debug; explicit fixtures in the test function header at least give you some direction for finding the executed code.
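For reference, a minimal pytest-dependency sketch (the marker usage is that plugin's documented API; the helpers are placeholders):

import pytest

@pytest.mark.dependency()
def test_produces_log_file():
    run_the_code_under_test()  # placeholder

@pytest.mark.dependency(depends=["test_produces_log_file"])
def test_log_file_exists():
    # skipped automatically unless test_produces_log_file passed
    assert log_file_exists()   # placeholder helper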

I prefer using context managers for this purpose:
from contextlib import contextmanager

@contextmanager
def directory_that_must_be_clean_after_use():
    directory = set()
    yield directory
    assert not directory

def test_foo():
    with directory_that_must_be_clean_after_use() as directory:
        directory.add("file")
If you absolutely can't afford to add this one line to every test, it's easy enough to write this as a plugin.
Put this in your conftest.py:
import pytest

directory = set()

# register the marker so that pytest doesn't warn you about unknown markers
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "directory_must_be_clean_after_test: the name says it all")

# this is going to be run on every test
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    directory.clear()
    yield
    if item.get_closest_marker("directory_must_be_clean_after_test"):
        assert not directory
And add the corresponding marker to your tests:
# test.py
import pytest
from conftest import directory

def test_foo():
    directory.add("foo file")

@pytest.mark.directory_must_be_clean_after_test
def test_bar():
    directory.add("bar file")
Running this will give you:
fail.py::test_foo PASSED
fail.py::test_bar FAILED
...
> assert not directory
E AssertionError: assert not {'bar file'}
conftest.py:13: AssertionError
You don't have to use markers, of course, but these allow controlling the scope of the plugin. You can have the markers per-class or per-module as well.
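For instance, applying the marker to a whole module uses pytest's standard pytestmark variable:

# test.py
import pytest

# every test in this module now gets the clean-directory check
pytestmark = pytest.mark.directory_must_be_clean_after_test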

Related

Pytest test file that reads from current directory

I am trying to do a unit test with Pytest on a function of the form
def function():
    with open('file.txt', 'r') as file:
        # read information from the file
        return information_from_file
I want to make a temporary directory which I can use the function to read from. This led me to using tmpdir and tmpdir_factory; however, both of these options require that a path object is passed in as an argument, from what I can tell. In this case that isn't an option, since the function reads from the current directory rather than a directory that was passed in.
Is there a way to use Pytest in order to do a test on this kind of function?
You could use mock_open, but that would be a glass-box test breaking the encapsulation of the function. If the filename changes, the test breaks. If it changes its implementation, the test breaks.
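For illustration, a mock_open version might look roughly like this (mymodule is a hypothetical module holding the function; what the result looks like depends on the implementation):

from unittest.mock import mock_open, patch

import mymodule  # hypothetical module containing function()

def test_function_with_mocked_file():
    # glass-box: the test has to know about the hard-coded 'file.txt' and how it is read
    with patch('builtins.open', mock_open(read_data='some information')):
        result = mymodule.function()
    # the real assertion depends on what function() extracts from the file
    assert result is not None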
More importantly, it ignores what your test is telling you.
Testing is a great way to find out where your design is too rigid. If you find a function hard to test, it's probably hard to use. Hard-coded filenames are a classic example. This is an opportunity to make the code more flexible. For example, introduce a configuration object.
def function():
    with open(MyApp.config('function', 'file'), 'r') as file:
        # read information from the file
        return information_from_file
Exactly how this is implemented depends on your situation. You can use Python's built-in ConfigParser to work with the config file, and here I've gone with an application singleton object to store application-wide information.
Then you have two tests: one which checks that the default MyApp.config('function','file') is as expected, and another which sets MyApp.config('function','file') to a temp file and tests that the function reads it.
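A rough sketch of those two tests, assuming a hypothetical MyApp.set_config() setter and using pytest's tmp_path fixture (adjust to however the config object is actually implemented):

def test_default_config_points_at_expected_file():
    assert MyApp.config('function', 'file') == 'file.txt'

def test_function_reads_configured_file(tmp_path):
    info_file = tmp_path / 'info.txt'
    info_file.write_text('some information')
    MyApp.set_config('function', 'file', str(info_file))  # hypothetical setter
    # assert on whatever function() is expected to extract from the file
    assert function() == 'some information'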

How do I make Django unit tests check M2M DB constraints?

Say I have this model definition:
class Foo(Model):
    ...

class Bar(Model):
    some_m2m_field = ManyToManyField(Foo)
and this code:
bar = Bar.objects.create()
bar.some_m2m_field.set(an_id_array_with_some_invalid_pks)
When I run that normally, the last line will, as it should, throw an IntegrityError. However, if I run the same code from a django.test.TestCase, the last line will NOT throw an error. It instead will wait until the _post_teardown() phase of the test to throw the IntegrityError.
Here's a small project that demonstrates the issue: https://github.com/t-evans/m2mtest
How do I fix that? I suppose that's configurable, but I haven't been able to find it...
Follow-up question:
Ultimately, I need to handle the case when there are bad IDs being passed to the m2m_field.set() method (and I need unit tests that verify that bad IDs are being handled correctly, which is why the delayed IntegrityError in the unit test won't work).
I know I can find the bad IDs by looping over the array and hitting the DB once for each ID. Is there a more efficient way to find the bad IDs or (better) simply tell the set() method to ignore/drop the bad IDs?
TestCase wraps tests in additional atomic() blocks, compared to TransactionTestCase, so to test specific database transaction behaviour, you should use TransactionTestCase.
I believe an IntegrityError is thrown only when the transaction is committed, as that's the moment the db would find out about missing ids.
In general if you want to test for db exceptions raised during a test, you should use a TransactionTestCase and test your code using:
with self.assertRaises(IntegrityError):
    # do something that gets committed to the db
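A fuller sketch with the models and names from the question (with TransactionTestCase there is no wrapping atomic() block, so the constraint violation surfaces inside the test body):

from django.db import IntegrityError
from django.test import TransactionTestCase

from myapp.models import Bar  # myapp is a placeholder for the app in the question

class BarM2MTests(TransactionTestCase):
    def test_set_with_invalid_ids_raises(self):
        bar = Bar.objects.create()
        with self.assertRaises(IntegrityError):
            # an_id_array_with_some_invalid_pks is the array from the question
            bar.some_m2m_field.set(an_id_array_with_some_invalid_pks)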
See the answer from @dirkgroten for how to fix the unit test issue.
As for the followup question on how to more-efficiently eliminate the bad IDs, one way is as follows:
good_ids = Foo.objects.filter(id__in=an_id_array_with_some_invalid_ids).values_list('id', flat=True)
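The same query also lets you drop the bad IDs before calling set(), still with a single extra database query (a sketch using the names from the question):

requested_ids = set(an_id_array_with_some_invalid_ids)
good_ids = set(Foo.objects.filter(id__in=requested_ids).values_list('id', flat=True))
bad_ids = requested_ids - good_ids   # computed in Python, no extra query
bar.some_m2m_field.set(good_ids)     # only the IDs that actually exist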

Django TestCase with fixtures causes IntegrityError due to duplicate keys

I'm having trouble moving away from django_nose.FastFixtureTestCase to django.test.TestCase (or even the more conservative django.test.TransactionTestCase). I'm using Django 1.7.11 and I'm testing against Postgres 9.2.
I have a Testcase class that loads three fixtures files. The class contains two tests. If I run each test individually as a single run (manage test test_file:TestClass.test_name), they each work. If I run them together, (manage test test_file:TestClass), I get
IntegrityError: Problem installing fixture '<path>/data.json': Could not load <app>.<Model>(pk=1): duplicate key value violates unique constraint "<app_model_field>_49810fc21046d2e2_uniq"
To me it looks like the db isn't actually getting flushed or rolled back between tests since it only happens when I run the tests in a single run.
I've stepped through the Django code and it looks like they are getting flushed or rolled back -- depending on whether I'm trying TestCase or TransactionTestCase.
(I'm moving away from FastFixtureTestCase because of https://github.com/django-nose/django-nose/issues/220)
What else should I be looking at? This seems like it should be a simple matter and is right within what django.test.TestCase and Django.test.TransactionTestCase are designed for.
Edit:
The test class more or less looks like this:
class MyTest(django.test.TransactionTestCase):  # or django.test.TestCase
    fixtures = ['data1.json', 'data2.json', 'data3.json']

    def test1(self):
        return  # I simplified it to just this for now.

    def test2(self):
        return  # I simplified it to just this for now.
Update:
I've managed to reproduce this a couple of times with a single test, so I suspect something in the fixture loading code.
One of my basic assumptions was that my db was clean for every TestCase. Tracing into the django core code I found instances where an object (in one case django.contrib.auth.User) already existed.
I temporarily overrode _fixture_setup() to assert the db was clean prior to loading fixtures. The assertion failed.
I was able to narrow the problem down to code that was in a TestCase.setUpClass() instead of TestCase.setUp(), and so the object was leaking out of the test and conflicting with other TestCase fixtures.
What I don't completely understand is that I thought the db was dropped and recreated between TestCases -- but perhaps that is not correct.
Update: Recent versions of Django include setUpTestData(), which should be used instead of setUpClass().
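A minimal sketch of that pattern (the User object stands in for whatever the original setUpClass() created; data made in setUpTestData() lives inside the class-level transaction, so it no longer leaks into other TestCases):

import django.test
from django.contrib.auth.models import User

class MyTest(django.test.TestCase):
    fixtures = ['data1.json', 'data2.json', 'data3.json']

    @classmethod
    def setUpTestData(cls):
        # class-level data, rolled back with the class-wide transaction
        cls.user = User.objects.create_user('someone')

    def test1(self):
        self.assertTrue(self.user.pk)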

nose does not discover unit tests using load_tests

Python 2.7.1
Nose 1.1.2
I have read related questions on this, but they do not help. I have test cases that look like the ones below.
For example, in my_tests.py:
def load_tests(loader, tests, pattern):
    return unittest.TestSuite(MyTest() for scenario_name in list)
I have several such modules with a load_tests method, and I run them using unittest as follows:
test_loader = unittest.defaultTestLoader.discover('.', my_pattern_var)
test_runner = unittest.TextTestRunner()
result = test_runner.run(test_loader)
sys.exit(not result.wasSuccessful())
If I replace this with the equivalent nose code nose.main() it finds 0 tests.
Questions
How do I get nose to discover tests? WITHOUT actually losing the ability to just run my tests using python unittest. I would like to use NOSE as an addon to python unittest to get clover and coverage reports
How do I get it to run tests matching only a specific pattern?
Sorry for getting back so late. We basically did the same thing you're trying to do here for a console script, except we named all of our test modules integration_foo.py. Anyhow, the solution is straightforward: just run nose programmatically.
import re

from nose.config import Config
from nose.core import run
from nose.plugins.manager import BuiltinPluginManager

TEST_REGEX = '(?:^|[\\b_\\./-])[Ll]oad'

# Change the test match pattern
nose_config = Config()
nose_config.testMatch = re.compile(TEST_REGEX)

# Specify the use of a Plugin Manager, load plugins
nose_config.plugins = BuiltinPluginManager()
nose_config.plugins.loadPlugins()

run(config=nose_config)
So this basic option changes the regex pattern nose is looking for from all methods labeled test to all methods labeled load. However, this alone is not enough to run nose; it is also necessary to get some kind of parser object or pass a specific set of argv to nose.
If you want to pass a specific set of argv to be parsed by nose just do
run(config=nose_config, argv=["foo", "bar"])
Otherwise you can probably specify nose specific arguments at the command line and as long as you don't throw in anything funky nose shouldn't error.
Check out the nose source code at https://github.com/nose-devs/nose/tree/master/nose; it's where I got all the information I needed to write this.

Google test framework - Dependency between test cases

I am new to using the Google Test framework and am still going through a lot of material to utilize it to its full extent.
Is there any way I can dictate/specify a relation between test cases so that they can be executed conditionally? Let's say I have two tests; can I run the second test only if the first succeeds? I am not really sure if this falls under the original rule of testing 'units', but I was just wondering if it's possible.
There is no way to do it in the source. A possible solution is to use shell scripts and run the tests using filters.
Python example:
import sys
from subprocess import call

def runTest(pattern):
    # 'test' is the gtest binary; only run the tests matching the given filter
    return call(['test', '--gtest_filter=%s' % pattern])

if runTest('FirstPriorityTestPattern') == 0:
    sys.exit(runTest('SecondPriorityTestPattern'))
sys.exit(1)