pytest is running methods twice - python-2.7

I have a problem when running test methods across two different modules.
I created a suite function in a separate module and defined it as a fixture.
Both test classes use this fixture, so it acts as a one-time setup.
For each test method, I also created setup and teardown methods.
When the test methods of the first module run, the tests from the other class (second module) start running in the first class, and then run again in the second class. So the test methods of the second class run twice: once from the first class in the first module, and again from the second class in the second module.
I want each test method to run once per class or module, not twice.
Can someone help me find a solution?
PS: I use scope='session' for the suite fixture (tried scope='module', same result)!
Example (conftest.py):
import pytest

@pytest.fixture(scope="session")
def suite():
    print "start application"
    open_app()
Example (test.py):
class TestApp:
    def setup_method(self, method):
        if method.__name__ == 'test_1':
            pass
        elif method.__name__ == 'test_2':
            pass
        else:
            print "test method not found"

    def teardown_method(self, method):
        print "teardown methods"

    def test_1(self):
        pass

    def test_2(self):
        pass

    def test_3(self):
        pass

    def setup_test_3(self, testcase):
        print "this is only for the test method: test_3"

    def teardown_test_3(self):
        print "cleanup state after running test method test_3"

You can pass the parameter to your setup and teardown functions and branch on the class, method, or module name, acting accordingly.
For example:

def setup_method(self, method):
    if method.__name__ == '<name of method from the module1>':
        pass
    elif method.__name__ == '<name of method from the module2>':
        pass
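A minimal, self-contained sketch of that name-based dispatch (the method names and state values are placeholders, not from the original question):

```python
class TestDispatch:
    """Per-method setup that branches on the collected method's name."""

    def setup_method(self, method):
        # pytest passes the test method object; branch on its __name__
        if method.__name__ == 'test_1':
            self.state = 'prepared for test_1'
        elif method.__name__ == 'test_2':
            self.state = 'prepared for test_2'
        else:
            self.state = 'default preparation'

    def teardown_method(self, method):
        self.state = None

    def test_1(self):
        assert self.state == 'prepared for test_1'

    def test_2(self):
        assert self.state == 'prepared for test_2'
```

pytest calls setup_method before, and teardown_method after, every test method in the class, so a single pair of hooks can serve differing per-test preparation.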


Parametrize attributes of Django-models with multi-table inheritance in pytest-factoryboy

I am using Django and want to write a test using pytest, pytest-django, pytest-factoryboy and pytest-lazy-fixture.
I have Django-models that are using multi-table inheritance, like this:
from django.db import models

class User(models.Model):
    created = models.DateTimeField()
    active = models.BooleanField()

class Editor(User):
    pass

class Admin(User):
    pass
I also created factories for all models and registered them, such as:
import factory

class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = User

    created = ...  # some datetime
    active = factory.Faker("pybool")

class EditorFactory(UserFactory):
    class Meta:
        model = Editor

...
Now I want to test a function that can take any of User, Editor or Admin as an input and parametrize the test with all user types and variations of active and created, like this (unfortunately it doesn't work like that):
import pytest
from pytest_lazyfixture import lazy_fixture

@pytest.mark.parametrize("any_user", [lazy_fixture("user"), lazy_fixture("editor"), lazy_fixture("admin")])
@pytest.mark.parametrize("any_user__active", [True, False])
def test_some_func(any_user):
    ...  # test some stuff
However that fails with In test_some_func: function uses no argument 'any_user__active'.
Any idea how to best solve this?
I could of course do something like this, but it's not as nice:
@pytest.mark.parametrize("any_user", [lazy_fixture("user"), lazy_fixture("editor"), lazy_fixture("admin")])
@pytest.mark.parametrize("active", [True, False])
def test_some_func(any_user, active):
    any_user.active = active
    # save any_user if necessary
    ...  # test some stuff
Any better suggestions?
pytest-factoryboy is not as expressive as I'd wish in cases like this. It would be nice to call pytest_factoryboy.register with an alternate name for model fixtures — but unfortunately, even though register takes a _name parameter intended for this purpose, _name is ignored, and underscore(factory_class._meta.model.__name__) is used instead.
Thankfully, we can trick this logic into using the model name we desire:
@register
class AnyUserFactory(UserFactory):
    class Meta:
        model = type('AnyUser', (User,), {})
Essentially, we create a new subclass of User with the name AnyUser. This will cause pytest-factoryboy to create the any_user model fixture, along with any_user__active, any_user__created, etc. Now, how do we parametrize any_user to use UserFactory, EditorFactory, and AdminFactory?
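The dynamic-subclass trick is plain Python; a quick illustration with an ordinary class (no Django or factoryboy required, and the attribute here is just for demonstration):

```python
class User:
    active = True

# Create a subclass named AnyUser without a class statement;
# the first argument to type() becomes the new class's __name__.
AnyUser = type('AnyUser', (User,), {})

assert AnyUser.__name__ == 'AnyUser'   # this is the name pytest-factoryboy
assert issubclass(AnyUser, User)       # snake_cases into 'any_user'
```

Since pytest-factoryboy derives the fixture name from the model class's __name__, naming the throwaway subclass AnyUser is what yields the any_user fixture family.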
Thankfully again, model fixtures work by requesting the model_name_factory fixture with request.getfixturevalue('model_name_factory'), and not by directly referencing the @register'd factory class. The upshot is that we can simply override any_user_factory with whatever factory we wish!
@pytest.fixture(autouse=True, params=[
    lazy_fixture('user_factory'),
    lazy_fixture('editor_factory'),
    lazy_fixture('admin_factory'),
])
def any_user_factory(request):
    return request.param
NOTE: pytest seems to prune the graph of available fixtures based on the test method args, as well as any args requested by fixtures. When a fixture uses request.getfixturevalue, pytest may report being unable to find the requested fixture — even if it's clearly defined — because it was pruned. We pass autouse=True to our fixture, to force pytest into including it in the dependency graph.
Now, we can parametrize any_user__active directly on our test, and any_user will be a User, Editor, and Admin with each value of active:
@pytest.mark.parametrize('any_user__active', [True, False])
def test_some_func(any_user):
    print(f'{type(any_user)=} {any_user.active=}')
Which outputs:
py.test test.py -sq
type(any_user)=<class 'test.User'> any_user.active=True
.type(any_user)=<class 'test.User'> any_user.active=False
.type(any_user)=<class 'test.Editor'> any_user.active=True
.type(any_user)=<class 'test.Editor'> any_user.active=False
.type(any_user)=<class 'test.Admin'> any_user.active=True
.type(any_user)=<class 'test.Admin'> any_user.active=False
.
6 passed in 0.04s
Also, if @pytest.fixture with request.param feels a bit verbose, I might suggest using pytest-lambda (disclaimer: I am the author). Sometimes, @pytest.mark.parametrize can be limiting, or can require including extra arg names in the test method that go unused; in those cases, it can be convenient to declare new fixtures without writing the full fixture method.
from pytest_lambda import lambda_fixture

any_user_factory = lambda_fixture(autouse=True, params=[
    lazy_fixture('user_factory'),
    lazy_fixture('editor_factory'),
    lazy_fixture('admin_factory'),
])

@pytest.mark.parametrize('any_user__active', [True, False])
def test_some_func(any_user):
    print(f'{type(any_user)=} {any_user.active=}')
If including autouse=True on any_user_factory is bothersome, because it causes all other tests to be parametrized, we have to find some other way to include any_user_factory in the pytest dependency graph.
Unfortunately, the first approach I tried caused errors. I tried to override the any_user fixture, requesting both the original any_user fixture and our overridden any_user_factory, like this:
@pytest.fixture
def any_user(any_user, any_user_factory):
    return any_user
Alas, pytest didn't like that:
___________________________ ERROR collecting test.py ___________________________
In test_some_func: function uses no argument 'any_user__active'
Fortunately, pytest-lambda provides a decorator to wrap a fixture function, so the arguments of both the decorated method and the wrapped fixture are preserved. This allows us to explicitly add any_user_factory to the dependency graph
from pytest_lambda import wrap_fixture

@pytest.fixture(params=[  # NOTE: no autouse
    lazy_fixture('user_factory'),
    lazy_fixture('editor_factory'),
    lazy_fixture('admin_factory'),
])
def any_user_factory(request):
    return request.param

@pytest.fixture
@wrap_fixture(any_user)
def any_user(any_user_factory, wrapped):
    return wrapped()  # calls the original any_user() fixture method
NOTE: @wrap_fixture(any_user) directly references the any_user fixture method defined by pytest_factoryboy when calling @register. It'll appear as an unresolved reference in most static code checkers / IDEs; but as long as it appears after class AnyUserFactory and in the same module, it will work.
Now, only tests which request any_user will hit any_user_factory and receive its parametrization.
@pytest.mark.parametrize('any_user__active', [True, False])
def test_some_func(any_user):
    print(f'{type(any_user)=} {any_user.active=}')

def test_some_other_func():
    print('some_other_func')
Output:
py.test test.py -sq
type(any_user)=<class 'test.User'> any_user.active=True
.type(any_user)=<class 'test.User'> any_user.active=False
.type(any_user)=<class 'test.Editor'> any_user.active=True
.type(any_user)=<class 'test.Editor'> any_user.active=False
.type(any_user)=<class 'test.Admin'> any_user.active=True
.type(any_user)=<class 'test.Admin'> any_user.active=False
.some_other_func
.
7 passed in 0.06 seconds
However that fails with: In test_some_func: function uses no argument 'any_user__active'.
This is because you haven't passed any_user__active as an argument on the test function, so change your test to:

def test_some_func(any_user__active, any_user):
An example would be as below:

@pytest.mark.parametrize("days, expected", [
    (-1, 0),
    (1, 1),
    (0, 0),
    (365, 365),
])
def test_subscription_to_for_user(days, expected):
    ...

Use pytest fixture in a function decorator

I want to build a decorator for my test functions which has several uses. One of them is helping to add properties to the generated junitxml.
I know there's a built-in pytest fixture for this called record_property that does exactly that. How can I use this fixture inside my decorator?
def my_decorator(arg1):
    def test_decorator(func):
        def func_wrapper():
            # hopefully somehow use record_property with arg1 here
            # do some other logic here
            return func()
        return func_wrapper
    return test_decorator

@my_decorator('some_argument')
def test_this():
    pass  # do actual assertions etc.
I know I can pass the fixture directly into every test function and use it in the tests, but I have a lot of tests and it seems extremely redundant to do this.
Also, I know I can use conftest.py and create a custom marker and call it in the decorator, but I have a lot of conftest.py files and I don't manage all of them alone so I can't enforce it.
Lastly, trying to import the fixture directly into my decorator module and then using it results in an error - so that's a no-go as well.
Thanks for the help
It's a bit late, but I came across the same problem in our code base. I did find a solution, but it is rather hacky, so I can't guarantee that it works with older versions or will keep working in the future.
Hence I asked if there is a better solution. You can check it out here: How to use pytest fixtures in a decorator without having it as argument on the decorated function
The idea is to basically register the test functions which are decorated and then trick pytest into thinking they would require the fixture in their argument list:
import functools
from inspect import signature
from typing import Any

import pytest

class RegisterTestData:
    # global testdata registry
    testdata_identifier_map = {}  # Dict[str, List[str]]

    def __init__(self, testdata_identifier, direct_import=True):
        self.testdata_identifier = testdata_identifier
        self.direct_import = direct_import
        self._always_pass_my_import_fixture = False

    def __call__(self, func):
        if func.__name__ in RegisterTestData.testdata_identifier_map:
            RegisterTestData.testdata_identifier_map[func.__name__].append(self.testdata_identifier)
        else:
            RegisterTestData.testdata_identifier_map[func.__name__] = [self.testdata_identifier]

        # We need to know if we decorate the original function, or if it was already
        # decorated with another RegisterTestData decorator. This is necessary to
        # determine if the direct_import fixture needs to be passed down or not
        if getattr(func, "_decorated_with_register_testdata", False):
            self._always_pass_my_import_fixture = True
        setattr(func, "_decorated_with_register_testdata", True)

        @functools.wraps(func)
        @pytest.mark.usefixtures("my_import_fixture")  # register the fixture to the test in case it doesn't have it as argument
        def wrapper(*args: Any, my_import_fixture, **kwargs: Any):
            # Because of the signature of the wrapper, my_import_fixture is not part
            # of the kwargs which is passed to the decorated function. In case the
            # decorated function has my_import_fixture in the signature we need to pack
            # it back into the **kwargs. This is always and especially true for the
            # wrapper itself even if the decorated function does not have
            # my_import_fixture in its signature
            if self._always_pass_my_import_fixture or any(
                "my_import_fixture" in p.name for p in signature(func).parameters.values()
            ):
                kwargs["my_import_fixture"] = my_import_fixture
            if self.direct_import:
                my_import_fixture.import_all()
            return func(*args, **kwargs)

        return wrapper
from typing import List

from pytest import Config, Item

def pytest_collection_modifyitems(config: Config, items: List[Item]) -> None:
    for item in items:
        if item.name in RegisterTestData.testdata_identifier_map and "my_import_fixture" not in item._fixtureinfo.argnames:
            # Hack to trick pytest into thinking my_import_fixture is part of the
            # argument list of the original function. Only works because of
            # @pytest.mark.usefixtures("my_import_fixture") in the decorator
            item._fixtureinfo.argnames = item._fixtureinfo.argnames + ("my_import_fixture",)
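Stripped of the pytest plumbing, the registration half of that decorator is just a class-level dict keyed by the decorated function's name. A simplified, hypothetical sketch (names here are illustrative, not from the original answer):

```python
import functools

class RegisterData:
    """Simplified stand-in for RegisterTestData: records which identifiers
    each decorated function was registered with."""
    registry = {}  # maps function name -> list of identifiers

    def __init__(self, identifier):
        self.identifier = identifier

    def __call__(self, func):
        # functools.wraps preserves func.__name__, so stacked decorators
        # all register under the same key
        RegisterData.registry.setdefault(func.__name__, []).append(self.identifier)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        return wrapper

@RegisterData("dataset_a")
@RegisterData("dataset_b")
def test_example():
    return "ran"
```

Note the stacking order: decorators apply bottom-up, so "dataset_b" is registered first, then "dataset_a"; both land under 'test_example' because wraps keeps the name intact.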

how to mock a method decorated with Python Flask route()

I need to unit test whether a method decorated by a Flask route() gets called or not.
I'd like to do this without modifying the original code under test, if possible, so mocking the method would suit my requirements perfectly.
Hence I am asking this specific question about how to mock a decorated request method (I want to stress this to try to avoid people wasting their time with less specific answers)...
Sample application jflask.py:
from flask import Flask
app = Flask(__name__)
app.config.from_object(__name__)
#app.route('/hello') # This method represents the code under test.
def hello(): # I want to assert that this method gets
return 'Hello, World' # called without modifying this code.
if __name__ == "__main__":
app.run()
In the unit test I'm using #patch() to mock the method so I can assert it was called, but the assertion fails. I.e. the mock method doesn't get called, when I expect it to.
Sample unit test test_hello.py:
import unittest
import jflask
from unittest.mock import patch
class jTest(unittest.TestCase):
def setUp(self):
#jflask.app.testing = True
self.app = jflask.app.test_client()
#patch('jflask.hello') # mock the hello() method
def test_hello(self, mock_method):
rv = self.app.get('/hello')
mock_method.assert_called() # this assertion fails
What am I doing wrong ?
Background
Some background information about the actual behaviour I'm trying to test
(since the above is just a condensed test case, and may not seem entirely sane by itself).
In the actual code I am unit testing, there is a before_request() handler
installed for the app. Flask calls this before each request is handled, and in
certain situations the handler is designed to return a response value, which
causes Flask to stop processing the request (this application uses the feature to centrally validate request parameters), so the usual routed request handler will (deliberately) not get called.
My unit tests need to assert that request processing gets stopped
or continues, appropriately depending on the situation.
Hence, my test needs to mock the real request handler and assert whether
it was called or not.
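The reason the patch in the question never takes effect can be reproduced without Flask: a routing decorator stores a reference to the original function at import time, so later replacing the module-level name does not affect dispatch. A Flask-free sketch (the `views` namespace stands in for the jflask module; all names here are illustrative):

```python
import types
from unittest import mock

routes = {}  # stand-in for Flask's routing table

def route(path):
    def decorator(func):
        routes[path] = func  # reference captured at decoration time
        return func
    return decorator

@route('/hello')
def hello():
    return 'Hello, World'

# module-attribute analogue of jflask.hello
views = types.SimpleNamespace(hello=hello)

def dispatch(path):
    return routes[path]()  # a "request" reaches the handler via the table

# Patching the attribute replaces only the name views.hello,
# not the reference already stored in routes:
with mock.patch.object(views, 'hello') as mock_hello:
    assert dispatch('/hello') == 'Hello, World'  # real function still runs
    assert not mock_hello.called                 # the mock never sees the call
```

With Flask itself, the analogous registry is app.view_functions, which maps endpoint names to view functions; replacing an entry there, rather than the module attribute, is one way to intercept dispatch in a test.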
This is a little hacky, but you could inject a logger through a default argument and pass a mock in the test:

@app.route('/hello')
def hello(logger=None):
    logger = logger or app.logger
    logger.info('hello was called')
    return 'Hello, World'

def test_hello(self):
    logger = MagicMock()
    jflask.hello(logger=logger)
    self.assertTrue(logger.info.called)

Note that the test calls the view function directly rather than going through the test client, since the test client has no way to pass the mock in.
from functools import wraps
import logging
from datetime import datetime

logging.basicConfig(filename=datetime.now().strftime('%d_%m_%Y.log'), level=logging.INFO)

def logger_required(f):
    @wraps(f)
    def decorated(*args, **kwargs):
        logging.info(f.__name__ + ' was called')
        return f(*args, **kwargs)
    return decorated

@app.route('/hello')
@logger_required
def hello():  # I want to assert that this gets called
    return 'Hello, World'

Mocking Celery `self.request` attribute for bound tasks when called directly

I have a task foobar:
@app.task(bind=True)
def foobar(self, owner, a, b):
    if already_working(owner):  # check if a foobar task is already running for owner.
        register_myself(self.request.id, owner)  # add myself in the DB.
    return a + b
How can I mock the self.request.id attribute? I am already patching everything and calling directly the task rather than using .delay/.apply_async, but the value of self.request.id seems to be None (as I am doing real interactions with DB, it is making the test fail, etc…).
For the reference, I'm using Django as a framework, but I think that this problem is just the same, no matter the environment you're using.
Disclaimer: I do not think this is documented anywhere, and this answer might be implementation-dependent.
Celery wraps its tasks in celery.Task instances; I do not know whether it swaps the celery.Task.run method with the user task function or something similar.
But when you call a task directly, you call __call__, and it pushes a context which contains the task ID, etc.
So the idea is to bypass __call__ and Celery's usual workings:
1. Push a controlled task ID: foobar.push_request(id=1), for example.
2. Call the run method: foobar.run(*args, **kwargs).
Example:
@app.task(bind=True)
def foobar(self, name):
    print(name)
    return foobar.utils.polling(self.request.id)

@patch('foobar.utils.polling')
def test_foobar(mock_polling):
    foobar.push_request(id=1)
    mock_polling.return_value = "done"
    assert foobar.run("test") == "done"
    mock_polling.assert_called_once_with(1)
You can call the task synchronously using:
task = foobar.s(<args>).apply()
This will assign a unique task ID, so the value will not be None and your code will run. Then you can check the results as part of your test.
There is probably a way to do this with patch, but I could not work out a way to assign a property. The most straightforward way is to just mock self.
tasks.py:
@app.task(name='my_task')
def my_task(self, *args, **kwargs):
    ...  # do some thing
test_tasks.py:
from mock import Mock

def test_my_task():
    self = Mock()
    self.request.id = 'ci_test'
    my_task(self)

Nose @with_setup not working

I have a situation where, for some of the tests, I need a different setup method than the one I have defined for all of them, and for this I thought to use the @with_setup decorator from nose.
However, this doesn't seem to be working.
code:
import unittest
from nose.tools.nontrivial import with_setup

__author__ = 'gaurang_shah1'

class Demo(unittest.TestCase):
    def setup_func(self):
        print "setup_func"

    def teardown_func(self):
        print "teardown function"

    def setUp(self):
        print "setup"

    @with_setup(setup_func, teardown_func)
    def test_setup(self):
        print "test setup"
I am expecting the following output:
setup_func
test setup
teardown_func
However, I am getting the following output. Is there anything wrong I am doing here?
setup
test setup
You are constructing a unittest subclass, and as such it will always use the unittest setUp and tearDown methods for the test. As described in the documentation:
Note that with_setup is useful only for test functions, not for test
methods or inside of TestCase subclasses.
If you want to use @with_setup, drop the class altogether:
from nose.tools.nontrivial import with_setup

def setup_func():
    print "setup_func"

def teardown_func():
    print "teardown function"

@with_setup(setup_func, teardown_func)
def test_something():
    print "test"
Or better yet, create another unittest class that does your custom setUp function.
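That second suggestion can be sketched with plain unittest (class and attribute names here are hypothetical):

```python
import unittest

class CustomSetupTestCase(unittest.TestCase):
    """Base class whose setUp replaces the default one for a subset of tests."""

    def setUp(self):
        # hypothetical shared state prepared differently from the main suite
        self.resource = "prepared by custom setUp"

    def tearDown(self):
        self.resource = None

class TestWithCustomSetup(CustomSetupTestCase):
    def test_uses_custom_setup(self):
        # inherits the custom setUp, not the one used by the rest of the suite
        self.assertEqual(self.resource, "prepared by custom setUp")
```

Tests that need the alternate fixture subclass CustomSetupTestCase, while the rest keep subclassing the original TestCase with its own setUp.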