How to pass a Django mock instance to a class method?

The Mock testing library is the one Django topic I just can't seem to wrap my head around. For example, in the following code, why don't the mock User instances that I create in my unit test appear in the User queryset that I query in the 'get_user_ids' method? If I halt the test in the 'get_user_ids' method via the debug call and do "User.objects.all()", there's nothing in the User queryset and the test fails. Am I not creating three mock User instances that will be queried by the UserProxy's static method?
I'm using Django 1.6 and Postgres 9.3 and running the test with the command "python manage.py test -s apps.profile.tests.model_tests:TestUserProxy".
Thanks!
# apps/profile/models.py
import logging

from django.contrib.auth.models import User
from django.core.exceptions import ObjectDoesNotExist

logger = logging.getLogger(__name__)

class UserProxy(User):
    class Meta:
        proxy = True

    @staticmethod
    def get_user_ids(usernames):
        debug()
        user_ids = []
        for name in usernames:
            try:
                u = User.objects.get(username__exact=name)
                user_ids.append(u.id)
            except ObjectDoesNotExist:
                logger.error("We were unable to find '%s' in a list of usernames." % name)
        return user_ids
# apps/profile/tests/model_tests.py
from django.test import TestCase
from django.contrib.auth.models import User
from mock import Mock

from apps.profile.models import UserProxy

class TestUserProxy(TestCase):

    def test_get_user_ids(self):
        u1 = Mock(spec=User)
        u1.id = 1
        u1.username = 'user1'
        u2 = Mock(spec=User)
        u2.id = 2
        u2.username = 'user2'
        u3 = Mock(spec=User)
        u3.id = 3
        u3.username = 'user3'
        usernames = [u1.username, u2.username, u3.username]
        expected = [u1.id, u2.id, u3.id]
        actual = UserProxy.get_user_ids(usernames)
        self.assertEqual(expected, actual)

Mocking is awesome for testing and can lead to very clean tests; however, it suffers a little from (a) being a bit fiddly to get one's head around when starting out, and (b) often requiring some effort to set up mock objects and have them injected/used in the correct places.
The mock objects you are creating for the users are objects that look like a Django User model object, but they are not actual model objects, and therefore do not get put into the database.
To get your test working, you have two options, depending on what kind of test you want to write.
Unit Test - Mock the data returned from the database
The first option is to get this working as a unit test, i.e. testing the get_user_ids method in isolation from the database layer. To do this, you would need to mock the call to User.objects.get(username__exact=name) so that it returns the three mock objects you created in your test. This is the more correct approach (as it is better to test units of code in isolation), but it involves more setup work than the alternative below.
One way to achieve this is to first separate the user lookup out into its own function in apps/profile/models.py:
def get_user_by_name(name):
    return User.objects.get(username__exact=name)
This would need to be called in your function, by replacing the call to User.objects.get(username__exact=name) with get_user_by_name(name). You can then modify your test to patch the function like so:
from django.test import TestCase
from django.contrib.auth.models import User
from mock import Mock, patch

from apps.profile.models import UserProxy

class TestUserProxy(TestCase):

    @patch('apps.profile.models.get_user_by_name')
    def test_get_user_ids(self, mock_get_user_by_name):
        u1 = Mock(spec=User)
        u1.id = 1
        u1.username = 'user1'
        u2 = Mock(spec=User)
        u2.id = 2
        u2.username = 'user2'
        u3 = Mock(spec=User)
        u3.id = 3
        u3.username = 'user3'
        # Here is where we wire up the mocking - the patched function will
        # return u1, u2 and u3 on its first, second and third calls.
        # (side_effect, rather than return_value, makes each call return
        # one user, matching the real function's behaviour.)
        mock_get_user_by_name.side_effect = [u1, u2, u3]
        usernames = [u1.username, u2.username, u3.username]
        expected = [u1.id, u2.id, u3.id]
        actual = UserProxy.get_user_ids(usernames)
        self.assertEqual(expected, actual)
Integration Test - Create real user objects
The second approach is to modify this to be an integration test, i.e. one that tests both this unit of code and its interaction with the database. This is a little less clean, in that your tests on this method are now exposed to the chance of failing because of problems in a different unit of code (i.e. the Django code that interacts with the database). However, it makes the setup of the test a lot simpler, and pragmatically it may be the right approach for you.
To do this, simply remove the mocks you have created and create actual users in the database as part of your test.
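For example, a minimal sketch of the integration-test version (same assertions as the original test, but backed by real rows; Django's TestCase rolls everything back after each test):
from django.contrib.auth.models import User
from django.test import TestCase

from apps.profile.models import UserProxy

class TestUserProxy(TestCase):

    def test_get_user_ids(self):
        # Real rows in the test database, visible to User.objects.get().
        u1 = User.objects.create_user(username='user1')
        u2 = User.objects.create_user(username='user2')
        u3 = User.objects.create_user(username='user3')
        usernames = [u1.username, u2.username, u3.username]
        expected = [u1.id, u2.id, u3.id]
        self.assertEqual(expected, UserProxy.get_user_ids(usernames))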

Related

create users before running tests DRF

I want to run Django tests, but I want to create some users before the tests run. The users' usernames should be attributes of the class so they can be shared across all tests, something like this:
class DoSomeTests(TestCase):
    def setup_createusers(self):
        self.usr1 = create_user1()
        self.usr2 = create_user2()
        self.usr3 = create_user3()

    def test_number_one(self):
        ...  # use self.usr1/2/3

    def test_number_two(self):
        ...  # use self.usr1/2/3

    def test_number_three(self):
        ...  # use self.usr1/2/3
How can I do this? Every time I tried, the tests didn't recognize the attributes on self. I've tried using setUpClass and setUp, but nothing worked.
Generally (and personally), since setUpTestData was introduced I use that one, but you can also use setUp, depending on your approach and what you need.
In order to use setUpTestData you need to declare it with the classmethod decorator and work with cls instead of self, since you are setting up data for the whole TestCase, something like:
class TestViews(TestCase):

    @classmethod
    def setUpTestData(cls):
        cls.user1 = User.objects.create_user(........)
        cls.user2 = User.objects.create_user(........)
        cls.user3 = User.objects.create_user(........)
Then in your tests, in order to access (and log in) each user you can use this:
def test_number_one(self):
    test_user = self.user1
    self.client.force_login(test_user)
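setUpTestData runs once per TestCase class, which is why it is fast. If you prefer the setUp route mentioned above, it runs before every single test method instead; a sketch (keeping the same placeholder arguments as above):
class TestViews(TestCase):

    def setUp(self):
        # Recreated before every test method.
        self.user1 = User.objects.create_user(........)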

Is it considered good practice using too many factories in pytest?

I am trying to write tests for a Django/DjangoREST project. I have decided to use pytest. I don't have much experience writing tests for Django projects specifically, so I am confused now.
Here is an example:
@pytest.mark.django_db
def test_some_view(
    api_client,
    simple_user,
    model1_factory,
    model2_factory,
    ...  # many other model factories
    modelN_factory
):
    # ...
    # creating here other objects that really depend on each other
    # ...
    model2_obj = ...  # model2 object in turn depends on model3, model4... and so on
    model1_objs = []
    for i in range(10):
        model1_objs.append(model1_factory(some_field=100, some_model2_rel=model2_obj))
    assert len(model1_objs) == 1, "Created items with duplicate `some_field`"
As you can see, I have too many factories to be used in one test. But looking at my model structure right now, I can't think of a better way. Is it OK to use so many factories for one test? Or should I be looking for issues in my tables' relations? Any help is appreciated. Thanks in advance.
The main goal of factory_boy is getting rid of fixtures; its typical use case is:
1. Design your Factory classes, which are basically recipes for getting a "realistic" object instance.
2. In your test, call only the factories you need, specifying just the parameters for that test case.
As I understand it, pytest fixtures are intended for "setting up the test environment": booting the database, mocking an external service, etc.; creating objects inside the database isn't a good fit for them.
The way I'd write your code would be the following:
# factories.py
import datetime

import factory
import factory.fuzzy

from . import models

class DivisionFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.Division

    name = factory.Faker('company')

class EmployeeFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.Employee

    username = factory.Faker('user_name')
    name = factory.Faker('name')
    employee_id = factory.Sequence(lambda n: 'EM199%06d' % n)
    division = factory.SubFactory(DivisionFactory)
    role = factory.fuzzy.FuzzyChoice(models.Employee.ROLES)
    hired_on = factory.fuzzy.FuzzyDate(
        start_date=datetime.date.today() - datetime.timedelta(days=100),
        end_date=datetime.date.today() - datetime.timedelta(days=10),
    )
We have a factory for an employee, and one for a division - and each employee gets assigned to a division.
Every mandatory field is provided; if we need to make specific factories for some object profiles, this can be added through either subclassing or using traits.
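For instance, a trait might be added to the EmployeeFactory above like this sketch (the manager flag and role value are hypothetical):
class EmployeeFactory(factory.django.DjangoModelFactory):
    # ... fields as above ...

    class Params:
        # Opt-in profile, enabled with EmployeeFactory(manager=True)
        manager = factory.Trait(
            role='manager',  # hypothetical role value
        )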
We can now write our tests, passing only the details required for the test:
# tests.py
import datetime

import pytest

from . import factories

@pytest.mark.django_db
def test_get_new_hire(api_client):
    employee = factories.EmployeeFactory(
        hired_on=datetime.date.today(),
        division__name="Finance",
    )
    data = api_client.get(f'/employees/{employee.username}')
    assert data['division'] == "Finance"
    assert data['orientation_status'] == 'pending'
As a side note, wouldn't it make more sense to use Django's test runner directly? It's more finely tuned for Django internals: each test can be natively wrapped in a sub-transaction for performance, the test client provides helpers for in-depth introspection of view results, etc.
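For comparison, a rough sketch of the same test written against Django's test runner (assuming the endpoint returns JSON):
import datetime

from django.test import TestCase

from . import factories

class EmployeeAPITest(TestCase):
    def test_get_new_hire(self):
        employee = factories.EmployeeFactory(
            hired_on=datetime.date.today(),
            division__name="Finance",
        )
        # The built-in test client wraps each test in a transaction and
        # exposes the full response for introspection.
        response = self.client.get(f'/employees/{employee.username}')
        self.assertEqual(response.json()['division'], "Finance")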

is there a way to override model methods in django?

I have a model like this:
import uuid

import requests
from django.db import models

class Car(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4)
    create_date = models.DateTimeField('date added', auto_now_add=True)
    modify_date = models.DateTimeField('date modified', auto_now=True)
    ...

    def last_tracked_location(self):
        ...
        try:
            url = 'myurl'
            return requests.get(url).json()['location']
        except:
            return False
This method gets called later for the admin panel. It requests something from an API and then returns either the location or False.
In testing mode the other API doesn't exist, so the request delays all the tests until it times out and then returns False.
Is there a way to override this? I checked the docs but could only find out how to override the settings.
Another idea I had was to check inside the method whether it's being called in testing mode and then just never enter the try block. But I don't think that's a clean way.
UPDATE
I am calling the tests like so:
python3 manage.py test --settings=app.settings_test
You can write your tests so that they mock the response of the API. The unittest mock module is a good starting point and part of the Python standard library since Python 3.3.
I don't have access to your full code but here is an example to get you started:
from unittest import mock

from django.test import TestCase

from .models import Car

@mock.patch('yourapp.models.Car.last_tracked_location')
class CarTestCase(TestCase):
    def test_get_last_tracked_location(self, mock_last_tracked_location):
        mock_last_tracked_location.return_value = {'location': 'Paris'}
        car = Car.objects.create()
        response = car.last_tracked_location()
        assert response['location'] == 'Paris'
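Note that patching last_tracked_location itself means the test never executes the method body. If you want to exercise the method's own logic, you can patch requests.get where the model uses it instead; a sketch, assuming the module path yourapp.models:
from unittest import mock

from django.test import TestCase

from .models import Car

class CarLocationTestCase(TestCase):

    @mock.patch('yourapp.models.requests.get')
    def test_last_tracked_location_parses_response(self, mock_get):
        # The mocked HTTP response yields our fake JSON payload.
        mock_get.return_value.json.return_value = {'location': 'Paris'}
        car = Car.objects.create()
        assert car.last_tracked_location() == 'Paris'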
Checking DEBUG directly in the method is not very clean, in my opinion.
Maybe it would be better for you to write a small API class for making this request?
Then you can either write separate logic for this class to run in 'testing mode' (not sure I understand correctly what you mean by that :) a test environment?) or just mock it in your tests.
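A sketch of that idea, with a hypothetical TrackingClient class as the single place that talks to the external service, so tests only ever need to patch one spot:
# yourapp/tracking.py (hypothetical module)
import requests

class TrackingClient:
    def __init__(self, base_url, timeout=5):
        self.base_url = base_url
        self.timeout = timeout

    def last_location(self, car_id):
        try:
            response = requests.get('%s/cars/%s' % (self.base_url, car_id),
                                    timeout=self.timeout)
            return response.json()['location']
        except (requests.RequestException, KeyError, ValueError):
            return False
Only this client then needs an integration test against the real service; everything else can mock TrackingClient.last_location.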
Here you can read more about testing third-party APIs with mocks:
https://realpython.com/testing-third-party-apis-with-mocks/

Same hypothesis test for different django models

I want to use hypothesis to test a tool we've written to create avro schema from Django models. Writing tests for a single model is simple enough using the django extra:
from avro.io import AvroTypeException
from hypothesis import given
from hypothesis.extra.django.models import models as hypothetical
from my_code import models
@given(hypothetical(models.Foo))
def test_amodel_schema(self, amodel):
    """Test a model through avro_utils.AvroSchema"""
    # Get the already-created schema for the current model:
    schema = (s for m, s in SCHEMA if m == amodel.model_name)
    for schemata in schema:
        error = None
        try:
            schemata.add_django_object(amodel)
        except AvroTypeException as error:
            pass
        assert error is None
...but if I were to write tests for every model that can be avro-schema-ified, they would be exactly the same except for the argument to the given decorator. I can get all the models I'm interested in testing with ContentTypeCache.list_models(), which returns a dictionary of schema_name: model (yes, I know, it's not a list). But how can I generate code like
for schema_name, model in ContentTypeCache.list_models().items():
    @given(hypothetical(model))
    def test_this_schema(self, amodel):
        # Same logic as above
I've considered basically dynamically generating each test method and directly attaching it to globals, but that sounds awfully hard to understand later. How can I write the same basic parameter tests for different django models with the least confusing dynamic programming possible?
You could write it as a single test using one_of:
import hypothesis.strategies as st

@given(st.one_of([hypothetical(model) for model in ContentTypeCache.list_models().values()]))
def test_this_schema(self, amodel):
    # Same logic as above
You might want to up the number of examples run in this case using something like @settings(max_examples=settings.default.max_examples * len(ContentTypeCache.list_models())) so that it runs the same number of examples as N separate tests would.
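Putting the two together, a sketch (reusing hypothetical and ContentTypeCache from the question):
import hypothesis.strategies as st
from hypothesis import given, settings

@settings(max_examples=settings.default.max_examples * len(ContentTypeCache.list_models()))
@given(st.one_of([hypothetical(model) for model in ContentTypeCache.list_models().values()]))
def test_this_schema(self, amodel):
    # Same assertions as the single-model test above
    ...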
I would usually solve this kind of problem by parametrising the test, and drawing from the strategy internally:
@pytest.mark.parametrize('model_type', list(ContentTypeCache.list_models().values()))
@given(data=st.data())
def test_amodel_schema(self, model_type, data):
    amodel = data.draw(hypothetical(model_type))
    ...

Model instance fixtures not persisted on the database

I have a test class with two methods, and want to share a saved model instance between both methods.
My fixtures:
@pytest.fixture(scope='class')
def model_factory():
    class ModelFactory(object):
        def get(self):
            x = Model(email='test@example.org',
                      name='test')
            x.save()
            return x
    return ModelFactory()

@pytest.fixture(scope='class')
def model(model_factory):
    m = model_factory.get()
    return m
My expectation is to receive only the model fixture in both my test methods and have it be the same instance, persisted in the database:
@pytest.mark.django_db
class TestModel(object):
    def test1(self, model):
        assert model.pk is not None
        Model.objects.get(pk=model.pk)  # Works, instance is in the db

    def test2(self, model):
        assert model.pk is not None  # model.pk is the same as in test1
        Model.objects.get(pk=model.pk)  # Fails:
        # *** DoesNotExist: Model matching query does not exist
I've verified using --pdb that at the end of test1, running Model.objects.all() returns the single instance I created. Meanwhile, psql shows no record:
test_db=# select * from model_table;
 id | ··· fields
(0 rows)
Running the Model.objects.all() in pdb at the end of test2 returns an empty list, which is presumably right considering that the table is empty.
Why isn't my model being persisted, while the query still returns an instance anyway?
Why isn't the instance returned by the query in the second test, if my model fixture is marked scope='class' and saved? (This was my original question until I found out saving the model didn't do anything on the database)
Using django 1.6.1, pytest-django 2.9.1, pytest 2.8.5
Thanks
Tests must be independent of each other. To ensure this, Django - like most frameworks - clears the db after each test. See the documentation.
By looking at the postgres log I've found that pytest-django by default does a ROLLBACK after each test to keep things clean (which makes sense, as tests shouldn't depend on state possibly modified by earlier tests).
By decorating the test class with django_db(transaction=True) I could indeed see the data committed at the end of each test from psql, which answers my first question.
Same as before, the test runner ensures no state is kept between tests, which is the answer to my second point.
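For reference, a minimal sketch of the decoration described above:
import pytest

@pytest.mark.django_db(transaction=True)
class TestModel(object):
    # transaction=True runs each test TransactionTestCase-style: changes are
    # really committed, then the tables are flushed after each test, so it is
    # slower and still shares no state between tests.
    def test1(self, model):
        assert model.pk is not None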
The scope argument is in this case a bit misleading; however, if you were to write your code like this:
@pytest.fixture(scope='class')
def model_factory(db, request):
    # body
then you would get an error basically saying that the database fixture has to be implemented with 'function' scope.
I would like to add that this is currently being worked on and might be a killer feature in the future ;) github pull request