I want to use hypothesis to test a tool we've written to create avro schema from Django models. Writing tests for a single model is simple enough using the django extra:
from avro.io import AvroTypeException
from hypothesis import given
from hypothesis.extra.django.models import models as hypothetical
from my_code import models
@given(hypothetical(models.Foo))
def test_amodel_schema(self, amodel):
    """Test a model through avro_utils.AvroSchema"""
    # Get the already-created schema for the current model:
    schema = (s for m, s in SCHEMA if m == amodel.model_name)
    for schemata in schema:
        error = None
        try:
            schemata.add_django_object(amodel)
        except AvroTypeException as exc:
            error = exc
        assert error is None
...but if I were to write tests for every model that can be avro-schema-ified they would be exactly the same except for the argument to the given decorator. I can get all the models I'm interested in testing with ContentTypeCache.list_models() that returns a dictionary of schema_name: model (yes, I know, it's not a list). But how can I generate code like
for schema_name, model in ContentTypeCache.list_models().items():
    @given(hypothetical(model))
    def test_this_schema(self, amodel):
        # Same logic as above
I've considered basically dynamically generating each test method and directly attaching it to globals, but that sounds awfully hard to understand later. How can I write the same basic parameter tests for different django models with the least confusing dynamic programming possible?
You could write it as a single test using one_of:
import hypothesis.strategies as st

@given(st.one_of([hypothetical(model) for model in ContentTypeCache.list_models().values()]))
def test_this_schema(self, amodel):
    # Same logic as above
You might want to increase the number of examples run in this case, using something like @settings(max_examples=settings.default.max_examples * len(ContentTypeCache.list_models())), so that it runs the same number of examples as N separate tests would.
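Putting the pieces together, a rough sketch (ContentTypeCache and hypothetical are the names from the question):

import hypothesis.strategies as st
from hypothesis import given, settings

# One strategy per schema-fiable model, built up front so max_examples can be sized to match.
model_strategies = [
    hypothetical(model) for model in ContentTypeCache.list_models().values()
]

@settings(max_examples=settings.default.max_examples * len(model_strategies))
@given(st.one_of(model_strategies))
def test_this_schema(self, amodel):
    # Same logic as above
    ...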
I would usually solve this kind of problem by parametrising the test, and drawing from the strategy internally:
@pytest.mark.parametrize('model_type', list(ContentTypeCache.list_models().values()))
@given(data=st.data())
def test_amodel_schema(self, model_type, data):
    amodel = data.draw(hypothetical(model_type))
    ...
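Filled in with the schema-checking logic from the question (SCHEMA, ContentTypeCache and hypothetical are the question's own names), the whole test would look roughly like this:

import pytest
import hypothesis.strategies as st
from hypothesis import given
from avro.io import AvroTypeException

@pytest.mark.parametrize('model_type', list(ContentTypeCache.list_models().values()))
@given(data=st.data())
def test_amodel_schema(self, model_type, data):
    amodel = data.draw(hypothetical(model_type))
    # Same schema check as the single-model test above.
    schema = (s for m, s in SCHEMA if m == amodel.model_name)
    for schemata in schema:
        error = None
        try:
            schemata.add_django_object(amodel)
        except AvroTypeException as exc:
            error = exc
        assert error is None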
I am trying to write tests for a Django/DjangoREST project. I have decided to use pytest. I don't have much experience writing tests for Django projects specifically, so I am confused now.
Here is an example:
@pytest.mark.django_db
def test_some_view(
    api_client,
    simple_user,
    model1_factory,
    model2_factory,
    ...  # many other model factories
    modelN_factory
):
    # ...
    # creating other objects here that really depend on each other
    # ...
    model2_obj = ...  # model2 object in turn depends on model3, model4... and so on

    model1_objs = []
    for i in range(10):
        model1_objs.append(model1_factory(some_field=100, some_model2_rel=model2_obj))

    assert len(model1_objs) == 1, "Created items with duplicate `some_field`"
As you can see, I have too many factories to be used in one test, but looking at my model structure right now I can't think of a better way. Is it OK to use so many factories for one test, or should I be looking for issues in my tables' relations?
Any help is appreciated. Thanks in advance.
The main goal of factory_boy is getting rid of fixtures; its typical use case is:
* Design your Factory classes, which are basically recipes for getting a "realistic" object instance.
* In your test, call only the factories you need, specifying just the parameters for that test case.
As I understand it, pytest fixtures are intended for "setting up the test environment": booting the database, mocking an external service, etc.; creating objects inside the database isn't a good fit for them.
The way I'd write your code would be the following:
# factories.py
import datetime

import factory
import factory.fuzzy

from . import models


class DivisionFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.Division

    name = factory.Faker('company')


class EmployeeFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.Employee

    username = factory.Faker('user_name')
    name = factory.Faker('name')
    employee_id = factory.Sequence(lambda n: 'EM199%06d' % n)

    division = factory.SubFactory(DivisionFactory)
    role = factory.fuzzy.FuzzyChoice(models.Employee.ROLES)
    hired_on = factory.fuzzy.FuzzyDate(
        start_date=datetime.date.today() - datetime.timedelta(days=100),
        end_date=datetime.date.today() - datetime.timedelta(days=10),
    )
We have a factory for an employee, and one for a division - and each employee gets assigned to a division.
Every mandatory field is provided; if we need to make specific factories for some object profiles, this can be added through either subclassing or using traits.
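For instance, a sketch of the trait approach (the 'manager' role value is an assumption - use whatever models.Employee.ROLES actually contains):

# Trait variant: opt in per test with EmployeeFactory(manager=True).
# The subclassing variant would simply be: class ManagerFactory(EmployeeFactory): role = 'manager'
class EmployeeFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.Employee

    # ... same fields as above ...

    class Params:
        manager = factory.Trait(role='manager')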
We can now write our tests, passing only the details required for the test:
# tests.py
import datetime

import pytest

from . import factories


@pytest.mark.django_db
def test_get_new_hire(api_client):
    employee = factories.EmployeeFactory(
        hired_on=datetime.date.today(),
        division__name="Finance",
    )
    data = api_client.get(f'/employees/{employee.username}')
    assert data['division'] == "Finance"
    assert data['orientation_status'] == 'pending'
As a side note, wouldn't it make more sense to use Django's test runner directly? It's more finely tuned for Django internals: each test can be natively wrapped in a sub-transaction for performance, the test client provides helpers for in-depth introspection of view results, etc.
What are the best practices for using get_model(), and when should it be imported?
Ref: https://docs.djangoproject.com/en/1.8/ref/applications/
You usually use get_model() when you need to dynamically get a model class.
A practical example: when writing a RunPython operation for a migration, you get the app registry as one of the arguments, and you use apps.get_model('someapp', 'TheModel') to look up historical models.
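A minimal sketch of that pattern ('someapp', 'TheModel', the 'status' field and the migration dependency are placeholders, not real project names):

from django.db import migrations


def forwards(apps, schema_editor):
    # Historical version of the model, resolved from the registry passed in.
    TheModel = apps.get_model('someapp', 'TheModel')
    TheModel.objects.filter(status='').update(status='unknown')


class Migration(migrations.Migration):

    dependencies = [
        ('someapp', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(forwards, migrations.RunPython.noop),
    ]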
Another example: you have an app with dynamically built serializers, and you set their Meta.model to the class you just got with get_model().
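With Django REST Framework, for instance, that could look roughly like this (the names and the '__all__' field selection are illustrative):

from django.apps import apps
from rest_framework import serializers


def build_serializer(app_label, model_name):
    # Resolve the model class dynamically, then build a serializer class around it.
    model_cls = apps.get_model(app_label, model_name)
    meta = type('Meta', (), {'model': model_cls, 'fields': '__all__'})
    return type('%sSerializer' % model_name, (serializers.ModelSerializer,), {'Meta': meta})

# e.g. FooSerializer = build_serializer('someapp', 'Foo')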
Yet another example is importing models in AppConfig.ready() with self.get_model().
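A short sketch ('someapp' and 'TheModel' are placeholders):

from django.apps import AppConfig


class SomeAppConfig(AppConfig):
    name = 'someapp'

    def ready(self):
        # The registry is fully populated by the time ready() runs,
        # so self.get_model() is safe here.
        the_model = self.get_model('TheModel')
        # ... connect signals for the_model, register checks, etc.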
An important thing to remember: if you are using AppConfig.get_model() or apps.get_models(), they can only be used once the application registry is fully populated.
The other option (from .models import TheModel) is just the default way to import models anywhere in your code.
These are just examples though, there are many other possible scenarios.
I prefer the from .models import ... style, because it is the simplest way to get the model class.
But if you work with metaclasses, or otherwise need to resolve models dynamically, get_model() may well be the better option.
def get_model(self, app_label, model_name=None):
    """
    Returns the model matching the given app_label and model_name.
    As a shortcut, this function also accepts a single argument in the
    form <app_label>.<model_name>.
    model_name is case-insensitive.
    Raises LookupError if no application exists with this label, or no
    model exists with this name in the application. Raises ValueError if
    called with a single argument that doesn't contain exactly one dot.
    """
    self.check_models_ready()
    if model_name is None:
        app_label, model_name = app_label.split('.')
    return self.get_app_config(app_label).get_model(model_name.lower())
Maybe this SO post can help too.
I'm testing Django views - specifically a view that defines a class inside itself. Can anyone help me solve this problem: how can I mock that class, or test this view some other way? Without mocking I get the error OperationalError: no such table. As I understand it, the test database is empty in the new thread (I use in-memory sqlite3 for testing), but I don't know how to solve this.
View:
def view(self):
    import threading

    class Fun(threading.Thread):
        def run(self):
            # this method removes and creates SomeModel instances
            SomeModel.class_method()

    Fun().start()
    return render_to_response('some_html.html')
The Mock testing library is the one Django topic I just can't seem to wrap my head around. For example, in the following code, why don't the mock User instances that I create in my unit test appear in the User object that I query in the 'get_user_ids' method? If I halt the test in the 'get_user_ids' method via the debug call and do "User.objects.all()", there's nothing in the User queryset and the test fails. Am I not creating three mock User instances that will be queried by the UserProxy's static method?
I'm using Django 1.6 and Postgres 9.3 and running the test with the command "python manage.py test -s apps.profile.tests.model_tests:TestUserProxy".
Thanks!
# apps/profile/models.py
import logging

from django.contrib.auth.models import User
from django.core.exceptions import ObjectDoesNotExist

logger = logging.getLogger(__name__)


class UserProxy(User):
    class Meta:
        proxy = True

    @staticmethod
    def get_user_ids(usernames):
        debug()
        user_ids = []
        for name in usernames:
            try:
                u = User.objects.get(username__exact=name)
                user_ids.append(u.id)
            except ObjectDoesNotExist:
                logger.error("We were unable to find '%s' in a list of usernames." % name)
        return user_ids
# apps/profile/tests/model_tests.py
from django.test import TestCase
from django.contrib.auth.models import User
from mock import Mock

from apps.profile.models import UserProxy


class TestUserProxy(TestCase):

    def test_get_user_ids(self):
        u1 = Mock(spec=User)
        u1.id = 1
        u1.username = 'user1'

        u2 = Mock(spec=User)
        u2.id = 2
        u2.username = 'user2'

        u3 = Mock(spec=User)
        u3.id = 3
        u3.username = 'user3'

        usernames = [u1.username, u2.username, u3.username]

        expected = [u1.id, u2.id, u3.id]
        actual = UserProxy.get_user_ids(usernames)

        self.assertEqual(expected, actual)
Mocking is awesome for testing and can lead to very clean tests; however, it suffers a little from (a) being a bit fiddly to get one's head around when starting out, and (b) requiring some effort to set up the mock objects and have them injected/used in the correct places.
The mock objects you are creating for the users are objects that look like a Django User model object, but they are not actual model objects, and therefore do not get put into the database.
To get your test working, you have two options, depending on what kind of test you want to write.
Unit Test - Mock the data returned from the database
The first option is to get this working as a unit test, i.e. testing the get_user_ids method in isolation from the database layer. To do this, you would need to mock the call to User.objects.get(username__exact=name) so that it returns the three mock objects you created in your test. This would be the more correct approach (as it is better to test units of code in isolation), but it would involve more work to set up than the alternative below.
One way to achieve this would be to first separate out the user lookup into its own function in apps/profile/models.py:
def get_user_by_name(name):
    return User.objects.get(username__exact=name)
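For reference, a sketch of how get_user_ids would then look (same logic as the question, only the lookup call changes):

@staticmethod
def get_user_ids(usernames):
    user_ids = []
    for name in usernames:
        try:
            # Lookup now goes through the module-level helper, which is easy to patch.
            u = get_user_by_name(name)
            user_ids.append(u.id)
        except ObjectDoesNotExist:
            logger.error("We were unable to find '%s' in a list of usernames." % name)
    return user_ids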
This would need to be called in your method, replacing the call to User.objects.get(username__exact=name) with get_user_by_name(name), as sketched above. You can then modify your test to patch the function like so:
from django.test import TestCase
from django.contrib.auth.models import User
from mock import Mock, patch

from apps.profile.models import UserProxy


class TestUserProxy(TestCase):

    @patch('apps.profile.models.get_user_by_name')
    def test_get_user_ids(self, mock_get_user_by_name):
        u1 = Mock(spec=User)
        u1.id = 1
        u1.username = 'user1'

        u2 = Mock(spec=User)
        u2.id = 2
        u2.username = 'user2'

        u3 = Mock(spec=User)
        u3.id = 3
        u3.username = 'user3'

        # Here is where we wire up the mocking - we take the patched lookup
        # function and tell it to return the three mock users you just created,
        # one per call, in the order the names will be looked up.
        mock_get_user_by_name.side_effect = [u1, u2, u3]

        usernames = [u1.username, u2.username, u3.username]

        expected = [u1.id, u2.id, u3.id]
        actual = UserProxy.get_user_ids(usernames)

        self.assertEqual(expected, actual)
Integration Test - Create real user objects
The second approach is to modify this to be an integration test, i.e. one that tests both this unit of code and its interaction with the database. This is a little less clean, in that your test of this method is now exposed to the chance of failing because of problems in a different unit of code (i.e. the Django code that interacts with the database). However, it does make the setup of the test a lot simpler, and pragmatically may be the right approach for you.
To do this, simply remove the mocks you have created and create actual users in the database as part of your test.
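A sketch of that integration-test variant:

# apps/profile/tests/model_tests.py
from django.contrib.auth.models import User
from django.test import TestCase

from apps.profile.models import UserProxy


class TestUserProxy(TestCase):

    def test_get_user_ids(self):
        # Real rows in the test database, so no mocking is needed.
        u1 = User.objects.create_user(username='user1')
        u2 = User.objects.create_user(username='user2')
        u3 = User.objects.create_user(username='user3')

        expected = [u1.id, u2.id, u3.id]
        actual = UserProxy.get_user_ids(['user1', 'user2', 'user3'])

        self.assertEqual(expected, actual)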
I'm looking for functionality vaguely like that provided by Semantic MediaWiki. In short, I'd like for a user, in an arbitrary text field, to be able to do things like the following (I'm making up the markup as I go).
*Hi, everyone, don't forget that we have [[::AfricanSwallow.count]] African Swallows in our land.
*Did you know that Harry the European Swallow has carried [[::EuropeanSwallow.get(name="harry").coconuts.count]] coconuts back with him?
In addition to these kinds of features, I'd like to be able to autocomplete inline - perhaps when the user starts typing.
I can do all of these things, but I'm hoping that some or all of them have been done. Any idea if that's the case?
I think something like this is feasible, but making it universal (allowing full read-only access to the ORM) would be very difficult to do in a secure way.
Here are some ideas:
Limit the actions to a predefined set of explicitly marked methods on a custom manager class. For example:
from django.db import models


class MarkupAccessManager(models.Manager):
    def count(self):
        return super(MarkupAccessManager, self).count()
    count.expose_to_markup = True


class AfricanSwallow(models.Model):
    objects = MarkupAccessManager()
To refer to models from the markup, you could take advantage of the django.contrib.contenttypes framework and the tags could have the following format: app_label.model_name action or app_label.model_name action arg1 arg2.
Depending on the markup language you choose, you could either use custom tags (if the language provides them), Django template tags, or plain regular expressions. Once you get the contents of a tag, this is how you could replace it with the output of the referred method:
from django.contrib.contenttypes.models import ContentType


def replace_tag(tag):
    """
    'birds.africanswallow count' => birds.models.AfricanSwallow.objects.count()
    """
    bits = tag.split()
    model_ref = bits[0]
    action = bits[1]
    args = bits[2:]
    try:
        ct = ContentType.objects.get_by_natural_key(*model_ref.split('.'))
    except ContentType.DoesNotExist:
        return 'Invalid model reference.'
    model = ct.model_class()
    method = getattr(model._base_manager, action, None)
    if not method or not getattr(method, 'expose_to_markup', False):
        return 'Invalid action.'
    return method(*args)
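For example, using plain regular expressions with the [[::...]] delimiters from the question, the substitution step could be as simple as:

import re

TAG_RE = re.compile(r'\[\[::(.+?)\]\]')


def render_markup(text):
    # Replace every [[::app_label.model_name action ...]] tag with the method output.
    return TAG_RE.sub(lambda match: str(replace_tag(match.group(1))), text)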
To provide autocomplete, something along these lines would help you to build a list of all the available options:
from django.db.models.loading import get_models
from django.contrib.contenttypes.models import ContentType


def model_refs():
    for model in get_models():
        if isinstance(model._base_manager, MarkupAccessManager):
            ct = ContentType.objects.get_for_model(model)
            yield '%s.%s' % (ct.app_label, ct.model)


def actions():
    for attr_name in dir(MarkupAccessManager):
        attr = getattr(MarkupAccessManager, attr_name)
        if getattr(attr, 'expose_to_markup', False):
            yield attr.__name__
I haven't tested the code. Hope this helps a bit.
The most elegant solution would be to create a small compiler that allows execution of only certain instructions. Find out more at http://en.wikibooks.org/wiki/Compiler_Construction
Another way is to use exec() but you should avoid this as it brings a lot of security issues into your application. You can always try to parse the string first (for valid syntax) but it will still be a possible vulnerability.