Proper way to manage fixtures in Django

Today I had a discussion with my colleagues about how we should manage fixtures in our Django application. We could not find any solution that would satisfy everyone, so I'm asking this question here.
Suppose we have a fairly big Django project with a dozen applications inside, and each application has a tests.py file with several TestCase classes. Given this, how should I manage test data for all of these applications?
From my perspective, there are two different ways:
1. Store the data in a separate test_data.json file for each application. This file would contain test data for all models defined in the application's models.py file, irrespective of where the data is used (it can be used in tests from a different application).
2. Store common data that would probably be required by all tests (like auth.users) in test_data.json, and the data for each TestCase in a separate test_case.json file (see the sketch below).
The second approach seems cleaner to me, but I would like to know the concrete pros and cons of each, or perhaps someone could suggest another approach?
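For context, the second approach maps directly onto Django's TestCase.fixtures attribute, which loads every listed file into the test database before each test. A minimal sketch (the file names follow the question; the test body is illustrative):

from django.contrib.auth.models import User
from django.test import TestCase

class OrderTestCase(TestCase):
    # Shared data first (e.g. auth.users), then this TestCase's own data;
    # both are loaded into the test database before each test method runs.
    fixtures = ['test_data.json', 'test_case.json']

    def test_shared_users_are_loaded(self):
        self.assertTrue(User.objects.all().exists())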

If you are looking for the cleanest way to define test data for your tests, I recommend reading about the django-any application:
django-any is the explicit replacement for old-style, big and error-prone implicit fixture files.
django-any allows you to specify only the fields important for the test, and fills in the rest randomly with acceptable values.
It makes tests clean and easy to understand, without reading fixture files.
from django_any import any_model, WithTestDataSeed

class TestMyShop(TestCase):
    def test_order_updates_user_account(self):
        account = any_model(Account, amount=25, user__is_active=True)
        order = any_model(Order, user=account.user, amount=10)
        order.proceed()

        account = Account.objects.get(pk=account.pk)
        self.assertEquals(15, account.amount)
The same approach is also available for forms (django_any.any_form).
This solution helps you avoid keeping extra data in your DB while your tests are executing.


Is there a standard way to mock Django models?

I have a model called Pdb:
class Pdb(models.Model):
    id = models.TextField(primary_key=True)
    title = models.TextField()
It is in a one-to-many relationship with the model Residue:
class Residue(models.Model):
    id = models.TextField(primary_key=True)
    name = models.TextField()
    pdb = models.ForeignKey(Pdb)
Unit testing Pdb is fine:
def test_can_create_pdb(self):
    pdb = Pdb(pk="1XXY", title="The PDB Title")
    pdb.save()
    self.assertEqual(Pdb.objects.all().count(), 1)
    retrieved_pdb = Pdb.objects.first()
    self.assertEqual(retrieved_pdb, pdb)
When I unit test Residue I just want to use a mock Pdb object:
def test_can_create_residue(self):
    mock_pdb = Mock(Pdb)
    residue = Residue(pk="1RRRA1", name="VAL", pdb=mock_pdb)
    residue.save()
But this fails because it needs some attribute called _state:
AttributeError: Mock object has no attribute '_state'
So I keep adding mock attributes to make it look like a real model, but eventually I get:
django.db.utils.ConnectionDoesNotExist: The connection db doesn't exist
I don't know how to mock the actual call to the database. Is there a standard way to do this? I really don't want to have to actually create a Pdb record in the test database because then the test won't be isolated.
Is there an established best practices way to do this?
Most of the SO and Google results I get for this relate to mocking particular methods of a model. Any help would be appreciated.
You are not strictly unit testing here, as you are involving the database; I would call that integration testing, but that is another very heated debate!
My suggestion would be to have your wrapping test class inherit from django.test.TestCase. If you are that concerned about each individual test being completely isolated, then you can just create multiple classes with one test method per class.
It might also be worth reconsidering if these tests need writing at all, as they appear to just be validating that the framework is working.
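For example, a minimal sketch of the TestCase approach (the import path is illustrative): django.test.TestCase runs each test method inside a transaction that is rolled back afterwards, so rows created in one test never leak into another.

from django.test import TestCase

from pdb_app.models import Pdb, Residue  # illustrative app path

class ResidueTest(TestCase):
    def test_can_create_residue(self):
        # Both rows are rolled back automatically when the test finishes
        pdb = Pdb.objects.create(pk="1XXY", title="The PDB Title")
        residue = Residue.objects.create(pk="1RRRA1", name="VAL", pdb=pdb)
        self.assertEqual(Residue.objects.count(), 1)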
Oh, I managed to solve this with a library called 'mixer'...
from mixer.backend.django import mixer

def test_can_create_residue(self):
    mock_pdb = mixer.blend(Pdb)
    residue = Residue(pk="1RRRA1", name="VAL", pdb=mock_pdb)
    residue.save()
Still think Django should provide a native way to do this, though. It already provides a lot of testing tools; this feels like a major part of proper unit testing.
I'm not sure exactly what you mean by mocking Django models. The simplest option for writing a test that requires some model objects is to use a test fixture. It's basically a JSON or YAML file that gets loaded into a database table before your test runs.
In your answer you mentioned mixer, which looks like a library for randomly generating those test fixtures.
Those are fine tools, but they still require database access, and they're a lot slower than pure unit tests. If you want to completely mock out the database access, try Django mock queries. It completely mocks out the database access layer, so it's very fast, and you don't have to worry about foreign keys. I use it when I want to test some complex code that has simple database access. If the database access has some complicated query conditions, then I stick with the real database.
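To give a feel for it, here is a minimal sketch assuming the library's MockSet/MockModel API; nothing here touches a database:

from django_mock_queries.query import MockSet, MockModel

# A MockSet behaves like an in-memory QuerySet
residues = MockSet(
    MockModel(mock_name='residue', pk='1RRRA1', name='VAL'),
    MockModel(mock_name='residue', pk='1RRRA2', name='GLY'),
)
assert residues.filter(name='VAL').count() == 1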
Full disclosure: I'm a minor contributor to the Django mock queries project.

Django Rest Framework: is it possible to modify a Serializer class at runtime?

I see I can easily modify the Meta options of a Serializer at runtime (I'm not even sure that's the right way to describe it; I've read some people call it monkey patching, even though I don't like that term):
NodeDetailSerializer.Meta.fields.append('somefield')
What if I need to do something like:
NodeDetailSerializer.contact = serializers.HyperlinkedIdentityField(view_name='api_node_contact', slug_field='slug')
NodeDetailSerializer.Meta.fields.append('contact')
Why would I need to do that?
I'm trying to build a modular application; I have some optional apps that can be added in, and they automatically add some features to the core ones.
I would like to keep the code of the two apps separate, also because the additional applications might be moved in a different repository.
Writing modular and extensible apps is really a tricky business.
I would like to know more about that, if anybody has some useful resources to share.
Federico
I found a solution for my problem.
My problem was: I needed to be able to add hyperlinks to other resources without editing the code of a core app. I needed to do it from the code of the additional module.
I wrote this serializer mixin: https://gist.github.com/nemesisdesign/8132696
Which can be used this way:
from myapp.serializers import MyExtensibleSerializer

MyExtensibleSerializer.add_relationship(**{
    'name': 'key_name',
    'view_name': 'view_name_in_urls_py',
    'lookup_field': 'arg_passed_to_view_name'
})
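For reference, a rough approximation of what the gist's mixin does (this is my sketch, not the gist's exact code; it assumes DRF 3, where declared fields live in the serializer's _declared_fields registry, and where HyperlinkedIdentityField takes lookup_field rather than the older slug_field):

from rest_framework import serializers

class ExtensibleSerializerMixin(object):
    @classmethod
    def add_relationship(cls, name, view_name, lookup_field):
        field = serializers.HyperlinkedIdentityField(
            view_name=view_name,
            lookup_field=lookup_field,
        )
        # Register the field so get_fields() picks it up on new instances...
        cls._declared_fields[name] = field
        # ...and expose it through Meta.fields as well
        if name not in cls.Meta.fields:
            cls.Meta.fields = list(cls.Meta.fields) + [name]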

What are the best practices for testing "different layers" in Django? [closed]

I'm NOT new to testing, but got really confused with the mess of recommendations for testing different layers in Django.
Some recommend (and they are right) avoiding doctests in the model, as they are not maintainable...
Others say don't use fixtures, as they are less flexible than helper functions, for instance...
There are also two groups of people who argue over mock objects: the first group believes in using Mock and isolating the rest of the system, while the other group prefers to "stop mocking and start testing"...
All of the above was mostly in regard to testing models. Functional testing is another story (using test.Client() vs. WebTest vs. etc.).
Is there ANY maintainable, extensible and proper way to test the different layers?
UPDATE
I am aware of Carl Meyer's talk at PyCon 2012..
UPDATE 08-07-2012
I can tell you the practices for unit testing that are working pretty well for my own ends, and I'll give you my reasons:
1.- Use fixtures only for information that is necessary for testing but is not going to change. For example, you need a user for every test you run, so use a base fixture to create users.
2.- Use a factory to create your objects. I personally love FactoryBoy (which comes from FactoryGirl, a Ruby library). I create a separate file called factories.py for every app, where I keep all these objects. This way I keep all the objects I need out of the test files, which makes them a lot more readable and easier to maintain. The cool thing about this approach is that you create a base object that can be modified if you want to test something else based on some object from the factory. It also doesn't depend on Django, so when I started using MongoDB and needed to test those objects, the migration was smooth. Now, after reading about factories, it's common to ask "why would I want to use fixtures at all, then?". Since these fixtures should never change, all the extra goodies from factories are sort of useless there, and Django supports fixtures very well out of the box.
3.- I mock calls to external services, because these calls make my tests very slow and they depend on things that have nothing to do with my code being right or wrong. For example, if I tweet within my test, I test once that it tweets correctly, copy the response, and mock that object so it returns the exact same response every time without making the actual call. Also, it's sometimes good to test what happens when things go wrong, and mocking is great for that.
4.- I use an integration server (Jenkins is my recommendation here) which runs the tests every time I push to my staging server and sends me an email if they fail. This is just great, since it happens to me a lot that I break something else with my last change and forget to run the tests. It also gives you other goodies like a coverage report, pylint/jslint/pep8 verification, and there exist a lot of plugins where you can set up different statistics.
About your question on testing the front end, Django comes with some helper functions to handle this in a basic way.
This is what I personally use: you can fire GETs and POSTs, log the user in, etc.; that's enough for me. I don't tend to use a complete front-end testing engine like Selenium, since I feel it's overkill to test anything beyond the business layer. I am sure some will differ, and it always depends on what you are working on.
Besides my opinion, Django 1.4 comes with a very handy integration for in-browser frameworks, sketched below.
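To make that concrete, here is a minimal sketch of the LiveServerTestCase integration that arrived in Django 1.4 (it assumes the selenium package is installed and that the site serves a /login/ page; both are assumptions for illustration):

from django.test import LiveServerTestCase
from selenium.webdriver.firefox.webdriver import WebDriver

class FrontendTest(LiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        super(FrontendTest, cls).setUpClass()
        cls.selenium = WebDriver()  # drives a real browser against the live test server

    @classmethod
    def tearDownClass(cls):
        cls.selenium.quit()
        super(FrontendTest, cls).tearDownClass()

    def test_login_page_renders(self):
        self.selenium.get('%s%s' % (self.live_server_url, '/login/'))
        self.assertIn('Log in', self.selenium.page_source)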
I'll set up an example app where I can apply these practices so it's more understandable. Let's create a very basic blog app:
structure
blogger/
    __init__.py
    models.py
    fixtures/base.json
    factories.py
    tests.py
models.py
from datetime import datetime

from django.contrib.auth.models import User
from django.db import models

class Blog(models.Model):
    user = models.ForeignKey(User)
    text = models.TextField()
    created_on = models.DateTimeField(default=datetime.now)  # pass the callable, not now()
fixtures/base.json
[
    {
        "pk": 1,
        "model": "auth.user",
        "fields": {
            "username": "fragilistic_test",
            "first_name": "demo",
            "last_name": "user",
            "is_active": true,
            "is_superuser": true,
            "is_staff": true,
            "last_login": "2011-08-16 15:59:56",
            "groups": [],
            "user_permissions": [],
            "password": "IAmCrypted!",
            "email": "test@email.com",
            "date_joined": "1923-08-16 13:26:03"
        }
    }
]
factories.py
import factory

from blog.models import User, Blog

class BlogFactory(factory.Factory):
    FACTORY_FOR = Blog

    user__id = 1
    text = "My test text blog of fun"
tests.py
import datetime

from django.test import TestCase
from mocker import Mocker

from blog.factories import BlogFactory
from blog.models import Blog

class BlogTest(TestCase):
    fixtures = ['base']  # loads the fixture

    def setUp(self):
        self.blog = BlogFactory()
        self.blog2 = BlogFactory(text="Another test based on the last one")

    def test_blog_text(self):
        self.assertEqual(Blog.objects.filter(user__id=1).count(), 2)

    def test_post_blog(self):
        # Let's suppose we did some views
        self.client.login(username='fragilistic_test', password='IAmCrypted!')
        response = self.client.post('/blogs', {'text': "test text", 'user': '1'})
        self.assertEqual(response.status_code, 200)
        self.assertEqual(Blog.objects.filter(text='test text').count(), 1)

    def test_mocker(self):
        # We will mock datetime so the blog post is created on the date we want
        mocker = Mocker()
        co = mocker.replace('datetime.datetime')
        co.now()
        mocker.result(datetime.datetime(2012, 6, 12))
        with mocker:
            res = Blog.objects.create(user_id=1, text='test')
        self.assertEqual(res.created_on, datetime.datetime(2012, 6, 12))

    def tearDown(self):
        # Django takes care of this, but to be strict I'll add it
        Blog.objects.all().delete()
Notice I am using some specific technology for the sake of the example (which hasn't been tested, by the way).
I have to insist: this may not be the standard best practice (which I doubt exists), but it is working pretty well for me.
I really like the suggestions from @Hassek and want to stress what an excellent point he makes about the obvious lack of standard practices. This holds true for many of Django's aspects, not just testing, since we all approach the framework with different concerns in mind. Add to that the great degree of flexibility we have in designing our applications, and we often end up with drastically different solutions to the same problem.
Having said that, though, most of us still strive for many of the same goals when testing our applications, mainly:
Keeping our test modules neatly organized
Creating reusable assertion and helper methods, and helper functions that reduce the LOC of test methods, making them more compact and readable
Showing that there is an obvious, systematic approach to how the application components are tested
Like @Hassek's, these are my preferences, and they may directly conflict with the practices you are applying, but I feel it's nice to share the things we've proven to work, if only in our case.
No test case fixtures
Application fixtures work great in cases where you have certain constant model data you'd like to guarantee is present in the database, say a collection of towns with their names and post office numbers.
However, I see this as an inflexible solution for providing test case data. Test fixtures are very verbose, and model mutations force you either to go through a lengthy process of reproducing the fixture data or to perform tedious manual changes, while maintaining referential integrity by hand is difficult.
Additionally, you'll most likely use many kinds of fixtures in your tests, not just for models: you'd like to store the response body from API requests, to create fixtures that target NoSQL database backends, to write fixtures that are used to populate form data, etc.
In the end, utilizing APIs to create data is concise, readable and it makes it much easier to spot relations, so most of us resort to using factories for dynamically creating fixtures.
Make extensive use of factories
Factory functions and methods are preferable to stamping out your test data by hand. You can create helper factory functions at module level, or test case methods, that you may want to reuse across application tests or throughout the whole project. Particularly, factory_boy, which @Hassek mentions, provides you with the ability to inherit/extend fixture data and do automatic sequencing, both of which might look a bit clumsy if you did them by hand.
The ultimate goal of utilizing factories is to cut down on code-duplication and streamline how you create test data. I cannot give you exact metrics, but I'm sure if you go through your test methods with a discerning eye you will notice that a large portion of your test code is mainly preparing the data that you'll need to drive your tests.
When this is done incorrectly, reading and maintaining tests becomes an exhausting activity. This tends to escalate when data mutations lead to not-so-obvious test failures across the board, at which point you'll not be able to apply systematic refactoring efforts.
My personal approach to this problem is to start with a myproject.factory module that creates easy-to-access references to QuerySet.create methods for my models and also for any objects I might regularly use in most of my application tests:
from django.contrib.auth.models import User, AnonymousUser
from django.test import RequestFactory

from myproject.cars.models import Manufacturer, Car
from myproject.stores.models import Store

create_user = User.objects.create_user
create_manufacturer = Manufacturer.objects.create
create_car = Car.objects.create
create_store = Store.objects.create

_factory = RequestFactory()

def get(path='/', data={}, user=AnonymousUser(), **extra):
    request = _factory.get(path, data, **extra)
    request.user = user
    return request

def post(path='/', data={}, user=AnonymousUser(), **extra):
    request = _factory.post(path, data, **extra)
    request.user = user
    return request
This in turn allows me to do something like this:
from myproject import factory as f  # terse alias

# A verbose, albeit readable approach to creating instances
manufacturer = f.create_manufacturer(name='Foomobiles')
car1 = f.create_car(manufacturer=manufacturer, name='Foo')
car2 = f.create_car(manufacturer=manufacturer, name='Bar')

# Reduce the crud for creating some common objects
manufacturer = f.create_manufacturer(name='Foomobiles')
data = {'name': 'Foo', 'manufacturer': manufacturer.id}
request = f.post(data=data)
view = CarCreateView()
response = view.post(request)
Most people are rigorous about reducing code duplication, but I actually intentionally introduce some whenever I feel it contributes to test comprehensiveness. Again, the goal with whichever approach you take to factories is to minimize the amount of brainfuck you introduce into the header of each test method.
Use mocks, but use them wisely
I'm a fan of mock, as I've developed an appreciation for the author's solution to what I believe was the problem he wanted to address. The tools provided by the package allow you to form test assertions by injecting expected outcomes.
from django.test import RequestFactory
from mock import Mock, patch

# Creating mocks to simplify tests
factory = RequestFactory()
request = factory.get('/')
request.user = Mock(is_authenticated=lambda: True)  # a mock of an authenticated user
view = DispatchForAuthenticatedOnlyView.as_view()
response = view(request)

# Patching objects to return expected data
@patch.object(CurrencyApi, 'get_currency_list', return_value={'foo': 1.00, 'bar': 15.00})
def test_converts_between_two_currencies(self, currency_list_mock):
    converter = Converter()  # uses CurrencyApi under the hood
    result = converter.convert(from_currency='bar', to_currency='foo', amount=45)
    self.assertEqual(3, result)  # 45 bar at 15.00 per unit is 3 foo
As you can see, mocks are really helpful, but they have a nasty side effect: your mocks clearly show that you're making assumptions about how your application behaves, which introduces coupling. If Converter is refactored to use something other than the CurrencyApi, someone may not immediately understand why the test method is suddenly failing.
So with great power comes great responsibility: if you're going to be a smartass and use mocks to avoid deeply rooted test obstacles, you may completely obfuscate the true nature of your test failures.
Above all, be consistent. Very very consistent
This is the most important point to be made. Be consistent with absolutely everything:
how you organize code in each of your test modules
how you introduce test cases for your application components
how you introduce test methods for asserting the behavior of those components
how you structure test methods
how you approach testing common components (class-based views, models, forms, etc.)
how you apply reuse
For most projects, the question of how you're collaboratively going to approach testing is often overlooked. While the application code itself looks perfect (adhering to style guides, using Python idioms, reapplying Django's own approach to solving related problems, textbook use of framework components, etc.), no one really makes an effort to figure out how to turn test code into a valid, useful communication tool. That's a shame, because perhaps having clear guidelines for test code is all it takes.

Keep app-models in one file or separate each into a new file?

I was reading over the following code, and the models were structured such that each class/model had a separate file and then it was imported in __init__.py. For example:
# __init__.py
from service import Service
from note import Note
etc...
# service.py (one example of the imports)
from django.db import models

class Service(models.Model):
    #: service provider name (e.g. Hulu)
    name = models.CharField(max_length=64, verbose_name="Title Name", unique=True)

    def __unicode__(self):
        return u'Service id=%s, name=%s' % (self.pk, self.name)
Which way is better practice, to have all models in one models.py file, or to have one file-per model? I usually keep all my models for an app in one file, and I had never seen the models separated, which is why I'm asking this question.
If you're talking true "best practice", that would be following the way Django recommends and using just models.py. However, there's a lot of opinion and argument that goes into this topic. Nevertheless, my recommendations:
If you have a simple app with only a few models, stick with the "Django way" of just a models.py.
If you have a huge app with lots of models and thousands of lines of code, divvying it out is probably better. However, at this point, you should also ask yourself why your app is so huge and if anything can be factored out into auxiliary apps.
Long and short, my personal opinion is that breaking the models out into separate files is never a good idea. It can cause problems in some cases, and I honestly can't see a use case where it's truly justified. Usually, if your app is big enough to warrant doing this, it's actually a sign that you're lumping together too much functionality that would be better delegated out to other apps.
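That said, if you do split models into separate files, one practical caveat: in older Django versions (before the app-loading refactor in 1.7), a model defined outside models.py needed an explicit app_label so Django could associate it with its app. A minimal sketch (app name illustrative):

# note.py
from django.db import models

class Note(models.Model):
    text = models.TextField()

    class Meta:
        app_label = 'myapp'  # required pre-1.7 when the model lives outside models.py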
Best practice is to put them in one file. Look at the django source for example.
The reason you've never seen it is that it's practically never done.
If you can justify it somehow, then by all means do it, but it's definitely not the recommended structure. People start to explore splitting models when the file gets too large or can be logically separated.

Can a fixture be changed dynamically between test methods in CakePHP?

Is it possible to have a fixture change between test methods? If so, how can I do this?
My scenario for this problem:
In the CakePHP framework, I am building tests for a behavior that is configured by adding fields to the table. This is intended to work in the same way that adding the "created" and "modified" fields will auto-populate them on save.
To test this I could create dozens of fixtures/model combos to test the different setups, but it would be a hundred times better, faster and easier to just have the fixture change "shape" between test methods.
If you are not familiar with the CakePHP framework, you may still be able to help me, as it uses SimpleTest.
Edit: rephrased question to be more general
I'm not familiar with CakePHP specifically, but this kind of thing seems to come up anywhere fixtures are used.
There is no built-in way in Rails, at least, for this to happen, and I imagine there isn't in CakePHP or anywhere else either, because the whole idea of a fixture is that it is fixed.
There are two 'decent' workarounds I'm aware of:
Write a changefixture method, and just before you do your asserts/etc, run it with the parameters of what to change. It should go and update the database or whatever needs to be done.
Don't use fixtures at all, and use some kind of object factory or object generator to create your objects each time
This is not an answer to my question, but a solution to my example issue.
Instead of using multiple fixtures or changing the fixtures, I edit the Model::_schema arrays by removing the fields that I want to test without. This has the effect that the model acts as if the fields were not there, but I am unsure whether this is a 100% valid test. I do not think it is for all cases, but it works for my example.