I'm trying to add generic relation and one-to-one relation support to the django-test-utils makefixture command; here is the source: http://github.com/ericholscher/django-test-utils/blob/master/test_utils/management/commands/makefixture.py
Does somebody have ideas on how to do this? Or maybe there is another tool for such a thing, e.g.:
./manage.py dumpcmd User[:10] > fixtures.json
You have several options for approaching the problem. I'll concentrate on the poke-the-code approach, since it's been a while since I mucked around with Django internals.
I have included the relevant code from the link below; note that I have removed the irrelevant parts. Also note that the part you'll be editing (marked YOUR CASE HERE) is in need of a refactor.
Repeat the following algorithm until you're satisfied:
Refactor the if statements that depend on the field types into one or more separate functions.
Add inspection code until you figure out which fields correspond to generic relations.
Add extraction code until the generic relations are followed (a sketch follows the code below).
Test.
def handle_models(self, models, **options):
    # SNIP handle options
    all = objects
    if propagate:
        collected = set([(x.__class__, x.pk) for x in all])
        while objects:
            related = []
            for x in objects:
                if DEBUG:
                    print "Adding %s[%s]" % (model_name(x), x.pk)
                # follow forward relation fields
                for f in x.__class__._meta.fields + x.__class__._meta.many_to_many:
                    # YOUR CASE HERE
                    if isinstance(f, ForeignKey):
                        new = getattr(x, f.name)  # instantiate object
                        if new and not (new.__class__, new.pk) in collected:
                            collected.add((new.__class__, new.pk))
                            related.append(new)
                    if isinstance(f, ManyToManyField):
                        for new in getattr(x, f.name).all():
                            if new and not (new.__class__, new.pk) in collected:
                                collected.add((new.__class__, new.pk))
                                related.append(new)
                # SNIP
            objects = related
            all.extend(objects)
    # SNIP serialization
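Two hints for the YOUR CASE HERE part. First, forward one-to-one relations should already be followed: OneToOneField is a subclass of ForeignKey, so it hits the existing ForeignKey branch. Second, generic foreign keys never appear in _meta.fields at all, so they need their own loop. Here is a rough, untested sketch of that loop (on the Django of that era, GenericForeignKey lives in django.contrib.contenttypes.generic and is registered in _meta.virtual_fields):
from django.contrib.contenttypes.generic import GenericForeignKey

# inside "for x in objects:", alongside the field loop above
for f in x.__class__._meta.virtual_fields:
    if isinstance(f, GenericForeignKey):
        new = getattr(x, f.name)  # resolves content_type + object_id
        if new and not (new.__class__, new.pk) in collected:
            collected.add((new.__class__, new.pk))
            related.append(new)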
Isn't it possible to do something like the following with South in a schemamigration?
def forwards(self, orm):
    ## CREATION
    # Adding model 'Added'
    db.create_table(u'something_added', (
        (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
        ('foo', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['something.Foo'])),
        ('bar', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['something.Bar'])),
    ))
    db.send_create_signal(u'something', ['Added'])

    ## DATA
    # Create Added for every Foo
    for f in orm.Foo.objects.all():
        self.prev_orm.Added.objects.create(foo=f, bar=f.bar)

    ## DELETION
    # Deleting field 'Foo.bar'
    db.delete_column(u'something_foo', 'bar_id')
Note the prev_orm, which would allow me to access f.bar and do it all in one migration. I find that having to write three migrations for this is pretty heavy...
I know this is not "the way to do it", but to my mind this would honestly be much cleaner.
Would there be a real problem with doing so, by the way?
I guess your objective is to ensure that the deletion does not run before the data migration. For this you can use South's dependency system.
You can break the above into three parts:
001_app1_addition_migration (in app 1)
then
001_app2_data_migration (in app 2, where the Foo model belongs)
and then
002_app1_deletion_migration (in app 1) with something like the following:
class Migration:
    depends_on = (
        ("app2", "001_app2_data_migration"),
    )

    def forwards(self):
        ## DELETION
        # Deleting field 'Foo.bar'
        db.delete_column(u'something_foo', 'bar_id')
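The middle data migration might look roughly like this (a sketch only; it assumes the Added model from app 1 has been frozen into this migration, e.g. with --freeze):
from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        # Foo.bar still exists at this point: the deletion migration
        # in app 1 depends on this one, so it runs afterwards.
        for f in orm.Foo.objects.all():
            orm['something.Added'].objects.create(foo=f, bar=f.bar)

    def backwards(self, orm):
        orm['something.Added'].objects.all().delete()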
First of all, the orm provided by South is the one that you are migrating to. In other words, it matches the schema after the migration is complete. So you can just write orm.Added instead of self.prev_orm.Added. The other implication of this fact is that you cannot reference foo.bar since it is not present in the final schema.
The way to get around that (and to answer your question) is to skip the ORM and just execute raw SQL directly.
In your case, the create statement that reads the column being deleted would look something like:
from django.db import connection

cursor = connection.cursor()
cursor.execute('SELECT "id", "bar_id" FROM "something_foo"')
for foo_id, bar_id in cursor.fetchall():
    orm.Added.objects.create(foo_id=foo_id, bar_id=bar_id)
South migrations use transaction management.
When several migrations are run at once, the code is similar to:
for migration in migrations:
    south.db.db.start_transaction()
    try:
        migration.forwards(migration.orm)
        south.db.db.commit_transaction()
    except:
        south.db.db.rollback_transaction()
        raise
So... while it is not recommended to mix schema and data migrations, once you commit the schema with db.commit_transaction() the tables should be available for you to use. Be mindful to provide a backwards() method that performs the correct steps in reverse.
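A single combined migration could then look roughly like this (a sketch only, reusing the hypothetical models from the question; the important part is the explicit commit between the schema and data steps):
from django.db import connection
from south.db import db
from south.v2 import SchemaMigration

class Migration(SchemaMigration):

    def forwards(self, orm):
        # Schema: create the Added table.
        db.create_table(u'something_added', (
            (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
            ('foo', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['something.Foo'])),
            ('bar', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['something.Bar'])),
        ))
        db.send_create_signal(u'something', ['Added'])

        # Commit so the new table is usable later in this same migration.
        db.commit_transaction()
        db.start_transaction()

        # Data: raw SQL, because Foo.bar is absent from the final orm.
        cursor = connection.cursor()
        cursor.execute('SELECT "id", "bar_id" FROM "something_foo"')
        for foo_id, bar_id in cursor.fetchall():
            orm.Added.objects.create(foo_id=foo_id, bar_id=bar_id)

        # Schema again: drop the old column.
        db.delete_column(u'something_foo', 'bar_id')

    def backwards(self, orm):
        # Mirror the steps in reverse order.
        db.add_column(u'something_foo', 'bar_id',
                      self.gf('django.db.models.fields.related.ForeignKey')(
                          to=orm['something.Bar'], null=True))
        db.execute('UPDATE "something_foo" SET "bar_id" = '
                   '(SELECT "bar_id" FROM "something_added" '
                   'WHERE "something_added"."foo_id" = "something_foo"."id")')
        db.delete_table(u'something_added')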
I understand that Django wants to generate forms automatically so you don't have to do so in your template, and I do understand that many people find it cool.
But I have specific requirements and I have to write my forms on my own. I just need something to parse the data, whether it comes from a form submitted through a user interface, from an API request, or from anywhere else.
I tried to use ModelForm, but it doesn't seem to work as I want it to work.
I'd like to have something with the following behavior:
possibility to specify the model of the object I am going to create/update
possibility to specify an object in case of an update
possibility to provide new data in a dictionary
if I am creating a new object, missing fields in my data should be replaced by their default values as specified in my model definition
if I am updating an existing object, missing fields in my data should be replaced by the current values of the object I am updating. Put another way: do not update values that are missing from my data dictionary.
data validation should be performed before calling save(), and it should throw a ValidationError with the list of erroneous fields and errors.
Currently, I prefer to do everything manually:
o = myapp.models.MyModel()  # or o = myapp.models.MyModel.objects.get(pk=data['pk'])
o.field1 = data['field1']
o.field2 = data['field2']
…
o.full_clean()
o.save()
It would be nice to have a shortcut:
o = SuperCoolForm(myapp.models.MyModel, data)
o.save()
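For illustration, the desired behavior could be sketched in a few lines. apply_data below is a made-up helper, not a Django API; it only handles plain model fields, not many-to-many relations:
def apply_data(model, data, instance=None):
    # Update the given instance, or start from a blank one so that
    # model defaults apply; only keys present in `data` are touched.
    obj = instance if instance is not None else model()
    for field in model._meta.fields:
        if field.name in data:
            setattr(obj, field.name, data[field.name])
    obj.full_clean()  # raises ValidationError listing the erroneous fields
    return obj

# o = apply_data(myapp.models.MyModel, data)                # create
# o = apply_data(myapp.models.MyModel, data, instance=obj)  # update
# o.save()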
Do you know if Django does provide a solution for this or am I asking too much?
Thank you!
I've been using the built-in Django comments system which has been working great. On a particular page I need to list the latest X comments which I've just been fetching with:
latest_comments = Comment.objects.filter(is_public=True,
    is_removed=False).order_by('submit_date').reverse()[:5]
However, I've now introduced a Boolean field 'published' on the parent object of the comments, and I want to include it in the query above. I've tried using the content_type and object_pk fields but I'm not really getting anywhere. Normally you'd do something like:
Comment.objects.filter(blogPost__published=True)
But as it is not stored like that I am not sure how to proceed.
posts_ids = BlogPost.objects.filter(published=True).values_list('id', flat=True)  # returns [3, 4, 5, ...]
ctype = ContentType.objects.get_for_model(BlogPost)
# filter on object_pk: the content_object GenericForeignKey itself cannot be used in filters
latest_comments = Comment.objects.filter(is_public=True, is_removed=False,
    content_type=ctype, object_pk__in=posts_ids).order_by('-submit_date')[:5]
Comments use a GenericForeignKey to store the relation to the parent object. Because of the way generic relations work, related lookups using the __<field> syntax are not supported.
You can accomplish the desired behaviour using the 'in' lookup; however, it will require a lot of comparisons when there are a lot of BlogPosts.
# Get the list of ids; you would probably want to limit the number of items returned here
ids = BlogPost.objects.filter(published=True).values_list('id', flat=True)
# Because we only want comments attached to BlogPost
content_type = ContentType.objects.get_for_model(BlogPost)
latest_comments = Comment.objects.filter(content_type=content_type,
    object_pk__in=ids, is_public=True,
    is_removed=False).order_by('submit_date').reverse()[:5]
See the Comment model doc for the description of all fields.
You just cannot do that in one query. Comments use GenericForeignKey. Documentation says:
Due to the way GenericForeignKey is implemented, you cannot use such
fields directly with filters (filter() and exclude(), for example) via
the database API.
I am frustrated that in Django I often end up having to write methods on a custom Manager:
class EntryManager(Manager):
    def filter_beatle(self, beatle):
        return self.filter(headline__contains=beatle)
... and repeat pretty much the same method in a different Manager for a reverse query:
class BlogManager(Manager):
    def filter_beatle(self, beatle):
        return self.filter(entry__headline__contains=beatle)
... and a predicate on Entry:
def headline_contains(self, beatle):
    return self.headline.find(beatle) != -1
[Note that the predicate on Entry will work on Entry objects that haven't even been saved yet.]
This feels like a violation of DRY. Is there some way to express this once and use it in all three places?
What I would like to be able to do is write something like:
q = Q(headline__contains="Lennon")
lennon_entries = Entry.objects.filter(q)
lennon_blogs = Blog.objects.filter(q.reverse(Entry))
is_lennon = entry.would_filter(q)
... where 'headline__contains="Lennon"' expresses exactly once what it means to be 'an Entry about "Lennon"', and this can be used to construct reverse queries and a predicate.
The best place for this is a custom manager. According to Django's guidelines, a manager class is the best place for code that affects more than one object of a class.
class EntryManager(models.Manager):
    def filter_lennons(self):
        return self.get_query_set().filter(headline__contains='Lennon')

class Entry(models.Model):
    headline = models.CharField(max_length=100)

    objects = EntryManager()

lennons = Entry.objects.filter_lennons()
You should never (well, rarely) have to do the following:
if entry.headline.find('Lennon') >= 0:
because the filter should take care of restricting the result set to the instances you're interested in.
If you're going to be using the same filter multiple times, you can create a custom manager or a simple class method.
class Entry(models.Model):
    ...

    # this really should be on a custom manager, but this was quicker to demonstrate
    @classmethod
    def find_headlines(cls, text):
        return cls.objects.filter(headline__contains=text)

entries = Entry.find_headlines('Lennon')
But really, the DRYness is already contained within the QuerySet API. How often are you really going to hard-code the string 'Lennon' into a query? Usually, the search parameter will be passed into a view from a GET or POST. Perfectly DRY.
So, what is the actual problem? Other than exploring the QuerySet API, have you ever had to hard-code lookup values in multiple queries like in your question?
For the "reverse filter" case you can use a subquery:
Blog.objects.filter(entry__in=Entry.objects.filter_beatle("Lennon"))
Reusing or generating predicates is not possible (in general) as there are predicates that cannot be expressed as queries and queries that cannot be expressed as predicates without db access.
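For the simple flat case, something like the asker's q.reverse() can be approximated by rewriting the Q object's lookup keys with the reverse relation name as a prefix. A hedged sketch (reverse_q is a made-up helper and only handles flat, non-negated Q objects):
from django.db.models import Q

def reverse_q(q, prefix):
    # For a flat Q, q.children is a list of (lookup, value) pairs;
    # prefix each lookup with the reverse relation's name.
    return Q(*[('%s__%s' % (prefix, lookup), value)
               for lookup, value in q.children])

q = Q(headline__contains='Lennon')
lennon_entries = Entry.objects.filter(q)
lennon_blogs = Blog.objects.filter(reverse_q(q, 'entry')).distinct()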
My most common use for the predicate seems to be in asserts. Often something like:
class Thing(Model):
    class QuerySet(query.QuerySet):
        def need_to_be_whacked(self):
            # ... code ...

    # (assumes a manager that exposes the custom QuerySet's methods,
    # e.g. django-model-utils' PassThroughManager)

    def needs_to_be_whacked(self):
        return Thing.objects.need_to_be_whacked().filter(id=self.id).exists()

    def whack(self):
        assert self.needs_to_be_whacked()

for thing in Thing.objects.need_to_be_whacked():
    thing.whack()
I want to make sure that no other code is calling whack() in a state where it doesn't need to be whacked. It costs a database hit, but it works.
I want to make sure I am testing Models/Objects in isolation and not as one huge system.
If I have an Order object and it has Foreign Keys to Customers, Payments, OrderItems, etc. and I want to test Order functionality, I need to create fixtures for all of that related data, or create it in code. I think what I really need to be doing is mocking out the other items, but I don't see an easy (or possible) solution for that if I am doing queries on these Foreign Keys.
The common solutions (fixtures) don't really let me test one object at a time. I am sure this is partly caused by my app being far too tightly coupled.
I am trying my darndest to adopt TDD as my main method of working, but the way things work with Django, it seems you can either run very trivial unit tests, or these massive integration tests.
[Edit] A better explicit example and some more humility
What I mean is that I seem to be able to run only trivial unit tests. I have seen people with very well-tested and granular modules. I am certain some of this can be traced back to poor design.
Example:
I have a model called Upsell which is linked to a Product model. Then I have a Choices model whose rows are children of an Upsell (do you want what's behind door #1, #2, or #3).
The Upsell model has several methods that derive the items necessary to render the template from its choices. The most important one builds a URL for each choice, which it does through some string mangling, etc. If I want to test the Upsell.get_urls() method, I don't want it to depend on the values of the Choices in the fixtures, nor on the value of the Product in the fixtures.
Right now I populate the db in the setUp method of the tests, which works well with the way Django rolls the transaction back after every test (but only outside of setUp and tearDown). This works fairly well, except that some of the Models are fairly complex to set up while I actually only need a single attribute from them.
I can't give you an example of that, since I can't accomplish it, but here is the type of thing I am doing now. Basically I input an entire order, create the A/B experiment it was attached to, and so on. And that's not counting the Product, Categories, etc. all set up by fixtures. It's not the extra work I am concerned about; it's that I can't even test one database-backed object at a time. The tests below are important, but they are integration tests. I would like to build up to something like them by testing each item separately. As you pointed out, maybe I shouldn't have chosen a framework so closely tied to the db. Does any sort of dependency injection exist for something like this? (Beyond my testing, but for the code itself as well.)
class TestMultiSinglePaySwap(TestCase):
    fixtures = ['/srv/asm/fixtures/alchemysites.json',
                '/srv/asm/fixtures/catalog.json',
                '/srv/asm/fixtures/checkout_smallset.json',
                '/srv/asm/fixtures/order-test-fixture.json',
                '/srv/asm/fixtures/offers.json']

    def setUp(self):
        self.o = Order()
        self.sp = SiteProfile.objects.get(pk=1)
        self.c = Customer.objects.get(pk=1)
        signals.post_save.disconnect(order_email_first, sender=Order)
        self.o.customer = self.c
        p = Payment()
        p.cc_number = '4444000011110000'
        p.cc_exp_month = '12'
        p.cc_type = 'V'
        p.cc_exp_year = '2020'
        p.cvv2 = '123'
        p.save()
        self.o.payment = p
        self.o.site_profile = self.sp
        self.o.save()
        self.initial_items = []
        self.main_kit = Product.objects.get(pk='MOA1000D6')
        self.initial_items.append(self.main_kit)
        self.o.add_item('MOA1000D6', 1, False)
        self.item1 = Product.objects.get(pk='MOA1041A-6')
        self.initial_items.append(self.item1)
        self.o.add_item('MOA1041A-6', 1, False)
        self.item2 = Product.objects.get(pk='MOA1015-6B')
        self.initial_items.append(self.item2)
        self.o.add_item('MOA1015-6B', 1, False)
        self.item3 = Product.objects.get(pk='STP1001-6E')
        self.initial_items.append(self.item3)
        self.o.add_item('STP1001-6E', 1, False)
        self.swap_item1 = Product.objects.get(pk='MOA1041A-1')

    def test_single_pay_swap_wholeorder(self):
        o = self.o
        swap_all_skus(o)
        post_swap_order = Order.objects.get(pk=o.id)
        swapped_skus = ['MOA1000D', 'MOA1041A-1', 'MOA1015-1B', 'STP1001-1E']
        order_items = post_swap_order.get_all_line_items()
        self.assertEqual(order_items.count(), 4)
        pr1 = Product()
        pr1.sku = 'MOA1000D'
        item = OrderItem.objects.get(order=o, sku='MOA1000D')
        self.assertTrue(item.sku.sku == 'MOA1000D')
        pr2 = Product()
        pr2.sku = 'MOA1015-1B'
        item = OrderItem.objects.get(order=o, sku='MOA1015-1B')
        self.assertTrue(item.sku.sku == 'MOA1015-1B')
        pr1 = Product()
        pr1.sku = 'MOA1041A-1'
        item = OrderItem.objects.get(order=o, sku='MOA1041A-1')
        self.assertTrue(item.sku.sku == 'MOA1041A-1')
        pr1 = Product()
        pr1.sku = 'STP1001-1E'
        item = OrderItem.objects.get(order=o, sku='STP1001-1E')
        self.assertTrue(item.sku.sku == 'STP1001-1E')
Note that I have never actually used a Mock framework, though I have tried, so I may just be fundamentally missing something here.
Look into model mommy. It can automagically create objects with Foreign Keys.
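A minimal sketch of what that buys you (assumes model_mommy is installed; the 'checkout.Order' label is a placeholder for wherever the Order model lives):
from model_mommy import mommy

# Creates and saves an Order plus whatever related rows its FK fields
# require (Customer, Payment, SiteProfile, ...), filled with dummy values.
order = mommy.make('checkout.Order')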
This will probably not answer your question but it may give you food for thought.
In my opinion, when you are testing a database-backed project or application, there is a limit to what you can mock. This is especially so when you are using a framework and an ORM such as the one Django offers. In Django there is no distinction between the business model class and the persistence model class. If you want such a distinction then you'll have to add it yourself.
Unless you are willing to add that additional layer of complexity yourself, it becomes tricky to test the business objects alone without having to add fixtures etc. If you must do so, you will have to tackle some of the automagic voodoo done by Django.
If you do choose to grit your teeth and dig in then Michael Foord's Python Mock library will come in quite handy.
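As a taste of what that looks like, here is a hedged sketch of testing the Upsell.get_urls() method from the question without fixtures, by patching the reverse relation (it assumes the Choices foreign key uses related_name='choices'; the attribute names are guesses):
import unittest
import mock  # Michael Foord's mock library; unittest.mock on Python 3

# assumes: from myapp.models import Upsell

class UpsellUrlsTest(unittest.TestCase):  # plain TestCase: no db setup
    def test_get_urls_without_fixtures(self):
        upsell = Upsell()  # never saved, so no Product row is needed
        fake_choices = [mock.Mock(pk=1), mock.Mock(pk=2), mock.Mock(pk=3)]
        with mock.patch.object(Upsell, 'choices') as choices:
            choices.all.return_value = fake_choices
            urls = upsell.get_urls()
        # the question says get_urls() builds a URL for each choice
        self.assertEqual(len(urls), 3)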
I am trying my darndest to adopt TDD as my main method of working, but the way things work with Django, it seems you can either run very trivial unit tests, or these massive integration tests.
I have used Django's unit testing mechanism to write non-trivial unit tests. My requirements were doubtless very different from yours. If you can provide more specific details about what you are trying to accomplish, users here will be able to suggest other alternatives.