Django - Unit Testing an AdminForm - python-2.7

I am very new to unit testing and am probably doing something wrong, but when I simulate a post to update a model via the admin backend it seems like my save_model method in my AdminForm isn't being called. I am trying to test this method - what am I doing wrong?
My second, less relevant question: in general, how can I make sure a method is being called when I'm unit testing? Is there some way to list all the methods that were hit?
In the save_model method of my AdminForm for this model, I set the model's foobar attribute to the username of the currently signed-in user. Below is the code my test runs:
self.client = Client()
self.client.login(username='username',password='password')
# self.dict is a dictionary of field names and values for mymodel to be updated
response = self.client.post('/admin/myapp/mymodel/%d/' % self.mymodel.id, self.dict)
self.assertEqual(response.status_code,200) # passes
self.assertEqual(self.mymodel.foobar,'username') # fails
self.client.logout()
It fails because it says that self.mymodel.foobar is an empty string, which is what it was before the update. No value for foobar is passed in self.dict, but my save_model method is designed to set it on its own when the update happens. It is also worth noting that the application itself works correctly and save_model seems to behave fine in practice; only my test is failing. Since I am a total noob at TDD, I'm sure the issue is with my test and not my code. Thoughts?

From the code it looks like the problem is that, after posting the form, you don't reload self.mymodel from the database. If you hold a reference to a model object stored in the database, and one or more of the fields on that object is changed in the database, then you will need to reload the object from the database to see the updated values. As detailed in this question, you can do this with something like:
self.mymodel = MyModelClass.objects.get(id=self.mymodel.id)
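On newer Django versions (1.8 and later) the same reload can be written with refresh_from_db(); a minimal sketch of the assertion sequence, assuming the same model instance as above:
self.mymodel.refresh_from_db()  # re-read the row so the in-memory fields match the database
self.assertEqual(self.mymodel.foobar, 'username')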
To answer your second question, probably the most useful way to see what is happening would be to use logging to output what is happening in your save_model method - this will not only help you debug the issue during testing, but also if you encounter any issues in this method when running your application. The django guide to logging gives an excellent introduction:
https://docs.djangoproject.com/en/dev/topics/logging/
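As a rough sketch (the admin class name and logger setup are assumptions, not taken from the original post), logging inside save_model might look like this:
import logging

from django.contrib import admin

logger = logging.getLogger(__name__)

class MyModelAdmin(admin.ModelAdmin):
    def save_model(self, request, obj, form, change):
        # Log who triggered the save, so both tests and production logs show
        # whether this method actually ran and what it set foobar to.
        logger.debug("save_model called by %s (change=%s)", request.user.username, change)
        obj.foobar = request.user.username
        super(MyModelAdmin, self).save_model(request, obj, form, change)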

Related

Django helper function executed only in single test when PostgreSQL used

Recently I changed the database engine from SQLite to PostgreSQL. I successfully migrated the whole schema to PostgreSQL (just a simple makemigrations, migrate). Once I ran the tests, some failed for no obvious reason (the errors suggest that certain objects were not created). The failures don't apply to all tests, just a select few. Everything had been working before.
I'd start investigating on a test-by-test basis, but some bizarre behavior has appeared. Let's say my test is in the class MyTestClass, the test is called test_do_something, and MyTestClass contains other tests as well.
When I'm running python manage.py test MyTestClass I'm getting info that test_do_something has failed.
When I'm running python manage.py test MyTestClass.test_do_something everything passes.
On SQLite both ways pass.
I'm assuming that the setUpTestData() and setUp() methods work the same way on SQLite and PostgreSQL. Or don't they?
Any clue why such a discrepancy might be happening?
EDIT
I think I've noticed what might be wrong, but I don't understand why. The problem seems to be that the helper function I call to create an object is only executed once across the tests, which differs from how it behaved on SQLite.
What I mean is that in my tests I have something like this:
def create_object(self):
    self.client.post(reverse('myurl', kwargs={'myargs': arg}))

def test_mytest1(self):
    # Do something
    self.create_object()
    # Do something

def test_mytest2(self):
    # Do something
    self.create_object()
    # Do something

def test_mytest3(self):
    # Do something
    self.create_object()
    # Do something
create_object() appears to be executed for only one of the tests.
I believe I've found the cause of those failures. It wasn't actually an issue with one-time execution of the support function, as I had suspected. The problem was the hardcoded ids I'd used for various reasons: the object I was expecting to find simply didn't exist.
Let me explain a bit more about what I experienced. For example, I had a test that referred to a particular object by passing its id in the URL kwargs. Earlier in the test I created the object and passed id=1 as the kwarg, because I assumed that if this was the only object created within this test and setUp(), its id would be 1. With PostgreSQL that isn't the case: ids keep incrementing despite the database flush, which is completely different behavior from what SQLite was providing.
I'd very much appreciate it if someone could give a more detailed answer as to why this is happening. Is the ID counter not reset in PostgreSQL on flush? It would appear so.
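A sketch of the safer pattern, with a hypothetical model and URL name (neither appears in the original post): capture the pk of the object you actually created instead of assuming the sequence starts at 1.
def test_refers_to_created_object(self):
    # Use the real primary key; on PostgreSQL the id sequence is not reset
    # between tests, so hardcoding id=1 is unsafe.
    obj = MyModel.objects.create(name='example')
    response = self.client.get(reverse('myurl', kwargs={'pk': obj.pk}))
    self.assertEqual(response.status_code, 200)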

How do I make Django unit tests check M2M DB constraints?

Say I have this model definition:
class Foo(Model):
    ...

class Bar(Model):
    some_m2m_field = ManyToManyField(Foo)
and this code:
bar = Bar.objects.create()
bar.some_m2m_field.set(an_id_array_with_some_invalid_pks)
When I run that normally, the last line throws an IntegrityError, as it should. However, if I run the same code from a django.test.TestCase, the last line will NOT throw an error; instead, the IntegrityError is not raised until the _post_teardown() phase of the test.
Here's a small project that demonstrates the issue: https://github.com/t-evans/m2mtest
How do I fix that? I suppose that's configurable, but I haven't been able to find it...
Follow-up question:
Ultimately, I need to handle the case when there are bad IDs being passed to the m2m_field.set() method (and I need unit tests that verify that bad IDs are being handled correctly, which is why the delayed IntegrityError in the unit test won't work).
I know I can find the bad IDs by looping over the array and hitting the DB once for each ID. Is there a more efficient way to find the bad IDs, or (better) simply a way to tell the set() method to ignore/drop the bad IDs?
TestCase wraps tests in additional atomic() blocks, compared to TransactionTestCase, so to test specific database transaction behaviour, you should use TransactionTestCase.
I believe an IntegrityError is thrown only when the transaction is committed, as that's the moment the db would find out about missing ids.
In general if you want to test for db exceptions raised during a test, you should use a TransactionTestCase and test your code using:
with self.assertRaises(IntegrityError):
    # do something that gets committed to db
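A more complete sketch of that pattern, assuming the Foo/Bar models from the question and that at least one of the ids does not exist (on PostgreSQL the deferred constraint is checked when the transaction commits, i.e. when the atomic block exits):
from django.db import IntegrityError, transaction
from django.test import TransactionTestCase

class BadM2MIdsTest(TransactionTestCase):
    def test_set_with_invalid_ids_raises(self):
        bar = Bar.objects.create()
        ids = [foo.pk for foo in Foo.objects.all()] + [999999]  # 999999 assumed not to exist
        with self.assertRaises(IntegrityError):
            with transaction.atomic():
                # The constraint failure surfaces here, inside the test,
                # rather than in _post_teardown().
                bar.some_m2m_field.set(ids)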
See the answer from @dirkgroten for how to fix the unit test issue.
As for the follow-up question on how to more efficiently eliminate the bad IDs, one way is as follows:
good_ids = Foo.objects.filter(id__in=an_id_array_with_some_invalid_ids).values_list('id', flat=True)
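The resulting good_ids can then be passed straight to set(), silently dropping the invalid ones; if you also need to know which ids were rejected, a set difference works (a sketch using the variables above):
bad_ids = set(an_id_array_with_some_invalid_ids) - set(good_ids)

# Only assign the ids that actually exist, avoiding the IntegrityError.
bar.some_m2m_field.set(good_ids)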

Django TestCase with fixtures causes IntegrityError due to duplicate keys

I'm having trouble moving away from django_nose.FastFixtureTestCase to django.test.TestCase (or even the more conservative django.test.TransactionTestCase). I'm using Django 1.7.11 and I'm testing against Postgres 9.2.
I have a TestCase class that loads three fixture files. The class contains two tests. If I run each test individually as a single run (manage test test_file:TestClass.test_name), each passes. If I run them together (manage test test_file:TestClass), I get
IntegrityError: Problem installing fixture '<path>/data.json': Could not load <app>.<Model>(pk=1): duplicate key value violates unique constraint "<app_model_field>_49810fc21046d2e2_uniq"
To me it looks like the db isn't actually getting flushed or rolled back between tests since it only happens when I run the tests in a single run.
I've stepped through the Django code and it looks like they are getting flushed or rolled back -- depending on whether I'm trying TestCase or TransactionTestCase.
(I'm moving away from FastFixtureTestCase because of https://github.com/django-nose/django-nose/issues/220)
What else should I be looking at? This seems like it should be a simple matter and is exactly what django.test.TestCase and django.test.TransactionTestCase are designed for.
Edit:
The test class more or less looks like this:
class MyTest(django.test.TransactionTestCase): # or django.test.TestCase
    fixtures = ['data1.json', 'data2.json', 'data3.json']

    def test1(self):
        return # I simplified it to just this for now.

    def test2(self):
        return # I simplified it to just this for now.
Update:
I've managed to reproduce this a couple of times with a single test, so I suspect something in the fixture loading code.
One of my basic assumptions was that my db was clean for every TestCase. Tracing into the django core code I found instances where an object (in one case django.contrib.auth.User) already existed.
I temporarily overrode _fixture_setup() to assert the db was clean prior to loading fixtures. The assertion failed.
I was able to narrow the problem down to code that was in a TestCase.setUpClass() instead of TestCase.setUp(), and so the object was leaking out of the test and conflicting with other TestCase fixtures.
What I don't completely understand is that I thought the db was dropped and recreated between TestCases -- but perhaps that is not correct.
Update: Recent versions of Django include setUpTestData(), which should be used instead of setUpClass() for creating test data.
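A minimal sketch of the setUpTestData() pattern (the user and assertion are illustrative, not from the original post): data created here lives inside the class-wide transaction that Django rolls back after the TestCase finishes, so it cannot leak into other test classes the way objects created in setUpClass() can.
from django.contrib.auth.models import User
from django.test import TestCase

class MyFixtureTest(TestCase):
    fixtures = ['data1.json', 'data2.json', 'data3.json']

    @classmethod
    def setUpTestData(cls):
        # Created once per class, inside a transaction that is rolled back
        # after every test in the class has run.
        cls.user = User.objects.create_user('tester', password='secret')

    def test_user_exists(self):
        self.assertTrue(User.objects.filter(username='tester').exists())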

How to do multiple save object calls in a Django view, but commit only once

I have a Django view in which I call save() (conditionally) on a single object in multiple spots. my_model is a normal model class.
Each save() is committed immediately in Django, so in the worst case the database gets hit several times. To prevent this, I defined a boolean variable save_model and set it to True whenever the object is modified. At the end of my view, I check this boolean and call save() on my object if needed.
Is there a simpler way of doing this? I tried Django's transaction.commit_on_success as a view decorator, but the save calls appear to get queued and committed anyway.
You could look into django-dirtyfields.
Simply add DirtyFieldsMixin as a mixin to your model. You will then be able to check whether an object has changed (using obj.is_dirty()) before doing a save().
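A rough sketch of that approach (the model and field names are assumptions, and the exact API may differ between django-dirtyfields versions):
from dirtyfields import DirtyFieldsMixin
from django.db import models

class MyModel(DirtyFieldsMixin, models.Model):
    attr1 = models.BooleanField(default=False)
    attr2 = models.BooleanField(default=False)

# In the view: mutate the object freely, then save once, only if needed.
obj = MyModel.objects.get(pk=some_pk)  # some_pk is a placeholder
obj.attr1 = True
if obj.is_dirty():
    obj.save()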
You can use transaction support everywhere in your code; the Django docs say it explicitly:
Although the examples below use view functions as examples, these decorators and context managers can be used anywhere in your code that you need to deal with transactions
But this isn't what transactions are for. You can get rid of your boolean variable by using an existing app for this, like django-dirtyfields.
But it smells like a bad design. Why do you need to call save multiple times? Are you sure there is no way to call it only once?
There are two possible approaches here, but they are similar. The first is calling save() once before returning the response:
def my_view(request):
    obj = Mymodel.objects.get(...)
    if cond1:
        obj.attr1 = True
    elif cond2:
        obj.attr2 = True
    else:
        obj.attr1 = False
        obj.attr2 = False
    obj.save()
    return .......
The second is your approach.
There is no other way to do this, short of defining your own decorator or taking some other approach; in the end, you need to check whether there has been any modification to your model (or whether you want to save the changes to your data).

Django - Prevent automatic related table fetch

How can I prevent Django, for testing purposes, from automatically fetching related tables not specified in the select_related() call during the initial query?
I have a large application where I make significant use of select_related() to bring in related model data during each original query. All select_related() calls specify the related models explicitly, rather than relying on the default, e.g. select_related('foo', 'bar', 'foo__bar'). As the application has grown, the select_related() calls haven't completely kept up, leaving a number of scenarios where Django happily and kindly goes running off to the database to fetch related model rows. This significantly increases the number of database hits, which I obviously don't want.
I've had some success in tracking these down by checking the queries generated using the django.db.connection.queries collection, but some remain unsolved. I've tried to find a suitable patch location in the Django code to raise an exception in this scenario, which would make the tracking much easier, but I tend to get lost in the code.
Thanks.
After some more digging, I've found the place in the code to do this.
The file in question is django/db/models/fields/related.py
You need to insert two lines into this file.
Locate class "SingleRelatedObjectDescriptor". You need to change the function __get__() as follows:
def __get__(self, instance, instance_type=None):
    if instance is None:
        return self
    try:
        return getattr(instance, self.cache_name)
    except AttributeError:
        raise Exception("Automated Database Fetch on %s.%s" % (instance._meta.object_name, self.related.get_accessor_name()))
        # leave the old code here for when you revert!
Similarly, in class "ReverseSingleRelatedObjectDescriptor" further down the code, you again need to change __get__() to:
def __get__(self, instance, instance_type=None):
    if instance is None:
        return self
    cache_name = self.field.get_cache_name()
    try:
        return getattr(instance, cache_name)
    except AttributeError:
        raise Exception("Automated Database Fetch on %s.%s" % (instance._meta.object_name, self.field.name))
        # BEWARE: % parameters are different to previous class
        # leave old code here for when you revert
Once you've done this, you'll find that Django raises an exception every time it performs an automatic database lookup. This is pretty annoying when you first start, but it will help you track down those pesky database lookups. Obviously, when you've found them all, it's probably best to revert the database code back to normal. I would only suggest using this during a debugging/performance investigation phase and not in the live production code!
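If patching the Django source feels too invasive, a lighter-weight option on the testing side is assertNumQueries(), which fails a test as soon as the query count drifts; a sketch with a hypothetical URL and baseline count:
from django.test import TestCase

class QueryCountTest(TestCase):
    def test_list_view_query_count(self):
        # 3 is a hypothetical baseline; a missing select_related() that causes
        # extra per-row lookups will raise the count and fail this assertion.
        with self.assertNumQueries(3):
            self.client.get('/myapp/mymodel/')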
So, you're asking how to stop a method from doing what it's specifically designed to do? I don't understand why you would want to do that.
However, one thing to know about select_related is that it doesn't automatically follow relationships which are defined as null=True. So if you can set your FKs to that for now, the relationship won't be followed.
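A minimal illustration of that point (the model names are assumptions): a no-argument select_related() call skips nullable foreign keys, although naming the relation explicitly still follows it.
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    # Nullable FK: not followed by a bare select_related() call.
    author = models.ForeignKey(Author, null=True, blank=True, on_delete=models.SET_NULL)

# Book.objects.select_related() will not join author here,
# but Book.objects.select_related('author') still will.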