Duplicate save in Django transaction causing InternalError instead of IntegrityError - django

I am using PostgreSQL as my backend with psycopg2, and I'm noticing something that seems strange to me. My setup is as follows:
class ParentModel(models.Model):
    field1 = models.IntegerField()

class ChildModel(ParentModel):
    field2 = models.IntegerField(unique=True)
When I try to save a ChildModel object with a duplicate field2 from the shell, I get an InternalError instead of an IntegrityError, but only if the save is done in a transaction.commit_on_success block like so:
with transaction.commit_on_success():
    newObj = ChildModel(field1=5, field2=10)  # Assume a ChildModel with field2 == 10 already exists
    newObj.save()
Outside of the transaction block, I get an IntegrityError, as I would expect.
When running inside view code, I always get the InternalError following the failed insert, even when using savepoints:
try:
    sid = transaction.savepoint()
    newObj = ChildModel(field1=5, field2=10)  # Assume a ChildModel with field2 == 10 already exists
    newObj.save()
    transaction.savepoint_commit(sid)
except IntegrityError:
    transaction.savepoint_rollback(sid)
except InternalError:
    transaction.savepoint_rollback(sid)
... other stuff ...  # raises InternalError on the next database hit
The same thing happens if I do it within a commit_on_success with block instead of using savepoints. I have TransactionMiddleware installed, and I am not running PostgreSQL in autocommit mode.
I can avoid the situation by simply checking that the duplicate object doesn't exist before saving, but I want to understand what's wrong with my understanding of Django transactions. What gives?
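For reference, a minimal sketch of that existence-check workaround (using the models above; the literal values are placeholders):

# Check for a duplicate first, to avoid triggering the constraint at all
if not ChildModel.objects.filter(field2=10).exists():
    ChildModel(field1=5, field2=10).save()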

I had the exact same problem. I looked at the Postgres log files and could see that another query was being run after the query that caused the integrity error, which leads to the actual error being eclipsed under the default transaction settings. I found out that the extra query was coming from the django-debug-toolbar app.
Google pointed me to https://github.com/django-debug-toolbar/django-debug-toolbar/issues/351 which seems to solve the problem.
In short, if you are using django-debug-toolbar<=0.9.4, disabling it solves the problem.
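A minimal sketch of disabling it in settings.py (the entry names assume the standard debug-toolbar setup):

# settings.py: comment out or remove the toolbar entries
INSTALLED_APPS = (
    # ...
    # 'debug_toolbar',  # disabled to stop the extra query from masking the real error
)
MIDDLEWARE_CLASSES = (
    # ...
    # 'debug_toolbar.middleware.DebugToolbarMiddleware',
)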

Related

In Django, how to do a filter followed by a get query and retrieve a field value?

Following is a simplified models.py from one of the apps in my Django project:
# This is the custom queryset manager
class TenantManager(models.Manager):
    def for_tenant(self, tenant):
        return self.get_queryset().filter(tenant=tenant)

# This is one of the models:
class accountChart(models.Model):
    name = models.CharField(max_length=200)
    remarks = models.TextField(blank=True)
    key = models.CharField(max_length=20)
    tenant = models.ForeignKey(Tenant, related_name='accountchart_account_user_tenant')
    objects = TenantManager()

# This is another data model, related with a FK to the 1st model shown here
class paymentMode(models.Model):
    name = models.TextField('Payment Mode Name')
    payment_account = models.ForeignKey(accountChart, related_name='paymentMode_accountChart')
    default = models.CharField('Default Payment Mode ?', max_length=3, choices=choice, default="No")
    tenant = models.ForeignKey(Tenant, related_name='paymentmode_account_user_tenant')
    objects = TenantManager()
Now, I'm running the following queryset based on user input. However, Django is throwing errors. I'd appreciate your help, as this has been killing me for more than 2 days.
# Queryset:
payment_mode = paymentMode.objects.for_tenant(request.user.tenant).get(name__exact=request.POST.get('payment_mode'))
payment_account = payment_mode.account
However, Django throws an error on the second line. Even if I use filter instead of get, it complains that the QuerySet doesn't have filter! From what I understand, Django first gives me all the payment modes related to this tenant, then gets the one matching the request.POST.get value, and then the second line tries to follow the related foreign key. Can anyone kindly tell me where I'm going wrong?
Well, sorry to disturb you all; I just found my answer, as there was a typo in the queryset. And as it should be, once I use get the lookup works fine. Now, I'm not sure if there's a better way to do it except for caching the session data!
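Presumably the corrected second line just uses the actual field name from the model (payment_account, not account):

payment_mode = paymentMode.objects.for_tenant(request.user.tenant).get(name__exact=request.POST.get('payment_mode'))
payment_account = payment_mode.payment_account  # the FK field is named payment_account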
This is an answer I wrote for future reference of others.

Model instance fixtures not persisted on the database

I have a test class with two methods, and want to share a saved model instance between both methods.
My fixtures:
@pytest.fixture(scope='class')
def model_factory():
    class ModelFactory(object):
        def get(self):
            x = Model(email='test@example.org',
                      name='test')
            x.save()
            return x
    return ModelFactory()

@pytest.fixture(scope='class')
def model(model_factory):
    m = model_factory.get()
    return m
My expectation is to receive the same model fixture in both test methods, persisted in the database:
@pytest.mark.django_db
class TestModel(object):
    def test1(self, model):
        assert model.pk is not None
        Model.objects.get(pk=model.pk)  # Works, instance is in the db

    def test2(self, model):
        assert model.pk is not None  # model.pk is the same as in test1
        Model.objects.get(pk=model.pk)  # Fails:
        # *** DoesNotExist: Model matching query does not exist
I've verified using --pdb that at the end of test1, running Model.objects.all() returns the single instance I created. Meanwhile, psql shows no record:
test_db=# select * from model_table;
id | ··· fields
(0 rows)
Running the Model.objects.all() in pdb at the end of test2 returns an empty list, which is presumably right considering that the table is empty.
Why isn't my model being persisted, while the query still returns an instance anyway?
Why isn't the instance returned by the query in the second test, if my model fixture is marked scope='class' and saved? (This was my original question until I found out saving the model didn't do anything on the database)
Using Django 1.6.1, pytest-django 2.9.1, pytest 2.8.5.
Thanks
Tests must be independent of each other. To ensure this, Django - like most frameworks - clears the db after each test. See the documentation.
By looking at the postgres log I've found that pytest-django by default does a ROLLBACK after each test to keep things clean (which makes sense, as tests shouldn't depend on state possibly modified by earlier tests).
By decorating the test class with @pytest.mark.django_db(transaction=True) I could indeed see the data committed at the end of each test from psql, which answers my first question.
Same as before, the test runner ensures no state is kept between tests, which is the answer to my second point.
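For reference, a minimal sketch of that decoration on the test class from the question:

@pytest.mark.django_db(transaction=True)
class TestModel(object):
    def test1(self, model):
        assert model.pk is not None  # the row is now really committed and visible from psql

Note that the flush between tests still removes the row before test2 runs.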
The scope argument is in this case a bit misleading; however, if you were to write your code like this:
@pytest.fixture(scope='class')
def model_factory(db, request):
    # body
then you would get an error saying that the database fixture has to be implemented with 'function' scope.
I would like to add that this is currently being worked on and might become a killer feature in the future ;) github pull request

OneToOne Fields on users causing some ID problems

I have a bit of a problem using django-registration and signals.
The basic setup: Django 1.4.3 with django-south and django-registration (and the DB is SQLite, for what it's worth).
EDIT: I changed the question a bit because the effect is the same in a shell, so django-registration is not the cause.
One of my models is related to the User model in the following way:
class MyUserProfile(models.Model):
    user = models.OneToOneField(User)
    # additional fields
I initialized the database using South. When I run sqlall to check the SQL that should be in it, I can clearly see:
CREATE TABLE "myApp_myuserprofile" (
    "id" integer NOT NULL PRIMARY KEY,
    "user_id" integer NOT NULL UNIQUE REFERENCES "auth_user" ("id"),
    -- other fields
)
After that, I wanted to initialize the data when the user activated their account. So in models.py I put:
from django.dispatch import receiver
from registration.signals import user_activated

# Models....

@receiver(user_activated)
def createMyProfile(sender, **kwargs):
    currentUser = kwargs['user']
    profile = MyUserProfile(user=currentUser)  # other fields get their default values
    profile.save()
    # And now the reverse relation:
    currentUser.myuserprofile = profile
    currentUser.save()
While I am in there, everything seems all right: if I print the ids (both for the user and the profile) and travel back and forth between the two, everything looks correct.
If I disable this part of the code and do the same kind of initialization using the shell, I get the same result.
But after that, everything is wrong.
If I open a shell and import the relevant models, I see the following for every value of X:

MyUserProfile.objects.get(pk=X)
# DoesNotExist exception
User.objects.get(pk=X).myuserprofile.pk
# returns 1
MyUserProfile.objects.all()[X].pk
# returns 1
Seems a bit weird, no? And now if I go to the SQL shell:
select id from myApp_myuserprofile;
1
1
1
1
...
So I have a primary key column filled with the same value all over the place, which is, well, embarrassing to say the least (and does lead to problems, because everyone gets the profile with the same id). Any idea what could be the cause of the problem and how I could solve it?
P.S.: Note that the foreign keys on the related side are correct and their uniqueness is preserved.
Well, it looks like the problem was indeed coming from the combination of SQLite and South.
The docs state that:
SQLite doesn’t natively support much schema altering at all, but South has workarounds to allow deletion/altering of columns. Unique indexes are still unsupported, however; South will silently ignore any such commands.
Which was (I think) the case here, as I hadn't created this relation from the start but in a later migration. I just reset the database and the migrations, and voilà.
See What's the recommended approach to resetting migration history using Django South? for the migrations, and a simple ./manage.py reset myApp for the database reset.

Django tests work using SQLite but not MySQL

The tests I have written for my Django application worked perfectly during initial development, where I was using SQLite. Now that I am getting ready to deploy, I have set up a MySQL server (as that is what I'll be deploying to), but some of my tests have started failing.
Also, the failing tests don't fail when I manually test the functionality.
What could be going on?
I'm not doing anything unusual; all of the views do some database shenanigans and return a response. There isn't anything timing related (no threading or anything).
The tests all inherit from django.test.TestCase and I'm not using any fixtures.
Here is an example of a test that fails.
class BaseTest(TestCase):
    def setUp(self):
        super(BaseTest, self).setUp()
        self.userCreds = dict(username='test', password='a')
        # Create an admin user
        admin = models.User.objects.create_superuser(
            email='', username='admin', password='a')
        # Create a user and grant them a licence
        user = models.User.objects.create_user(
            email='some@address.com', first_name="Mister",
            last_name="Testy", **self.userCreds)
        profile = models.getProfileFor(user)
        node = profile.createNode(
            '12345', 'acomputer', 'auser',
            'user@email.com', '0456 987 123')
        self.node = node

class TestClientUIViews(BaseTest):
    def test_toggleActive(self):
        url = reverse('toggleActive') + '?nodeId=%s' % self.node.nodeId
        self.assertFalse(self.node.active)
        # This should fail because only authenticated users can toggle a node active
        resp = self.client.get(url)
        self.assertEqual(resp.status_code, 403)
        self.assertFalse(self.node.active)
        # Login and make sure visiting the url toggles the active state
        self.client.login(**self.userCreds)
        resp = self.client.get(url)
        self.assertEqual(resp.status_code, 200)
        self.assertTrue(self.node.active)
        resp = self.client.get(url)
        self.assertEqual(resp.status_code, 200)
        self.assertFalse(self.node.active)
And here is what the model looks like:
class Node(models.Model):
    @property
    def active(self):
        '''
        Activation state gets explicitly tracked in its own table but is
        exposed as a property for the sake of convenience
        '''
        activations = NodeActivation.objects \
            .filter(node=self) \
            .order_by('-datetime')
        try:
            return activations[0].active
        except IndexError:
            return False

    @active.setter
    def active(self, state):
        if self.active != state:
            NodeActivation.objects.create(node=self, active=state)

class NodeActivation(models.Model):
    node = models.ForeignKey("Node")
    datetime = models.DateTimeField(default=datetimeM.datetime.now)
    active = models.BooleanField(default=False)
My local MySQL is 5.5.19 (so it's using InnoDB), but I get the same failures on the deployment server, which runs 5.1.56. The tests fail regardless of the storage engine. And as I mentioned at the beginning, if I switch back to an SQLite database, all the tests go back to passing.
With more of the actual code now revealed, I'll offer the following hypothesis as to why this test is failing.
The NodeActivation model is incorrect. The datetime field should be:
datetime = models.DateTimeField(auto_now=True)
Using datetime.datetime.now() in a model definition will only be evaluated once.
Every time the setter creates a new NodeActivation record, the record is created with the same date/time: the date/time at which the NodeActivation model was first evaluated. Your getter only ever returns a single result, but since both activation records have the same date/time, the ordering may depend on the database backend. There will be two NodeActivation records in your database at the end of the test, and which one comes back is indeterminate.
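As a side note, Django treats a callable default differently from a pre-computed value, which is easy to trip over here:

# Passing the callable: Django invokes it on every save
datetime = models.DateTimeField(default=datetimeM.datetime.now)
# Calling it yourself: evaluated once, when the model module is imported
datetime = models.DateTimeField(default=datetimeM.datetime.now())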
By changing the active property on the Node model class to this:
@property
def active(self):
    '''
    Activation state gets explicitly tracked in its own table but is
    exposed as a property for the sake of convenience
    '''
    activations = NodeActivation.objects \
        .filter(node=self) \
        .order_by('-id')
    try:
        return activations[0].active
    except IndexError:
        return False
the problem goes away.
Note the change to the order_by call.
The records were getting created so quickly that ordering by datetime wasn't deterministic, hence the erratic behaviour. I'd guess the real difference is timestamp resolution rather than speed: MySQL's DATETIME (before 5.6.4) has only one-second resolution, while SQLite stores sub-second precision, so same-second ties are far more likely on MySQL.
NOTE: Thanks to Austin Phillips for the tip (check out the comments on his answer).
I had a similar problem (not sure if it's the same) going from SQLite to PostgreSQL, using a setUp method like yours for the initial database objects. Example:
def setUp(self):
    user = User.objects.create_user(username='john', password='abbeyRd', email='john@thebeatles.com')
    user.save()
I found that this worked fine on SQLite, but on PostgreSQL I was getting database connection failures (it would still build the database as you said, but it couldn't run the tests without a connection error).
Turns out, the fix is to run your setup method as follows:
@classmethod
def setUpTestData(cls):
    user = User.objects.create_user(username='john', password='abbeyRd', email='john@thebeatles.com')
    user.save()
I don't remember the exact reason why this works where the other didn't, but it had something to do with the database connection: with setUp(self), each test reconnects to the database, while setUpTestData(cls) runs only once for the whole class. I might not have the details perfect; all I really know is that it works!
Hope that helps! I'm not sure it addresses your exact issue, but it took me a good few hours to figure out, so maybe it helps you and others in the future.

Django south migration error with unique field in postgresql database

Edit: I understand the reason why this happened. It was because of the existence of the `initial_data.json` file. Apparently, South wants to load those fixtures after the migration, but fails because of the unique constraint on a field.
I changed my model from this:
class Setting(models.Model):
    anahtar = models.CharField(max_length=20, unique=True)
    deger = models.CharField(max_length=40)

    def __unicode__(self):
        return self.anahtar
To this:
class Setting(models.Model):
    anahtar = models.CharField(max_length=20, unique=True)
    deger = models.CharField(max_length=100)

    def __unicode__(self):
        return self.anahtar
The schema migration command completed successfully, but trying to migrate gives me this error:
IntegrityError: duplicate key value violates unique constraint "blog_setting_anahtar_key"
DETAIL: Key (anahtar)=(blog_baslik) already exists.
I want to keep that field unique but still migrate it. By the way, data loss on that table is acceptable, as long as the other tables in the DB stay intact.
It's actually the default behavior of syncdb to run initial_data.json each time. From the Django docs:
If you create a fixture named initial_data.[xml/yaml/json], that fixture will be loaded every time you run syncdb. This is extremely convenient, but be careful: remember that the data will be refreshed every time you run syncdb. So don't use initial_data for data you'll want to edit.
See: docs
Personally, I think the use case of initial data that needs to be reloaded each and every time a change occurs makes little sense, so I never use initial_data.json.
The better method, since you're using South, is to manually call loaddata on the specific fixture your migration needs. In the case of initial data, that would go in your 0001_initial.py migration:
def forwards(self, orm):
    from django.core.management import call_command
    call_command("loaddata", "my_fixture.json")
See: http://south.aeracode.org/docs/fixtures.html
Also, remember that the path to your fixture is relative to the project root. So, if your fixture is at "myproject/myapp/fixtures/my_fixture.json" call_command would actually look like:
call_command('loaddata', 'myapp/fixtures/my_fixture.json')
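For reference, a minimal sketch of such a fixture in Django's JSON fixture format (the app/model/field names below are taken from the error message above and are purely illustrative; adjust them to your app):

[
  {
    "model": "blog.setting",
    "pk": 1,
    "fields": {"anahtar": "blog_baslik", "deger": "My Blog"}
  }
]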
And, of course, your fixture can't be named 'initial_data.json', otherwise, the default behavior will take over.