Django syncing Database Integrity Error

I was trying to deploy my application to Heroku. I had been using SQLite, so I changed the settings to PostgreSQL. I had some initial data in initial_data.json; I removed it, but I keep getting this error when I try to syncdb, even without any initial data.
File "/usr/local/lib/python2.6/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute
    return self.cursor.execute(query, args)
django.db.utils.IntegrityError: duplicate key value violates unique constraint "auth_permission_content_type_id_key"

Actually, after looking into it, I found that I had a custom permission I had added, and that probably caused the conflict. When I removed it, syncdb worked!
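If you hit the same constraint error, a quick check from manage.py shell along these lines can reveal the clashing permission. This is only a sketch: the app label, model name, and the idea that a Meta.permissions entry duplicates a default codename are assumptions.

from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType

# Assumed names: replace with the app/model whose permission you customised.
ct = ContentType.objects.get(app_label="myapp", model="mymodel")
print(Permission.objects.filter(content_type=ct).values_list("codename", flat=True))
# If a custom codename duplicates one of the defaults (add/change/delete),
# remove it from Meta.permissions or delete the stale row, then run syncdb again.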

Related

How to resolve an error after deploy on Heroku?

After deploying, I get an error whenever I press a button.
What should I do?
Text of error: ProgrammingError at /notion/
relation "base_notion" does not exist
LINE 1: ... "base_notion"."title", "base_notion"."body" FROM "base_noti...
It seems as though you are attempting to read data from your database without actually having created the tables/relations first.
If you are using a database ORM or connector that manages migrations, such as Django or SQLAlchemy, then you may need to create a new migration file, which is a library-specific operation.
Once created, you can implement a Release Phase task which applies those migrations once a Heroku deploy finishes. This shrinks the window in which your code and database are out of sync and should keep this issue from recurring.
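On Heroku that Release Phase is declared in the Procfile; a minimal sketch looks like the two lines below. Only the release line matters here; the gunicorn web command and project name are assumptions.

release: python manage.py migrate
web: gunicorn myproject.wsgi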

How can I inspect the request Django makes to a database?

I have a management command running on an EC2 instance which fails when trying to execute ORM queries like so:
File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 53, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/sql/compiler.py", line 899, in execute_sql
raise original_exception
OperationalError: SSL connection has been closed unexpectedly
I can connect to the same database just fine from a django-admin shell_plus on the same instance.
To diagnose this, I'd like to inspect the parameters of the connection request Django is making in each case, to see what's different, but after a bit of poking through the Django source it seemed best to ask how rather than getting lost in the weeds for hours :)
Alternate strategies for diagnosing this are also welcome!
According to the documentation about logging database queries, you should look in your database log files:
This logging does not include framework-level initialization (e.g. SET
TIMEZONE) or transaction management queries (e.g. BEGIN, COMMIT, and
ROLLBACK). Turn on query logging in your database if you wish to
view all database queries.
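If you want to see the SQL from the Django side instead of the server log, one sketch is to raise the log level of the django.db.backends logger in settings. Note that Django only emits these query logs when DEBUG is True.

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    "loggers": {
        # Logs every SQL statement Django executes (only when DEBUG = True).
        "django.db.backends": {"handlers": ["console"], "level": "DEBUG"},
    },
}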
I've gotten to the bottom of this: the issue was that this management command was daemonized, DB connections do not survive fork(), and Django is not smart enough to notice, so it tries to use the dead connection and breaks.
The solution is:
from django.db import close_old_connections

class Daemon(object):
    def start(self):
        self.daemonize()
        # The connection inherited from the parent process is dead after
        # fork(); closing it here makes Django open a fresh one lazily.
        close_old_connections()

django get_or_create not working as expected

I have a model called snacks:
class Snack(models.Model):
    snack = models.CharField(max_length=9)
When I do
Snack.objects.get_or_create(snack="borsh")
I get this error:
django.db.utils.IntegrityError: duplicate key value violates unique constraint "restaurants_snack_pkey"
DETAIL: Key (id)=(6) already exists.
If I do it again, it's going to tell me the same thing except the key (that already exists) will be 7 and so on.
It's true that the key already exists; how can I make it pick the next available key?
Thanks in advance.
Jenia
You probably did some manual data importing; even if not, it's not a problem with Django but with the database's auto-increment sequence. Django has a management command that generates the SQL needed to reset those sequences:
./manage.py sqlsequencereset myapp1 myapp2
If you are using PostgreSQL, you can pipe it straight in:
./manage.py sqlsequencereset myapp1 myapp2 | psql
or into mysql if you are using MySQL, though I haven't tried that.
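If you would rather stay inside Python (for example on Heroku, where piping to psql is awkward), a rough sketch on a recent Django is to capture the generated SQL and run it through the ORM connection. The app label "restaurants" is guessed from the constraint name in the error.

from io import StringIO

from django.core.management import call_command
from django.db import connection

out = StringIO()
call_command("sqlsequencereset", "restaurants", stdout=out)  # app label assumed
with connection.cursor() as cursor:
    cursor.execute(out.getvalue())  # runs the generated setval statements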

Django+Postgres: "current transaction is aborted, commands ignored until end of transaction block"

I've started working on a Django/Postgres site. Sometimes I work in manage.py shell, and accidentally do some DB action that results in an error. Then I am unable to do any database action at all, because for any database action I try to do, I get the error:
current transaction is aborted, commands ignored until end of transaction block
My current workaround is to restart the shell, but I should find a way to fix this without abandoning my shell session.
(I've read this and this, but they don't give actionable instructions on what to do from the shell.)
You can try this:
from django.db import connection
connection._rollback()
A more detailed discussion of this issue can be found here.
This happens to me sometimes; often it's a missing
manage.py migrate
or
manage.py syncdb
as mentioned also here
It can also happen the other way around, if you have a schema migration pending from your models.py. With South you need to update the schema with:
manage.py schemamigration mymodel --auto
Check this
The quick answer is usually to turn on database-level autocommit by adding:
'OPTIONS': {'autocommit': True,}
to the database settings.
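For context, that option goes inside the per-database settings dict; a minimal sketch of the pre-1.6 configuration being described (the engine and database name are placeholders):

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "mydb",                   # placeholder
        "OPTIONS": {"autocommit": True},  # database-level autocommit
    }
}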
I had this error after restoring a backup to a totally empty DB. It went away after running:
./manage.py syncdb
Maybe there were some internal models missing from the dump...
WARNING: the patch below can possibly cause transactions being left in an open state on the db (at least with postgres). Not 100% sure about that (and how to fix), but I highly suggest not doing the patch below on production databases.
As the accepted answer does not solve my problems - as soon as I get any DB error, I cannot do any new DB actions, even with a manual rollback - I came up with my own solution.
When I'm running the Django-shell, I patch Django to close the DB connection as soon as any errors occur. That way I don't ever have to think about rolling back transactions or handling the connection.
This is the code I'm loading at the beginning of my Django-shell-session:
import logging

from django import db
from django.db.backends.util import CursorDebugWrapper

logger = logging.getLogger(__name__)

old_execute = CursorDebugWrapper.execute
old_execute_many = CursorDebugWrapper.executemany

def execute_wrapper(*args, **kwargs):
    try:
        old_execute(*args, **kwargs)
    except Exception as ex:
        # Log the failure and drop the (now aborted) connection so the next
        # query starts on a fresh one.
        logger.error("Database error:\n%s", ex)
        db.close_connection()

def execute_many_wrapper(*args, **kwargs):
    try:
        old_execute_many(*args, **kwargs)
    except Exception as ex:
        logger.error("Database error:\n%s", ex)
        db.close_connection()

CursorDebugWrapper.execute = execute_wrapper
CursorDebugWrapper.executemany = execute_many_wrapper
For me it was a test database without migrations. I was using --keepdb for testing. Running it once without it fixed the error.
There are a lot of useful answers on this topic, but it can still be a challenge to figure out the root of the issue. Because of this, I will try to give a little more context on how I was able to find the solution for my case.
For Django specifically, turn on logging for DB queries; just before the error is raised you will see the failing query in the console. Run that query directly against the database and you will see what is wrong.
In my case, one column was missing in db, so after migration everything worked correctly.
I hope this will be helpful.
If you happen to get such an error when running migrate (South), it can be that you have lots of changes in the database schema and want to handle them all at once. Postgres is a bit nasty about that. What always works is to break one big migration into smaller steps. Most likely, you're using a version control system.
Your current version
Commit n1
Commit n2
Commit n3
Commit n4 # db changes
Commit n5
Commit n6
Commit n7 # db changes
Commit n8
Commit n9 # db changes
Commit n10
So, having the situation described above, do as follows:
Checkout repository to "n4", then syncdb and migrate.
Checkout repository to "n7", then syncdb and migrate.
Checkout repository to "n10", then syncdb and migrate.
And you're done. :)
It should run flawlessly.
If you are using a Django version before 1.6, then you should use Christophe's excellent xact module.
xact is a recipe for handling transactions sensibly in Django applications on PostgreSQL.
Note: As of Django 1.6, the functionality of xact will be merged into the Django core as the atomic decorator. Code that uses xact should be able to be migrated to atomic with just a search-and-replace. atomic works on databases other than PostgreSQL, is thread-safe, and has other nice features; switch to it when you can!
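For reference, a minimal sketch of the atomic usage that note points to (the function and the objects it touches are made up):

from django.db import transaction

@transaction.atomic
def transfer(source, destination, amount):
    # Either both saves commit, or an exception rolls the whole block back.
    source.balance -= amount
    source.save()
    destination.balance += amount
    destination.save()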
I add the following to my settings file because I like the autocommit feature when I'm "playing around", but I don't want it active when my site is running normally.
So to get autocommit just in shell, I do this little hack:
import sys
if 'shell' in sys.argv or sys.argv[0].endswith('pydevconsole.py'):
    DATABASES['default']['OPTIONS']['autocommit'] = True
NOTE: That second part is just because I work in PyCharm, which doesn't directly run manage.py.
I got this error in Django 1.7. When I read in the documentation that
This problem cannot occur in Django’s default mode and atomic()
handles it automatically.
I got a bit suspicious. The errors happened when I tried running migrations. It turned out that some of my models had my_field = MyField(default=some_function). Having a function as a field's default worked fine with SQLite and MySQL (I had some import errors, but I managed to make it work), but it seems not to work with PostgreSQL, and it broke the migrations to the point that I didn't even get a helpful error message, only the one from the question's title.
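For context, here is a hypothetical example of the pattern being described: a named, module-level callable passed uncalled as a field default, which the migration machinery then has to serialize.

import uuid

from django.db import models

def default_token():
    # Called once per new row; must be a named, importable function so that
    # makemigrations can serialize the reference (a lambda would fail).
    return uuid.uuid4().hex

class ApiKey(models.Model):
    token = models.CharField(max_length=32, default=default_token)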

South doesn't create default permissions on new models in time for later migrations to use them

I'm not 100% sure I'm doing this right, but I think I've found an issue where auth.Permission objects aren't being created soon enough for migrations to use them when you initialize a DB from scratch.
The important details:
I'm trying to initialize a Django DB from scratch using ./manage.py syncdb --migrate --noinput
I have 11 migrations in my chain
The 1st migration creates a new model called myapp.CompanyAccount
The 9th migration tries to fetch the permission myapp.change_companyaccount with:
p = orm[ "auth.Permission" ].objects.get( codename = "change_companyaccount" )
At that point, an exception is raised:
django.contrib.auth.models.DoesNotExist: Permission matching query does not exist
I had assumed that the default permissions that are defined for every object (as per http://docs.djangoproject.com/en/dev/topics/auth/#default-permissions) would have been created by the time the 1st migration finished, but it doesn't appear that they are. If I re-run the migration after the exception, it works the second time because apparently the permission now exists and the 9th migration can execute without error.
Is there anything that can be done to "flush" everything sometime before the 9th migration runs so that the whole thing can run in a single pass without bailing out?
Thanks for any help / advice.
EDIT: In response to John's comment below, I found out that the following command-line sequence will work:
./manage.py syncdb (this initializes the default Django tables)
./manage.py migrate myapp 0001 (this causes the CompanyAccount table to be created)
./manage.py migrate myapp (this migrates all the way to the end without error)
Unfortunately, skipping step #2 above means that you get the same exception in the 0009 migration, which tells me that my original suspicion was correct that default permissions on new models are not created by South immediately, and are somehow only pushed into the database when the entire migration chain finishes.
This is better than where I was (I'm at least avoiding exceptions now), but I still need to manually segment the migration around the creation of new models whose permissions later migrations might need to touch, so this isn't a complete solution.
As it turns out, the answer is to manually call db.send_pending_create_signals() at some point before you try to access the default permission since South only does this "flushing" step quite late in the process. Thanks to Andrew Godwin from South for replying to this on the South mailing list here:
http://groups.google.com/group/south-users/browse_thread/thread/1de2219fe4f35959
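In migration 0009 that call ends up looking roughly like the sketch below; the DataMigration boilerplate is assumed, and only the send_pending_create_signals() line comes from the fix above.

from south.db import db
from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        # Flush South's queued post_syncdb signals so the default
        # auth.Permission rows for models created in earlier migrations exist.
        db.send_pending_create_signals()
        p = orm["auth.Permission"].objects.get(codename="change_companyaccount")
        # ... use p as before ...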
Don't you have to run the default "syncdb" on a virgin database in order to create the South migration table before you can use South? Are you doing that? It typically creates the permissions table at that time, since you have django.contrib.auth in your INSTALLED_APPS.
http://south.aeracode.org/docs/installation.html#configuring-your-django-installation