I have a Django model which looks like this:
class Dummy(models.Model):
    ...
    system = models.CharField(max_length=16)
I want system never to be empty or to contain whitespace.
I know how to use validators in Django.
But I would like to enforce this at the database level.
What is the easiest and django-like way to create a DB constraint for this?
I use PostgreSQL and don't need to support any other database.
2019 Update
Django 2.2 added support for database-level constraints. The new CheckConstraint and UniqueConstraint classes enable adding custom database constraints. Constraints are added to models using the Meta.constraints option.
Your system validation would look something like this:
from django.db import models
from django.db.models.constraints import CheckConstraint
from django.db.models.query_utils import Q
class Dummy(models.Model):
    ...
    system = models.CharField(max_length=16)

    class Meta:
        constraints = [
            CheckConstraint(
                check=~Q(system="") & ~Q(system__contains=" "),
                name="system_not_blank",
            )
        ]
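Running makemigrations after adding the Meta.constraints option should generate an AddConstraint operation for you. A sketch of what that generated migration might look like (the app label, file name, and dependency are placeholders):

# hypothetical generated migration; app label and dependency are placeholders
from django.db import migrations, models
from django.db.models import Q


class Migration(migrations.Migration):

    dependencies = [
        ('yourapp', '0001_initial'),
    ]

    operations = [
        migrations.AddConstraint(
            model_name='dummy',
            constraint=models.CheckConstraint(
                check=~Q(system="") & ~Q(system__contains=" "),
                name="system_not_blank",
            ),
        ),
    ]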
First issue: creating a database constraint through Django
A)
It seems that Django does not have this ability built in yet. There is a 9-year-old open ticket for it, but I wouldn't hold my breath for something that has been open this long.
Edit: As of release 2.2 (April 2019), Django supports database-level check constraints.
B) You could look into the package django-db-constraints, through which you can define constraints in the model Meta. I did not test this package, so I don't know how useful it really is.
# example using this package
class Meta:
    db_constraints = {
        'price_above_zero': 'check (price > 0)',
    }
Second issue: the field system should never be empty nor contain whitespace
Now we would need to build the check constraint in Postgres syntax to accomplish that. I came up with these options:
Check that the length of system stays the same after removing whitespace. Using ideas from this answer you could try:
/* this check should only pass if `system` contains no
* whitespaces (`\s` also detects new lines)
*/
check ( length(system) = length(regexp_replace(system, '\s', '', 'g')) )
Check if the whitespace count is 0. For this you could use regexp_matches:
/* this check should only pass if `system` contains no
* whitespaces (`\s` also detects new lines)
*/
check ( length(regexp_matches(system, '\s', 'g')) = 0 )
Note that the length function can't be used with regexp_matches because the latter returns a set of text[] (set of arrays), but I could not find the proper function to count the elements of that set right now.
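As an aside, Postgres's regex non-match operator !~ might give you a simpler check: it is true when the string does not match the pattern, so a single expression rejects all whitespace. A sketch in the same db_constraints style (untested, and it assumes the package passes the string through to the database verbatim):

# hypothetical sketch: `!~` is true when the string does NOT match
# the regex, so this rejects any whitespace character
class Meta:
    db_constraints = {
        'system_no_whitespace': r"check (system !~ '\s')",
    }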
Finally, bringing both of the previous issues together, your approach could look like this:
class Dummy(models.Model):
    # this already sets NOT NULL on the field in the database
    system = models.CharField(max_length=16)

    class Meta:
        db_constraints = {
            # note: string literals in Postgres must use single quotes;
            # double quotes would be read as identifiers
            'system_no_spaces': r"check ( length(system) > 0 AND length(system) = length(regexp_replace(system, '\s', '', 'g')) )",
        }
This checks that the fields value:
is not NULL (CharField adds a NOT NULL constraint by default)
is not empty (first part of the check: length(system) > 0)
has no whitespaces (second part of the check: same length after replacing whitespace)
Let me know how that works out for you, or if there are problems or drawbacks to this approach.
You can add a CHECK constraint via a custom Django migration. To check the string length you can use the char_length function, and position to check whether the string contains whitespace.
Quote from postgres docs (https://www.postgresql.org/docs/current/static/ddl-constraints.html):
A check constraint is the most generic constraint type. It allows you
to specify that the value in a certain column must satisfy a Boolean
(truth-value) expression.
To run arbitrary SQL in a migration, the RunSQL operation can be used (https://docs.djangoproject.com/en/2.0/ref/migration-operations/#runsql):
Allows running of arbitrary SQL on the database - useful for more
advanced features of database backends that Django doesn’t support
directly, like partial indexes.
Create empty migration:
python manage.py makemigrations --empty yourappname
Add sql to create constraint:
# Generated by Django A.B on YYYY-MM-DD HH:MM
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('yourappname', '0001_initial'),
    ]

    operations = [
        migrations.RunSQL(
            'ALTER TABLE appname_dummy ADD CONSTRAINT syslen '
            'CHECK (char_length(trim(system)) > 0);',
            'ALTER TABLE appname_dummy DROP CONSTRAINT syslen;'),
        migrations.RunSQL(
            'ALTER TABLE appname_dummy ADD CONSTRAINT syswh '
            "CHECK (position(' ' in trim(system)) = 0);",
            'ALTER TABLE appname_dummy DROP CONSTRAINT syswh;')
    ]
Run migration:
python manage.py migrate yourappname
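To sanity-check that the constraints actually landed, you can attempt an insert that should now fail at the database level; a minimal sketch, assuming the app and model from the question:

# minimal sketch: run in a Django shell; assumes the Dummy model above
from django.db import IntegrityError, transaction
from yourappname.models import Dummy

try:
    with transaction.atomic():
        Dummy.objects.create(system='has space')
except IntegrityError:
    print('rejected by the CHECK constraint, as expected')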
I've modified my answer to meet your requirements.
So, if you would like to run a DB check, try this one:
import psycopg2

def your_validator():
    conn = psycopg2.connect("dbname=YOURDB user=YOURUSER")
    cursor = conn.cursor()
    cursor.execute("YOUR QUERY")
    query_result = cursor.fetchone()  # execute() itself always returns None
    if query_result is None:
        pass  # Do stuff
    else:
        pass  # Other stuff
Then use the pre_save signal.
In your models.py file, add:
from django.db.models.signals import pre_save


class Dummy(models.Model):
    ...

    @staticmethod
    def pre_save(sender, instance, *args, **kwargs):
        # Of course, feel free to parse args in your def.
        your_validator()


# connect the receiver, otherwise the handler never runs
pre_save.connect(Dummy.pre_save, sender=Dummy)
Related
I'm following up in regards to a question that I asked earlier, in which I sought a conversion of a goofy/poorly written MySQL query to PostgreSQL. I believe I succeeded with that. Anyways, I'm using data that was manually moved from a MySQL database to a Postgres database. I'm using a query that looks like so:
cursor.execute("""
    UPDATE krypdos_coderound cru
    set is_correct = case
            when t.kv_values1 = t.kv_values2 then True
            else False
        end
    from
        (select cr.id,
                array_agg(
                    case when kv1.code_round_id = cr.id
                         then kv1.option_id
                         else null end
                ) as kv_values1,
                array_agg(
                    case when kv2.code_round_id = cr_m.id
                         then kv2.option_id
                         else null end
                ) as kv_values2
         from krypdos_coderound cr
         join krypdos_value kv1 on kv1.code_round_id = cr.id
         join krypdos_coderound cr_m
             on cr_m.object_id = cr.object_id
             and cr_m.content_type_id = cr.content_type_id
         join krypdos_value kv2 on kv2.code_round_id = cr_m.id
         WHERE
             cr.is_master = False
             AND cr_m.is_master = True
             AND cr.object_id = %s
             AND cr.content_type_id = %s
         GROUP BY cr.id
        ) t
    where t.id = cru.id
    """ % (self.object_id, self.content_type.id)
)
I have reason to believe that this works well. However, this has led to a new issue. When trying to submit, I get an error from Django that states:
IntegrityError at (some url):
duplicate key value violates unique constraint "krypdos_value_pkey"
I've looked at several of the responses posted on here and I haven't quite found the solution to my problem (although the related questions have made for some interesting reading). I see this in my logs, which is interesting because I never explicitly call INSERT; Django must handle it:
STATEMENT: INSERT INTO "krypdos_value" ("code_round_id", "variable_id", "option_id", "confidence", "freetext")
VALUES (1105935, 11, 55, NULL, E'')
RETURNING "krypdos_value"."id"
However, trying to run that results in the duplicate key error. The actual error is thrown in the code below.
# Delete current coding
CodeRound.objects.filter(
    object_id=o.id, content_type=object_type, is_master=True
).delete()

code_round = CodeRound(
    object_id=o.id,
    content_type=object_type,
    coded_by=request.user,
    comments=request.POST.get('_comments', None),
    is_master=True,
)
code_round.save()

for key in request.POST.keys():
    if key[0] != '_' and key != 'csrfmiddlewaretoken':
        options = request.POST.getlist(key)
        for option in options:
            Value(
                code_round=code_round,
                variable_id=key,
                option_id=option,
                confidence=request.POST.get('_confidence_' + key, None),
            ).save()  # This is where it dies

# Resave to set is_correct
code_round.save()
o.status = '3'
o.save()
I've checked the sequences and such and they seem to be in order. At this point I'm not sure what to do- I assume it's something on django's end but I'm not sure. Any feedback would be much appreciated!
This happened to me; it turns out you need to resync your primary key fields in Postgres. The key is the SQL statement:
SELECT setval('tablename_id_seq', (SELECT MAX(id) FROM tablename)+1);
It appears to be a known difference in behaviour between the MySQL and SQLite backends (they update the next available primary key even when inserting an object with an explicit id) and other backends like Postgres and Oracle (which do not).
There is a ticket describing the same issue. Even though it was closed as invalid, it provides a hint that there is a Django management command to update the next available key.
To display the SQL updating all next ids for the application MyApp:
python manage.py sqlsequencereset MyApp
In order to have the statement executed, you can provide it as the input for the dbshell management command. For bash, you could type:
python manage.py sqlsequencereset MyApp | python manage.py dbshell
The advantage of the management commands is that they abstract away the underlying DB backend, so they will keep working even if you later migrate to a different backend.
I had an existing table in my "inventory" app and I wanted to add new records in Django admin and I got this error:
Duplicate key value violates unique constraint "inventory_part_pkey"
DETAIL: Key (part_id)=(1) already exists.
As mentioned before, I ran the command below to get the SQL that resets the ids:
python manage.py sqlsequencereset inventory
Piping python manage.py sqlsequencereset inventory | python manage.py dbshell to the shell was not working, so:
1. I copied the generated raw SQL command
2. Opened pgAdmin3 (https://www.pgadmin.org) for PostgreSQL and opened my db
3. Clicked on the 6th icon (Execute arbitrary SQL queries)
4. Pasted the statement that was generated
In my case the raw SQL command was:
BEGIN;
SELECT setval(pg_get_serial_sequence('"inventory_signup"','id'), coalesce(max("id"), 1), max("id") IS NOT null) FROM "inventory_signup";
SELECT setval(pg_get_serial_sequence('"inventory_supplier"','id'), coalesce(max("id"), 1), max("id") IS NOT null) FROM "inventory_supplier";
COMMIT;
Executed it with F5.
This fixed everything.
In addition to zapphod's answer:
In my case the indexing was indeed incorrect, since I had deleted all migrations and the database probably 10-15 times during development, as I wasn't at the stage of migrating anything yet.
I was getting an IntegrityError on finished_product_template_finishedproduct_pkey.
Reindex the table and restart runserver: I was using pgadmin3, and for whichever index was incorrect and throwing duplicate key errors, I navigated to the constraints and reindexed.
The solution is that you need to resync your primary key fields, as reported by "Hacking Life", who wrote an example SQL query; but, as suggested by "Ad N", it is better to run the Django command sqlsequencereset to get the exact SQL code that you can copy and paste or run with another command.
As a further improvement to these answers, I would suggest to you and other readers not to copy and paste the SQL code but, more safely, to execute the SQL queries generated by sqlsequencereset from within your Python code, in this way (using the default database):
from django.core.management.color import no_style
from django.db import connection

from myapps.models import MyModel1, MyModel2

sequence_sql = connection.ops.sequence_reset_sql(no_style(), [MyModel1, MyModel2])
with connection.cursor() as cursor:
    for sql in sequence_sql:
        cursor.execute(sql)
I tested this code with Python3.6, Django 2.0 and PostgreSQL 10.
If you want to reset the PK on all of your tables, like me, you can use the PostgreSQL recommended way:
SELECT 'SELECT SETVAL(' ||
quote_literal(quote_ident(PGT.schemaname) || '.' || quote_ident(S.relname)) ||
', COALESCE(MAX(' ||quote_ident(C.attname)|| '), 1) ) FROM ' ||
quote_ident(PGT.schemaname)|| '.'||quote_ident(T.relname)|| ';'
FROM pg_class AS S,
pg_depend AS D,
pg_class AS T,
pg_attribute AS C,
pg_tables AS PGT
WHERE S.relkind = 'S'
AND S.oid = D.objid
AND D.refobjid = T.oid
AND D.refobjid = C.attrelid
AND D.refobjsubid = C.attnum
AND T.relname = PGT.tablename
ORDER BY S.relname;
After running this query, you will need to execute its results. I typically copy and paste them into Notepad, then find and replace "SELECT with SELECT, and ;" with ;, to strip the quote marks around each generated statement. I then copy and paste into pgAdmin III and run the query. It resets all of the tables in the database. More "professional" instructions are provided at the link above.
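If you'd rather skip the copy-and-paste step, the same idea can be scripted: run the generator query from Python and execute each statement it returns. A hedged sketch using Django's default connection (GENERATOR_SQL stands for the full query above):

# hedged sketch: execute each generated "SELECT SETVAL(...)" row
from django.db import connection

GENERATOR_SQL = "..."  # paste the SELECT 'SELECT SETVAL(' || ... query from above

with connection.cursor() as cursor:
    cursor.execute(GENERATOR_SQL)
    statements = [row[0] for row in cursor.fetchall()]
    for statement in statements:
        cursor.execute(statement)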
If you have manually copied the databases, you may be running into the issue described here.
I encountered this error because I was passing extra arguments to the save method in the wrong way.
For anybody who encounters this, try forcing UPDATE with:
instance_name.save(..., force_update=True)
If you get an error that you cannot pass force_insert and force_update at the same time, you're probably passing some custom arguments the wrong way, like I did.
This question was asked about 9 years ago, and lots of people gave their own ways to solve it.
For me, I had put unique=True on my custom email model field, but I didn't make the email mandatory when creating a superuser.
After creating the superuser, its email field was simply saved as blank or NULL. This is how I then created and saved a new user:
obj = mymodel.objects.create_user(username='abc', password='abc')
obj.email = 'abc@abc.com'
obj.save()
The first line threw the duplicate-key-value-violates error, because the new user's email defaulted to empty, which was the same as the admin user's. Django spotted a duplicate!
Solution
Option 1: Make the email mandatory while creating any user (for the superuser as well)
Option 2: Remove unique=True and run migrations
Option 3: If you don't know where the duplicates are, you can either drop the column or clear the database using python manage.py flush (a sketch for finding the duplicates follows below)
It is highly recommended to know the reason why the error occurred in your case.
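If you want to track down the duplicates before choosing an option, a query like this might help; a sketch, assuming a hypothetical custom user model named MyUser in myapp:

# hypothetical sketch: list email values that occur more than once
from django.db.models import Count

from myapp.models import MyUser  # assumption: your custom user model

duplicates = (
    MyUser.objects.values('email')
    .annotate(n=Count('id'))
    .filter(n__gt=1)
)
print(list(duplicates))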
I was getting the same error as the OP.
I had created some Django models, created a Postgres table based on the models, and added some rows to the Postgres table via Django Admin. Then I fiddled with some of the columns in the models (changing around ForeignKeys, etc.) but had forgotten to migrate the changes.
Running the migration commands solved my problem, which makes sense given the SQL answers above.
To see what changes would be applied, without actually applying them:
python manage.py makemigrations --dry-run --verbosity 3
If you're happy with those changes, then run:
python manage.py makemigrations
Then run:
python manage.py migrate
I was getting a similar issue and nothing seemed to be working. If you need the data (i.e. you can't exclude it when doing the dump), make sure you have turned off (commented out) any post_save receivers. I think the data would be imported, but it would create the same model again because of these. Worked for me.
You just have to go to pgAdmin III and execute this script there, with the name of your table:
SELECT setval('tablename_id_seq', (SELECT MAX(id) FROM tablename)+1);
Based on Paolo Melchiorre's answer, I wrote this helper as a function to be called before any .save():

from django.db import connection

def setSqlCursor(db_table):
    # Resync the primary key sequence for the given table;
    # coalesce guards against an empty table, where MAX(id) is NULL.
    sql = """SELECT pg_catalog.setval(pg_get_serial_sequence('""" + db_table + """', 'id'), coalesce(MAX(id), 1)) FROM """ + db_table + """;"""
    with connection.cursor() as cursor:
        cursor.execute(sql)
This is the right statement; this mostly happens when we insert rows with an explicit id field.
SELECT setval('tablename_id_seq', (SELECT MAX(id) FROM tablename));
I'm adding a search engine to a Django project, and thus set up SearchVectorFields on several models, with custom triggers.
I would like to unit-test that my columns of type TSVECTOR are updated when the instance of a Model changes.
However, I've been unable to find any information on how to test the content of a SearchVectorField ... I can't compare my_document.search to SearchVector(Value("document content")) or similar, because the first one seems to be string-like, while the latter is an object.
TL;DR
More precisely, with the model:
from django.contrib.postgres.search import SearchVectorField
from django.db import models

class Document(models.Model):
    ...
    content = models.TextField()
    search = SearchVectorField()
and trigger:
-- create trigger function
CREATE OR REPLACE FUNCTION search_trigger() RETURNS trigger AS $$
begin
    NEW.search := to_tsvector(COALESCE(NEW.content, ''));
    return NEW;
end
$$ LANGUAGE plpgsql;
-- add trigger on insert
DROP TRIGGER IF EXISTS search_trigger ON myapp_document;
CREATE TRIGGER search_trigger
BEFORE INSERT
ON myapp_document
FOR EACH ROW
EXECUTE PROCEDURE search_trigger();
-- add trigger on update
DROP TRIGGER IF EXISTS search_trigger_update ON myapp_document;
CREATE TRIGGER search_trigger_update
BEFORE UPDATE OF content
ON myapp_document
FOR EACH ROW
WHEN (OLD.content IS DISTINCT FROM NEW.content)
EXECUTE PROCEDURE search_trigger();
How can I test that when I create a new Document instance, its search field is populated with the right values? Same question for updating an existing Document instance, but the answer should be fairly similar.
Thanks for any hint ;)
I think you can compare the string representations of your SearchVectorField values:
from django.test import TestCase

from .models import Document

class DocumentTest(TestCase):
    def setUp(self):
        Document.objects.create(content='Pizza Recipes')

    def test_document_search(self):
        document_list = list(Document.objects.values_list('search', flat=True))
        search_list = ["'pizza':1 'recip':2"]
        self.assertSequenceEqual(document_list, search_list)
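For the update case, the same string-comparison trick should work. A sketch of an extra test method for the DocumentTest class above (it assumes your default text search config stems 'Recipes' to 'recip', as in the first test):

def test_document_search_after_update(self):
    # The BEFORE UPDATE trigger recomputes search when content
    # changes, so re-fetch the row after saving.
    document = Document.objects.get(content='Pizza Recipes')
    document.content = 'Pasta Recipes'
    document.save()
    document.refresh_from_db()
    self.assertEqual(str(document.search), "'pasta':1 'recip':2")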
I want to alter a foreign key in one of my models that can currently have NULL values to not be nullable.
I removed the null=True from my field and ran makemigrations
Because I'm altering a table that already has rows containing NULL values in that field, I am asked to provide a one-off default value right away, or to edit the migration file and add a RunPython operation.
My RunPython operation is listed BEFORE the AlterField operation and does the required update for this field so it no longer contains NULL values (it only touches rows that already contain a NULL value).
But, the migration still fails with this error:
django.db.utils.OperationalError: cannot ALTER TABLE "my_app_site" because it has pending trigger events
Here's my code:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import models, migrations


def add_default_template(apps, schema_editor):
    Template = apps.get_model("my_app", "Template")
    Site = apps.get_model("my_app", "Site")
    accept_reject_template = Template.objects.get(name="Accept/Reject")
    Site.objects.filter(template=None).update(template=accept_reject_template)


class Migration(migrations.Migration):

    dependencies = [
        ('my_app', '0021_auto_20150210_1008'),
    ]

    operations = [
        migrations.RunPython(add_default_template),
        migrations.AlterField(
            model_name='site',
            name='template',
            field=models.ForeignKey(to='my_app.Template'),
            preserve_default=False,
        ),
    ]
If I understand correctly this error may occur when a field is altered to be not-nullable but the field contains null values.
In that case, the only reason I can think of why this happens is because the RunPython operation transaction didn't "commit" the changes in the database before running the AlterField.
If this is indeed the reason - how can I make sure the changes reflect in the database?
If not - what can be the reason for the error?
Thanks!
This happens because Django creates constraints as DEFERRABLE INITIALLY DEFERRED:
ALTER TABLE my_app_site
ADD CONSTRAINT "[constraint_name]"
FOREIGN KEY (template_id)
REFERENCES my_app_template(id)
DEFERRABLE INITIALLY DEFERRED;
This tells PostgreSQL that the foreign key does not need to be checked right after every command, but can be deferred until the end of transactions.
So, when a transaction modifies both content and structure, the constraints are checked in parallel with the structure changes, or the checks are scheduled to be done after altering the structure. Both of these states are bad and the database will abort the transaction instead of making any assumptions.
You can instruct PostgreSQL to check constraints immediately in the current transaction by calling SET CONSTRAINTS ALL IMMEDIATE, so structure changes won't be a problem (refer to SET CONSTRAINTS documentation). Your migration should look like this:
operations = [
    migrations.RunSQL('SET CONSTRAINTS ALL IMMEDIATE',
                      reverse_sql=migrations.RunSQL.noop),
    # ... the actual migration operations here ...
    migrations.RunSQL(migrations.RunSQL.noop,
                      reverse_sql='SET CONSTRAINTS ALL IMMEDIATE'),
]
The first operation is for applying (forward) migrations, and the last one is for unapplying (backwards) migrations.
EDIT: Constraint deferring is useful to avoid insertion sorting, especially for self-referencing tables and tables with cyclic dependencies. So be careful when bending Django this way.
LATE EDIT: on Django 1.7 and newer versions there is a special SeparateDatabaseAndState operation that allows data changes and structure changes on the same migration. Try using this operation before resorting to the "set constraints all immediate" method above. Example:
operations = [
    migrations.SeparateDatabaseAndState(
        database_operations=[
            # put your sql, python, whatever data migrations here
        ],
        state_operations=[
            # field/model changes go here
        ]),
]
Yes, I'd say it's the transaction bounds which are preventing the data change in your migration from being committed before the ALTER is run.
I'd do as @danielcorreia says and implement it as two migrations, as it looks like even the SchemaEditor is bound by transactions, via the context manager you'd be obliged to use.
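A sketch of that two-migration split, based on the migration in the question (the file names are placeholders, and add_default_template is the function from the question); each migration then runs in its own transaction:

# 0022_fill_template.py -- data only (hypothetical file name)
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('my_app', '0021_auto_20150210_1008'),
    ]

    operations = [
        migrations.RunPython(add_default_template),
    ]

# 0023_template_not_null.py -- schema only (hypothetical file name)
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('my_app', '0022_fill_template'),
    ]

    operations = [
        migrations.AlterField(
            model_name='site',
            name='template',
            field=models.ForeignKey(to='my_app.Template'),
            preserve_default=False,
        ),
    ]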
Adding null to the field giving you a problem should fix it; in your case, the "template" field. Just add null=True to the field. The migration should then look like this:
class Migration(migrations.Migration):

    dependencies = [
        ('my_app', '0021_auto_20150210_1008'),
    ]

    operations = [
        migrations.RunPython(add_default_template),
        migrations.AlterField(
            model_name='site',
            name='template',
            field=models.ForeignKey(to='my_app.Template', null=True),
            preserve_default=False,
        ),
    ]
Isn't it possible to do something like the following with South in a schemamigration?
def forwards(self, orm):
    ## CREATION
    # Adding model 'Added'
    db.create_table(u'something_added', (
        (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
        ('foo', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['something.Foo'])),
        ('bar', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['something.Bar'])),
    ))
    db.send_create_signal(u'something', ['Added'])

    ## DATA
    # Create Added for every Foo
    for f in orm.Foo.objects.all():
        self.prev_orm.Added.objects.create(foo=f, bar=f.bar)

    ## DELETION
    # Deleting field 'Foo.bar'
    db.delete_column(u'something_foo', 'bar_id')
See the prev_orm, which would allow me to access f.bar and do it all in one migration. I find that having to write 3 migrations for that is pretty heavy...
I know this is not "the way to do it", but to my mind this would honestly be much cleaner.
Would there be a real problem to do so btw?
I guess your objective is to ensure that deletion does not run before the data-migration. For this you can use the dependency system in South.
You can break the above into three parts:
001_app1_addition_migration (in app 1)
then
001_app2_data_migration (in app 2, where the Foo model belongs)
and then
002_app1_deletion_migration (in app 1) with something like the following:
class Migration:

    depends_on = (
        ("app2", "001_app2_data_migration"),
    )

    def forwards(self):
        ## DELETION
        # Deleting field 'Foo.bar'
        db.delete_column(u'something_foo', 'bar_id')
First of all, the orm provided by South is the one that you are migrating to. In other words, it matches the schema after the migration is complete. So you can just write orm.Added instead of self.prev_orm.Added. The other implication of this fact is that you cannot reference foo.bar since it is not present in the final schema.
The way to get around that (and to answer your question) is to skip the ORM and just execute raw SQL directly.
In your case, the create statement that accesses the deleted row would look something like:
from django.db import connection

cursor = connection.cursor()
cursor.execute('SELECT "id", "bar_id" FROM "something_foo"')
for foo_id, bar_id in cursor.fetchall():
    orm.Added.objects.create(foo_id=foo_id, bar_id=bar_id)
South migrations use transaction management.
When doing several migrations at once, the code is similar to:
for migration in migrations:
    south.db.db.start_transaction()
    try:
        migration.forwards(migration.orm)
        south.db.db.commit_transaction()
    except:
        south.db.db.rollback_transaction()
        raise
so... while it is not recommended to mix schema and data migrations, once you commit the schema with db.commit_transaction() the tables should be available for you to use. Be mindful to provide a backwards() method that does the correct steps in reverse.
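Putting that together with the question's migration, the shape would be roughly this; a hedged sketch against South's db API (using raw SQL for the data step, since foo.bar is absent from the final ORM state):

# hedged sketch: commit the schema change so the data step can see it,
# then reopen a transaction for the rest of the work
def forwards(self, orm):
    # 1. schema: create the new table (as in the question)
    db.create_table(u'something_added', (
        (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
        ('foo', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['something.Foo'])),
        ('bar', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['something.Bar'])),
    ))
    db.send_create_signal(u'something', ['Added'])
    db.commit_transaction()
    db.start_transaction()
    # 2. data: raw SQL, because the final ORM no longer has Foo.bar
    from django.db import connection
    cursor = connection.cursor()
    cursor.execute('SELECT "id", "bar_id" FROM "something_foo"')
    for foo_id, bar_id in cursor.fetchall():
        orm.Added.objects.create(foo_id=foo_id, bar_id=bar_id)
    # 3. schema: drop the old column
    db.delete_column(u'something_foo', 'bar_id')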