I'm using Django 2.2 and my question is: does transaction.atomic roll back increments to a pk sequence?
Below is the background bug I wrote up that led me to this issue.
I'm facing a really weird problem that I can't figure out, and I'm hoping someone has run into something similar.
An insert using the Django ORM .create() function is returning: django.db.utils.IntegrityError: duplicate key value violates unique constraint "my_table_pkey" DETAIL: Key (id)=(5795) already exists.
Fine. But then I look at the table and no record with id=5795 exists!
SELECT * FROM my_table WHERE id = 5795;
shows (0 rows)
A look at the sequence my_table_id_seq shows that it has nonetheless incremented to last_value = 5795, as if the above record had been inserted. Moreover, the issue does not always occur: a successful insert with different data goes in at id=5796. (I tried resetting the pk sequence, but that didn't do anything, since it doesn't seem to be the problem anyway.)
I'm quite stumped by this and it has caused us a lot of issues on one specific table. Finally I realize the call is wrapped in transaction.atomic and that a particular scenario may be causing a double insert with the same pk.
So my theory is: transaction.atomic is not rolling back the increment of the pk sequence, even though the insert itself is rolled back.
Postgres sequences do not roll back. Every time they are touched by a statement they advance, whether the statement succeeds or not. For more information, see the Notes section of the PostgreSQL CREATE SEQUENCE documentation.
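A minimal sketch of the behavior; MyModel and the sequence name are illustrative stand-ins for the table in the question:

from django.db import IntegrityError, connection, transaction

from myapp.models import MyModel  # hypothetical model backed by my_table

def last_sequence_value():
    # The sequence name is an assumption based on Postgres' default
    # naming for a serial pk on my_table.
    with connection.cursor() as cursor:
        cursor.execute("SELECT last_value FROM my_table_id_seq")
        return cursor.fetchone()[0]

before = last_sequence_value()
try:
    with transaction.atomic():
        MyModel.objects.create()  # nextval() advances the sequence here
        raise IntegrityError("force a rollback")
except IntegrityError:
    pass

# The INSERT was rolled back, but the sequence increment was not:
assert not MyModel.objects.filter(pk=before + 1).exists()
assert last_sequence_value() == before + 1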
I want to retrieve unique foreign key instances and order them randomly.
However, I get an error when I use order_by('?').
My query is like this:
qs=Course.objects.distinct('courseschedule__object_id').order_by('courseschedule__object_id')
This query works fine, but now I want to order it randomly (to get a random result every time), so I try this:
qs=qs.order_by('?')
I got this error:
django.db.utils.ProgrammingError: SELECT DISTINCT ON expressions must match initial ORDER BY expressions
Any idea how to fix it? My database is Postgres, and I don't want to do raw SQL. I really appreciate your help!
First of all, random ordering in the database is expensive and slow, according to the Django queryset documentation for order_by('?'). Second, in my opinion, instead of using the database to return a random order, get your query from the database without distinct (because it can also be slow), deduplicate it with set(), and then shuffle it with Python methods such as random.shuffle(). This way you are not using the database to do your stuff but pure Python code.
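One variant of the idea above, a sketch that keeps the question's working distinct() query and moves only the shuffling into Python (model and field names are taken from the question):

import random

from myapp.models import Course

# Evaluate the queryset once, then shuffle in Python instead of
# asking Postgres for a random order via ORDER BY '?'.
courses = list(
    Course.objects
    .distinct('courseschedule__object_id')
    .order_by('courseschedule__object_id')
)
random.shuffle(courses)  # a different order on every call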
While using Django 1.7 migrations, I came across a migration that worked in development, but not in production:
ValueError: Found wrong number (0) of constraints for table_name(a, b, c, d)
This is caused by an AlterUniqueTogether rule:
migrations.AlterUniqueTogether(
    name='table_name',
    unique_together=set([('a', 'b')]),
)
Reading up on bugs and such in the Django bug DB, it seems to be caused by the existing unique_together in the db not matching the migration history.
How can I work around this error and finish my migrations?
(Postgres and MySQL Answer)
If you look at your actual table (use \d table_name) and look at the indexes, you'll find an entry for your unique constraint. This is what Django is trying to find and drop. But it can't find an exact match.
For example,
"table_name_...6cf2a9c6e98cbd0d_uniq" UNIQUE CONSTRAINT, btree (d, a, b, c)
In my case, the order of the keys (d, a, b, c) did not match the constraint it was looking to drop (a, b, c, d).
I went back into my migration history and changed the original AlterUniqueTogether to match the actual order in the database.
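For example, the edited operation might look like this (a hedged sketch; the field order follows the \d output above):

# The original AlterUniqueTogether, edited so the field order matches
# the constraint Postgres actually has (d, a, b, c).
migrations.AlterUniqueTogether(
    name='table_name',
    unique_together=set([('d', 'a', 'b', 'c')]),
)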
The migration then completed successfully.
I had a similar issue come up while I was converting a CharField to a ForeignKey. Everything worked with that process, but I was left with Django thinking it still needed to update the unique_together in a new migration. (Even though everything looked correct from inside postgres.) Unfortunately, applying this new migration would then give a similar error:
ValueError: Found wrong number (0) of constraints for program(name, funder, payee, payer, location, category)
The fix that ultimately worked for me was to comment out all the previous AlterUniqueTogether operations for that model. The manage.py migrate worked without error after that.
"unique_together in the db not matching the migration history" - Every time an index is altered on a table it checks its previous index and drops it. In your case it is not able to fetch the previous index.
Solution-
1.Either you can generate it manually
2.Or revert to code where previous index is used and migrate.Then finally change to new index in your code and run migration.(django_migration files to be taken care of)
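If you go the manual route from option 1, a sketch could look like the following; the constraint name and columns are placeholders, not values from your schema:

from django.db import connection

# Recreate the constraint Django expects to find, so that the pending
# AlterUniqueTogether has something to drop.
with connection.cursor() as cursor:
    cursor.execute(
        "ALTER TABLE table_name "
        "ADD CONSTRAINT table_name_a_b_c_d_uniq UNIQUE (a, b, c, d)"
    )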
It's also worth checking that you only have the expected number of unique constraints for the table in question. For example, if your table has multiple unique constraints, delete the surplus ones so that only the expected pre-migration constraint is present.
To check how many unique constraints there are for a given table in PostgreSQL:
SELECT *
FROM information_schema.table_constraints AS c
WHERE c.table_name = '<table_name>'
  AND c.constraint_type = 'UNIQUE';
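If that query returns more constraints than expected, a surplus one can be dropped before migrating. A hedged sketch, where both identifiers stand in for what the query above reported:

from django.db import connection

# Drop a surplus unique constraint found by the query above.
with connection.cursor() as cursor:
    cursor.execute(
        "ALTER TABLE table_name DROP CONSTRAINT surplus_constraint_name"
    )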
Just in case someone runs into this and the previous answers haven't solved it: in my case the issue was that when I modified the unique_together constraint, the migration was attempted but the data didn't allow it (because the new unique_together was more restrictive). However, the migration managed to delete the old unique_together constraint from the table, leaving it in an inconsistent state. I had to migrate back to zero and re-apply the migration without data; then it went through without problems.
In summary, make sure your data will be able to accept the new constraint before you apply the migration.
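One way to check this up front with the ORM - a sketch, with the model and the field names ('a', 'b') borrowed from the question's example:

from django.db.models import Count

from myapp.models import MyModel  # the model gaining the stricter constraint

# Groups that collide on the new unique_together fields would make
# the migration fail.
duplicates = (
    MyModel.objects
    .values('a', 'b')
    .annotate(n=Count('pk'))
    .filter(n__gt=1)
)
print(list(duplicates))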
1. Find the latest migration file for the respective table, locate its unique_together, and replace the current unique constraint fields.
2. Migrate the database using ./manage.py migrate your_app_name.
3. Revert or undo the changes in the previous migration file.
In my case the problem was that the previous migration was not present in the django_migrations table. I added the missing entry and then the new migration worked.
Someone may get this issue while modifying unique_together. Basically, the table state is not consistent with the migrations. You may need to add the previous constraints manually using the MySQL shell.
In case you are using migrate with Django and there is no data in the database, you can drop the database and then run python manage.py migrate again.
I have code like this:
form = TestForm(request.POST)
form.save(commit=False).save()
This code sometimes works and sometimes doesn't. The problem is the auto-increment id.
When I have some data in the db that was not written by Django and I want to add data from Django, I get an IntegrityError: id already exists.
If I have 2 rows in the db (not added by Django), I need to click "add data" 3 times. After the third time, when the id has incremented to 3, everything is ok.
How do I solve this?
These integrity errors appear when your table's sequence is not updated after a new item is created, or when the sequence is otherwise out of sync with reality. For example, you import items from some source and the items already contain ids higher than your table's sequence indicates. I have not seen a case where Django messes sequences up.
So what I guess happens is that the other source that inserts data into your database also inserts ids, and the sequence is not updated. Fix that and your problems should disappear.
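A common way to resync is to point the sequence at the table's current MAX(id); Django's manage.py sqlsequencereset command can also generate equivalent SQL for you. A sketch, with table and column names as placeholders:

from django.db import connection

# Set the pk sequence to the current MAX(id) so the next insert
# gets a free id.
with connection.cursor() as cursor:
    cursor.execute(
        "SELECT setval(pg_get_serial_sequence('my_table', 'id'), "
        "COALESCE((SELECT MAX(id) FROM my_table), 1))"
    )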
Using Django with a PostgreSQL (8.x) backend, I have a model where I need to skip a block of ids, e.g. after giving out 49999 I want the next id to be 70000 not 50000 (because that block is reserved for another source where the instances are added explicitly with id - I know that's not a great design but it's what I have to work with).
What is the correct/safest place for doing this?
I know I can set the sequence with
SELECT SETVAL(
    (SELECT pg_get_serial_sequence('myapp_mymodel', 'id')),
    70000,
    false
);
but when does Django actually pull a number from the sequence?
Do I override MyModel.save(), call its super and then grab me a cursor and check with
SELECT currval(
    (SELECT pg_get_serial_sequence('myapp_mymodel', 'id'))
);
?
I believe that a sequence may be advanced by Django even if saving the model fails, so I want to make sure that whenever it hits that number it advances - is there a better place for this than save()?
P.S.: Even if that was the way to go - can I actually figure out the currval for save()'s session like this? If I grab a connection and cursor and execute that second SQL statement, wouldn't I be in another session, and therefore not get a currval?
Thank you for any pointers.
EDIT: I have a feeling that this must be done at database level (concurrency issues) and posted a corresponding PostgreSQL question - How can I forward a primary key sequence in PostgreSQL safely?
As I haven't found an "automated" way of doing this yet, I'm thinking of the following workaround - it would be feasible for my particular situation:
1. Set the sequence with MAXVALUE 49999 NO CYCLE.
2. When 49999 is reached, the next save() will run into a Postgres error.
3. Catch that exception and re-raise it as a form error: "you've run out of numbers, please reset to the next block, then try again".
4. Provide a view where the user can activate the next block, i.e. execute ALTER SEQUENCE my_seq RESTART WITH 70000 MAXVALUE 89999 (see the sketch after this list).
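A sketch of the pieces; the sequence name is an assumption based on Postgres' default naming for myapp_mymodel's id:

from django.db import connection

def cap_current_block():
    # Step 1: stop the sequence at the end of the current block.
    with connection.cursor() as cursor:
        cursor.execute(
            "ALTER SEQUENCE myapp_mymodel_id_seq MAXVALUE 49999 NO CYCLE"
        )

def activate_next_block():
    # Step 4: what the user-facing view would execute.
    with connection.cursor() as cursor:
        cursor.execute(
            "ALTER SEQUENCE myapp_mymodel_id_seq "
            "RESTART WITH 70000 MAXVALUE 89999"
        )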
I'm uneasy about doing the restart automatically when catching the exception:
try:
    instance.save()
except RunOutOfIdsException:
    restart_id_sequence()
    instance.save()
as I fear two concurrent save()s running out of ids would lead to two separate restarts, and a subsequent violation of the unique constraint (basically the same concept as the original problem).
My next thought was to not use a sequence for the primary key, but rather to always specify the id explicitly from a separate counter table, which I check/update before using its latest number - that should be safe from concurrency issues. The only problem is that although I have a single place where I add model instances, other parts of Django or third-party apps may still rely on an implicit id, which I don't want to break.
But that same mechanism happens to be easy to implement at the Postgres level - I believe this is the solution:
Don't use SERIAL for the primary key, use DEFAULT my_next_id()
Follow the same logic as for "single level gapless sequence" - http://www.varlena.com/GeneralBits/130.php - my_next_id() does an update followed by a select
Instead of just increasing by 1, check if a boundary was crossed and if so, increase even further
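A hedged sketch of what the list above could look like in practice; all identifiers are illustrative, and the boundary values (50000-69999 reserved, next block starting at 70000) come from the question:

from django.db import connection

SQL = """
CREATE TABLE my_id_counter (last_id bigint NOT NULL);
INSERT INTO my_id_counter VALUES (0);

CREATE FUNCTION my_next_id() RETURNS bigint AS $$
    -- The UPDATE takes a row lock, so concurrent callers are
    -- serialized; no two transactions can hand out the same id.
    UPDATE my_id_counter
       SET last_id = CASE
               WHEN last_id + 1 BETWEEN 50000 AND 69999 THEN 70000
               ELSE last_id + 1
           END;
    SELECT last_id FROM my_id_counter;
$$ LANGUAGE sql;

ALTER TABLE myapp_mymodel ALTER COLUMN id SET DEFAULT my_next_id();
"""

with connection.cursor() as cursor:
    cursor.execute(SQL)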