I'm using an Oracle 12c database and want to test out one problem.
When carrying out a web service request, it returns an underlying ORA-02292 error for the constraint YYY.FK_L_TILSYNSOBJEKT_BEGRENSNING.
Here is the SQL for the constraint from the table definition:
CONSTRAINT "FK_L_TILSYNSOBJEKT_BEGRENSNING" FOREIGN KEY ("BEGRENSNING")
REFERENCES "XXX"."BEGRENSNING" ("IDSTRING") DEFERRABLE INITIALLY DEFERRED ENABLE NOVALIDATE
The problem is that when I try to delete the row manually from the parent table, using an IDSTRING that is valid (present in both tables), the delete succeeds.
What causes it to behave this way? Is there any other info I should provide?
Not sure if this helps anyone, since it was a fairly stupid mistake, but I'll try to make it useful since people want an answer.
The keyword combination DEFERRABLE INITIALLY DEFERRED means that the constraint is enforced at commit time, not when the statement runs. INITIALLY IMMEDIATE, by contrast, performs the check as soon as each statement is issued, which makes bulk updates a bit slower, since every statement in the transaction has to be checked against the constraint. With initial deferral, if there turns out to be a problem, the whole batch is rolled back at commit, no additional unnecessary checks are issued along the way, and something can be done about it; hence INITIALLY IMMEDIATE is used less often for this than initial deferral.
Error ORA-02292, however, is raised only for DELETE statements; knowing that makes it a bit easier to debug your statements.
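For illustration, here is a minimal sketch of how the deferred check typically plays out against the parent table from the question (the IDSTRING value is a placeholder):

-- The DELETE appears to succeed, because the deferred FK is not checked yet:
DELETE FROM "XXX"."BEGRENSNING" WHERE "IDSTRING" = 'some_id';
-- With INITIALLY DEFERRED, the violation only surfaces when the transaction ends:
COMMIT;
-- ORA-02091: transaction rolled back
-- ORA-02292: integrity constraint (YYY.FK_L_TILSYNSOBJEKT_BEGRENSNING) violated - child record found
-- (You can force the check earlier in the transaction with SET CONSTRAINTS ALL IMMEDIATE;)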
I'm using Django 2.2 and my question is: does transaction.atomic roll back increments to a pk sequence?
Below is the background bug report I wrote up that led me to this question.
I'm facing a really weird problem that I can't figure out, and I'm hoping someone has run into something similar.
An insert using the django ORM .create() function is returning django.db.utils.IntegrityError: duplicate key value violates unique constraint "my_table_pkey" DETAIL: Key (id)=(5795) already exists.
Fine. But then I look at the table and no record with id=5795 exists!
SELECT * from my_table where id=5795;
shows (0 rows)
A look at the sequence my_table_id_seq shows that it has nonetheless incremented to last_value = 5795, as if the above record had been inserted. Moreover, the issue does not always occur: a subsequent successful insert with different data goes in at id=5796. (I tried resetting the pk sequence, but that didn't do anything, since it doesn't seem to be the problem anyway.)
I'm quite stumped by this and it has caused us a lot of issues on one specific table. Finally I realize the call is wrapped in transaction.atomic and that a particular scenario may be causing a double insert with the same pk.
So my theory is: transaction.atomic is not rolling back the increment of the pk sequence.
Postgres sequences do not roll back. Every time they are touched by a statement they advance, whether the statement succeeds or not. For more information, see the Notes section of the PostgreSQL CREATE SEQUENCE documentation.
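A minimal way to see this for yourself (the demo sequence name is made up):

CREATE SEQUENCE demo_seq;
BEGIN;
SELECT nextval('demo_seq');   -- returns 1
ROLLBACK;
SELECT nextval('demo_seq');   -- returns 2: the increment survived the rollback
DROP SEQUENCE demo_seq;

So a gap like the one at id=5795 just means some statement consumed that value and was then rolled back; the gap itself is harmless.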
I have a database table with a column that is updated relatively frequently.
The question is:
Is it more efficient to avoid writing to the database every time by first reading the object (SELECT ... WHERE) and comparing the values to determine whether an update is even necessary,
or to always just issue the update (UPDATE ... WHERE) without checking the current state?
I think the first approach is more hassle, since it involves two DB operations instead of one, but it avoids an unnecessary write.
I also wonder whether we should even think about this: our DB will most likely not reach 100k records in this table anytime soon, so even if the update were more costly it wouldn't be an issue. Please correct me if I'm wrong.
The database is PostgreSQL 9.6
It will avoid I/O load on the database if you only perform the updates that are necessary.
You can include the test in the UPDATE itself, like this:
UPDATE mytable
SET mycol = 'avalue'
WHERE id = 42
AND mycol <> 'avalue';
The only downside is that triggers will not be called unless the value really changes.
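One caveat not in the original answer: if mycol can be NULL, the <> comparison never matches a NULL value, so a NULL-safe sketch of the same idea would be:

UPDATE mytable
SET mycol = 'avalue'
WHERE id = 42
AND mycol IS DISTINCT FROM 'avalue';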
I'm developing an MFC application (SDI) to update, add, and delete rows in a database table called security.
The problem is that after updating one row in the table, the row is indeed updated (I verified this), but when I perform another action (updating another row or deleting a row) the earlier update is undone. I really don't know whether the problem is with the CRecordset or the database itself.
// m_pSet is an instance of a class based on CRecordset:
m_pSet->Open();
m_pSet->Edit();
m_pSet->m_Security_Id = sec->SecurityId;
m_pSet->m_Security_Name = sec->SecurityName;
m_pSet->m_Security_Type_Id = sec->SecurityTypeStringToInt();
if (!m_pSet->Update())
{
AfxMessageBox(_T("Record not updated; no field values were set."));
}
In my experience with Oracle and SQL Server, there is a difference in the way commits happen. The behavior you are seeing implies that the update is not being implicitly committed.
In Oracle, commits are an explicit statement and need to be conducted after you have carried out some transaction.
In SQL Server, commits are implicit by default and do not need to be carried out after transactions.
That being said, this other Stack Overflow question and answer appears to describe two ways in which commits can become explicit in SQL Server, meaning that without a commit you may lose your transaction.
The first is that you can use BEGIN TRANSACTION to have the database wait for a COMMIT statement. From what you have posted, that does not seem to be the case here.
The other way commits become explicit in SQL Server is through settings on the database itself. Based on your line of thought, I would check the settings referred to in that post and make sure you have not inadvertently made commits explicit.
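To illustrate the two situations described above (a sketch in T-SQL; the column names are guesses based on your recordset members):

-- Case 1: an explicit transaction - nothing is durable until COMMIT is issued.
BEGIN TRANSACTION;
UPDATE security SET Security_Name = 'New name' WHERE Security_Id = 42;
COMMIT TRANSACTION;

-- Case 2: a session setting that turns off autocommit, so every data-modification
-- statement silently opens a transaction that also needs an explicit COMMIT:
SET IMPLICIT_TRANSACTIONS ON;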
I can't detect any pattern; maybe 1 in every 1000 edits of a certain model returns an IntegrityError on an m2m field. Most of the time this field wasn't even modified. When a model is saved, I believe Django always wipes the m2m field and then re-adds the items, right? I saw that Django calls clear() and then add()s the items.
My code then fails with:
IntegrityError: duplicate key value violates unique constraint "app_model_m2m_field_key"
DETAIL: Key (model1_id, model2_id)=(597, 1009) already exists.
It seems as if the items are added before they are cleared, which is very weird. I've tried to reproduce it, but it's very hard; it only happens occasionally. Any idea what could cause it? Could setting autocommit maybe solve this problem?
Thanks in advance
Most likely, you have two requests racing to commit similar changes at the same time.
Request 1 begins a transaction and DELETEs the existing M2M rows.
Request 2 begins a transaction and DELETEs the M2M rows with the same where clause. This blocks waiting for request 1's transaction to commit.
Request 1 re-INSERTs all the M2M rows and commits.
Request 2 resumes, and the delete succeeds without deleting any rows, because all rows that existed when the statement began have already been deleted.
Request 2 tries to re-INSERT an M2M row, but the database detects that it already exists and returns an error.
It's possible to fix this by upgrading to the SERIALIZABLE isolation level (instead of PostgreSQL's default of READ COMMITTED) but at the cost of even more exciting potential failure modes and worse performance.
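If you did want to experiment with that, a sketch (the database name is a placeholder):

-- For a single transaction:
BEGIN ISOLATION LEVEL SERIALIZABLE;
-- ... the clear()/add() work ...
COMMIT;

-- Or as the default for the whole database:
ALTER DATABASE mydb SET default_transaction_isolation = 'serializable';

Under SERIALIZABLE, one of the two racing requests typically fails with a serialization error (SQLSTATE 40001) and has to be retried, which is exactly the kind of failure mode mentioned above.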
I'm assuming you're right that Django is performing a DELETE followed by a series of INSERTs, although that wouldn't be a very good plan precisely because it exacerbates this kind of race.
The best plan is to identify what has actually changed and only ask the database to make those changes, because then if you get an integrity error it's because there was a real conflict that you probably couldn't do anything about anyway.
Using Django with a PostgreSQL (8.x) backend, I have a model where I need to skip a block of ids, e.g. after giving out 49999 I want the next id to be 70000 not 50000 (because that block is reserved for another source where the instances are added explicitly with id - I know that's not a great design but it's what I have to work with).
What is the correct/safest place for doing this?
I know I can set the sequence with
SELECT SETVAL(
(SELECT pg_get_serial_sequence('myapp_mymodel', 'id')),
70000,
false
);
but when does Django actually pull a number from the sequence?
Do I override MyModel.save(), call its super and then grab me a cursor and check with
SELECT currval(
(SELECT pg_get_serial_sequence('myapp_mymodel', 'id'))
);
?
I believe that a sequence may be advanced by Django even if saving the model fails, so I want to make sure that whenever it hits that number it advances - is there a better place than save()?
P.S.: Even if that were the way to go - can I actually figure out the currval for save()'s session like this? If I grab a connection and cursor and execute that second SQL statement, wouldn't I be in another session and therefore not get a currval?
Thank you for any pointers.
EDIT: I have a feeling that this must be done at database level (concurrency issues) and posted a corresponding PostgreSQL question - How can I forward a primary key sequence in PostgreSQL safely?
As I haven't found an "automated" way of doing this yet, I'm thinking of the following workaround - it would be feasible for my particular situation:
Set the sequence up with MAXVALUE 49999 NO CYCLE (sketched below)
When 49999 is reached, the next save() will run into a PostgreSQL error
Catch that exception and re-raise it as a form error: "you've run out of numbers, please reset to the next block, then try again"
Provide a view where the user can activate the next block, i.e. execute "ALTER SEQUENCE my_seq RESTART WITH 70000 MAXVALUE 89999"
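A sketch of those two steps in SQL (the sequence name is a guess based on Django's default naming):

ALTER SEQUENCE myapp_mymodel_id_seq MAXVALUE 49999 NO CYCLE;
-- Once 49999 has been handed out, the next nextval() fails with an error along the lines of
-- "reached maximum value of sequence", which save() surfaces as a database exception.
-- The "activate the next block" view would then run:
ALTER SEQUENCE myapp_mymodel_id_seq RESTART WITH 70000 MAXVALUE 89999;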
I'm uneasy about doing the restart automatically when catching the exception:
try:
    instance.save()
except RunOutOfIdsException:
    restart_id_sequence()
    instance.save()
as I fear two concurrent save() calls running out of ids would lead to two separate restarts, and a subsequent violation of the unique constraint (basically the same concept as the original problem).
My next thought was not to use a sequence for the primary key, but rather to always specify the id explicitly from a separate counter table that I check/update before using its latest number - that should be safe from concurrency issues. The only problem is that although I have a single place where I add model instances, other parts of Django or third-party apps may still rely on an implicit id, which I don't want to break.
But that same mechanism happens to be easily implemented on postgres level - I believe this is the solution:
Don't use SERIAL for the primary key, use DEFAULT my_next_id()
Follow the same logic as for "single level gapless sequence" - http://www.varlena.com/GeneralBits/130.php - my_next_id() does an update followed by a select
Instead of just increasing by 1, check whether a boundary was crossed and, if so, jump even further ahead (see the sketch below)
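Here is a minimal sketch of that idea; the counter table and column names are made up, and the function uses an UPDATE followed by a SELECT inside plpgsql, as in the linked article:

-- Single-row counter table:
CREATE TABLE id_counter (last_id integer NOT NULL);
INSERT INTO id_counter VALUES (0);

CREATE OR REPLACE FUNCTION my_next_id() RETURNS integer AS $$
DECLARE
    next_id integer;
BEGIN
    -- The UPDATE takes a row lock on the single counter row, so concurrent
    -- callers are serialized and can never hand out the same id.
    -- 50000-69999 is the reserved block from the question; skip straight over it.
    UPDATE id_counter
       SET last_id = CASE
                         WHEN last_id + 1 BETWEEN 50000 AND 69999 THEN 70000
                         ELSE last_id + 1
                     END;
    SELECT last_id INTO next_id FROM id_counter;
    RETURN next_id;
END;
$$ LANGUAGE plpgsql;

-- Then, instead of SERIAL, define the primary key as:
--   id integer PRIMARY KEY DEFAULT my_next_id()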