DELETE blocking INSERT - c++

OS : Solaris
Database : Informix
I have a process which has 2 threads:
Thread 1 dealing with new transactions and doing DB INSERTS
Thread 2 dealing with existing transactions and doing DB DELETES
PROBLEM
Thread 1 is continuously doing INSERTs (adding new transactions) on a table.
Thread 2 is continuously doing DELETEs (removing expired transactions) from the same table, based on the primary key.
The INSERTs are failing with Informix error 244, which occurs because of page/table locking.
My guess is that the DELETE is taking a table lock instead of a row lock and preventing the INSERTs from working.
Is there any way I can prevent this deadlocking?
EDIT
I found another clue. The 244 error is caused by a SELECT query.
Both the INSERT and the DELETE do a SELECT from a frequently updated table before performing their operation.
Isolation is set to COMMITTED READ. When I manually do a SELECT on this table from dbaccess while the deletes are happening, I get the same error.

I would be very surprised if a DELETE were taking a full table lock when removing single rows by primary key. More likely, the longevity of one (or both) of the transactions is eventually escalating to a table lock because of the number of modified rows. In general, you can avoid deadlocks on volatile tables such as this by limiting each transaction to single-row operations and ensuring your transaction model is read-committed. At least that has been my experience.
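To make that concrete (a sketch only, with made-up names: the transactions table, the txn_id key, the connection URL, and the 5-second lock wait are all assumptions, not anything from the question), keeping each transaction to one row, running at COMMITTED READ, asking Informix for row-level locks on the table, and letting sessions wait for locks instead of erroring out could look like this over JDBC:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class InformixLockSetup {
        public static void main(String[] args) throws Exception {
            // Connection URL, credentials and object names below are illustrative only.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:informix-sqli://dbhost:9088/mydb:INFORMIXSERVER=myserver",
                    "user", "secret");
                 Statement st = con.createStatement()) {

                // One-time DDL: the lock mode usually defaults to PAGE; ROW keeps
                // INSERTs and DELETEs on different rows from colliding.
                st.execute("ALTER TABLE transactions LOCK MODE (ROW)");

                // Per-session settings for both the INSERT and the DELETE threads:
                st.execute("SET ISOLATION TO COMMITTED READ"); // readers see committed rows only
                st.execute("SET LOCK MODE TO WAIT 5");         // wait up to 5s instead of erroring out

                // Keep each transaction to a single row so locks are held only briefly.
                con.setAutoCommit(false);
                st.executeUpdate("DELETE FROM transactions WHERE txn_id = 12345");
                con.commit();
            }
        }
    }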

Related

Snowflake update statements locks the entire table and queues other update statements

We have a use case where we need to execute multiple UPDATE statements (each updating a different set of rows) on the same table in Snowflake, without the latency caused by the update queries queuing up. Currently a single UPDATE statement takes about 1 minute to execute, and all the other concurrent UPDATE statements (about 40 of them) are locked and queued, so the total time including the wait time is around 1 hour; the expected time is around 2 minutes (assuming all the UPDATE statements execute at the same time and the warehouse size supports 40 concurrent queries without any queuing).
What is the best solution to avoid this lock time? We've considered the following two options:
Make changes in the application code to batch all the update statements and execute as one query - Not possible for our use case.
Have a separate table for each client (each UPDATE statement updates rows for a different client). This way, all the UPDATE statements will execute against separate tables and there won't be any locks.
Is the second approach the best way to go or is there any other workaround that would help us reduce the latency of the queueing time?
This scenario is expected, since Snowflake locks the table during an UPDATE.
Option 1 is the ideal way to scale within the data model, but since you can't make it happen, you can go with option 2.
You can also put all the updates in one staging table and do the upsert in bulk: a DELETE plus an INSERT instead of an UPDATE, as sketched below. Check whether you can afford the delay.
But if you ask me, Snowflake should not be used for atomic updates. It has to be an upsert (delete and insert, and in bulk at that); atomic updates will have limitations. Consider a row-based store like Citus or MySQL if your use case allows it.
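A rough sketch of the staging-table route through the Snowflake JDBC driver (the account URL, credentials, and the target/updates_stage table names are assumptions, and target is assumed to be keyed on id):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SnowflakeBulkUpsert {
        public static void main(String[] args) throws Exception {
            // Account URL, credentials and table names are illustrative only.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:snowflake://myaccount.snowflakecomputing.com/",
                    "user", "secret");
                 Statement st = con.createStatement()) {

                con.setAutoCommit(false);

                // All clients first land their changed rows in updates_stage
                // (via COPY INTO or batched inserts); then one bulk swap runs
                // instead of 40 concurrent UPDATEs fighting over the table lock.
                st.executeUpdate(
                    "DELETE FROM target USING updates_stage s WHERE target.id = s.id");
                st.executeUpdate(
                    "INSERT INTO target SELECT * FROM updates_stage");
                st.executeUpdate("TRUNCATE TABLE updates_stage");

                con.commit();
            }
        }
    }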

Dynamodb missing updates with concurrent requests?

I'm having trouble updating a single item many times at once. If I try to update an item with new attributes many times like so:
UpdateExpression: 'SET attribute.#uniqueId = :newAttribute'
not all of the updates go through. I tried sending 20 updates with unique ids and this resulted in only 15 new attributes. This also occurs in my local dynamodb instance. I assume that the updates are somehow overwriting each other in a "last update wins" scenario but I'm not sure. How can I solve this?
DynamoDB is eventually consistent on update, so "race conditions" are possible. If you want stricter write semantics, take a look at transactions:
Items are not locked during a transaction. DynamoDB transactions provide serializable isolation. If an item is modified outside of a transaction while the transaction is in progress, the transaction is canceled and an exception is thrown with details about which item or items caused the exception.
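A minimal sketch of that route with the AWS SDK for Java v1 (the Items table, the id key, and the attribute names are assumptions, not anything from your setup); each writer wraps its single UpdateExpression in a transaction, so a conflicting concurrent write can surface as a TransactionCanceledException to be retried instead of being silently overwritten:

    import java.util.HashMap;
    import java.util.Map;

    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;
    import com.amazonaws.services.dynamodbv2.model.TransactWriteItem;
    import com.amazonaws.services.dynamodbv2.model.TransactWriteItemsRequest;
    import com.amazonaws.services.dynamodbv2.model.Update;

    public class TransactionalNestedUpdate {
        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

            Map<String, AttributeValue> key = new HashMap<>();
            key.put("id", new AttributeValue("item-1"));

            Map<String, String> names = new HashMap<>();
            names.put("#attr", "attribute");
            names.put("#uniqueId", "abc123");

            Map<String, AttributeValue> values = new HashMap<>();
            values.put(":newAttribute", new AttributeValue("payload"));

            // One nested-attribute update, executed as a transaction.
            Update update = new Update()
                    .withTableName("Items")
                    .withKey(key)
                    .withUpdateExpression("SET #attr.#uniqueId = :newAttribute")
                    .withExpressionAttributeNames(names)
                    .withExpressionAttributeValues(values);

            client.transactWriteItems(new TransactWriteItemsRequest()
                    .withTransactItems(new TransactWriteItem().withUpdate(update)));
        }
    }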
Your observation is very interesting, and contradicts observations made in the past in Are DynamoDB "set" values CDRTs? and Concurrent updates in DynamoDB, are there any guarantees? - in those issues people observed that concurrent writes to different set items or to different top-level attributes seem to not get overwritten. Neither case is exactly the same as what you tested (nested attributes), though, so it's not a definitive proof there was something wrong with your test, but it's still surprising.
Presentations made in the past by the DynamoDB developers suggested that in DynamoDB writes happen on a single node (the designated "leader" of the partition), and that this node can serialize the concurrent writes. This serialization is needed to allow conditional updates, counter increments, etc., to work safely with concurrent writes. Presumably, the same serialization could have also allowed multiple sub-attributes to be modified concurrently safely. If it doesn't, it might mean that this serialization is deliberately disabled for certain updates, perhaps all unconditional updates (without a ConditionExpression). This is very surprising, and should have been documented by Amazon...

DynamoDB Concurrency Issue

I'm building a system in which many DynamoDB (NoSQL) tables contain data, and data in one table references data in another table.
Multiple processes are accessing the same item in a table at the same time. I want to ensure that all of the processes have updated data and aren't trying to access that item at the exact same time because they are all updating the item with different data.
I would love some suggestions on this as I am stuck right now and don't know what to do. Thanks in advance!
Optimistic locking is a strategy to ensure that the client-side item that you are updating (or deleting) is the same as the item in Amazon DynamoDB. If you use this strategy, your database writes are protected from being overwritten by the writes of others, and vice versa.
With optimistic locking, each item has an attribute that acts as a version number. If you retrieve an item from a table, the application records the version number of that item. You can update the item, but only if the version number on the server side has not changed. If there is a version mismatch, it means that someone else has modified the item before you did. The update attempt fails, because you have a stale version of the item. If this happens, you simply try again by retrieving the item and then trying to update it. Optimistic locking prevents you from accidentally overwriting changes that were made by others. It also prevents others from accidentally overwriting your changes.
To support optimistic locking, the AWS SDK for Java provides the @DynamoDBVersionAttribute annotation. In the mapping class for your table, you designate one property to store the version number, and mark it using this annotation. When you save an object, the corresponding item in the DynamoDB table will have an attribute that stores the version number. The DynamoDBMapper assigns a version number when you first save the object, and it automatically increments the version number each time you update the item. Your update or delete requests succeed only if the client-side object version matches the corresponding version number of the item in the DynamoDB table.
ConditionalCheckFailedException is thrown if:
You use optimistic locking with @DynamoDBVersionAttribute and the version value on the server is different from the value on the client side.
You specify your own conditional constraints while saving data by using DynamoDBMapper with DynamoDBSaveExpression and these constraints failed.
Note
DynamoDB global tables use a “last writer wins” reconciliation between concurrent updates. If you use global tables, the last-writer-wins policy prevails, so in that case the locking strategy does not work as expected.
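A minimal sketch of such a mapping class (the Transactions table, id key, and payload attribute are made-up names, not anything from your schema):

    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBVersionAttribute;

    // Hypothetical item class: a "Transactions" table keyed on "id", with a "version"
    // attribute managed by the mapper for optimistic locking.
    @DynamoDBTable(tableName = "Transactions")
    public class TransactionRecord {

        private String id;
        private String payload;
        private Long version;

        @DynamoDBHashKey(attributeName = "id")
        public String getId() { return id; }
        public void setId(String id) { this.id = id; }

        @DynamoDBAttribute(attributeName = "payload")
        public String getPayload() { return payload; }
        public void setPayload(String payload) { this.payload = payload; }

        // The mapper sets this to 1 on first save and increments it on every update;
        // a save with a stale value fails with ConditionalCheckFailedException.
        @DynamoDBVersionAttribute(attributeName = "version")
        public Long getVersion() { return version; }
        public void setVersion(Long version) { this.version = version; }
    }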

Dynamo DB Optimistic Locking Behavior during Save Action

Scenario: We have a DynamoDB table that supports optimistic locking with a version number. Two concurrent threads are trying to save two different entries with the same primary key value to that table.
Question: Will ConditionalCheckFailedException be thrown for the latter save action?
Yes, the second thread, which tries to insert an item with the same key, will get a ConditionalCheckFailedException:
com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException
As soon as the item is saved in the database, subsequent updates must carry a version number that matches the value in the DynamoDB table (i.e. the server-side value).
save — For a new item, the DynamoDBMapper assigns an initial version number 1. If you retrieve an item, update one or more of its properties, and attempt to save the changes, the save operation succeeds only if the version number on the client side and the server side match. The DynamoDBMapper increments the version number automatically.
We had a similar use case in the past, but in our case multiple threads were first reading from DynamoDB and then trying to update the values.
So by the time a thread reads the item and tries to update it, the version may already have changed; if you don't read the latest value from DynamoDB, an intermediate update will be lost (known as the lost-update problem; see the AWS docs for more info).
I am not sure whether that is your use case, but if you simply have two threads trying to update the value and one of them carries a stale version by the time its request reaches DynamoDB, it will get a ConditionalCheckFailedException.
More info about this error can be found here http://grepcode.com/file/repo1.maven.org/maven2/com.michelboudreau/alternator/0.10.0/com/amazonaws/services/dynamodb/model/ConditionalCheckFailedException.java
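The usual pattern on the client side is to reload and retry when this happens; here is a sketch, assuming a DynamoDBMapper-mapped class like the made-up TransactionRecord shown earlier (the key "txn-123" and the retry count are illustrative):

    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
    import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;

    public class OptimisticSaveRetry {
        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
            DynamoDBMapper mapper = new DynamoDBMapper(client);

            for (int attempt = 0; attempt < 3; attempt++) {
                // Load the latest server-side copy (and its version number).
                TransactionRecord record = mapper.load(TransactionRecord.class, "txn-123");
                record.setPayload("new value");
                try {
                    mapper.save(record);  // fails if the version changed since the load
                    break;
                } catch (ConditionalCheckFailedException e) {
                    // Another writer saved first; loop to reload and retry.
                }
            }
        }
    }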

Unit-testing PostgreSQL row-level locks

I am currently adding unit tests to a rather large quantity of PostgreSQL stored procedures, using pgTap.
Some of the procedures perform operations which lock rows explicitly. These locks are critical to the application.
How do I write tests that check that the rows that need to be locked have been, and that rows which shouldn't be locked aren't?
The only "clue" I have at the moment is the pgrowlocks extension, which allows a transaction to check for rows locked by another transaction. However, the current transaction doesn't seem to see its own locks, so I'd have to use something to synchronise two transaction, and unless I am quite mistaken, there's no way to do that using pgTap.
(note: using PostgreSQL 9.1)
If you can identify the ctid of the rows in question, and know which transaction should have the rows locked, maybe you could use the pageinspect extension and look at the tuple info flags and xmax? The info flags should indicate the row is locked, and xmax should be set to the id of the transaction holding it.
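For example (a sketch under assumptions: the accounts table, block 0, and the connection settings are made up, and pageinspect must be installed), dumping the tuple headers from another connection looks like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PageInspectCheck {
        public static void main(String[] args) throws Exception {
            // Connection URL and table name are illustrative only.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/testdb", "user", "secret");
                 Statement st = con.createStatement();
                 // Dump the tuple headers of block 0 of the table under test.
                 ResultSet rs = st.executeQuery(
                     "SELECT t_ctid, t_xmax, t_infomask " +
                     "FROM heap_page_items(get_raw_page('accounts', 0))")) {
                while (rs.next()) {
                    // A non-zero t_xmax plus the HEAP_XMAX_* lock bits in t_infomask
                    // (see htup.h for the bit values) identify the locking transaction.
                    System.out.printf("ctid=%s xmax=%s infomask=%s%n",
                            rs.getString(1), rs.getString(2), rs.getString(3));
                }
            }
        }
    }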
How do I write tests that check that the rows that need to be locked have been, and that rows which shouldn't be locked aren't?
Open a separate transaction, try to lock the same row with NOWAIT, and catch the exception.
PostgreSQL has no support for autonomous transactions, so to open a separate transaction from within a pgTAP test you will have to resort to dblink or a similar extension.
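The probe itself, stripped of the dblink plumbing, amounts to this (a JDBC sketch from a second connection; the accounts table, the id = 42 row, and the connection details are assumptions):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class RowLockProbe {
        // Returns true if another transaction currently holds a lock on the probed row.
        static boolean isRowLocked(String url, String user, String pass) throws SQLException {
            try (Connection probe = DriverManager.getConnection(url, user, pass)) {
                probe.setAutoCommit(false);
                try (Statement st = probe.createStatement()) {
                    st.execute("SELECT 1 FROM accounts WHERE id = 42 FOR UPDATE NOWAIT");
                    return false;               // lock acquired: nobody else was holding it
                } catch (SQLException e) {
                    if ("55P03".equals(e.getSQLState())) {
                        return true;            // lock_not_available: the row is locked elsewhere
                    }
                    throw e;
                } finally {
                    probe.rollback();           // drop the probe's own lock either way
                }
            }
        }
    }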
PS. I found this link, where Robert Haas explains why row-level (tuple) locks are not tracked in pg_locks:
(...) ungranted tuple locks show up in pg_locks, but they disappear once granted. (PostgreSQL would run out of lock table space on even a medium-sized SELECT FOR UPDATE query if we didn't do this.)
On the other hand, I don't quite understand why you want to test for lock existence: it's guaranteed after a successful LOCK command.