My model represents users with unique names. To achieve that, I store the user and its name as two separate items using TransactWriteItems. The approximate structure looks like this:
PK | data
--------------------------------
userId#<userId> | {user data}
userName#<userName> | {userId: <userId>}
Data arrives at a Lambda from a Kinesis stream. If one Lambda invocation processes an "insert" event and another invocation comes in at about the same time (the difference can be as little as 5 milliseconds), the "update" event causes a TransactionConflictException: Transaction is ongoing for the item error.
Should I just retry the update a second or so later? I couldn't really find a resolution strategy.
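For reference, the kind of paired write described above might look roughly like this in boto3 (the table and attribute names are illustrative, not the asker's actual code):

import boto3

client = boto3.client("dynamodb")  # low-level client; region/credentials come from the environment

def create_user(user_id, user_name, email):
    # Writes the user item and the name "claim" item in one transaction; the
    # condition on the name item is what enforces uniqueness of the name.
    # "users" and "email" are stand-in names.
    client.transact_write_items(
        TransactItems=[
            {
                "Put": {
                    "TableName": "users",
                    "Item": {
                        "PK": {"S": f"userId#{user_id}"},
                        "email": {"S": email},  # stand-in for the real user data
                    },
                }
            },
            {
                "Put": {
                    "TableName": "users",
                    "Item": {
                        "PK": {"S": f"userName#{user_name}"},
                        "userId": {"S": user_id},
                    },
                    # Fail the whole transaction if the name is already taken.
                    "ConditionExpression": "attribute_not_exists(PK)",
                }
            },
        ]
    )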
That implies you’re getting data about the same user in quick succession and both writes are hitting the same items. One succeeds while the other exceptions out.
Is it always duplicate data? If you’re sure it is, then you can ignore the second write. It would be a no-op.
Is it different data? Then you've got to decide how to handle that conflict. You'll have one dataset in the database and a different dataset live in your code. That's a business-logic question, not a database question.
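If retrying is the chosen strategy, a minimal sketch could look like this (boto3; the helper name, backoff values, and which write to retry are made up for illustration):

import time
import botocore.exceptions

def write_with_retry(do_write, max_attempts=5):
    # do_write is a callable that issues the TransactWriteItems (or UpdateItem) call.
    for attempt in range(max_attempts):
        try:
            return do_write()
        except botocore.exceptions.ClientError as err:
            code = err.response["Error"]["Code"]
            # Only retry the transaction-conflict family of errors;
            # everything else is re-raised.
            if code not in ("TransactionConflictException", "TransactionCanceledException"):
                raise
            time.sleep(0.1 * (2 ** attempt))  # simple exponential backoff
    raise RuntimeError("write still conflicting after retries")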
Related
I am using DynamoDB for a project. I have a use case where I maintain a timeline for objects, i.e. each item stores its own start and end time plus the start time of the next object. New objects can be added in between two existing objects (o1 and o2), in which case I have to set o1's next-start time to the new object's start time, and the new object's next-start time to o2's start time. This can cause problems if two new objects are inserted between the same pair at the same time, and would probably require transactions. Can someone suggest how this can be handled?
Update: My data model looks like this:
objectId(Hash Key), startTime(Sort Key), endTime, nextStartTime
1, 1, 5, 4
1, 4, 6, 8
1, 8, 10, 9
So, it's possible a new entry comes in whose start time is 5. In a transaction I would then have to update nextStartTime of the second entry to 5 and insert a new entry whose nextStartTime is the start time of the third entry. While this is happening, another entry might come in whose start time also falls between the second and third entries (say 7, for example). Now I want the two transactions to be isolated from each other. In a traditional SQL database this would be possible because the second entry would be locked for the duration of the transaction, but DynamoDB doesn't lock items. So I am wondering: if I use transactions, will they protect the data integrity?
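Roughly, the two-action transaction being described could be expressed like this (boto3; the table name is an assumption, and the condition on the previous entry's nextStartTime is one way to make a concurrent insert fail rather than silently interleave):

import boto3

client = boto3.client("dynamodb")

def insert_between(object_id, prev_start, prev_next_start, new_start, new_end):
    # One transaction: adjust the previous entry's nextStartTime and insert the
    # new entry. The condition fails if another writer already changed the
    # previous entry's nextStartTime, i.e. another insert slipped in between.
    client.transact_write_items(
        TransactItems=[
            {
                "Update": {
                    "TableName": "timeline",  # illustrative table name
                    "Key": {
                        "objectId": {"N": str(object_id)},
                        "startTime": {"N": str(prev_start)},
                    },
                    "UpdateExpression": "SET nextStartTime = :new",
                    "ConditionExpression": "nextStartTime = :expected",
                    "ExpressionAttributeValues": {
                        ":new": {"N": str(new_start)},
                        ":expected": {"N": str(prev_next_start)},
                    },
                }
            },
            {
                "Put": {
                    "TableName": "timeline",
                    "Item": {
                        "objectId": {"N": str(object_id)},
                        "startTime": {"N": str(new_start)},
                        "endTime": {"N": str(new_end)},
                        "nextStartTime": {"N": str(prev_next_start)},
                    },
                    "ConditionExpression": "attribute_not_exists(startTime)",
                }
            },
        ]
    )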
DynamoDB supports optimistic locking. This is achieved via conditional writes.
You can do it manually by introducing a version attribute, or you can use the support provided (hopefully) by your SDK. Here is a link to the AWS docs.
TL;DR
Two writes have to update the same timeline item at roughly the same time.
One will succeed; the other will fail with a specific error.
You will have to retry the failing one.
DynamoDB also has transactions. However, they are limited to 25 items per request and consume 2x the capacity units. If you can get away with an optimistic lock, go for it.
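As a rough illustration of the manual version-attribute approach (boto3; the table, key, and attribute names are made up):

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("timeline")  # illustrative table name

def update_with_optimistic_lock(key, expected_version, new_next_start):
    try:
        table.update_item(
            Key=key,
            UpdateExpression="SET nextStartTime = :n, version = :v_next",
            # Only succeed if nobody bumped the version since we read the item.
            ConditionExpression="version = :v_expected",
            ExpressionAttributeValues={
                ":n": new_next_start,
                ":v_next": expected_version + 1,
                ":v_expected": expected_version,
            },
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # lost the race; re-read and retry
        raise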
Hope this was helpful
Update with more info on transactions
From this doc
Error Handling for Writing
Write transactions don't succeed under the following circumstances:
When a condition in one of the condition expressions is not met.
When a transaction validation error occurs because more than one action in the same TransactWriteItems operation targets the same item.
When a TransactWriteItems request conflicts with an ongoing TransactWriteItems operation on one or more items in the TransactWriteItems request. In this case, the request fails with a TransactionCanceledException.
When there is insufficient provisioned capacity for the transaction to be completed.
When an item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
When there is a user error, such as an invalid data format.
They claim that if there are two ongoing transactions on the same item, one will fail.
Why store the nextStartTime in the item? The nextStartTime is simply the start time of the next item, right? Seems like it'd be much easier to just pull the item as well as the next item to get the full picture at read-time. With a Query you can do this in one call, and so long as items are less than 2 KB in size it wouldn't even consume more RCUs than a get item would.
Simpler design, no cost for transactional writes, no need to do extensive testing on thread safety.
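A sketch of that read pattern (boto3; assumes the model above, with objectId as the hash key and startTime as the sort key, and a table name made up for illustration):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("timeline")  # illustrative table name

def get_item_and_next(object_id, start_time):
    # One Query returns the entry itself plus the one that follows it,
    # so the "next start time" can be derived at read time.
    response = table.query(
        KeyConditionExpression=Key("objectId").eq(object_id) & Key("startTime").gte(start_time),
        Limit=2,
        ScanIndexForward=True,  # ascending by startTime
    )
    items = response["Items"]
    current = items[0] if items else None
    next_item = items[1] if len(items) > 1 else None
    return current, next_item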
I'm having trouble updating a single item many times at once. If I try to update an item with new attributes many times like so:
UpdateExpression: 'SET attribute.#uniqueId = :newAttribute'
not all of the updates go through. I tried sending 20 updates with unique IDs and this resulted in only 15 new attributes. This also occurs in my local DynamoDB instance. I assume that the updates are somehow overwriting each other in a "last update wins" scenario, but I'm not sure. How can I solve this?
DynamoDB is eventually consistent on update, so "race conditions" are possible. If you want stricter logic in writes, take a look at transactions:
Items are not locked during a transaction. DynamoDB transactions provide serializable isolation. If an item is modified outside of a transaction while the transaction is in progress, the transaction is canceled and an exception is thrown with details about which item or items caused the exception.
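For illustration, wrapping that same kind of update in a single-item transaction, as the answer suggests, might look roughly like this (boto3; the table name is an assumption, and item_key is expected in DynamoDB JSON form, e.g. {"PK": {"S": "..."}}):

import boto3

client = boto3.client("dynamodb")

def add_nested_attribute(item_key, unique_id, new_value):
    # Mirrors the question's expression: SET attribute.#uniqueId = :newAttribute.
    # Concurrent transactions touching the same item are serialized; a
    # conflicting request surfaces an error instead of being silently lost.
    client.transact_write_items(
        TransactItems=[
            {
                "Update": {
                    "TableName": "my-table",  # illustrative table name
                    "Key": item_key,
                    "UpdateExpression": "SET #attr.#uid = :new",
                    "ExpressionAttributeNames": {"#attr": "attribute", "#uid": unique_id},
                    "ExpressionAttributeValues": {":new": {"S": new_value}},
                }
            }
        ]
    )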
Your observation is very interesting, and contradicts observations made in the past in Are DynamoDB "set" values CDRTs? and Concurrent updates in DynamoDB, are there any guarantees? - in those issues people observed that concurrent writes to different set items or to different top-level attributes seem to not get overwritten. Neither case is exactly the same as what you tested (nested attributes), though, so it's not a definitive proof there was something wrong with your test, but it's still surprising.
Presentations made in the past by the DynamoDB developers suggested that in DynamoDB writes happen on a single node (the designated "leader" of the partition), and that this node can serialize the concurrent writes. This serialization is needed to allow conditional updates, counter increments, etc., to work safely with concurrent writes. Presumably, the same serialization could have also allowed multiple sub-attributes to be modified concurrently safely. If it doesn't, it might mean that this serialization is deliberately disabled for certain updates, perhaps all unconditional updates (without a ConditionExpression). This is very surprising, and should have been documented by Amazon...
Consider the following architecture:
write -> DynamoDB table -> stream -> Lambda -> write metadata item to same table
It could be used for many, many awesome situations, e.g. table- and item-level aggregations. I've seen this architecture promoted in several tech talks by official AWS engineers.
But doesn't writing the metadata item add a new item to the stream and run the Lambda again?
How do I avoid an infinite loop? Is there a way to keep the metadata write from appearing in the stream?
Or is spending two stream/Lambda requests inevitable with this architecture (we're charged per request), i.e. exiting the Lambda function early if it's a metadata item?
As triggering an AWS Lambda function from a DynamoDB stream is a binary option (on/off), it's not possible to trigger the AWS Lambda function only for certain writes to the table. So your AWS Lambda function will be called again for the items it just wrote to the DynamoDB table. The important bit is to have logic in place in your AWS Lambda function to detect that it wrote that data and not to write it again in that case. Otherwise you'd get the mentioned infinite loop, which would be a really unfortunate situation, especially if it went unnoticed.
Currently DynamoDB does not offer condition-based subscription to a stream, so yes, DynamoDB will execute your Lambda function in an infinite loop. Currently the only solution is to limit what your Lambda function executes: you can use multiple Lambda functions, with one Lambda function there just to check whether a metadata item was written or not. I'm sharing a cloud architecture diagram of how you can achieve it.
A bit late but hopefully people looking for a more demonstrative answer will find this useful.
Suppose you want to process records where you add to an item up to a certain threshold; you could have an if condition that checks that and processes or skips the record, e.g.:
This code assumes you have an attribute "Type" for each of your entities / object types - this was recommended to me by Rick Houlihan himself but you could also check if an attribute exists i.e. "<your-attribute>" in record["dynamodb"]["NewImage"] - and you are designing with PK and SK as generic primary and sort key names.
import os

import boto3
from boto3.dynamodb.conditions import Key

# The table handle and threshold are assumed to come from environment variables
table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])
threshold = int(os.environ.get("THRESHOLD", "0"))

def get_value(pk):
    # Query the table to extract the current attribute value for this key
    response = table.query(KeyConditionExpression=Key("PK").eq(pk))
    items = response.get("Items", [])
    return items[0].get("<your-attribute>", 0) if items else 0

def your_aggregation_function(record):
    # Your aggregation logic here
    # Write back to the table with a put_item call once done
    pass

def lambda_handler(event, context):
    for record in event["Records"]:
        if record["eventName"] != "REMOVE" and record["dynamodb"]["NewImage"]["Type"]["S"] == "<your-entity-type>":
            # Query the table to extract the attribute value
            attribute_value = get_value(record["dynamodb"]["Keys"]["PK"]["S"])
            if attribute_value < threshold:
                # Send to your aggregation function
                your_aggregation_function(record)
Having the conditions in place in the Lambda handler (or wherever suits your needs) prevents the infinite loop mentioned.
You may want additional checks in the update expression to make sure two (or more) concurrent Lambdas are not writing the same object. I suggest you use a timestamp defined in the Lambda and add it to the SK, or, if you can't, have an "EventDate" attribute on your item so that you could add a ConditionExpression, or an UpdateExpression with SET if_not_exists(#attribute, :date).
The above will guarantee that your Lambda is idempotent.
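For illustration, the ConditionExpression variant of that guard could look roughly like this (boto3; the table name and the "EventDate"/"AggValue" attributes are assumptions):

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("my-table")  # illustrative table name

def apply_event(key, event_date, new_value):
    try:
        table.update_item(
            Key=key,
            UpdateExpression="SET EventDate = :d, AggValue = :v",
            # Only apply if this event is newer than whatever was written last;
            # a duplicate or concurrent write of the same event fails the check.
            ConditionExpression="attribute_not_exists(EventDate) OR EventDate < :d",
            ExpressionAttributeValues={":d": event_date, ":v": new_value},
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return  # already applied (or superseded); safe to skip
        raise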
Background
In distributed systems, messages can arrive out of order. For example, if message A is sent at time T1 and message B is sent at T2, there is a chance that B is received before A. This matters, for example, if A is a message such as "CustomerRegistered" and B is "CustomerUnregistered".
In other databases I'd typically write a tombstone if CustomerUnregistered is received for a customer that is not present in the database. I can then check if this tombstone exists when the CustomerRegistered message is received (and perhaps simply ignore this message depending on use case). I could of course do something similar with Datomic as well but I hope that maybe Datomic can help me so that I don't need to do this.
One potential solution I'm thinking of is this:
Can you perhaps retract a non-existing customer entity (CustomerUnregistered) and later, when CustomerRegistered is received, write the customer entity at a point in history before the retraction? It would be neat (I think) if :db/txInstant could be set to a timestamp defined in the message.
Question
How would one deal with this scenario in Datomic in an idiomatic way?
As a general principle, do not let your application code manipulate :db/txInstant. :db/txInstant represents the time at which you learned a fact, not the time at which it happened.
Maybe you should consider un-registration as adding a Datom about a customer (e.g. via an instant-typed :customer/unregistered attribute) instead of retracting the Datoms of that customer (which means: "forget that this customer existed").
However, if retracting the Datoms of the customer is really the way you want to do things, I'd use a record which prevents the customer registration transaction from taking place (which I'd enforce via a transaction function).
I have been looking at DynamoDB to create something close to a transaction. I was watching this video presentation: https://www.youtube.com/watch?v=KmHGrONoif4, in which the speaker shows, around the 30-minute mark, ways to make DynamoDB operations as close to ACID-compliant as can be. He shows that the best approach is to use DynamoDB Streams, but doesn't show a demo or an example. I have a very simple scenario: I have one table called USERS. Each user has a list of friends. If two users no longer wish to be friends, they must be removed from both users' entities (I can't afford for one friend to be deleted from one entity and then, due to a crash for example, the second user entity's friend attribute not being updated, causing inconsistent data). I was wondering if someone could provide a simple walkthrough of how to accomplish something like this to see how it all works? If code could be provided, that would be great.
Cheers!
Here is the transaction library that he is referring to: https://github.com/awslabs/dynamodb-transactions
You can read through the design: https://github.com/awslabs/dynamodb-transactions/blob/master/DESIGN.md
Here is the Kinesis client library:
http://docs.aws.amazon.com/kinesis/latest/dev/developing-consumers-with-kcl.html
When you're writing to DynamoDB, you can get an output stream with all the operations that happen on the table. That stream can be consumed and processed by the Kinesis Client Library.
In your case, have your client remove the friend from the first user, then from the second user. When you consume the stream with the Kinesis Client Library and see a user removed, look at who they were friends with and check/remove the other side if needed; if needed, the removal should probably be done through the same means. It's not truly a transaction, and it relies on the fact that the KCL guarantees that records from the stream will be processed.
To add to the confusion, the KCL uses DynamoDB to store where in the stream it is during processing and to checkpoint processed records.
You should try to minimize the need for transactions; they are a nice concept at a small scale, but they can't really scale once you become very successful and need to support millions and billions of records.
If you are thinking with a NoSQL mindset, you can consider a slightly different data model. One simple example is to use a Global Secondary Index on a single table, keyed on the "friend-with" attribute. When you add a single record for a pair of friends, both the record and the index are updated; the same holds when you delete the friendship record.
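A rough sketch of that single-record model (boto3; the table, index, and attribute names are made up for illustration, with userId as the hash key and friendWith as the range key):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("friendships")  # illustrative table name
# Assumed GSI "friend-with-index" with "friendWith" as its hash key.

def add_friendship(user_id, friend_id):
    # One item represents the relationship in both directions.
    table.put_item(Item={"userId": user_id, "friendWith": friend_id})

def remove_friendship(user_id, friend_id):
    # Deleting the single record removes it from both "sides" at once;
    # the index follows the table automatically (eventually consistent).
    table.delete_item(Key={"userId": user_id, "friendWith": friend_id})

def friends_of(user_id):
    forward = table.query(KeyConditionExpression=Key("userId").eq(user_id))["Items"]
    reverse = table.query(
        IndexName="friend-with-index",
        KeyConditionExpression=Key("friendWith").eq(user_id),
    )["Items"]
    return [i["friendWith"] for i in forward] + [i["userId"] for i in reverse]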
If you choose either the update-stream mechanism or the Global Secondary Index one, you should take into consideration the "eventual consistency" of the distributed system. Consistency is usually achieved within milliseconds, but it can also take longer. You should analyze the business implications and the technical measures you can take to address them. For example, you can verify the existence of both records (the main table as well as the index, if you found the record in the index) before you present it to the user.