I am new to "concurrency" & "transactions" and I feel a little confused about backward/forward validation in optimistic concurrency control. Take backward validation as an example. Suppose Tv is the transaction being validated and Ti is one of the committed transactions. I was wondering why we only check Tv's read set vs. Ti's write set. Why don't we also check Tv's write set vs. Ti's write set and Tv's write set vs. Ti's read set? Since write-write and write-read are also conflicting operations... Any explanation would be appreciated!
Validation uses the read-write conflict rules to ensure that the schedule of a particular transaction is serially equivalent to all overlapping transactions. This means that once a transaction has entered the validation phase, no further changes to its read/write sets can be performed.
There are 3 rules that must be satisfied by any two transactions Ti and Tj, where i < j (Ti entered the validation phase before Tj):
1. Ti must not read objects written by Tj.
2. Tj must not read objects written by Ti.
3. Ti must not write objects written by Tj, and Tj must not write objects written by Ti.
Backward validation assumes that all read operations of Ti were performed before validation of Tj started. This means that Ti is already in the validation phase, so rule 1 is satisfied.
During validation of Tj, the read set of Tj is checked against the write set of Ti. If there is no overlap, then rule 2 is satisfied.
If rules 1 and 2 are satisfied, rule 3 is implicitly satisfied: all committed changes are applied sequentially, because Ti entered the validation phase before Tj, so Ti's write set is validated and committed before Tj's write set.
backward validation of Tv: read operations of earlier overlapping transactions (performed before validation of Tv) cannot be affected by the writes of Tv. The validation checks Tv's read set against the write sets of earlier transactions, failing if there is any conflict.

forward validation of Tv: the write set of Tv is compared with the read sets of all overlapping active transactions. Differently from backward validation, in forward validation there is a choice of which transaction to abort (Tv or any of the conflicting active transactions).
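Sketched as code, the two checks differ only in which sets are compared and whose transactions they are compared against. Below is a minimal Java sketch; the Txn record, the string-keyed sets, and the method names are my own illustrative assumptions, not taken from any particular system:

```java
import java.util.Collections;
import java.util.List;
import java.util.Set;

/** Minimal sketch of both validation styles (illustrative names only). */
final class Validation {
    record Txn(Set<String> readSet, Set<String> writeSet) {}

    /** Backward: Tv's read set vs. write sets of overlapping committed txns. */
    static boolean backwardValidate(Txn tv, List<Txn> committedOverlapping) {
        for (Txn ti : committedOverlapping) {
            if (!Collections.disjoint(tv.readSet(), ti.writeSet())) {
                return false; // conflict: the only option is to abort Tv
            }
        }
        return true;
    }

    /** Forward: Tv's write set vs. read sets of overlapping active txns. */
    static boolean forwardValidate(Txn tv, List<Txn> activeOverlapping) {
        for (Txn ta : activeOverlapping) {
            if (!Collections.disjoint(tv.writeSet(), ta.readSet())) {
                return false; // conflict: may abort either Tv or ta
            }
        }
        return true;
    }
}
```

Note how the write-write check never appears: commits are serialized by validation order, so it is covered implicitly, as explained above.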
There are two parallel processes. Each process has two steps. The second step of the first process is always executed after the first step. The second step of the second process is performed only under a certain condition.
Activity diagram:
How can I reflect an additional condition: to complete the second step of the second process, the first step of the first process must be completed?
This is what I managed:
Flaws:
No match between fork and join
If the condition of the second process is not met, the token “hangs” before the join
Having looked at your solution once more, I think you saw issues where there are none. You are worried about the hanging token, but that is no issue in this case. If P22 is bypassed, the token from P11 will go down directly to the join node. P11 and P12 will also pass their token down with no issue, thereby creating the ghost token which gets stuck in the middle right join. Since the lower join now has two tokens, it will continue to the end, where the activity is terminated. At that point any free-running tokens (and even active actions) are terminated as well. All good.
I leave my former answer for further inspiration. But basically they will all be implemented in similar ways, since they all represent a gateway.
Original answer
I guess that using an event would be the best way:
This way D can only start (and finish) once the event has been received, which is sent after A's completion.
Another way would be to use an object that stores the completion of action A and that is read by D.
Note that the diagonal connectors through "A ready" are ObjectFlows, which UML by default does not distinguish optically (unlike SysML).
P. 374 of UML 2.5 states
Object tokens pass over ObjectFlows, carrying data through an Activity via their values, or carrying no data (null tokens). A null token can still be passed along an ObjectFlow and used like any other token. For example, an Action can output a null token to explicitly indicate that it did not produce an optional value, and a downstream DecisionNode (see sub clause 15.3) can test for this and branch accordingly.
So you can see that as a buffer holding a token; no real data needs to be stored. Basically that's the same as an event. Implementation-wise you would use a semaphore or a stream to realize that, but of course at this level you would not care too much about such details.
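Following the semaphore remark, a minimal Java sketch could use a CountDownLatch as that one-token buffer; the action names A and D follow the diagram, everything else is an assumption:

```java
import java.util.concurrent.CountDownLatch;

/** Sketch: D may only start after A completes, signalled like the event above. */
public class EventGate {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch aDone = new CountDownLatch(1); // the "A ready" token

        Thread process1 = new Thread(() -> {
            System.out.println("A runs");
            aDone.countDown(); // emit the completion event / place the token
        });

        Thread process2 = new Thread(() -> {
            try {
                aDone.await(); // D blocks until the token arrives
                System.out.println("D runs");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        process1.start();
        process2.start();
        process1.join();
        process2.join();
    }
}
```

Whether you model it as an event, an object node, or a latch like this, the essential point is the same: a single token gating D's start.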
So according to my understanding of bitcoin, we change the value of the nonce to create new hashes for a block until we get a hash within the target.
But in the case of ethereum, "the nonce, a counter used to make sure each transaction can only be processed once" is incremented by one for each transaction, according to my understanding. Please correct me if I'm wrong.
My question is: if we cannot use random values for the nonce in an ethereum block to change the hash and get a value within the target, then what changes do we make to the block data, i.e. how do we change the hash to get a value within the target?
The proof of work (PoW) algorithm works in the same way in bitcoin and ethereum. There is also a nonce in the ethereum block header. The official documentation, called the yellow paper, says in section 4.3:
(...) The block header contains several pieces of information: (...)
nonce: A 64-bit value which, combined with the mixhash, proves that a sufficient amount of computation has been carried out on this block; formally H_n.
The same document explains the transaction nonce in section 4.2.
Just to summarize:
In ethereum the nonce appears in two places: in the transaction and in the block header. In the transaction, the nonce works the way you've described. In the block header, the nonce works as in PoW. The two nonces are independent of each other.
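To illustrate the block-header nonce, here is a deliberately simplified Java sketch of a PoW search loop. This is not ethereum's real Ethash (which also involves a DAG and the mixhash); SHA-256 and the header string are stand-ins, purely to show how varying the nonce changes the hash until it falls below the target:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class MiningSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the fixed header fields (parent hash, roots, time...).
        String headerWithoutNonce = "parentHash|stateRoot|txRoot|timestamp|";
        // Hashes must be numerically below the target (~16 leading zero bits).
        BigInteger target = BigInteger.ONE.shiftLeft(256 - 16);
        MessageDigest sha = MessageDigest.getInstance("SHA-256");

        for (long nonce = 0; ; nonce++) {
            byte[] hash = sha.digest(
                    (headerWithoutNonce + nonce).getBytes(StandardCharsets.UTF_8));
            if (new BigInteger(1, hash).compareTo(target) < 0) {
                System.out.println("Found block nonce: " + nonce);
                break;
            }
        }
    }
}
```

The transaction nonce never enters this loop; only the header nonce is varied during mining.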
I've been reading about the read-only lock-free transactions as implemented in Google Spanner and CockroachDB. Both claim to be implemented in a lock-free manner by making use of system clocks. Before getting to the question, here is my understanding (please skip the following section if you are aware of the machinery in both systems, or just in CockroachDB):
Spanner's approach is simpler: before committing a write transaction, Spanner picks the max timestamp across all involved shards as the commit timestamp and adds a wait, called the commit wait, for the max clock error before returning from its write transaction. This means that all causally dependent transactions (both reads and writes) will have a timestamp value higher than the commit timestamp of the previous write. For read transactions, we pick the latest timestamp on the serving node. For example, if there was a write committed at timestamp 5, and the max clock error was 2, future write and read-only transactions will have a timestamp of at least 7.
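A minimal sketch of that commit-wait step, assuming a known bound on clock error (the class, constant, and method names are illustrative, not Spanner's API):

```java
/** Sketch of Spanner-style commit wait; all names are illustrative. */
final class CommitWait {
    static final long MAX_CLOCK_ERROR_MS = 2; // assumed clock uncertainty bound

    /** Blocks until real time has certainly passed commitTs on every node. */
    static void waitOutUncertainty(long commitTs) throws InterruptedException {
        while (System.currentTimeMillis() <= commitTs + MAX_CLOCK_ERROR_MS) {
            Thread.sleep(1);
        }
        // From here on, any transaction that starts is guaranteed to be
        // assigned a timestamp greater than commitTs, on any node.
    }
}
```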
CockroachDB, on the other hand, does something more complicated. On writes, it picks the highest timestamp among all the involved shards, but does not wait. On reads, it assigns a preliminary read timestamp as the current timestamp on the serving node, then proceeds optimistically by reading across all shards, restarting the read transaction if any key on any shard reports a write timestamp that might imply uncertainty about whether the write causally preceded the read transaction. It assumes that keys with write timestamps less than the read transaction's timestamp either appeared before the read transaction or were concurrent with it. The uncertainty machinery kicks in for timestamps higher than the read transaction's timestamp. For example, if there was a write committed at timestamp 8, and a read transaction was assigned timestamp 7, we are unsure about whether that write came before the read or after, so we restart the read transaction with a read timestamp of 8.
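And a corresponding sketch of the uncertainty check on reads, again with illustrative names and a hard-coded error bound:

```java
/** Sketch of a CockroachDB-style uncertainty check on a read (illustrative). */
final class UncertaintyCheck {
    static final long MAX_CLOCK_ERROR_MS = 2; // assumed clock uncertainty bound

    /** Returns -1 if the value is safe to read, else the restart timestamp. */
    static long restartAt(long readTs, long valueWriteTs) {
        if (valueWriteTs <= readTs) return -1; // visible as usual
        if (valueWriteTs <= readTs + MAX_CLOCK_ERROR_MS) {
            return valueWriteTs; // may causally precede us: restart there
        }
        return -1; // definitely in our future: not visible, no restart
    }
}
```

For instance, restartAt(7, 8) returns 8: a write at 8 seen by a read at 7 falls inside the uncertainty window, forcing the restart described above.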
Relevant sources - https://www.cockroachlabs.com/blog/living-without-atomic-clocks/ and https://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf
Given this implementation, does CockroachDB guarantee that the following two transactions will not see a violation of serializability?
A user blocks another user, then posts a message that they don't want the blocked user to see as one write transaction.
The blocked user loads their friends list and their posts as one read transaction.
As an example, consider that the friends list and the posts live on different shards, and suppose the following ordering happens (assuming a max clock error of 2):
The initial posts and friends list was committed at timestamp 5.
A read transaction starts at timestamp 7; it reads the friends list, which it sees as being committed at timestamp 5.
Then the write transaction for blocking the friend and making a post gets committed at 6.
The read transaction reads the posts, which it sees as being committed at timestamp 6.
Now the transactions violate serializability, because the read transaction saw an old write and a newer write in the same transaction.
What am I missing?
CockroachDB handles this with a mechanism called the timestamp cache (which is an unfortunate name; it's not much of a cache).
In this example, at step two when the transaction reads the friends list at timestamp 7, the shard that holds the friends list remembers that it has served a read for this data at t=7 (the timestamp requested by the reading transaction, not the last-modified timestamp of the data that exists) and it can no longer allow any writes to commit with lower timestamps.
Then in step three, when the writing transaction attempts to write and commit at t=6, this conflict is detected and the writing transaction's timestamp gets pushed to t=8 or higher. Then that transaction must refresh its reads to see if it can commit as-is at t=8. If not, an error may be returned and the transaction must be retried from the beginning.
In step four, the reading transaction completes, seeing a consistent snapshot of the data as it existed at t=7, while both parts of the writing transaction are "in the future" at t=8.
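A toy sketch of such a timestamp cache (the class and method names are mine, not CockroachDB's):

```java
import java.util.HashMap;
import java.util.Map;

/** Toy sketch of a per-shard timestamp cache (illustrative names). */
final class TimestampCache {
    private final Map<String, Long> maxReadTs = new HashMap<>();

    /** Record that key was served to a reader at timestamp ts. */
    synchronized void recordRead(String key, long ts) {
        maxReadTs.merge(key, ts, Math::max);
    }

    /**
     * A write below a previously served read must be pushed above it.
     * Returns the timestamp the write is actually allowed to use.
     */
    synchronized long pushWrite(String key, long writeTs) {
        long readTs = maxReadTs.getOrDefault(key, Long.MIN_VALUE);
        return writeTs > readTs ? writeTs : readTs + 1;
    }
}
```

With the example's ordering, recordRead("friends", 7) followed by pushWrite("friends", 6) returns 8, which is exactly why the writing transaction ends up at t=8.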
In an attempt to use DynamoDB for one of my projects, I have a doubt regarding its strong consistency model. From the FAQs:
Strongly Consistent Reads — in addition to eventual consistency, Amazon DynamoDB also gives you the flexibility and control to request a strongly consistent read if your application, or an element of your application, requires it. A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read.
From the definition above, what I get is that a strongly consistent read will always return the latest written value.
Taking an example: let's say Client1 issues a write command on key K1 to update the value from V0 to V1. A few milliseconds later, Client2 issues a read command for key K1. With strong consistency, V1 will always be returned, whereas with eventual consistency either V1 or V0 may be returned. Is my understanding correct?
If it is: what if the write operation returned success but the data has not yet been propagated to all replicas, and we then issue a strongly consistent read? How will DynamoDB ensure that the latest written value is returned in this case?
The following link, AWS DynamoDB read after write consistency - how does it work theoretically?, tries to explain the architecture behind this, but I don't know if this is how it actually works. The next question that comes to my mind after going through this link is: is DynamoDB based on a single-master, multiple-slave architecture, where writes and strongly consistent reads go through the master replica and normal reads through the others?
Short answer: writing successfully in strongly consistent mode requires that your write succeed on a majority of the servers that can contain the record. Any future consistent read will therefore always see the same data, because a consistent read must consult a majority of the servers that can contain the desired record. If you do not perform a strongly consistent read, the system asks a random server for the record, and it is possible that the data will not be up to date.
Imagine three servers. Server 1, server 2 and server 3. To write a strongly consistent record, you pick two servers at minimum, and write the data. Let's pick 1 and 2.
Now you want to read the data consistently. Pick a majority of servers. Let's say we picked 2 and 3.
Server 2 has the new data, and this is what the system returns.
Eventually consistent reads could come from server 1, 2, or 3. This means that if server 3 is chosen at random, your new write will not appear until replication occurs.
If a single server fails, your data is still safe, but if two out of three servers fail your new write may be lost until the offline servers are restored.
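Here is a toy Java sketch of that three-server majority scheme (purely illustrative; DynamoDB's real internals are not public):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

/** Toy majority-quorum store over three replicas (illustrative only). */
final class QuorumSketch {
    record Versioned(long version, String value) {}

    static final int N = 3, MAJORITY = N / 2 + 1; // 2 of 3
    private final List<Map<String, Versioned>> replicas = new ArrayList<>();
    private final Random rnd = new Random();
    private long nextVersion = 1;

    QuorumSketch() {
        for (int i = 0; i < N; i++) replicas.add(new HashMap<>());
    }

    /** Write succeeds once a majority of replicas store the new value. */
    void write(String key, String value) {
        Versioned v = new Versioned(nextVersion++, value);
        List<Map<String, Versioned>> picked = new ArrayList<>(replicas);
        Collections.shuffle(picked, rnd);
        for (int i = 0; i < MAJORITY; i++) picked.get(i).put(key, v);
    }

    /** Strongly consistent read: ask a majority, keep the highest version. */
    String consistentRead(String key) {
        List<Map<String, Versioned>> picked = new ArrayList<>(replicas);
        Collections.shuffle(picked, rnd);
        Versioned best = null;
        for (int i = 0; i < MAJORITY; i++) {
            Versioned v = picked.get(i).get(key);
            if (v != null && (best == null || v.version() > best.version())) {
                best = v;
            }
        }
        return best == null ? null : best.value();
    }

    /** Eventually consistent read: ask one random replica. */
    String eventualRead(String key) {
        Versioned v = replicas.get(rnd.nextInt(N)).get(key);
        return v == null ? null : v.value();
    }
}
```

Because any two majorities of three servers intersect, consistentRead always sees at least one copy of the latest successful write, while eventualRead may hit the one stale server.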
More explanation:
DynamoDB (assuming it is similar to the database described in the Dynamo paper that Amazon released) uses a ring topology, where data is spread across many servers. Strong consistency is guaranteed because you directly query all relevant servers and get the current data from them. There is no master in the ring and there are no slaves. A given record will map to a number of identical hosts in the ring, and all of those servers will contain that record. There is no slave that could lag behind, and there is no master that can fail.
Feel free to read any of the many papers on the topic. A similar database called Apache Cassandra is available which also uses ring replication.
http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/decandia07dynamo.pdf
Disclaimer: the following cannot be verified based on the public DynamoDB documentation, but it is probably very close to the truth.
Starting from the theory, DynamoDB makes use of quorums, where V is the total number of replica nodes, Vr is the number of replica nodes a read operation consults, and Vw is the number of replica nodes each write is performed on. The read quorum (Vr) can be leveraged to make sure the client gets the latest value, while the write quorum (Vw) can be leveraged to make sure that writes do not create conflicts.
Based on the fact that there are no write conflicts in DynamoDB (since these would have to be reconciled by the client and would thus be exposed in the API), we can conclude that DynamoDB uses a Vw that respects the second law (Vw > V/2), probably just V/2 + 1 to reduce write latency.
Now regarding read quorums: DynamoDB provides two different kinds of read. The strongly consistent read uses a read quorum that respects the first law (Vr + Vw > V), probably just V/2 if we assume V/2 + 1 for writes as before. However, an eventually consistent read can use a single random replica (Vr = 1), thus being much quicker but giving zero guarantees around consistency.
Note: there is a possibility that the write quorum used does not respect the second law (Vw > V/2), but that would mean DynamoDB resolves such conflicts automatically (e.g. by selecting the latest one based on local time) without reconciliation by the client. But I believe that is really unlikely to be true, since there is no such reference in the DynamoDB documentation. Even in that case, though, the rest of the reasoning stays the same.
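As a worked check of those two laws for a hypothetical V = 3:

```java
/** Worked check of the quorum laws for V = 3 (hypothetical numbers). */
final class QuorumMath {
    public static void main(String[] args) {
        int V = 3;
        int Vw = V / 2 + 1;   // write quorum: 2
        int Vr = V - Vw + 1;  // smallest read quorum with Vr + Vw > V: 2
        assert 2 * Vw > V;    // second law: two conflicting writes cannot both win
        assert Vr + Vw > V;   // first law: every read overlaps every write
        System.out.printf("V=%d, Vw=%d, Vr=%d (eventual read: Vr=1)%n", V, Vw, Vr);
    }
}
```

(Run with -ea to enable the assertions.)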
You can find the answer to your question here: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html
When you issue a strongly consistent read request, Amazon DynamoDB returns a response with the most up-to-date data that reflects updates by all prior related write operations to which Amazon DynamoDB returned a successful response.
In your example, if the updateItem request to update the value from v0 to v1 was successful, the subsequent strongly consistent read request will return v1.
Hope this helps.
I have located many resources on the web giving general overviews of MVCC (multi-version concurrency control) concepts, but no detailed technical references on exactly how it should work or be implemented. Are there any documents online, or books offline, that contain enough theory (and a bit of practical help, ideally) on which to base an implementation? I wish to emulate more or less what PostgreSQL does.
(For info I will be implementing it in SAS using SAS/Share - which provides some locking primitives and concurrent read/write access to the underlying data store, but nothing in the way of transaction isolation or proper DBMS features. If anyone is familiar with SAS/Share and thinks this is an impossible task, please shout!)
Transaction Processing: Concepts and Techniques and Transactional Information Systems: Theory, Algorithms, and the Practice of Concurrency Control and Recovery are authoritative sources on transaction processing.
Both these books are also mentioned in PostgreSQL Wiki.
I wrote a blog post about this:
https://elliot.land/post/implementing-your-own-transactions-with-mvcc
A table in PostgreSQL can store multiple versions of the same row.
Moreover, each row version has two additional system columns:
xmin - marking the id of the transaction that inserted the row
xmax - marking the id of the transaction that deleted the row
An update is done by deleting the old record and inserting a new one, and the VACUUM process collects the old versions that are no longer in use.
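A minimal sketch of the visibility rule this gives you (simplified: it ignores in-progress and aborted transactions, which a real check must also handle):

```java
/** Minimal sketch of PostgreSQL-style row-version visibility (simplified). */
final class RowVersion {
    final long xmin; // id of the transaction that inserted this version
    final long xmax; // id of the deleting transaction, or 0 if still live

    RowVersion(long xmin, long xmax) {
        this.xmin = xmin;
        this.xmax = xmax;
    }

    /** Visible to a snapshot that treats all transactions up to
     *  snapshotXid as committed. */
    boolean visibleTo(long snapshotXid) {
        boolean inserted = xmin <= snapshotXid;
        boolean deleted = xmax != 0 && xmax <= snapshotXid;
        return inserted && !deleted;
    }
}
```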
I implemented MVCC in Java. See transaction, runner and mvcc code.
Imagine each transaction gets a numeric timestamp that increases with every new transaction. We have transactions 1 and 2 in this example.
Transaction 1 reads A and writes the value (A + 1). Snapshot isolation creates a temporary version of A which transaction 1 owns. The read timestamp of A is set to transaction 1's timestamp.
If transaction 2 comes along at the same time and reads A, it will also read the committed A -- it won't see A + 1, because that hasn't been committed. Transaction 2 can only see versions of A that equal lastCommittedA and whose timestamps are <= transaction 2's timestamp.
At the time transaction 2 reads A, it will also check the read timestamp of A, see that transaction 1 is there, and check that transaction 1's timestamp is less than transaction 2's. Because 1 < 2, transaction 2 will be aborted, because it already depends on an old value of A.
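That last check can be sketched like this (my names, not the linked code's):

```java
/** Toy sketch of the abort rule described above (illustrative only). */
final class VersionedValue {
    private long committedValue;
    private long readTs; // timestamp of the transaction that last read this value

    /** Read under transaction txTs, applying the conflict check. */
    synchronized long read(long txTs) throws ConflictException {
        // An older, still-active transaction has read (and may rewrite) this
        // value; reading now would make us depend on a stale version of it.
        if (readTs != 0 && readTs < txTs) {
            throw new ConflictException(
                    "abort tx " + txTs + ": value owned by older tx " + readTs);
        }
        readTs = txTs;
        return committedValue;
    }

    static final class ConflictException extends Exception {
        ConflictException(String msg) { super(msg); }
    }
}
```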