If I have a load-balanced application that can be scaled to 2 or more instances, and each of those instances can commit events for the same NEventStore aggregate, will I run into ordering/race condition problems?
It looks like there is a unique index on the Commits table covering BucketId, StreamId, and CommitSequence. If two different instances of an application tried to commit changes to the same aggregate at the same time, the second one would receive a concurrency exception, which could be handled. See more in this article: http://burnaftercoding.com/post/play-with-neventstore/
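NEventStore itself is a .NET library, so the snippet below is only a language-agnostic sketch (written in Python, with a toy in-memory store) of the pattern that unique index enables: the second writer to claim a commit sequence gets a conflict it can catch, then reload and retry. None of these names are NEventStore APIs.

```python
# Hypothetical illustration of optimistic concurrency via a unique
# (bucket, stream, commit sequence) key - not NEventStore code.

class ConcurrencyError(Exception):
    pass

class InMemoryCommitStore:
    def __init__(self):
        # (bucket_id, stream_id, commit_sequence) -> events
        self._commits = {}

    def commit(self, bucket_id, stream_id, commit_sequence, events):
        key = (bucket_id, stream_id, commit_sequence)
        if key in self._commits:  # stands in for the unique-index violation
            raise ConcurrencyError(f"sequence {commit_sequence} already committed")
        self._commits[key] = events

store = InMemoryCommitStore()
store.commit("default", "order-42", 1, ["OrderPlaced"])
try:
    # A second app instance that read the stream before sequence 1 tries the same commit.
    store.commit("default", "order-42", 1, ["OrderShipped"])
except ConcurrencyError:
    # Typical handling: reload the aggregate, re-apply the command, commit at the next sequence.
    store.commit("default", "order-42", 2, ["OrderShipped"])
```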
Related
Currently, I need to update two tables concurrently: one table contains configurations, and the other contains the items linked to each configuration. Whenever a configuration is updated, I get the list of items that belong to that configuration (it can be 100-1000 items or more). How can I update DynamoDB using a transaction?
I need to update two tables concurrently
See https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transaction-apis.html
TransactWriteItems is a synchronous and idempotent write operation that groups up to 25 write actions in a single all-or-nothing operation. These actions can target up to 25 distinct items in one or more DynamoDB tables within the same AWS account and in the same Region. The aggregate size of the items in the transaction cannot exceed 4 MB. The actions are completed atomically so that either all of them succeed or none of them succeeds.
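As a rough illustration of grouping the configuration row and its linked items into TransactWriteItems calls with boto3 - the table names ("Configurations", "ConfigurationItems"), key attributes, and update expressions here are assumptions. Because of the 25-action / 4 MB limits quoted above, a 100-1000 item update has to be split across several transactions, and each transaction is atomic only within itself.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

def chunks(seq, size=25):
    """Yield slices of at most `size` actions (TransactWriteItems limit quoted above)."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def update_configuration(config_id, new_version, item_ids):
    # Update the configuration row itself.
    config_update = {
        "Update": {
            "TableName": "Configurations",
            "Key": {"ConfigId": {"S": config_id}},
            "UpdateExpression": "SET #v = :v",
            "ExpressionAttributeNames": {"#v": "Version"},
            "ExpressionAttributeValues": {":v": {"N": str(new_version)}},
        }
    }
    # One update action per linked item in the second table.
    item_updates = [
        {
            "Update": {
                "TableName": "ConfigurationItems",
                "Key": {"ConfigId": {"S": config_id}, "ItemId": {"S": item_id}},
                "UpdateExpression": "SET #v = :v",
                "ExpressionAttributeNames": {"#v": "ConfigVersion"},
                "ExpressionAttributeValues": {":v": {"N": str(new_version)}},
            }
        }
        for item_id in item_ids
    ]
    # Each batch of up to 25 actions is committed as one all-or-nothing transaction.
    for batch in chunks([config_update] + item_updates, 25):
        dynamodb.transact_write_items(TransactItems=batch)
```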
I have configured a DynamoDB stream to trigger my Lambda. When I update an item in the DynamoDB table, I see my Lambda triggered twice with two different events. The NewImage and OldImage are the same in these two events; they differ only in eventID, ApproximateCreationDateTime, SequenceNumber, etc.
And the timestamps are only about a millisecond apart.
I updated the item via the DynamoDB console, which means only one action should have happened. It would be impossible to update the item twice within a millisecond via the console.
Is it expected to see two events?
This would not be expected behaviour.
If you're seeing 2 separate events, that indicates 2 separate actions occurred. As the times are different, this suggests a secondary action has occurred.
From the AWS documentation, the following is true:
DynamoDB Streams helps ensure the following:
Each stream record appears exactly once in the stream.
For each item that is modified in a DynamoDB table, the stream records appear in the same sequence as the actual modifications to the item.
This will likely be related to your application; make sure you're not performing multiple writes where you think there is only a single one.
Also check CloudTrail to see whether there are multiple API calls. I would imagine that if you're using global tables there's a possibility of seeing a secondary API call, as the contents of the item would be modified by the DynamoDB service.
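As a debugging aid, a minimal handler sketch like the following (hypothetical, not your actual function) logs the metadata that distinguishes the two invocations, so CloudWatch Logs will show whether they really carry two distinct modifications or the same one delivered in separate batches:

```python
import json

def lambda_handler(event, context):
    # Log the identifying fields of every stream record this invocation received.
    records = event.get("Records", [])
    for record in records:
        ddb = record.get("dynamodb", {})
        print(json.dumps({
            "eventID": record.get("eventID"),
            "eventName": record.get("eventName"),          # INSERT / MODIFY / REMOVE
            "sequenceNumber": ddb.get("SequenceNumber"),
            "approxCreation": ddb.get("ApproximateCreationDateTime"),
            "keys": ddb.get("Keys"),
        }, default=str))
    return {"processed": len(records)}
```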
I'm not sure how to achieve consistent read across multiple SELECT queries.
I need to run several SELECT queries and make sure that between them no UPDATE, DELETE or CREATE has altered the overall consistency. The best case for me would be something non-blocking, of course.
I'm using MySQL 5.6 with InnoDB and default REPEATABLE READ isolation level.
The problem is that when I use the RDS Data Service beginTransaction with several executeStatement calls (passing the provided transactionId), I'm NOT getting the full result at the end when calling commitTransaction.
The commitTransaction call only gives me { transactionStatus: 'Transaction Committed' }.
I don't understand: isn't the commitTransaction function supposed to give me the whole dataset (from my many SELECTs)?
Instead, even with a transactionId, each executeStatement returns its own individual result... This behaviour is obviously NOT consistent.
With SELECTs in one transaction under REPEATABLE READ you should see the same data and not see any changes made by other transactions. Yes, data can be modified by other transactions, but while in a transaction you operate on a read view and can't see those changes. So it is consistent.
To make sure that no data is actually changed between the SELECTs, the only way is to lock the tables / rows, e.g. with SELECT ... FOR UPDATE - but that should not be necessary here.
Transactions should be short / fast, and locking tables / preventing updates while some long-running chain of SELECTs runs is obviously not an option.
Queries issued against the database run at the time they are issued. The results of those queries stay uncommitted until commit. A query may be blocked if it targets a resource another transaction has acquired a lock on. A query may fail if another transaction modified the resource, resulting in a conflict.
Transaction isolation determines how the effects of this and other transactions happening at the same moment should be handled. See Wikipedia.
With isolation level REPEATABLE READ (which, by the way, Aurora Replicas for Aurora MySQL always use for operations on InnoDB tables) you operate on a read view of the database and see only data committed before the BEGIN of the transaction.
This means that SELECTs in one transaction will see the same data, even if changes were made by other transactions.
By comparison, with transaction isolation level READ COMMITTED, subsequent SELECTs in one transaction may see different data - data that was committed in between them by other transactions.
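For the RDS Data API part of the question, a minimal boto3 sketch of the intended flow looks like this (the ARNs, database, and table names are placeholders): each executeStatement returns its own result set, which you collect as you go, while commitTransaction only ends the transaction and reports its status - it never re-delivers the SELECT results.

```python
import boto3

rds = boto3.client("rds-data")

CLUSTER_ARN = "arn:aws:rds:eu-west-1:123456789012:cluster:my-aurora-cluster"
SECRET_ARN = "arn:aws:secretsmanager:eu-west-1:123456789012:secret:my-db-secret"

# Start a transaction; all statements passed this transactionId share one
# REPEATABLE READ snapshot, so the SELECTs see mutually consistent data.
tx = rds.begin_transaction(
    resourceArn=CLUSTER_ARN, secretArn=SECRET_ARN, database="mydb"
)
tx_id = tx["transactionId"]

orders = rds.execute_statement(
    resourceArn=CLUSTER_ARN, secretArn=SECRET_ARN, database="mydb",
    transactionId=tx_id, sql="SELECT id, total FROM orders",
)["records"]

customers = rds.execute_statement(
    resourceArn=CLUSTER_ARN, secretArn=SECRET_ARN, database="mydb",
    transactionId=tx_id, sql="SELECT id, name FROM customers",
)["records"]

# Commit only closes the transaction; it does not return the SELECT results.
status = rds.commit_transaction(
    resourceArn=CLUSTER_ARN, secretArn=SECRET_ARN, transactionId=tx_id
)
print(status)  # {'transactionStatus': 'Transaction Committed'}
```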
I have been exploring WSO2 CEP for last couple of days.
I am considering a scenario where a single lookup table could be used in multiple execution plans. As far as I know, the only way to store all that data is an event table.
My questions are:
Can I load an event table once (maybe by one execution plan) and share that table with other execution plans?
If the answer to Q1 is NO, then there will be multiple copies of the same data stored in different execution plans, right? Is there any way to reduce this space utilization?
If an event table is not the correct solution, what are the other options?
Thanks in Advance,
-Obaid
Event tables would work in your scenario. However, you might need to use an RDBMS event table or a Hazelcast event table instead of in-memory event tables. With those, you can share a single table's data with multiple execution plans.
If you want your data to be preserved even after a server shutdown, you should use RDBMS event tables (with these you can also access your table data using the respective DB browsers, i.e., the H2 browser, MySQL Workbench, etc.). If you just want to share a single event table with multiple execution plans at runtime, you can go ahead with a Hazelcast event table.
I am working on scheduling some jobs using Control-M. My scenario is as below:
I have the following jobs - Job 1, Job 2, Job 3 and Job 4. All of them insert into the same table. I have to schedule all four jobs to start at the same time. Since they are inserting into the same table, I am running into lock issues.
I cannot add a dependency between these jobs because I will be adding more jobs to this stream. Also, there are no logical dependencies between these jobs.
Also, all these jobs call the same script, but with different parameters.
Is there any way to handle this issue?
One way is to use the "Resources" properties for the tasks. If they all need the same resource, either exclusive or limited to a quantity of 1, then they will run one at a time.
You should use a Control Resource, not a Quantitative Resource.
Simply enter the name of the table in the Control Resources field with the Exclusive option enabled. This parameter should be added to every job that can take a lock on that table. You can leave Exclusive unselected for those jobs that use the table but don't lock it.
Control Resources and Quantitative Resources are not the same thing.