Doctrine Extension Loggable - log old values of a field instead of new ones - doctrine-orm

I implemented the Loggable extension of Doctrine. But now I have the following case: I want to track the "status" of an object. The objects already exist in my DB, each with its corresponding status. When I update one, the first log inserted into log_entry for that object holds the new value of its status. (Say I change the status from active to suspended; then suspended is what gets recorded in the log entry.)
From this moment on I cannot revert to the "active" status, because it is not recorded anywhere. I can deal with that in several ways, but is there some option for the Loggable extension that, instead of inserting the new version, stores the current version as the last log record for the object, before the change happens?

You can override getObjectChangeSetData from LoggableListener. The old values are available there: for each changed field, the $changes array holds [oldValue, newValue], so store $changes[0] instead of $changes[1].

Related

Power Automate: how to catch which column was updated in Dataverse Connector

I'm starting from a "When a row is added, modified or deleted" connector, and I'm passing into a switch connector that checks whether the row was added, modified or deleted.
I'm then using the mail node to notify myself when a row is added, modified or deleted; in the case a row is modified, I have to include in the mail which fields of that row were changed.
I can't find out whether this check is possible (take the row and compare it with its pre-modified version) and how to do it.
This is the embryonic flow.
As requested, I'll try to be more detailed.
Please note that this is a POWER AUTOMATE FLOW so there is almost no code.
The CRUD connector takes 3 arguments:
- Change type (when an item is added, modified or deleted)
- The table name (the Dataverse table name)
- The scope (business unit)
So I need to know if there is (for example in the output of this connector) a variable or another connector that contains which column changed and caused the trigger.
It's a question about the output of, or possible connectors related to, the Dataverse CRUD node, so there is NO CODE involved and no further "after-issue" flow specification is needed to understand my request.
A solution is to create a new field that keeps the current value of the original field, and use trigger conditions to make your flow run only when those two fields don't match, meaning that the original field was updated and its value actually changed.
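For example, assuming the tracked column's logical name is statuscode and the hypothetical shadow column is statuscode_copy, a trigger condition (Settings > Trigger Conditions on the Dataverse trigger) could look like:

@not(equals(triggerOutputs()?['body/statuscode'], triggerOutputs()?['body/statuscode_copy']))

This is only a sketch; substitute the logical names of your own columns, and have the flow copy the new value back into the shadow column at the end of each run.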

AWS DynamoDB checking uniqueness before adding new item

I want to check whether the id (primary key) of a new item is unique before adding it to DynamoDB.
What would be the best option, both performance- and cost-wise?
Possible options to check the uniqueness of the primary key:
1) Get (if an empty result is returned, there is no matching data, which means the id is unique)
2) Scan (obviously the worst idea for both performance and cost)
3) Query
++ Another thought: if there were a way to forcibly reject an incoming request in the DynamoDB settings (discard it or return an error message), the logic could be much simpler.
In a normal RDB, if we try to add a new item with an existing primary key, the database returns an error message without changing the original data stored in the database.
In DynamoDB, however, whether we Put or Update an item with an existing primary key, it just silently overwrites the original data stored in the database.
Any ideas?
As you mentioned, DynamoDB will update an item with the primary key you provide if it already exists. The documentation below shows how you can make a conditional put request, which will fail upon trying to insert an item that already exists (based on the primary key).
http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/API_PutItem.html
To prevent a new item from replacing an existing item, use a conditional expression that contains the attribute_not_exists function with the name of the attribute being used as the partition key for the table. Since every record must contain that attribute, the attribute_not_exists function will only succeed if no matching item exists.
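A minimal sketch of that conditional put with the AWS SDK for Java v1 (the table name Items and key attribute id are placeholder assumptions):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;

import java.util.HashMap;
import java.util.Map;

public class UniquePut {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

        Map<String, AttributeValue> item = new HashMap<>();
        item.put("id", new AttributeValue("item-123"));
        item.put("payload", new AttributeValue("some data"));

        PutItemRequest request = new PutItemRequest()
                .withTableName("Items")
                .withItem(item)
                // succeed only if no item with this partition key exists yet
                .withConditionExpression("attribute_not_exists(id)");

        try {
            client.putItem(request);
            System.out.println("Inserted: id was unique");
        } catch (ConditionalCheckFailedException e) {
            // an item with this id already exists; the stored data is untouched
            System.out.println("Rejected: id already exists");
        }
    }
}

The uniqueness check and the insert happen atomically in a single write request, so no separate Get, Query, or Scan round trip is needed.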

DynamoDB Concurrency Issue

I'm building a system in which many DynamoDB (NoSQL) tables all contain data, and data in one table references data in another table.
Multiple processes access the same item in a table at the same time. I want to ensure that all of the processes work with up-to-date data and aren't writing to that item at the exact same moment, because they are all updating the item with different data.
I would love some suggestions on this as I am stuck right now and don't know what to do. Thanks in advance!
Optimistic locking is a strategy to ensure that the client-side item that you are updating (or deleting) is the same as the item in Amazon DynamoDB. If you use this strategy, your database writes are protected from being overwritten by the writes of others, and vice versa.
With optimistic locking, each item has an attribute that acts as a version number. If you retrieve an item from a table, the application records the version number of that item. You can update the item, but only if the version number on the server side has not changed. If there is a version mismatch, it means that someone else has modified the item before you did. The update attempt fails, because you have a stale version of the item. If this happens, you simply try again by retrieving the item and then trying to update it. Optimistic locking prevents you from accidentally overwriting changes that were made by others. It also prevents others from accidentally overwriting your changes.
To support optimistic locking, the AWS SDK for Java provides the @DynamoDBVersionAttribute annotation. In the mapping class for your table, you designate one property to store the version number, and mark it using this annotation. When you save an object, the corresponding item in the DynamoDB table will have an attribute that stores the version number. The DynamoDBMapper assigns a version number when you first save the object, and it automatically increments the version number each time you update the item. Your update or delete requests succeed only if the client-side object version matches the corresponding version number of the item in the DynamoDB table.
ConditionalCheckFailedException is thrown if:
You use optimistic locking with @DynamoDBVersionAttribute and the version value on the server is different from the value on the client side.
You specify your own conditional constraints while saving data by using DynamoDBMapper with DynamoDBSaveExpression and these constraints failed.
Note
DynamoDB global tables use a "last writer wins" reconciliation between concurrent updates, so with global tables the optimistic locking strategy does not work as expected.
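A minimal sketch of the mapping class described above (class, table, and attribute names are made up for illustration):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBVersionAttribute;
import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;

@DynamoDBTable(tableName = "Items")
public class Item {
    private String id;
    private Long version;
    private String payload;

    @DynamoDBHashKey
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    // DynamoDBMapper sets this on the first save and increments it on every
    // update; a save with a stale version fails instead of overwriting.
    @DynamoDBVersionAttribute
    public Long getVersion() { return version; }
    public void setVersion(Long version) { this.version = version; }

    public String getPayload() { return payload; }
    public void setPayload(String payload) { this.payload = payload; }
}

A stale write then surfaces as ConditionalCheckFailedException, which you handle by re-reading the item and retrying:

DynamoDBMapper mapper = new DynamoDBMapper(AmazonDynamoDBClientBuilder.defaultClient());
Item item = mapper.load(Item.class, "item-123");
item.setPayload("new data");
try {
    mapper.save(item); // succeeds only if the stored version still matches
} catch (ConditionalCheckFailedException e) {
    // someone else updated the item first: reload and retry
}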

Where are Ember Data's commitDetails' created/updated/deleted OrderedSets set?

Ember Data's Adapter saves edited records in different groups of Ember.OrderedSets, namely: commitDetails.created, commitDetails.updated, and commitDetails.deleted.
model.save() from the model controller's createRecord() will be placed in the commitDetails.created group. model.save() from the model controller's acceptChanges will be placed in the commitDetails.updated group. But I can't find where in the code this placement happens.
I know that they are instantiated in Ember Transaction's commit function (which calls Adapter's commit, in turn calling Adapter's save). Throughout this process, I can't figure out where exactly the records are sorted according to the created/updated/deleted criteria.
I'm not quite clear what you're asking, but if you're looking for where records get added to their appropriate commitDetails set, I believe this is the line you're looking for, in the commitDetails property itself.
Here's the relevant code.
forEach(records, function(record) {
  if (!get(record, 'isDirty')) return;
  record.send('willCommit');
  var adapter = store.adapterForType(record.constructor);
  commitDetails.get(adapter)[get(record, 'dirtyType')].add(record);
});
Let's walk through it.
forEach(records, function(record) {
  if (!get(record, 'isDirty')) return;
The above says, for each record in the transaction, if it's not dirty, ignore it.
record.send('willCommit');
Otherwise, update its state to inFlight.
var adapter = store.adapterForType(record.constructor);
Get the record's adapter.
commitDetails.get(adapter)
Look up the adapter's created/updated/deleted trio object, which was instantiated at the top of this method. It's simply an object with the 3 properties created, updated, and deleted, whose values are empty OrderedSets.
[get(record, 'dirtyType')]
Get the appropriate OrderedSet from the object we just obtained. For example, if the record we're on has been updated, get(record, 'dirtyType') will return the string updated. The brackets are just standard JavaScript property lookup, and so it grabs the updated OrderedSet from our trio object in the previous step.
.add(record);
Finally, add the record to the OrderedSet. On subsequent iterations of the loop, we'll add other records of the same type, so all created records get added to one set, all updated records get added to another set, and all deleted records get added to the third set.
What we end up with at the end of the entire method and return from the property is a Map whose keys are adapters, and whose values are these objects with the 3 properties created, updated, and deleted. Each of those, in turn, are OrderedSets of all the records in the transaction that have been created for that adapter, updated for that adapter, and deleted for that adapter, respectively.
Notice that this computed property is marked volatile, so it will get recomputed each time that someone gets the commitDetails property.

Is there a table which holds all possible states of a given work item type in TFS?

I'm developing a time-tracking system in TFS so we can control how much time is spent on each task. I'm doing it by checking changes in work item states and recording the time between states.
I'm using WCF and TFS2010 alert subscription.
Then I noticed the State column in the WorkItem table holds a string, instead of an ID pointing to a State.
With that in mind, I noticed I would have to parse each state and check if it corresponds to some string. And then, some day, someone might want to change the State name. Then we're doomed.
But before I hardcode it (or put it in some random config.xml)... let me ask: is there a table which holds all possible states of a given work item type in TFS?
The states of work item types are stored in the process template files. You can export the work item type to an xml file using witadmin.exe and see the allowed values of the "State" in there.
Programmatically, you can use the Microsoft.TeamFoundation.WorkItemTracking.Client namespace to get the WorkItemType object of your work item type, look for the FieldDefinition object of the "State" field in the FieldDefinitions property, then get the possible states from its AllowedValues property.
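For example, you can export the type definition like this (a sketch; the collection URL, project, and type name are placeholders):

witadmin exportwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /n:Task /f:Task.xml

In the exported definition, the possible states live in the WORKFLOW section, roughly like this (the actual values depend on your process template):

<WORKFLOW>
  <STATES>
    <STATE value="Active" />
    <STATE value="Resolved" />
    <STATE value="Closed" />
  </STATES>
  <!-- TRANSITIONS omitted -->
</WORKFLOW>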