I want to perform a non-backwards-compatible state upgrade using SignatureConstraint. If it were a backwards-compatible change, for example adding a property, I'd just add a nullable property to the state and that would work. However, I have no idea how I should act in the following scenarios:
Scenario 1: A new non-null field is added to the state.
Scenario 2: A field was removed from the state.
Scenario 3: A field was modified in the state, e.g. a field of type Date is transformed into an object that contains that date and some other fields.
Scenario 4: A field in the state was renamed.
The problem is that explicit upgrade does not support SignatureConstraint and I get the following error message: Legacy contract does not satisfy the upgraded contract's constraint. So I need to find a solution for an implicit upgrade.
ContractUpgradeFlow doesn't support upgrading states with SignatureConstraint. However, the flexibility of Signature Constraints allows you to add any CorDapp as long as it is signed by the same key. You could easily write a simple flow to mimic an explicit upgrade for the scenarios you mentioned.
Here is what you could do:
Add both CorDapp JAR files (old and updated) to the node's cordapps folder.
Write another CorDapp with a flow that consumes your existing state and outputs the new (upgraded) state, as sketched below.
Add this flow JAR to the node's cordapps folder.
Execute the new flow to consume the older states and output the upgraded state.
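For example, a minimal sketch of such a flow might look like the one below. This is only an illustration of the idea, not the author's actual code: UpgradeCompanyStateFlow, OldState, NewState, the Upgrade command, and the contract class name are all placeholder names, and the signer set and finality sessions must match your own contract rules.

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.contracts.StateAndRef
import net.corda.core.flows.*
import net.corda.core.transactions.SignedTransaction
import net.corda.core.transactions.TransactionBuilder

// Hypothetical sketch: consume one legacy state and re-issue it as the upgraded type.
@InitiatingFlow
@StartableByRPC
class UpgradeCompanyStateFlow(private val legacyStateRef: StateAndRef<OldState>) : FlowLogic<SignedTransaction>() {

    @Suspendable
    override fun call(): SignedTransaction {
        val legacy = legacyStateRef.state.data
        // Build the new state from the legacy one; map removed/renamed/changed fields here.
        val upgraded = NewState(linearId = legacy.linearId /*, ... remaining fields ... */)

        val builder = TransactionBuilder(legacyStateRef.state.notary)
            .addInputState(legacyStateRef)
            .addOutputState(upgraded, "com.example.CompanyContract") // placeholder contract class name
            .addCommand(CompanyContract.Commands.Upgrade(), listOf(ourIdentity.owningKey))

        builder.verify(serviceHub)
        val signedTx = serviceHub.signInitialTransaction(builder)
        // Collect counterparty signatures here if your contract requires more signers.
        return subFlow(FinalityFlow(signedTx, emptyList<FlowSession>()))
    }
}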
Points to Note:
Make sure to have the correct set of signers to avoid incorrect spending of the states.
This is just an overall idea. The actual way of doing this might get a little complicated depending on the contract rules for your state's exit transaction.
I would rather add a new upgrade command to cater to this scenario.
This should give you the overall idea; you can tweak it at your end to perform the upgrade for your use case. Hope this helps!
As a workaround, I turned the incompatible change into a compatible one. Here is how it works.
I've created a state that holds a propertiesV1 object. This object includes all the fields that CompanyState should have.
@CordaSerializable
@BelongsToContract(CompanyContract::class)
data class CompanyState(
    override val linearId: UniqueIdentifier,
    val propertiesV1: CompanyV1?
) : LinearState
Now when I need to make an incompatible change in the properties, I just add another version of the object to the state.
@CordaSerializable
@BelongsToContract(CompanyContract::class)
data class CompanyState(
    override val linearId: UniqueIdentifier,
    val propertiesV1: CompanyV1?,
    val propertiesV2: CompanyV2?
) : LinearState
Neither the contract nor the flows are replaced; they are simply updated to handle the propertiesV2 field.
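For example, the contract can keep requiring that exactly one version of the properties object is populated. This is only a sketch; the helper name is mine, and whether you want "exactly one" or "at least one" depends on your rules.

// Hypothetical contract helper: require exactly one of the two property versions.
fun requireSingleVersion(state: CompanyState) {
    require((state.propertiesV1 != null) xor (state.propertiesV2 != null)) {
        "Exactly one of propertiesV1 / propertiesV2 must be set"
    }
}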
I'm stuck with this problem in Master Data Services (MDS).
I have an entity with two domain-based attributes referencing two other entities.
I created the first business rule with the first domain-based attribute and it works perfectly.
But when I try to create a second business rule with the second domain-based attribute, an error appears:
200095 : Cannot specify more than one entity in MetadataGet
400003 : The attribute reference is not valid. The attribute was not found.
400003 : The attribute reference is not valid. The attribute was not found.
Obviously the attribute is valid. In fact, if I delete the first business rule, the second one is published correctly.
I think that MDS blocks a second business rule if you try to apply it to a second domain-based attribute.
This happened to us as well, and it seems that this error only occurs if a specific set of actions is taken:
We first restored the MDS 2012 database on SQL Server 2017
We upgraded the database using the MDS management tool. Note that the multi-entity business rules work fine at this point: they return no errors upon saving, can be published, and are successfully evaluated.
We then realized that we were missing some code changes, so we decided to create a full model package using MDSModelDeploy.exe on our old MDS 2012.
We deployed that package using the MDSModelDeploy deployupdate command. After that, the existing multi-entity rules fail to publish, and you are also unable to create new rules based on different entities within one entity. Unfortunately, we have found no fix for it, as there are simpler ways around it.
At this point we took a step back, restored and upgraded the old database once again, and it turned out that the rules worked, so it must have been the package that broke them. I do not know what your situation was, since when we created a fresh model in SQL Server 2017 all of the multi-entity rules worked perfectly, so I am curious what steps are needed to reproduce the error in your case.
The only possible approach I can think of to fix the situation in point 4 would be to create an MDSModelDeploy update package from the corrupted model and another one from a new, healthy model, and then compare how the XML of the multi-entity business rules is structured. We did not try this, though, since we found the workaround described previously.
I recently noticed there is a difference in the Item Id of a Sitecore template field between two environments (Source and Target). Because of this, any data changes to that field's value for data items using the template are not reflected in the target Sitecore database.
Hence, we manually copy the value from source to target, which takes a lot of time to keep the two environments in sync. Any idea how to change the template field's Item Id in Sitecore without data loss in the target instance?
Thanks
The template fields have most likely been created manually on the different servers, as @AdrianIorgu has suggested. I am going to suggest that you don't worry about merging fields and tools.
What you really care about is the content on the PRODUCTION instance of your site (assuming that this is Target). In any other environment, content should be regarded as throwaway.
With that in mind, create a package of the template from your PRODUCTION instance and then install that in the other environments, deleting the duplicate field from the Source instance. The GUIDs of the field should now match across all environments. Check this into your source control (using TDS or Unicorn or whatever). You can then correctly update any standard values, and that will be reflected across the servers when you deploy again.
If your other environments (dev/qa/pre-prod) result in data loss for that field then don't worry about it, restore a backup from PROD.
Most likely that happened because the field or the template was added manually on the second environment, without migrating the items using packages, serialization, or a third-party tool like TDS or Unicorn.
As @SitecoreClimber mentioned above, you can use Razl to sync the two environments and see the differences, but I don't think you will be able to change the field's GUID to make the two environments consistent without any data loss. Depending on the volume of your data, fixing this can be tricky.
What I would do:
make sure the target instance has the right template by installing a package with the correct template from source (with a MERGE-MERGE operation), which will end up creating a duplicate field name
write a SQL query to get a list of all the items that have value for that field and update the value to the new field
Warning: the SQL query below is just a sample to get you started; make sure you extend and test it properly before running it on a CD instance.
use YOUR_DATABASE
begin tran

declare @oldFieldId nvarchar(100), @newFieldId nvarchar(100), @previousValue nvarchar(100), @newValue nvarchar(100)
set @oldFieldId = '75577384-3C97-45DA-A847-81B00500E250' -- old field ID
set @newFieldId = 'A2F96461-DE33-4CC6-B758-D5183676509B' -- new field ID

/* versionedFields */
select itemId, fieldId, value
from [dbo].[versionedFields] f with (nolock)
where f.FieldId like @oldFieldId
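If you then want to repoint those values to the new field, an UPDATE along the same lines could look like the sketch below. Again, this is only a hedged sample: verify the affected rows (and that no rows already exist for the new field) before committing, and depending on the field's sharing settings the same change may be needed in the SharedFields and UnversionedFields tables.

/* sample only: move the versioned values from the old field to the new field */
update f
set f.FieldId = @newFieldId
from [dbo].[versionedFields] f
where f.FieldId like @oldFieldId

-- inspect the results, then run either "commit tran" or "rollback tran"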
For this kind of task I suggest you use Sitecore Razl.
It's a tool for comparing and merging Sitecore databases.
Razl allows developers to have a complete side-by-side comparison between two Sitecore databases, highlighting items that are missing or not up to date. Razl also gives developers the ability to simply move an item from one database to another.
Whether it's finding that one missing template, moving your entire database or just one item, Razl allows you to do it seamlessly and worry free.
It's not a free tool; you can check here how to buy it:
https://www.razl.net/purchase.aspx
When a new record is created using Ember Data, get("isDirty") returns true. But as yet, the user has made no changes to the record, and we can discard it without losing any of the user's work.
Is there any official, supported way to detect this situation, where a record has been created but no properties have been set?
(There's an incomplete answer to this question for a much older version of Ember Data, before it was substantially overhauled. The didSetProperty function still exists in current releases, but it's undocumented. Still, it might be a possible path to a solution if nothing official can be found.)
Internally, the changed properties are tracked by the _attributes property. You could do a check of
record.get('isNew') && Ember.keys(record._attributes).length === 0
to see that it has just been created and nothing has been changed on it.
Note that this is not meant to be part of the external API, but I'm not aware of any external API to accomplish this.
Explanation:
I'm using ember-data for a project of mine and I have a question that revolves around the possibility of dirtying an object and then setting its state to clean again on purpose - without committing the changes. The scenario is this:
Say I've fetched an object via banana = App.Fruit.find('banana'); and it has a description of "Yellow fruit!". Using XHR long-polling (or WebSockets), I may receive an updated version of the object because of another user having changed the description to "A tasty yellow fruit!" at any given point in time after I fetched the original object.
Then, what I would like to do is to update the object to reflect the newly received data. For this, I've tried different approaches:
I've tried calling App.Store.load(App.Fruit, new_data);. First of all, this approach doesn't work and secondly, this is not really what I want. I could've made uncommitted changes to the object myself and in this case, it would be undesirable to just discard those (assuming the load() call would overwrite them).
I've tried looping through the new data, calling .set() - like so: banana.set('description', new_data.description); - in order to update the object properties with the new data (where applicable, i.e. not dirty). This works, but it leaves the object in a dirty state; see the sketch below.
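A sketch of that second approach, assuming new_data is the plain object received over the socket:

// Copy each received property onto the record; this leaves the record dirty.
Object.keys(new_data).forEach(function (key) {
  banana.set(key, new_data[key]);
});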
In order to make the object clean/updated again - and not have the adapter commit the changes! - I've taken a look at the states the object travels through. These are (at least):
Step 1: Initially, the object is in the rootState.loaded.saved state.
Step 2: Calling .set() on a property pushes it to the rootState.loaded.updated.uncommitted state.
Step 3: Calling App.store.commit(); returns the object to the rootState.loaded.saved state.
Therefore, I've tried to manually set the object state to saved after step 2 like so: banana.get('stateManager').goToState('saved');.
However, this doesn't work. The next time the store commits for any other reason, this maneuver produces an inFlightDirtyReasons is undefined error.
Question:
My question is: how can I manually change the state of a dirtied object back to clean (saved) again?
Solution for Ember Data 1.0.0-beta.7:
// changing to loaded.updated.inFlight, which has "didCommit"
record.send('willCommit');
// clear array of changed (dirty) model attributes
record.set('_attributes', {});
// changing to loaded.saved (hooks didCommit event in "inFlight" state)
record.send('didCommit');
I've searched the source code of Ember Data and found that the loaded.saved state has a setup function that checks whether a model is clean before setting the "saved" state. If it is not clean, it rejects the request to change state and returns to loaded.updated.uncommitted.
So you have to clear the model._attributes hash, which keeps the changed attribute names, and then Ember will let you change the state manually.
I know it isn't a very good solution, because it requires setting a private property of the model, but I haven't found any other solution yet.
Looking at ember-data, the uncommitted state has a 'becameClean' event which consequently sets the record to loaded.saved.
This should do the trick
record.get('stateManager').send('becameClean');
Solution for Ember Data 2.6.1
record.send('pushedData');
This sets a dirty record back to loaded and saved:
https://github.com/emberjs/data/blob/fec260a38c3f7227ffe17a3af09973ce2718acca/addon/-private/system/model/states.js#L250
It's an update to @Kamil-j's solution.
For Ember Data 2.0 which I am currently using I have to do the following:
record._internalModel.send('willCommit');
record._internalModel._attributes = {};
record._internalModel.send('didCommit');
As of 1.0.0.rc6.2....
This will move a model into the state of a model that has been saved.
record.get('stateManager').transitionTo('loaded.saved')
This will move a model to the state of a new model that has not been committed. Think new dirty model.
record.get('stateManager').transitionTo('loaded.created.uncommitted')
This will move a model into the state of an old model that has been updated; think old dirty model:
record.get('stateManager').transitionTo('loaded.updated')
As of ember-data 1.0.0-beta.12:
record.transitionTo('loaded.saved');
It seems that record.get('stateManager') is not required anymore.
Here's what seems to work for Ember Data 1.0.0-beta.10:
record.set('currentState.stateName', 'root.loaded.saved');
record.adapterWillCommit();
record.adapterDidCommit();
record.set('currentState.isDirty', false);
Not sure if all those lines are required but just following what others have done prior to this.
Ember 2.9.1
record.set('currentState.isDirty', false);
Tested on Ember Data 2.9
The pushedData action is the way to go, but besides that the "originalValues" need to be reset as well.
Ember.assign(record.data, record._internalModel._attributes);
Ember.assign(record._internalModel._data, record._internalModel._attributes);
record.send('pushedData');
It looks like with newer versions everything mentioned here got broken.
This worked for me with ember-data 1.0.0.beta4:
record.adapterWillCommit();
record.adapterDidCommit();
Another method that worked for me when using Ember Data 1.0.0-beta.18:
record.rollback()
This reversed the dirty attributes and returned the record to a clean state.
It seems this may since have been deprecated in favor of record.rollbackAttributes: http://emberjs.com/api/data/classes/DS.Model.html#method_rollbackAttributes
I work with Ember Data 1.13, so I used the following solution (which is a mix of the one provided by @Martin Malinda and the one by @Serge):
// Ensure you have the changes inside the record
Object.assign(record.data, record._internalModel._attributes);
Object.assign(record._internalModel._data,record._internalModel._attributes);
// Using the DS.State you can first simulate the record is going to be saved
record.get('_internalModel').send('willCommit');
// Clearing the previous dirty attributes
record.get('_internalModel')._attributes = {};
// Mark the record as saved (root.loaded.saved) even if it isn't for real
record.get('_internalModel').send('didCommit');
This way, if we later call rollbackAttributes() on this record and it has some dirty attributes, the record will be reset to this last state (instead of to the original properties), which was exactly what I was looking for in my use case.
If there are no dirty attributes, nothing will change and we will keep the last attributes set by this code, without having them rolled back to the original ones. Hope it helps.
Tested on Ember Data 3.8.0
Just an update to Martin Malinda's answer:
// Clear changed attributes list
record._internalModel._recordData._attributes = {};
// Trigger transition to 'loaded.saved' state
record.send('pushedData');
In my case I also needed to override the serializer's normalize method.
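As an illustration only, a minimal override could look like the following, assuming a JSONSerializer-based application serializer (adjust the payload handling to whatever your backend actually returns):

// app/serializers/application.js
import DS from 'ember-data';

export default DS.JSONSerializer.extend({
  normalize(modelClass, resourceHash) {
    // Adjust resourceHash here before the default normalization runs.
    return this._super(modelClass, resourceHash);
  }
});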
I've got a quick question regarding the use of repositories. The best way to ask is to show a bit of pseudocode and have you tell me what the result should be:
Get a record from the repository with ID of 1 (assume it exists)
Edit a couple of properties
Query the repository again for an item with ID of 1
Result = ??
Should I get the object with updated values, or the object without them (original state), bearing in mind that since updating the property values (step 2) I have not told the repository to update this record?
I think I should get a copy of the original item and not a reference to the edited version.
Please tell me what is correct.
Cheers
The repository pattern is supposed to act like a collection of your objects, so ideally I think it should return the same object instance, which would have the updates in it.
Generally there is an identity map somewhere so your repositories can keep track of what has already been loaded. With an identity map, when you fetch an object with the same Id you should always get the already-loaded object back, regardless of how many times you ask. This is how all the more sophisticated ORMs work and it is generally a good practice. An identity map helps keep things in sync while you are in the same transaction and saves you some data access.
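A bare-bones sketch of what that looks like inside a repository (purely illustrative; Customer, IDataSource, and LoadCustomer are made-up names, and real ORMs manage this for you):

using System.Collections.Generic;

// Minimal identity-map sketch: repeated loads of the same id return the same instance.
public class CustomerRepository
{
    private readonly Dictionary<int, Customer> _identityMap = new Dictionary<int, Customer>();
    private readonly IDataSource _dataSource; // hypothetical data access abstraction

    public CustomerRepository(IDataSource dataSource)
    {
        _dataSource = dataSource;
    }

    public Customer GetById(int id)
    {
        if (_identityMap.TryGetValue(id, out var customer))
            return customer; // already loaded: hand back the same tracked object

        customer = _dataSource.LoadCustomer(id);
        _identityMap[id] = customer;
        return customer;
    }
}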
NHibernate's session has an identity map it keeps track of so you don't have to worry about trying to implement your own in your repositories. Also I believe you can use NHibernate's stateless session if you want to load another instance without change tracking, but I'm not positive on that.
Judging from your past questions I'm assuming you are using LINQ/C#?
If you are using a DataContext and you haven't called SubmitChanges() then you should get back the original unchanged object.
Just tested it. I was wrong, you get back the changed object.
If you set ObjectTrackingEnabled = false on the DataContext you will get the unchanged object.
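For example (a small LINQ to SQL fragment; MyDataContext and its Records table are placeholders for your own context):

// ObjectTrackingEnabled must be set before the first query on the context.
using (var db = new MyDataContext(connectionString))
{
    db.ObjectTrackingEnabled = false; // read-only: no change tracking, no identity map
    var original = db.Records.Single(r => r.Id == 1);
    // 'original' reflects the database values, not any unsaved in-memory edits
}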