I am trying to attach custom data to a hopper in a way that survives a server restart. I know that there is a PersistentDataContainer for storing custom metadata. I'm getting a block from an event and then casting it to a Hopper instance (of course checking before casting). But when I set some data on the instance, the data is lost once the method ends and the hopper instance is discarded. Is there any way to save the data to the actual hopper block and retrieve it later, even after a restart?
Hopper hopper = (Hopper) block.getState();
private final NamespacedKey KEY_SPEED = new NamespacedKey("me.exerosis", "speed");
if (!hopper.getPersistentDataContainer().has(KEY_SPEED, PersistentDataType.INTEGER))
{
Bukkit.broadcastMessage("new");
hopper.getPersistentDataContainer().set(KEY_SPEED, PersistentDataType.INTEGER, 1);
}
Every time this is called on the same block from different events, "new" is broadcast, meaning the data is not being saved.
You need to call BlockState#update
hopper.update();
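Block#getState() returns a snapshot, so changes to its PersistentDataContainer live only in memory until update() copies them back to the real block. A minimal sketch of the corrected flow (KEY_SPEED is the field from the question; block comes from your event):
Hopper hopper = (Hopper) block.getState();
PersistentDataContainer container = hopper.getPersistentDataContainer();
if (!container.has(KEY_SPEED, PersistentDataType.INTEGER)) {
    Bukkit.broadcastMessage("new");
    container.set(KEY_SPEED, PersistentDataType.INTEGER, 1);
    // write the modified snapshot back to the block so the data persists
    hopper.update();
}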
I want to load an entity by key, update it, save it back to Datastore and then return it to the client from an endpoint. However, I want to make absolutely sure that when I return the entity to the client, the updated entity has been saved and propagated across the datastore. This way, if the client queries for that entity with another endpoint immediately after, the updated one will return.
Using Objectify, this is what I have so far:
First I load the entity, update some values and then load it again by the entity key and return it. Will this second load and return of the entity be strongly consistent and reflect the new value?
// Load the entity by key
Key<Thing> thingKey = Key.create(Thing.class, id);
Thing thing = ofy().load().key(thingKey).now();
// Update some values
thing.setSomeProperty("new value");
// Save entity
ofy().save().entity(thing);
// Load entity by key - will this loaded entity be guaranteed to reflect the above update?
return ofy().load().key(thingKey).now();
Note: I do not want to return the local entity I set the new values to because I do not want the client to potentially query for the entity with the new updates and not get it because they haven't been committed yet due to eventual consistency.
Could the following be another option to achieve the same effect?
// Load the entity by key
Key<Thing> thingKey = Key.create(Thing.class, id);
Thing thing = ofy().load().key(thingKey).now();
// Update some values
thing.setSomeProperty("new value");
// Will this save wait until the entity is saved/propagated across the datastore and thus any queries to this entity after this statement will reflect the new update?
ofy().save().entity(thing).now();
// Just return the entity we updated
return thing;
There is no difference between those two sequences. The last load() simply loads the same object out of the session cache. You might as well just return the object.
Keep in mind that you need to run this inside a transaction to be safe. Otherwise you run the risk that some other conflicting write will get overwritten and therefore lost between the load and the save.
return ofy().transact(() -> {
Key<Thing> thingKey = Key.create(Thing.class, id);
Thing thing = ofy().load().key(thingKey).now();
thing.setSomeProperty("new value");
ofy().save().entity(thing).now();
return thing;
});
BTW with Firestore, eventual consistency really isn't a thing anymore. But that doesn't mean data can't be stale; technically any piece of data can be stale the moment after it is fetched from the datastore. A transaction ensures a consistent view of the data (i.e., all transactions are serializable, so data will not be lost).
encodeSystemFields is supposed to be used when I keep records locally, in a database.
Once I export that data, must I do anything special when de-serializing it?
What scenarios should I act upon information in that data?
As a variation (and if not covered in the previous question), what does this information help me guard against? (data corruption I assume)
encodeSystemFields is useful to avoid having to fetch a CKRecord from CloudKit again to update it (barring record conflicts).
The idea is:
When you are storing the data for a record retrieved from CloudKit (for example, retrieved via CKFetchRecordZoneChangesOperation to sync record changes to a local store):
1.) Archive the CKRecord to NSData:
let record = ...
// archive CKRecord to NSData
let archivedData = NSMutableData()
let archiver = NSKeyedArchiver(forWritingWith: archivedData)
archiver.requiresSecureCoding = true
record.encodeSystemFields(with: archiver)
archiver.finishEncoding()
2.) Store the archivedData locally (for example, in your database) associated with your local record.
When you want to save changes made to your local record back to CloudKit:
1.) Unarchive the CKRecord from the NSData you stored:
let archivedData = ... // TODO: retrieved from your local store
// unarchive CKRecord from NSData
let unarchiver = NSKeyedUnarchiver(forReadingWith: archivedData)
unarchiver.requiresSecureCoding = true
let record = CKRecord(coder: unarchiver)
2.) Use that unarchived record as the base for your changes. (i.e. set the changed values on it)
record["City"] = "newCity"
3.) Save the record(s) to CloudKit, via CKModifyRecordsOperation.
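For example, a minimal sketch of that save (assuming database is the CKDatabase holding the record):
let operation = CKModifyRecordsOperation(recordsToSave: [record], recordIDsToDelete: nil)
// .ifServerRecordUnchanged is the default policy and surfaces conflicts as serverRecordChanged errors
operation.savePolicy = .ifServerRecordUnchanged
operation.modifyRecordsCompletionBlock = { savedRecords, deletedRecordIDs, error in
    // handle errors here, in particular CKError.serverRecordChanged (see below)
}
database.add(operation)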
Why?
From Apple:
Storing Records Locally
If you store records in a local database, use the encodeSystemFields(with:) method to encode and store the record’s metadata. The metadata contains the record ID and change tag which is needed later to sync records in a local database with those stored by CloudKit.
When you save changes to a CKRecord in CloudKit, you need to save the changes to the server's record.
You can't just create a new CKRecord with the same recordID, set the values on it, and save it. If you do, you'll receive a "Server Record Changed" error - which, in this case, is because the existing server record contains metadata that your local record (created from scratch) is missing.
So you have two options to solve this:
Request the CKRecord from CloudKit (using the recordID), make changes to that CKRecord, then save it back to CloudKit.
Use encodeSystemFields, and store the metadata locally, unarchiving it to create a "base" CKRecord that has all the appropriate metadata for saving changes to said CKRecord back to CloudKit.
#2 saves you network round-trips*.
*Assuming another device hasn't modified the record in the meantime - which is also what this data helps you guard against. If another device modifies the record between the time you last retrieved it and the time you try to save it, CloudKit will (by default) reject your record save attempt with "Server Record Changed". This is your clue to perform conflict resolution in the way that is appropriate for your app and data model. (Often, by fetching the new server record from CloudKit and re-applying appropriate value changes to that CKRecord before attempting the save again.)
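A rough sketch of that conflict-resolution path, using the error delivered to the modify operation's completion block:
if let ckError = error as? CKError,
   ckError.code == .serverRecordChanged,
   let serverRecord = ckError.serverRecord {
    // re-apply your local changes on top of the server's current copy...
    serverRecord["City"] = "newCity"
    // ...then retry the save with serverRecord as the new base
}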
NOTE: Any time you save/retrieve an updated CKRecord to/from CloudKit, you must remember to update your locally-stored archived CKRecord.
As of iOS 15 / Swift 5.5 this extension might be helpful:
public extension CKRecord {
var systemFieldsData: Data {
let archiver = NSKeyedArchiver(requiringSecureCoding: true)
encodeSystemFields(with: archiver)
archiver.finishEncoding()
return archiver.encodedData
}
convenience init?(systemFieldsData: Data) {
guard let una = try? NSKeyedUnarchiver(forReadingFrom: systemFieldsData) else {
return nil
}
self.init(coder: una)
}
}
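Usage then looks like this sketch:
let data = record.systemFieldsData // archive for local storage
let restored = CKRecord(systemFieldsData: data) // later: rebuild the base record from the stored metadata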
I need to do a callout to a webservice from my ApexController class. To do this, I have an async method with the annotation @future(callout=true). The webservice call needs to reference an object that gets populated in the save call from the VF page.
Since future calls do not allow objects to be passed in as method arguments, I was planning to add the data to a static Map and access it in my static method to do the webservice callout. However, the static Map is getting re-initialized and is null in the static method.
I would really appreciate it if anyone could give me some pointers on how to address this issue.
Thanks!
Here is the code snippet:
private static Map<String, WidgetModels.LeadInformation> leadsMap;
....
......
public PageReference save() {
    if (leadsMap == null) {
        leadsMap = new Map<String, WidgetModels.LeadInformation>();
    }
    leadsMap.put(guid, widgetLead);
    // make async call to the Widget webservice
    saveWidgetCallInformation(guid);
    ...
}

// async call to the Widget webservice
@future(callout=true)
public static void saveWidgetCallInformation(String guid) {
    WidgetModels.LeadInformation cachedLeadInfo =
        (WidgetModels.LeadInformation) leadsMap.get(guid);
    .....
    // call webservice
}
@future runs in a totally separate execution context. It won't have access to any history of how it was called (meaning all static variables are reset, you start with fresh governor limits etc. - like a new action initiated by the user).
The only thing it will "know" is the method parameters that were passed to it. And you can't pass whole objects; you need to pass primitives (Integer, String, DateTime etc.) or collections of primitives (List, Set, Map).
If you can access all the info you need from the database - just pass a List<Id> for example and query it.
If you can't - you can cheat by serializing your objects and passing them as List<String>. Check the documentation around JSON class or these 2 handy posts:
https://developer.salesforce.com/blogs/developer-relations/2013/06/passing-objects-to-future-annotated-methods.html
https://gist.github.com/kevinohara80/1790817
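A minimal sketch of that serialization trick, reusing the names from the question:
// in the controller: serialize before handing off to the future method
saveWidgetCallInformation(JSON.serialize(widgetLead));

@future(callout=true)
public static void saveWidgetCallInformation(String serializedLead) {
    // rebuild the object inside the future context
    WidgetModels.LeadInformation lead = (WidgetModels.LeadInformation)
        JSON.deserialize(serializedLead, WidgetModels.LeadInformation.class);
    // call the webservice with lead
}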
Side note - can you rethink your flow? If the starting point is Visualforce, you can skip the @future step. Do the callout first and then the DML (if needed). That way the usual "you have uncommitted work pending" error won't be triggered. This thing is there not only to annoy developers ;) It's there to make you rethink your design. You're asking the application to hold an open transaction & lock on the table(s) for up to 2 minutes. And you're giving yourself extra work - will you roll back your changes correctly when the insert went OK but the callout failed?
By reversing the order of operations (callout first, then the DML) you're making it simpler - there was no save attempt to DB so there's nothing to roll back if the save fails.
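A rough sketch of that reversed flow (WidgetService.sendLead and leadRecord are hypothetical names):
public PageReference save() {
    // callout first - no DML has run yet, so there is no open transaction
    HttpResponse res = WidgetService.sendLead(widgetLead);
    if (res.getStatusCode() == 200) {
        // DML only after the callout succeeded; nothing to roll back if it fails
        insert leadRecord;
    }
    return null;
}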
I am making a lot of async calls and using loadMany to preload the ember data store like this:
if(data.feed.activities.length > 0){
App.store.loadMany(App.Activity, data.feed.activities);
}
Some of my bindings are screwing up if I am re-adding the same item more than once, which is a possibility.
Is there a way of not reloading the item if it is already in the store? I don't want to have to iterate over each item and check if that is possible.
This is from the load() documentation in store.js
"Load a new data hash into the store for a given id and type
combination. If data for that record had been loaded previously, the
new information overwrites the old. If the record you are loading data
for has outstanding changes that have not yet been saved, an exception
will be thrown."
As you can see, the new information overwrites the old, so it should be ok to reload the same data. Maybe you have another issue. Have you configured your id correctly?
Using Doctrine 2.1 (and Zend Framework 1.11, not that it matters for this matter), how can I do post-persist and post-update actions that involve re-saving to the db?
For example, creating a unique token based on the just-generated primary key's id, or generating a thumbnail for an uploaded image (which actually doesn't require re-saving to the db, but still)?
EDIT - let's explain, shall we?
The above is actually a question regarding two scenarios. Both scenarios relate to the following state:
Let's say I have a User entity. When the object is flushed after it has been marked to be persisted, it'll have the normal auto-generated id of MySQL - meaning running numbers normally beginning at 1, 2, 3, etc.
Each user can upload an image - which he will be able to use in the application - which will have a record in the db as well. So I have another entity called Image. Each Image entity also has an auto-generated id - same methodology as the user id.
Now - here is the scenarios:
When a user uploads an image, I want to generate a thumbnail for that image right after it is saved to the db. This should happen for every new or updated image.
Since we're trying to stay smart, I don't want the code to generate the thumbnail to be written like this:
$image = new Image();
...
$entityManager->persist($image);
$entityManager->flush();
callToFunctionThatGeneratesThumbnailOnImage($image);
but rather I want it to occur automatically on the persisting of the object (well, flush of the persisted object), like the prePersist or preUpdate methods.
Since the user uploaded an image, he gets a link to it. It will probably look something like: http://www.mysite.com/showImage?id=[IMAGEID].
This allows anyone to just change the imageid in this link, and see other user's images.
So in order to prevent such a thing, I want to generate a unique token for every image. Since it doesn't really need to be sophisticated, I thought about using the md5 value of the image id, with some salt.
But for that, I need to have the id of that image - which I'll only have after flushing the persisted object - then generate the md5, and then saving it again to the db.
Understand that the links for the images are supposed to be publicly accessible so I can't just allow an authenticated user to view them by some kind of permission rules.
You probably know already about Doctrine events. What you could do:
Use the postPersist event handler. That one occurs after the DB insert, so the auto-generated ids are available.
The EventManager class can help you with this:
class MyEventListener
{
public function postPersist(LifecycleEventArgs $eventArgs)
{
// in a listener you have the entity instance and the
// EntityManager available via the event arguments
$entity = $eventArgs->getEntity();
$em = $eventArgs->getEntityManager();
if ($entity instanceof User) {
// do some stuff - the auto-generated id is available here via $entity->getId()
}
}
}
$eventManager = $em->getEventManager();
$eventManager->addEventListener(Events::postPersist, new MyEventListener());
Be sure to check, e.g., whether the User already has an Image; otherwise, if you call flush in the event listener, you might be caught in an endless loop.
Of course you could also make your User class aware of that image creation operation with an inline postPersist event handler and add @HasLifecycleCallbacks in your mapping and then always flush at the end of the request, e.g. in a shutdown function, but in my opinion this kind of stuff belongs in a separate listener. YMMV.
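For completeness, a minimal sketch of that inline variant (annotations as in Doctrine 2.1; the callback receives no arguments):
/**
 * @Entity
 * @HasLifecycleCallbacks
 */
class Image
{
    /** @PostPersist */
    public function onPostPersist()
    {
        // runs right after the INSERT; $this->id is populated at this point
    }
}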
If you need the entity id before flushing, just after creating the object, another approach is to generate the ids for the entities within your application, e.g. using UUIDs.
Now you can do something like:
class Entity {
    public function __construct()
    {
        // uuid_create() comes from the PECL uuid extension;
        // any application-side unique id generator would work here
        $this->id = uuid_create();
    }
}
Now you have an id already set when you just do:
$e = new Entity();
And you only need to call EntityManager::flush at the end of the request.
In the end, I listened to @Arms, who commented on the question.
I started using a service layer for doing such things.
So now, I have a method in the service layer which creates the Image entity. After it calls the persist and flush, it calls the method that generates the thumbnail.
The Service Layer pattern is a good solution for such things.