NHibernate Load vs. Get behavior for testing - unit-testing

In simple tests I can assert whether an object has been persisted by checking whether its Id is no longer at its default value. But if I delete an object and want to check whether the object, and perhaps its children, are really gone from the database, the object Ids will still be at their saved values.
So I need to go to the db, and I would like a helper assertion to make the tests more readable, which is where the question comes in. I like the idea of using Load to save the db call, but I'm wondering if the ensuing exceptions can corrupt the session.
Below is how I think the two assertions would look. Which would you use?
Cheers,
Berryl
Get
public static void AssertIsTransient<T>(this T instance, ISession session)
    where T : Entity
{
    if (instance.IsTransient()) return;

    var found = session.Get<T>(instance.Id);
    if (found != null)
        Assert.Fail(string.Format("{0} has persistent id '{1}'", instance, instance.Id));
}
Load
public static void AssertIsTransient<T>(this T instance, ISession session)
    where T : Entity
{
    if (instance.IsTransient()) return;

    try
    {
        var found = session.Load<T>(instance.Id);
        if (found != null)
            Assert.Fail(string.Format("{0} has persistent id '{1}'", instance, instance.Id));
    }
    catch (GenericADOException)
    {
        // nothing
    }
    catch (ObjectNotFoundException)
    {
        // nothing
    }
}
edit
In either case I would be doing the fetch (Get or Load) in a new session, free of state from the session that did the save or delete.
I am trying to test cascade behavior, NOT to test NHib's ability to delete things, but maybe I am overthinking this one or there is a simpler way I haven't thought of.

Your code in the 'Load' section will always hit Assert.Fail and will never throw an exception, because Load<T> returns a proxy (with the Id property set, or an instance populated from the first-level cache) without hitting the DB. ISession.Load will only fail if you access a property other than the Id property on a deleted entity.
As for your 'Get' section: I might be mistaken, but I think that if you delete an entity in a session and later call .Get in the same session, you will get the one from the first-level cache, so again it will not return null.
See this post for the full explanation about .Load and .Get.
If you really need to see if it is in your DB - use an IStatelessSession - or launch a child ISession (which will have an empty first-level cache).
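For illustration, here is a rough sketch of how that check could look with an IStatelessSession - the AssertIsNotInDatabase name and the ISessionFactory parameter are my own additions, not part of the question:

public static void AssertIsNotInDatabase<T>(this T instance, ISessionFactory sessionFactory)
    where T : Entity
{
    // A stateless session has no first-level cache, so Get<T> always issues a real SELECT.
    using (var statelessSession = sessionFactory.OpenStatelessSession())
    {
        var found = statelessSession.Get<T>(instance.Id);
        if (found != null)
            Assert.Fail(string.Format("{0} still exists in the database with id '{1}'",
                typeof(T).Name, instance.Id));
    }
}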
EDIT: I thought of a bigger problem - your entity will only actually be deleted when the transaction is committed (which is when the session is flushed, by default) - so unless you manually flush your session (not recommended), you will still have it in your DB.
Hope this helps.

Related

Save entity to Datastore and return it so that any further queries to the same entity are guaranteed to reflect the new updates

I want to load an entity by key, update it, save it back to Datastore and then return it to the client from an endpoint. However, I want to make absolutely sure that when I return the entity to the client, the updated entity has been saved and propagated across the datastore. This way, if the client queries for that entity with another endpoint immediately after, the updated one will be returned.
Using Objectify, this is what I have so far:
First I load the entity, update some values and then load it again by the entity key and return it. Will this second load and return of the entity be strongly consistent and reflect the new value?
// Load the entity by key
Key<Thing> thingKey = Key.create(Thing.class, id);
Thing thing = ofy().load().key(thingKey).now();
// Update some values
thing.setSomeProperty("new value");
// Save entity
ofy().save().entity(thing);
// Load entity by key - will this loaded entity be guaranteed to reflect the above update?
return ofy().load().key(thingKey).now();
Note: I do not want to return the local entity that I set the new values on, because I do not want the client to query for the entity immediately afterwards and miss the updates because they haven't been committed yet due to eventual consistency.
Could this below be another option to achieve the same effect?
// Load the entity by key
Key<Thing> thingKey = Key.create(Thing.class, id);
Thing thing = ofy().load().key(thingKey).now();
// Update some values
thing.setSomeProperty("new value");
// Will this save wait until the entity is saved/propagated across the datastore and thus any queries to this entity after this statement will reflect the new update?
ofy().save().entity(thing).now();
// Just return the entity we updated
return thing;
There is no difference between those two sequences. The last load() simply loads the same object out of the session cache. You might as well just return the object.
Keep in mind that you need to run this inside a transaction to be safe. Otherwise you run the risk that some other conflicting write will get overwritten and therefore lost between the load and the save.
return ofy().transact(() -> {
    Key<Thing> thingKey = Key.create(Thing.class, id);
    Thing thing = ofy().load().key(thingKey).now();
    thing.setSomeProperty("new value");
    ofy().save().entity(thing).now();
    return thing;
});
BTW with Firestore, eventual consistency really isn't a thing anymore. But that doesn't mean that data won't necessarily be stale. Technically any piece of data can be stale the moment after it is fetched from the datastore. A transaction ensures a consistent view of the data (i.e., all transactions are serializable, so data will not be lost).

Google Cloud Datastore - get after insert in one request

I am trying to retrieve an entity immediately after it was saved. When debugging, I insert the entity and check the entities in the Google Cloud console, and I can see it was created.
Key key = datastore.put(fullEntity);
After that, I continue with getting the entity with datastore.get(key), but nothing is returned. How do I retrieve the saved entity within one request?
I've read this question Missing entities after insertion in Google Cloud DataStore
but I am only saving 1 entity, not tens of thousands like in that question
I am using Java 11 and Google Datastore (the com.google.cloud.datastore package).
Edit: added code showing how the entity was created.
public Key create.... {
    // creating the entity inside a method
    Transaction txn = this.datastore.newTransaction();
    this.datastore = DatastoreOptions.getDefaultInstance().getService();

    Builder<IncompleteKey> builder = newBuilder(entitykey);
    setLongOrNull(builder, "price", purchase.getPrice());
    setTimestampOrNull(builder, "validFrom", of(purchase.getValidFrom()));
    setStringOrNull(builder, "invoiceNumber", purchase.getInvoiceNumber());
    setBooleanOrNull(builder, "paidByCard", purchase.getPaidByCard());

    newPurchase = entityToObject(this.datastore.put(builder.build()));
    if (newPurchase != null && purchase.getItems() != null && purchase.getItems().size() > 0) {
        for (Item item : purchase.getItems()) {
            newPurchase.getItems().add(this.itemDao.save(item, newPurchase));
        }
    }

    txn.commit();
    return newPurchase.getKey();
}
After that, I am trying to retrieve the created entity:
Key key = create(...);
Entity e = datastore.get(key);
I believe that there are a few issues with your code, but since we are unable to see the logic behind many of your methods, here is my guess.
First of all, as you can see in the documentation, it is possible to save and retrieve an entity in the same code, so that is not the problem.
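As a minimal illustration with the same client (variable names assumed from your snippet):

Entity saved = datastore.put(fullEntity);
Entity fetched = datastore.get(saved.getKey()); // returns the entity that was just written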
It seems like you are using a transaction, which is the right way to perform multiple operations as a single action, but it doesn't seem like you are using it properly: you only instantiate it and close it, but you don't put any operations on it. Furthermore, you are using this.datastore to save to the database, which completely bypasses the transaction.
So you either save the object when it has all of its items already added or you create a transaction to save all the entities at once.
And I believe you should use the entityKey in order to fetch the added purchase afterwards, but don't mix it.
Also you are creating the Transaction object from this.datastore before instantiating the latter, but I assume this is a copy-paste error.
Since you're creating a transaction for this operation, the entity put should happen inside the transaction:
txn.put(builder.build());
Also, the operations inside the loop where you add the purchase.getItems() to the newPurchase object should also be done in the context of the same transaction.
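As a rough sketch of that restructuring (this is not your code: the "Purchase" kind is an assumption, the key is allocated up front so it can be returned, and your null-safe setter helpers are only hinted at in comments):

public Key create(Purchase purchase) {
    KeyFactory keyFactory = this.datastore.newKeyFactory().setKind("Purchase");
    Key key = this.datastore.allocateId(keyFactory.newKey());

    Transaction txn = this.datastore.newTransaction();
    try {
        Entity.Builder builder = Entity.newBuilder(key);
        builder.set("price", purchase.getPrice());               // or your setLongOrNull-style helpers
        builder.set("invoiceNumber", purchase.getInvoiceNumber());

        txn.put(builder.build());        // staged on the transaction, not on this.datastore
        // ... put the purchase items through txn here as well ...

        txn.commit();                    // nothing is persisted until this succeeds
        return key;                      // datastore.get(key) after this will find the entity
    } finally {
        if (txn.isActive()) {
            txn.rollback();
        }
    }
}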
Let me know if this resolves the issue.
Cheers!

Getting "Error: Assertion Failed: calling set on destroyed object" when trying to rollback a deletion

I am looking into how to show proper deletion error message in ember when there is an error coming back from the server. I looked at this topic and followed its suggestion:
Ember Data delete fails, how to rollback
My code is just like it: I return a 400, my catch fires and logs, but nothing happens. When I pause it in the debugger, though, and try to roll back, I get Error: Assertion Failed: calling set on destroyed object.
So A) I cannot roll back, and B) the error is swallowed in the normal flow.
Here is my code
visitor.destroyRecord().then(function() {
    console.log('SUCCESS');
}).catch(function(response) {
    console.log('failed to remove', response);
    visitor.rollback();
});
In case it's relevant, my model does have multiple relationships. What am I doing wrong? Ember-data version is 1.0.0.8 beta (previous one from the release a few days ago).
Thanks in advance.
EDIT
I have now discovered that the record actually is restored inside the cache, according to Ember Inspector, but the object will not reappear in the rendering of the visitors. I need some way to force it to reload into the template...
After destroyRecord, the record is gone and the deletion cannot be rolled back. The catch clause will just catch a server error. If you want the record back, and think it's still on the server, you'll have to reload it.
See the following comment on deleteRecord from the Ember Data source:
Marks the record as deleted but does not save it. You must call
`save` afterwards if you want to persist it. You might use this
method if you want to allow the user to still `rollback()` a
delete after it was made.
This implies that a rollback after save is not possible. There is also no sign anywhere I can see in the Ember Data code for somehow reverting a record deletion when the DELETE request fails.
In theory you might be able to muck with the isDeleted flag, or override various internal hooks, but I'd recommend against that unless you really know how things work.
Try reloading the model after the rollback. It will reload from the server but it was the only way around this that I could find.
visitor.destroyRecord().then(function() {
    console.log('SUCCESS');
}).catch(function(response) {
    console.log('failed to remove', response);
    visitor.rollback();
    visitor.reload().then(function(vis) {
        console.log('visitor.reload :: ' + JSON.stringify(vis));
    });
});
Hope that helps.

SFDC Apex Code: Access class level static variable from "Future" method

I need to do a callout to a webservice from my ApexController class. To do this, I have an async method with the annotation @future(callout=true). The webservice call needs to reference an object that gets populated in the save call from the VF page.
Since static (future) calls do not allow objects to be passed in as method arguments, I was planning to add the data to a static Map and access that in my static method to do the webservice callout. However, the static Map object is getting re-initialized and is null in the static method.
I would really appreciate it if anyone can give me some pointers on how to address this issue.
Thanks!
Here is the code snippet:
private static Map<String, WidgetModels.LeadInformation> leadsMap;
....
......
public PageReference save() {
    if (leadsMap == null) {
        leadsMap = new Map<String, WidgetModels.LeadInformation>();
    }
    leadsMap.put(guid, widgetLead);
}

// make async call to Widget Webservice
saveWidgetCallInformation(guid);

// async call to widget webservice
@future(callout=true)
public static void saveWidgetCallInformation(String guid) {
    WidgetModels.LeadInformation cachedLeadInfo =
        (WidgetModels.LeadInformation) leadsMap.get(guid);
    .....
    // call webservice
}
@future runs in a totally separate execution context. It won't have access to any history of how it was called - all static variables are reset, you start with fresh governor limits, etc. It's like a new action initiated by the user.
The only thing it will "know" is the method parameters that were passed to it. And you can't pass whole objects; you need to pass primitives (Integer, String, DateTime, etc.) or collections of primitives (List, Set, Map).
If you can access all the info you need from the database - just pass a List<Id> for example and query it.
If you can't - you can cheat by serializing your objects and passing them as List<String>. Check the documentation around JSON class or these 2 handy posts:
https://developer.salesforce.com/blogs/developer-relations/2013/06/passing-objects-to-future-annotated-methods.html
https://gist.github.com/kevinohara80/1790817
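To sketch both approaches (illustrative code, not taken from the posts above - the class and method names are hypothetical, and WidgetModels.LeadInformation is assumed from the question):

public class WidgetCalloutHelper {

    // Option 1: pass record ids and re-query inside the future method
    @future(callout=true)
    public static void processLeads(List<Id> leadIds) {
        List<Lead> leads = [SELECT Id, LastName FROM Lead WHERE Id IN :leadIds];
        // ... do the callout using the re-queried records ...
    }

    // Option 2: serialize the object in the synchronous context and rebuild it here
    @future(callout=true)
    public static void saveWidgetCallInformation(String serializedLeadInfo) {
        WidgetModels.LeadInformation leadInfo =
            (WidgetModels.LeadInformation) JSON.deserialize(
                serializedLeadInfo, WidgetModels.LeadInformation.class);
        // ... do the webservice callout with leadInfo ...
    }

    // Caller for option 2, invoked from the synchronous context:
    public static void kickOffCallout(WidgetModels.LeadInformation leadInfo) {
        saveWidgetCallInformation(JSON.serialize(leadInfo));
    }
}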
Side note - can you rethink your flow? If the starting point is Visualforce you can skip the @future step. Do the callout first and then the DML (if needed). That way the usual "you have uncommitted work pending" error won't be triggered. This thing is there not only to annoy developers ;) It's there to make you rethink your design. You're asking the application to hold an open transaction & lock on the table(s) for up to 2 minutes. And you're giving yourself extra work - will you roll back your changes correctly when the insert went OK but the callout failed?
By reversing the order of operations (callout first, then the DML) you're making it simpler - there was no save attempt to DB so there's nothing to roll back if the save fails.
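A rough sketch of that reversed flow in a controller save action (the endpoint and field values here are placeholders, just to show the ordering):

public PageReference save() {
    // 1. Callout first - no DML has been performed yet in this transaction
    HttpRequest req = new HttpRequest();
    req.setEndpoint('https://example.com/widget-service'); // placeholder endpoint
    req.setMethod('POST');
    req.setBody(JSON.serialize(widgetLead)); // widgetLead as in the question
    HttpResponse res = new Http().send(req);

    // 2. DML only after the callout succeeded - nothing to roll back if it failed
    if (res.getStatusCode() == 200) {
        insert new Lead(LastName = 'Widget lead'); // stand-in for the real save
    }
    return null;
}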

How should I do post persist/update actions in doctrine 2.1, that involves re-saving to the db?

Using Doctrine 2.1 (and Zend Framework 1.11, not that it matters for this matter), how can I do post-persist and post-update actions that involve re-saving to the db?
For example, creating a unique token based on the just-generated primary key id, or generating a thumbnail for an uploaded image (which actually doesn't require re-saving to the db, but still)?
EDIT - let's explain, shall we?
The above is actually a question regarding two scenarios. Both scenarios relate to the following state:
Let's say I have a User entity. When the object is flushed after it has been marked to be persisted, it'll have the normal auto-generated id from MySQL - meaning running numbers normally beginning at 1, 2, 3, etc.
Each user can upload an image - which he will be able to use in the application - which will have a record in the db as well. So I have another entity called Image. Each Image entity also has an auto-generated id - same methodology as the user id.
Now - here is the scenarios:
When a user uploads an image, I want to generate a thumbnail for that image right after it is saved to the db. This should happen for every new or updated image.
Since we're trying to stay smart, I don't want the code to generate the thumbnail to be written like this:
$image = new Image();
...
$entityManager->persist($image);
$entityManager->flush();
callToFunctionThatGeneratesThumbnailOnImage($image);
but rather I want it to occur automatically on the persisting of the object (well, flush of the persisted object), like the prePersist or preUpdate methods.
Since the user uploaded an image, he gets a link to it. It will probably look something like: http://www.mysite.com/showImage?id=[IMAGEID].
This allows anyone to just change the imageid in this link, and see other user's images.
So in order to prevent such a thing, I want to generate a unique token for every image. Since it doesn't really need to be sophisticated, I thought about using the md5 value of the image id, with some salt.
But for that, I need to have the id of that image - which I'll only have after flushing the persisted object - then generate the md5, and then saving it again to the db.
Understand that the links for the images are supposed to be publicly accessible so I can't just allow an authenticated user to view them by some kind of permission rules.
You probably know already about Doctrine events. What you could do:
Use the postPersist event handler. That one occurs after the DB insert, so the auto-generated ids are available.
The EventManager class can help you with this:
class MyEventListener
{
    public function postPersist(LifecycleEventArgs $eventArgs)
    {
        // in a listener you have the entity instance and the
        // EntityManager available via the event arguments
        $entity = $eventArgs->getEntity();
        $em = $eventArgs->getEntityManager();

        if ($entity instanceof User) {
            // do some stuff
        }
    }
}
$eventManager = $em->getEventManager();
$eventManager->addEventListener(Events::postPersist, new MyEventListener());
Be sure to check e.g. whether the User already has an Image; otherwise, if you call flush in the event listener, you might be caught in an endless loop.
Of course you could also make your User class aware of that image creation operation with an inline postPersist event handler, add @HasLifecycleCallbacks in your mapping, and then always flush at the end of the request, e.g. in a shutdown function, but in my opinion this kind of stuff belongs in a separate listener. YMMV.
If you need the entity id before flushing - just after creating the object - another approach is to generate the ids for the entities within your application, e.g. using UUIDs.
Now you can do something like:
class Entity
{
    public function __construct()
    {
        $this->id = uuid_create();
    }
}
Now you have an id already set when you just do:
$e = new Entity();
And you only need to call EntityManager::flush at the end of the request.
In the end, I listened to @Arms, who commented on the question.
I started using a service layer for doing such things.
So now, I have a method in the service layer which creates the Image entity. After it calls the persist and flush, it calls the method that generates the thumbnail.
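A rough sketch of what such a service method could look like (the class, method, and setter names here are illustrative, not the poster's actual code):

class ImageService
{
    private $entityManager;
    private $thumbnailGenerator;

    public function __construct($entityManager, $thumbnailGenerator)
    {
        $this->entityManager = $entityManager;
        $this->thumbnailGenerator = $thumbnailGenerator;
    }

    public function createImage($uploadedFile)
    {
        $image = new Image();
        // ... populate $image from $uploadedFile ...

        $this->entityManager->persist($image);
        $this->entityManager->flush();      // the auto-generated id is available from here on

        // now that the id exists, derive the public token and build the thumbnail
        $image->setToken(md5($image->getId() . 'some-secret-salt'));
        $this->thumbnailGenerator->generate($image);

        $this->entityManager->flush();      // persist the token
        return $image;
    }
}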
The Service Layer pattern is a good solution for such things.