Does Redisson's RCache use hashCode() the way a java.util.Map does for the containsKey() operation? I am seeing behavior where the key is present but containsKey() returns false. I added a breakpoint on the hashCode() method of the key object, and I don't see it getting called.
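In case it helps: Redisson keeps map keys in Redis in serialized form, so containsKey() compares the bytes produced by the configured codec on the server side; hashCode()/equals() on the key object are never called, which would explain why the breakpoint never fires. A minimal sketch, assuming Redisson 3.x (MyKey and the map name are illustrative):

RedissonClient redisson = Redisson.create();
RMapCache<MyKey, String> cache = redisson.getMapCache("cache");
cache.put(new MyKey("id-1"), "value");
// The argument is serialized with the configured codec and the resulting
// bytes are checked in Redis; no hashCode()/equals() on MyKey is invoked.
// A key that is equals()-equal in Java but serializes differently will
// therefore not be found.
boolean present = cache.containsKey(new MyKey("id-1"));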
I'm dealing with the Cloudfoundry Java client for the following use case:
I perform a request that returns a Mono<Void>
On success of this Mono, I want to perform an optional operation that returns a Mono<String>
For deciding when to perform the second operation, I'm using filter, but it doesn't seem to work
So, it looks like this:
Mono<Void> service = createService();
Mono<String> serviceKey = service
        .filter(x -> someBoolean)
        .map(x -> someKey)
        .flatMap(key -> serviceKey(key)); // serviceKey(String) returns Mono<String>

serviceKey.blockOptional(); // returns Optional.empty
My expectation would be that, when service succeeds and the filter predicate passes, the second call serviceKey would happen. However, I saw with the debugger that the code inside flatMap never gets executed.
The javadoc for Mono#filter states:
If this Mono is valued, test the result and replay it if predicate returns true. Otherwise complete without value.
Not sure how to understand that... Question is, how can I chain successful operations when the first one returns a Mono<Void>?
I just want to perform the second one when the first is successful, and return an empty Mono when the filter predicate fails.
Mono<Void> means "it will either complete without a value, or error", because the Void type cannot be instantiated. Since it never emits a value, downstream operators like filter and map are simply never invoked.
What you need is the then operator: it ignores the previous result and "switches" the flow to the provided Mono.
There is also thenMany in case you need to "switch" it to Flux.
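Applied to the example above (a minimal sketch; createService() and serviceKey() are the methods from the question, someBoolean and someKey stand in for the real condition and key):

Mono<String> result = createService()          // Mono<Void>: completes empty or errors
        .then(Mono.defer(() -> someBoolean
                ? serviceKey(someKey)          // the optional second call
                : Mono.empty()));              // otherwise stay empty

result.blockOptional(); // the key, or Optional.empty if the condition is false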
My problem: a process tries to change an entity that has already been changed and now has a newer version id. When I call flush() in my UnitOfWork's commit(), an OptimisticLockException is raised and caught in the same place by a catch-all block, and in that catch Doctrine closes the EntityManager.
If I want to skip this entity and continue with the others from the ArrayCollection, should I avoid calling flush()?
I tried recreating the EntityManager:
} catch (OptimisticLockException $e) {
    $this->em = $this->container->get('doctrine')->getManager();
    echo "\n||OptimisticLockException.";
    continue;
}
And I still get:
[Doctrine\ORM\ORMException]
The EntityManager is closed.
Strange.
If I do
$this->em->lock($entity, LockMode::OPTIMISTIC, $entity->getVersion());
and then call flush(), I get an OptimisticLockException and a closed EntityManager.
If I do
$this->getContainer()->get('doctrine')->resetManager();
$em = $doctrine->getManager();
the old data is unregistered from this EntityManager, and I can't even write logs to the database; I get this error:
[Symfony\Component\Debug\Exception\ContextErrorException]
Notice: Undefined index: 00000000514cef3c000000002ff4781e
You should check the entity version before you try to flush it, to avoid the exception. In other words, you should not call flush() if the lock fails.
You can use the EntityManager#lock() method to check whether you can flush the entity.
/** @var EntityManager $em */
$entity = $em->getRepository('Post')->find($_REQUEST['id']);

// Get expected version (easiest way is to have the version number as a hidden form field)
$expectedVersion = $_REQUEST['version'];

// Update your entity
$entity->setText($_REQUEST['text']);

try {
    // Assert you are editing the right version
    $em->lock($entity, LockMode::OPTIMISTIC, $expectedVersion);
    // If $em->lock() fails, flush() is not called and the EntityManager is not closed
    $em->flush();
} catch (OptimisticLockException $e) {
    echo "Sorry, but someone else has already changed this entity. Please apply the changes again!";
}
Check the example in the Doctrine docs on optimistic locking.
Unfortunately, nearly 4 years later, Doctrine is still unable to recover from an optimistic lock properly.
Using the lock function as suggested in the doc doesn't work if the db was changed by another server or php worker thread. The lock function only makes sure the version number wasn't changed by the current php script since the entity was loaded into memory. It doesn't read the db to make sure the version number is still the expected one.
And even if it did read the db, there is still the potential for a race condition between the time the lock function checks the current version in the db and the flush is performed.
Consider this scenario:
server A reads the entity,
server B reads the same entity,
server B updates the db,
server A updates the db <== optimistic lock exception
The exception is triggered when flush is called and there is nothing that can be done to prevent it.
Even a pessimistic lock won't help unless you can afford to lose performance and actually lock your db for a (relatively) long time.
Doctrine's solution (update... where version = :expected_version) is good in theory. But, sadly, Doctrine was designed to become unusable once an exception is triggered. Any exception. Every entity is detached. Even if the optimistic lock can be easily solved by re-reading the entity and applying the change again, Doctrine makes it very hard to do so.
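For completeness, the re-read-and-retry approach is possible, just clumsy. A rough sketch, assuming a Symfony-style ManagerRegistry in $doctrine (the entity and variable names are illustrative):

$attempts = 0;
do {
    $em = $doctrine->getManager();
    try {
        $post = $em->find(Post::class, $postId); // re-read fresh data (and the current version)
        $post->setText($newText);                // re-apply the change
        $em->flush();
        break;                                   // success
    } catch (OptimisticLockException $e) {
        $doctrine->resetManager();               // the old EntityManager is closed; replace it
    }
} while (++$attempts < 3);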
As others have said, sometimes EntityManager#lock() is not useful. In my case, the Entity version may change during the same request.
If EntityManager closes after flush(), I proceed like this:
if (!$entityManager->isOpen()) {
    $entityManager = EntityManager::create(
        $entityManager->getConnection(),
        $entityManager->getConfiguration(),
        $entityManager->getEventManager()
    );
    // The ServiceManager should be aware of this change.
    // This is for Zend ServiceManager; adapt this part to your use case.
    $serviceManager = $application->getServiceManager();
    $serviceManager->setAllowOverride(true);
    $serviceManager->setService(EntityManager::class, $entityManager);
    $serviceManager->setAllowOverride(false);
    // Then you should manually reload every Entity you need (or repeat the whole set of actions)
}
users#create is mapped as the callback method for OmniAuth. After the user is created in a call to User.from_omniauth (which takes an instance of OmniAuth::AuthHash as the sole argument) and assigned to the @user variable, I make the following call:
session[:remember_token] = @user.remember_token
However, upon inspection, the value of session[:remember_token] immediately after this line of code is nil (the output of a call to puts session[:remember_token]). I'm using Figaro and have ENV['SECRET_KEY_BASE'] set. Any ideas on why the session just isn't being set?
I need to do a callout to a web service from my ApexController class. To do this, I have an async method annotated with @future(callout=true). The web service call needs to reference an object that gets populated during the save call from the VF page.
Since static (future) methods do not allow objects to be passed in as method arguments, I was planning to add the data to a static Map and access it in my static method to do the web service callout. However, the static Map object is getting re-initialized and is null in the static method.
I would really appreciate it if anyone can give me some pointers on how to address this issue.
Thanks!
Here is the code snippet:
private static Map<String, WidgetModels.LeadInformation> leadsMap;
....
......
public PageReference save() {
    if (leadsMap == null) {
        leadsMap = new Map<String, WidgetModels.LeadInformation>();
    }
    leadsMap.put(guid, widgetLead);

    // make async call to the Widget web service
    saveWidgetCallInformation(guid);
    return null; // placeholder
}

// async call to the widget web service
@future(callout=true)
public static void saveWidgetCallInformation(String guid) {
    WidgetModels.LeadInformation cachedLeadInfo =
        (WidgetModels.LeadInformation) leadsMap.get(guid);
    .....
    // call the web service
}
@future runs in a totally separate execution context. It won't have access to any history of how it was called (meaning all static variables are reset, you start with fresh governor limits, etc., like a new action initiated by the user).
The only thing it will "know" is the method parameters that were passed to it. And you can't pass whole objects; you need to pass primitives (Integer, String, DateTime, etc.) or collections of primitives (List, Set, Map).
If you can access all the info you need from the database - just pass a List<Id> for example and query it.
If you can't - you can cheat by serializing your objects and passing them as List<String>. Check the documentation around the JSON class or these 2 handy posts (a rough sketch follows below):
https://developer.salesforce.com/blogs/developer-relations/2013/06/passing-objects-to-future-annotated-methods.html
https://gist.github.com/kevinohara80/1790817
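Something along these lines, using the question's class and method names (the wrapper method is hypothetical):

public static void enqueueSave(WidgetModels.LeadInformation lead) {
    // Serialize the object to a String so it can cross the @future boundary
    saveWidgetCallInformation(JSON.serialize(lead));
}

@future(callout=true)
public static void saveWidgetCallInformation(String serializedLead) {
    // Rebuild the object inside the @future execution context
    WidgetModels.LeadInformation lead = (WidgetModels.LeadInformation)
        JSON.deserialize(serializedLead, WidgetModels.LeadInformation.class);
    // ... make the web service callout with 'lead'
}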
Side note - can you rethink your flow? If the starting point is Visualforce you can skip the @future step. Do the callout first and then the DML (if needed). That way the usual "you have uncommitted work pending" error won't be triggered. This thing is there not only to annoy developers ;) It's there to make you rethink your design. You're asking the application to have open transaction & lock on the table(s) for up to 2 minutes. And you're giving yourself extra work - will you rollback your changes correctly when the insert went OK but callout failed?
By reversing the order of operations (callout first, then the DML) you're making it simpler - there was no save attempt to DB so there's nothing to roll back if the save fails.
I have a list of objects. They are JPA "Location" entities.
List<Location> locations;
I have a stateless EJB which loops thru the list and persists each one.
public void createLocations() {
    // I'm leaving out the details of this because it has nothing to do with the issue
    List<Location> locations = getListOfJPAManagedLocationEntities();
    for (Location location : locations) {
        em.persist(location);
    }
}
The code works fine. I do not have any problems.
However, the issue is: I want this to be an all-or-none transaction. Currently, each time through the for loop, the persist() method will insert a new row into the database. Suppose I have 100 location objects and the 54th object has something wrong with it and an exception is thrown. There will be 53 records inserted into the database. What I want is: either all of them succeed or none of them do.
I'm using the latest & greatest version of Java EE6, EJB 3.x., and JPA 2. My persistence.xml uses JTA.
<persistence-unit name="myPersistenceUnit" transaction-type="JTA">
And I like having JTA.
I do not want to stop using JTA.
90% of the time JTA does exactly what I want it to do. But in this case, it doesn't seem to.
My understanding of JTA must be inaccurate because I always thought the beginning and end of the EJB method marked the boundaries of the JTA transaction (assume only one method is in-play as I've shown above). By my logic, the transaction would not end until the for-loop is done and the method returns, and then at that point the records are persisted.
I'm using the JTDS driver for SqlServer 2008. Perhaps the database doesn't want to insert a record without immediately committing it. The entity id is defined like this:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
I've checked the spec., and it is not proper to call the various "UserTransaction" or "getTransaction()" methods in a JTA environment.
So what can I do?
Thanks.
If you use JTA and container-managed transactions, the default behavior for a session EJB method call is to run in a transaction (it is like annotating it with @TransactionAttribute(TransactionAttributeType.REQUIRED)). That means your code already runs in a transaction and will do what you expect: if an exception occurs at row 54, all previously inserted rows will be rolled back. You can go ahead and test it by throwing an exception yourself at some point in the loop. Note that if you throw a checked exception declared by your method, you can specify what the container should do when that exception occurs: annotate the exception class with @ApplicationException(rollback=true).
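A minimal sketch of the above (the exception class and the isValid() check are illustrative, not from the original post):

@ApplicationException(rollback = true)
public class InvalidLocationException extends Exception {
}

@Stateless
public class LocationBean {

    @PersistenceContext
    private EntityManager em;

    // Container-managed transaction, REQUIRED by default:
    // the whole loop runs in a single transaction.
    public void createLocations(List<Location> locations) throws InvalidLocationException {
        for (Location location : locations) {
            if (!isValid(location)) {
                // rollback=true on the exception class tells the container
                // to roll back every persist() performed so far.
                throw new InvalidLocationException();
            }
            em.persist(location);
        }
    }
}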
If there is a duplicate entry while looping, it will continue without problems, and when execution reaches em.flush() after the loop, it will throw an exception and roll back the transaction.
I'm using JBoss. Set your datasource in your standalone.xml or domain.xml to have
<datasource jta="true" ...>
Seems obvious, but I obviously set it wrong a long time ago and forgot about it.