Should the Entity Manager be closed by ourselves in EJB? - web-services

I am new to EJB. Should the EntityManager of a stateless or stateful session bean in EJB 3.0 be closed by ourselves (em.close()), perhaps inside a method annotated with @PreDestroy? Or is it closed by the EJB container, which releases its resources, so that we don't need to deal with the EntityManager after retrieving the required DB data? Which services should we stop or close ourselves?

In EJB there is normally no need to do any of that.
An entity manager is by default container managed and its associated persistence context is transaction scoped. This means there is no need to either create or close the entity manager, nor is there any need to begin, commit or rollback anything.
After the method that started the transaction (which also happens transparently) completes, the transaction-scoped persistence context is guaranteed to be flushed (all outstanding updates are written to the DB) and cleared (the L1 cache is destroyed), along with any other resources involved with that entity manager.
A standard example:
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class CustomerService {

    // Container-managed entity manager: the container creates, enlists and closes it
    @PersistenceContext
    private EntityManager entityManager;

    public void addCustomer(Customer customer) {
        entityManager.persist(customer);
    }
}
Note that if you really wanted to, you could use an application-managed entity manager by injecting a factory instead and obtaining the entity manager from it. In that situation you would indeed need to do the closing yourself. If you combined this with bean-managed transactions and the extended persistence context, you'd be in a situation where even in EJB you'd need to do everything yourself. But this is very rare, and only provided to you as an option; it's not the default.
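For completeness, a minimal sketch of that application-managed variant (the bean name and query are made up for illustration): you inject the EntityManagerFactory, create the entity manager yourself, and are then responsible for closing it.

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;

@Stateless
public class ReportService {

    // Inject the factory rather than the entity manager itself
    @PersistenceUnit
    private EntityManagerFactory emf;

    public long countCustomers() {
        // Application-managed: we create the entity manager ourselves...
        EntityManager em = emf.createEntityManager();
        try {
            return em.createQuery("select count(c) from Customer c", Long.class)
                     .getSingleResult();
        } finally {
            // ...and therefore we must close it ourselves
            em.close();
        }
    }
}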

Related

Akka Durable State restore

I have an existing Akka Typed application and am considering adding support for persistent actors using the Durable State feature. I am not currently using cluster sharding, but plan to implement it sometime in the future (after implementing Durable State).
I have read the documentation on how to implement Durable State to persist the actor's state, and that all makes sense. However, there does not appear to be any information in that document about how/when an actor's state gets recovered, and I'm not quite clear as to what I would need to do to recover persisted actors when the entire service is restarted.
My current architecture consists of an HTTP service (using Akka HTTP), a "dispatcher" actor (which is the ActorSystem's guardian actor, and currently a singleton), and N "worker" actors, which are children of the dispatcher. Both the dispatcher actor and the worker actors are stateful.
The dispatcher actor's state contains a map of requestId->ActorRef. When a new job request comes in from the HTTP service, the dispatcher actor creates a worker actor, and stores its reference in the map. Future requests for the same requestId (i.e. status and result queries) are forwarded by the dispatcher to the appropriate worker actor.
Currently, if the entire service is restarted, the dispatcher actor is recreated as a blank slate, with an empty worker map. None of the worker actors exist anymore, and their status/results can no longer be retrieved.
What I want to accomplish when the service is restarted is that the dispatcher gets recreated with its last-persisted state. All of the worker actors that were in the dispatcher's worker map should get restored with their last-persisted states as well. I'm not sure how much of this is automatic, simply by refactoring my actors as persistent actors using Durable State, and what I need to do explicitly.
Questions:
1. Upon restart, if I create the dispatcher (guardian) actor with the same name, is that sufficient for Akka to know to restore its persisted state, or is there something more explicit that I need to do to tell it to do that?
2. Since persistent actors require the state to be serializable, will this work with the fact that the dispatcher's worker map references the workers by ActorRef? Are those serializable, or do I need to switch to referencing them by name?
3. If I leave the references to the worker actors as ActorRefs and the service is restarted, will those ActorRefs (restored as part of the dispatcher's persisted state) continue to work, and will the worker actors' persisted states be automatically restored? Or, again, do I need to do something explicit to tell it to revive those actors and restore their states?
4. Currently, since the worker actors are not persisted, I assume that their states are all held in memory. Is that true? I currently keep all workers around indefinitely so that the results of their work (which are part of their state) can be retrieved in the future. However, I'm worried about running out of memory on the server. I'd like workers that are done with their work to be persisted to disk only, kind of "putting them to sleep", so that the results of their work can be retrieved days or weeks later without taking up memory. I'd like to have control over when an actor is "in memory" and when it's "on disk only". Can this Durable State persistence serve as a mechanism for this? If so, can I kill an actor and then revive it on demand (restoring its state) when I need it?
The durable state is stored keyed by an akka.persistence.typed.PersistenceId. There's no necessary relationship between the actor's name and its persistence ID.
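For example, a minimal sketch of a durable-state behavior using the Java API (the Counter protocol is invented for illustration): the state is saved and recovered under the PersistenceId passed to the behavior, regardless of the name the actor is spawned with.

import akka.actor.typed.Behavior;
import akka.persistence.typed.PersistenceId;
import akka.persistence.typed.state.javadsl.CommandHandler;
import akka.persistence.typed.state.javadsl.DurableStateBehavior;

public class Counter extends DurableStateBehavior<Counter.Command, Integer> {

    public interface Command {}
    public enum Increment implements Command { INSTANCE }

    public static Behavior<Command> create(String entityId) {
        // State is stored and recovered by this PersistenceId,
        // independently of the actor's position in the hierarchy
        return new Counter(PersistenceId.ofUniqueId("Counter|" + entityId));
    }

    private Counter(PersistenceId persistenceId) {
        super(persistenceId);
    }

    @Override
    public Integer emptyState() {
        return 0;
    }

    @Override
    public CommandHandler<Command, Integer> commandHandler() {
        return newCommandHandlerBuilder()
            .forAnyState()
            .onCommand(Increment.class, (state, cmd) -> Effect().persist(state + 1))
            .build();
    }
}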
ActorRefs are serializable (the included Jackson serializations (CBOR or JSON) do it out of the box; if using a custom serializer, you will need to use the ActorRefResolver), though in the persistence case, this isn't necessarily that useful: there's no guarantee that the actor pointed to by the ref is still there (consider, for instance, if the JVM hosting that actor system has stopped between when the state was saved and when it was read back).
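If you do end up writing a custom serializer, the ActorRef-to-String round-trip looks roughly like this (a sketch; the wrapper class is made up):

import akka.actor.typed.ActorRef;
import akka.actor.typed.ActorRefResolver;
import akka.actor.typed.ActorSystem;

public class RefSerialization {

    // Serialize an ActorRef to its address string
    public static <T> String serialize(ActorSystem<?> system, ActorRef<T> ref) {
        return ActorRefResolver.get(system).toSerializationFormat(ref);
    }

    // Resolve it back; note the resolved ref may point at an actor
    // that no longer exists after a restart
    public static <T> ActorRef<T> deserialize(ActorSystem<?> system, String serialized) {
        return ActorRefResolver.get(system).resolveActorRef(serialized);
    }
}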
Non-persistent actors keep all their state in memory until they're stopped (assuming they're not themselves interacting directly with some persistent data store: there's nothing stopping you from having an actor that reads its state on startup from somewhere else, possibly stashing incoming commands until that read completes, and writes out state changes; that's basically all durable state is under the hood). Stopping an actor to free memory is typically called "passivation": in typed you typically have a Passivate command in the actor's protocol. Bringing it back is then often called "rehydration". Both event-sourced and durable-state persistence are very useful for implementing this.
Note that it's absolutely possible to run a single-node Akka Cluster and have sharding. Sharding brings a notion of an "entity", which has a string name and is conceptually immortal/eternal (unlike an actor, which has a defined birth-to-death lifecycle). Sharding then has a given entity be incarnated by at most one actor at any given time in a cluster (I'm ignoring the multiple-datacenter case: if multiple datacenters are in use, you're probably going to want event sourced persistence). Once you have an EntityRef from sharding, the EntityRef will refer to whatever the current incarnation is: if a message is sent to the EntityRef and there's no living incarnation, a new incarnation is spawned. If the behavior for that TypeKey which was provided to sharding is a persistent behavior, then the persisted state will be recovered. Sharding can also implement passivation directly (with a few out-of-the-box strategies supported).
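As a hedged sketch of how that wiring looks with the Java sharding API, reusing the illustrative Counter behavior from the earlier sketch (the type key and entity id are made up):

import akka.actor.typed.ActorSystem;
import akka.cluster.sharding.typed.javadsl.ClusterSharding;
import akka.cluster.sharding.typed.javadsl.Entity;
import akka.cluster.sharding.typed.javadsl.EntityRef;
import akka.cluster.sharding.typed.javadsl.EntityTypeKey;

public class ShardingSetup {

    public static final EntityTypeKey<Counter.Command> TYPE_KEY =
        EntityTypeKey.create(Counter.Command.class, "Counter");

    public static void init(ActorSystem<?> system) {
        ClusterSharding sharding = ClusterSharding.get(system);

        // Tell sharding how to incarnate an entity: since the behavior is a
        // persistent (durable-state) behavior, its state is recovered on spawn
        sharding.init(Entity.of(TYPE_KEY, ctx -> Counter.create(ctx.getEntityId())));

        // The EntityRef always addresses the current incarnation; sending a
        // message spawns (and recovers) the entity if none is alive
        EntityRef<Counter.Command> ref = sharding.entityRefFor(TYPE_KEY, "counter-42");
        ref.tell(Counter.Increment.INSTANCE);
    }
}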
You can implement similar functionality yourself (for situations where there aren't many children of the dispatcher, a simple map in the dispatcher and asks/watches will work).
The Akka Platform Guide tutorial works an example using cluster sharding and persistence (in this case, it's event sourced, but the durable state APIs are basically the same, especially if you ignore the CQRS bits).

How to externalize akka sharded actor state to redis or ignite?

I am very new to Akka clustering and working on a proof of concept. In my case I have an actor running on a cluster, and the actor's state is a Map[String, Any]. For each request the actor receives, it creates a new entity actor and the data map based on the incoming message. The problem is that the map is held in memory right now. Is it possible to store the sharded actor state somewhere like Redis or Ignite?
You should probably start by having a look at akka-persistence (the persistence module included in Akka). The snapshotting part is meant to persist the state directly, but you have to start with the command/event-sourcing part; snapshotting is an optional enhancement on top of it.
Then you can combine this with automatic passivation of your sharded actors after a certain inactivity timeout.
With the above, you'll have a solution that persists the state of your actors in an external storage system to free up memory, restoring your actors' state whenever they come back to life.
The last step would be to see which storage backends are available for akka-persistence and match them against your requirements; you can of course also implement your own.
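As a rough sketch of what the event-sourcing-plus-snapshots combination can look like with Akka Persistence Typed in Java (all names and the trivial put-only protocol are invented for illustration; the actual storage backend is whatever journal and snapshot plugin you configure, e.g. a Redis or Ignite plugin if one fits your requirements):

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

import akka.actor.typed.Behavior;
import akka.persistence.typed.PersistenceId;
import akka.persistence.typed.javadsl.CommandHandler;
import akka.persistence.typed.javadsl.EventHandler;
import akka.persistence.typed.javadsl.EventSourcedBehavior;
import akka.persistence.typed.javadsl.RetentionCriteria;

public class DataHolder extends EventSourcedBehavior<DataHolder.Put, DataHolder.Put, Map<String, Object>> {

    // For brevity the Put command doubles as the persisted event
    public static final class Put implements Serializable {
        public final String key;
        public final Object value;
        public Put(String key, Object value) { this.key = key; this.value = value; }
    }

    public static Behavior<Put> create(String entityId) {
        return new DataHolder(PersistenceId.ofUniqueId("DataHolder|" + entityId));
    }

    private DataHolder(PersistenceId id) { super(id); }

    @Override
    public Map<String, Object> emptyState() { return new HashMap<>(); }

    @Override
    public CommandHandler<Put, Put, Map<String, Object>> commandHandler() {
        return newCommandHandlerBuilder()
            .forAnyState()
            .onCommand(Put.class, (state, cmd) -> Effect().persist(cmd))
            .build();
    }

    @Override
    public EventHandler<Map<String, Object>, Put> eventHandler() {
        return (state, evt) -> {
            Map<String, Object> next = new HashMap<>(state);
            next.put(evt.key, evt.value);
            return next;
        };
    }

    @Override
    public RetentionCriteria retentionCriteria() {
        // Snapshot the whole map every 100 events, keeping the last 2 snapshots
        return RetentionCriteria.snapshotEvery(100, 2);
    }
}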

Java Jersey REST Webservice: Not possible to create a singleton bean over all cluster nodes

I'd like to have a singleton object in my Jersey 1.19.1 web service that is the same instance across all my GlassFish nodes. This is my current implementation:
@Singleton
@ApplicationScoped
@Stateless
public class ValueObject {
    public long downloads = 0;
}
and
@Path("downloads")
public class Downloads {
    @InjectParam
    private ValueObject singleton;
}
The counter is increased when a file is downloaded.
After downloading a file and asking for the download counter, either 1 or 0 is returned, depending on which of the two GlassFish nodes processed the request.
My goal is to always get 1. How can I achieve that?
Leaving out @ApplicationScoped, or using @Stateful instead of @Stateless, leads to the same result.
Regards
John
This is not possible with GlassFish. As discussed in this StackOverflow answer, the EJB @Singleton annotation will give you one instance per JVM, as per the EJB 3.1 spec:
A Singleton session bean is a session bean component that is instantiated once per application. In cases where the container is distributed over many virtual machines, each application will have one bean instance of the Singleton for each JVM
The answer also mentions that WildFly 10 has a mechanism to support this, but this is a proprietary solution, and not one found in GlassFish.
A solution is currently being investigated for Payara Server, though this is not yet implemented.

spring concurrency how spring handles multiple requests at the same time

Suppose 1000 requests are sent to the server at the same time by 1000 different users. How does Spring serve each request? Since the default scope of a Spring bean is singleton, how does this work?
It's a good question. You can use scope="prototype" for each of your controller beans, which will make your bean non-singleton. But do keep in mind not to define mutable instance variables: I have faced a situation in the past where, even though I kept scope="prototype", my instance variables were shared between various user sessions.
Stick to immutability.
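As a small sketch of that suggestion, assuming annotation-based configuration (the controller name is made up):

import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Controller;

// A new DownloadController instance is created each time the bean is requested
// from the container, instead of one shared singleton
@Controller
@Scope("prototype")
public class DownloadController {

    // Per the warning above: avoid mutable instance fields here;
    // prefer method-local variables so no state leaks between sessions
}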

Persistence of service for multiple requests

I had originally thought that a particular "service interface", in my example one that inherits from ServiceStack.ServiceInterface.Service, is recreated with every request. I recently found out that this is perhaps not the case, as dependencies (and members in general) retain internal state. I was hoping someone could point me in the right direction as to the expected behavior and why it behaves this way. I'm primarily interested in two scenarios: one for IIS hosting, and one for VS hosting/debugging.
A Service in ServiceStack is registered and autowired like any other IOC dependency. Every request a new instance of the Service is created and autowired with its dependencies.
Whether the service is autowired with existing instances or not depends on how the dependency is registered, e.g if you use the built-in Funq IOC:
By default, dependencies are registered with singleton scope, i.e. the same instance is injected every time:
container.Register<IFoo>(c => new Foo());                                    // singleton by default
container.Register<IFoo>(c => new Foo()).ReusedWithin(ReuseScope.Container); // the same, made explicit
You can also specify RequestScope so a new instance is created and injected per request:
container.Register<IFoo>(c => new Foo()).ReusedWithin(ReuseScope.Request);
Finally there's transient scope where a new instance is created and injected each time:
container.Register<IFoo>(c => new Foo()).ReusedWithin(ReuseScope.None);
To recap: a new instance of your Service is indeed created per request, but whether its dependencies are new instances or not depends on the registration scope of those dependencies:
public class MyService : Service
{
    public IFoo Foo { get; set; } // auto-wired using the registration rules above
}