Akka default dispatcher having many waiting threads - akka

Our Akka application spins up a lot of new dispatchers over time instead of reusing them.
The old dispatchers go back to a waiting state and are never reused or terminated.
We use Akka's default dispatcher with fork-join executors, and we override the default parallelism parameter for our actors based on each actor's purpose.
Is there a configuration that we are missing that is causing this issue?
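For reference, a fork-join dispatcher with an overridden parallelism is normally defined once in configuration and shared by its config path, rather than built per actor. A minimal sketch, where worker-dispatcher and Worker are hypothetical names:

```scala
import akka.actor.{Actor, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

// Hypothetical worker actor, only here so the sketch compiles.
class Worker extends Actor {
  def receive: Receive = { case _ => () }
}

object DispatcherConfigExample extends App {
  // One named dispatcher per "purpose", defined once and referenced by config path,
  // instead of building a new dispatcher per actor.
  val config = ConfigFactory.parseString(
    """
    worker-dispatcher {
      type = Dispatcher
      executor = "fork-join-executor"
      fork-join-executor {
        parallelism-min = 2
        parallelism-factor = 2.0
        parallelism-max = 16
      }
    }
    """).withFallback(ConfigFactory.load())

  val system = ActorSystem("app", config)

  // Every actor created with this dispatcher path reuses the same dispatcher instance.
  val worker1 = system.actorOf(Props[Worker]().withDispatcher("worker-dispatcher"), "worker-1")
  val worker2 = system.actorOf(Props[Worker]().withDispatcher("worker-dispatcher"), "worker-2")
}
```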

Related

Akka Durable State restore

I have an existing Akka Typed application, and am considering adding in support for persistent actors, using the Durable State feature. I am not currently using cluster sharding, but plan to implement that sometime in the future (after implementing Durable State).
I have read the documentation on how to implement Durable State to persist the actor's state, and that all makes sense. However, there does not appear to be any information in that document about how/when an actor's state gets recovered, and I'm not quite clear as to what I would need to do to recover persisted actors when the entire service is restarted.
My current architecture consists of an HTTP service (using AkkaHTTP), a "dispatcher" actor (which is the ActorSystem's guardian actor, and currently a singleton), and N number of "worker" actors, which are children of the dispatcher. Both the dispatcher actor and the worker actors are stateful.
The dispatcher actor's state contains a map of requestId->ActorRef. When a new job request comes in from the HTTP service, the dispatcher actor creates a worker actor, and stores its reference in the map. Future requests for the same requestId (i.e. status and result queries) are forwarded by the dispatcher to the appropriate worker actor.
Currently, if the entire service is restarted, the dispatcher actor is recreated as a blank slate, with an empty worker map. None of the worker actors exist anymore, and their status/results can no longer be retrieved.
What I want to accomplish when the service is restarted is that the dispatcher gets recreated with its last-persisted state. All of the worker actors that were in the dispatcher's worker map should get restored with their last-persisted states as well. I'm not sure how much of this is automatic, simply by refactoring my actors as persistent actors using Durable State, and what I need to do explicitly.
Questions:
1. Upon restart, if I create the dispatcher (guardian) actor with the same name, is that sufficient for Akka to know to restore its persisted state, or is there something more explicit that I need to do to tell it to do that?
2. Since persistent actors require the state to be serializable, will this work with the fact that the dispatcher's worker map references the workers by ActorRef? Are those serializable, or do I need to switch to referencing them by name?
3. If I leave the references to the worker actors as ActorRefs, and the service is restarted, will those ActorRefs (that were restored as part of the dispatcher's persisted state) continue to work, and will the worker actors' persisted states be automatically restored? Or, again, do I need to do something explicit to tell it to revive those actors and restore their states?
4. Currently, since the worker actors are not persisted, I assume that their states are all held in memory. Is that true? I currently keep all workers around indefinitely so that the results of their work (which are part of their state) can be retrieved in the future. However, I'm worried about running out of memory on the server. I'd like workers that are done with their work to be persisted to disk only, kind of "putting them to sleep", so that the results of their work can be retrieved days or weeks later without taking up memory. I'd like control over when an actor is "in memory" and when it's "on disk only". Can this Durable State persistence serve as a mechanism for this? If so, can I kill an actor and then revive it on demand (and restore its state) when I need it?
The durable state is stored keyed by an akka.persistence.typed.PersistenceId. There's no necessary relationship between the actor's name and its persistence ID.
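So the main thing your restarted service needs to do is start each behavior with the same PersistenceId it used before; the stored state is then recovered automatically when the behavior starts. A minimal sketch, with made-up names (this is not your code):

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.state.scaladsl.{DurableStateBehavior, Effect}

object DispatcherState {
  sealed trait Command
  final case class JobStarted(requestId: String) extends Command

  // Keep the persisted map serializable, e.g. requestId -> worker/entity id.
  final case class State(workers: Map[String, String])

  def apply(id: String): Behavior[Command] =
    DurableStateBehavior[Command, State](
      // The persistence ID is the durable key; it need not match the actor's name.
      persistenceId = PersistenceId("Dispatcher", id),
      emptyState = State(Map.empty),
      commandHandler = (state, command) =>
        command match {
          case JobStarted(requestId) =>
            Effect.persist(State(state.workers + (requestId -> requestId)))
        }
    )
}
```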
ActorRefs are serializable (the included Jackson serializations (CBOR or JSON) do it out of the box; if using a custom serializer, you will need to use the ActorRefResolver), though in the persistence case, this isn't necessarily that useful: there's no guarantee that the actor pointed to by the ref is still there (consider, for instance, if the JVM hosting that actor system has stopped between when the state was saved and when it was read back).
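If you do end up writing a custom serializer, the round trip through ActorRefResolver looks roughly like this (Worker.Command and workerRef are assumed names):

```scala
import akka.actor.typed.{ActorRef, ActorRefResolver, ActorSystem}

// Sketch of what a custom serializer would do with an ActorRef; `system` is the
// typed ActorSystem and Worker.Command / workerRef are assumed names.
def roundTrip(system: ActorSystem[_], workerRef: ActorRef[Worker.Command]): ActorRef[Worker.Command] = {
  val resolver = ActorRefResolver(system)
  val serialized: String = resolver.toSerializationFormat(workerRef)
  resolver.resolveActorRef[Worker.Command](serialized)
}
```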
Non-persistent actors keep all their state in memory until they're stopped (assuming they're not themselves directly interacting with some persistent data store: there's nothing stopping you from having an actor that reads state on startup from somewhere else, possibly stashing incoming commands until that read completes, and writes state changes... that's basically all durable state is under the hood). The mechanism of stopping an actor to free memory is typically called "passivation": in typed you typically have a Passivate command in the actor's protocol. Bringing it back is then often called "rehydration". Both event-sourced and durable-state persistence are very useful for implementing this.
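A rough sketch of what that can look like with durable state, where the worker passivates itself on request and is rehydrated simply by being started again with the same PersistenceId (all names are assumptions):

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.state.scaladsl.{DurableStateBehavior, Effect}

object Worker {
  sealed trait Command
  final case class SetResult(result: String) extends Command
  case object Passivate extends Command

  final case class State(result: Option[String])

  def apply(persistenceId: PersistenceId): Behavior[Command] =
    DurableStateBehavior[Command, State](
      persistenceId,
      emptyState = State(None),
      commandHandler = (state, command) =>
        command match {
          case SetResult(r) => Effect.persist(State(Some(r)))
          case Passivate    => Effect.stop() // frees the memory; the stored state remains for rehydration
        }
    )
}
```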
Note that it's absolutely possible to run a single-node Akka Cluster and have sharding. Sharding brings a notion of an "entity", which has a string name and is conceptually immortal/eternal (unlike an actor, which has a defined birth-to-death lifecycle). Sharding then has a given entity be incarnated by at most one actor at any given time in a cluster (I'm ignoring the multiple-datacenter case: if multiple datacenters are in use, you're probably going to want event sourced persistence). Once you have an EntityRef from sharding, the EntityRef will refer to whatever the current incarnation is: if a message is sent to the EntityRef and there's no living incarnation, a new incarnation is spawned. If the behavior for that TypeKey which was provided to sharding is a persistent behavior, then the persisted state will be recovered. Sharding can also implement passivation directly (with a few out-of-the-box strategies supported).
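A sketch of that wiring with the typed sharding API, reusing the hypothetical Worker from above (the names and the use of requestId as the entity id are assumptions):

```scala
import akka.actor.typed.ActorSystem
import akka.cluster.sharding.typed.scaladsl.{ClusterSharding, Entity, EntityTypeKey}
import akka.persistence.typed.PersistenceId

object WorkerSharding {
  // The entity type key; the requestId doubles as the entity id.
  val TypeKey: EntityTypeKey[Worker.Command] = EntityTypeKey[Worker.Command]("Worker")

  def init(system: ActorSystem[_]): Unit = {
    ClusterSharding(system).init(Entity(TypeKey) { entityContext =>
      Worker(PersistenceId(entityContext.entityTypeKey.name, entityContext.entityId))
    })
  }

  // Messaging the EntityRef spawns an incarnation if none is alive; a persistent
  // behavior then recovers its stored state before handling the message.
  def storeResult(system: ActorSystem[_], requestId: String): Unit =
    ClusterSharding(system).entityRefFor(TypeKey, requestId) ! Worker.SetResult("done")
}
```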
You can implement similar functionality yourself (for situations where there aren't many children of the dispatcher, a simple map in the dispatcher and asks/watches will work).
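The hand-rolled version of that bookkeeping might look roughly like this, again with assumed names and reusing the hypothetical Worker:

```scala
import akka.actor.typed.{ActorRef, Behavior}
import akka.actor.typed.scaladsl.Behaviors
import akka.persistence.typed.PersistenceId

object SimpleDispatcher {
  sealed trait Command
  final case class NewJob(requestId: String) extends Command
  private final case class WorkerStopped(requestId: String) extends Command

  def apply(): Behavior[Command] = running(Map.empty)

  private def running(workers: Map[String, ActorRef[Worker.Command]]): Behavior[Command] =
    Behaviors.receive[Command] { (context, message) =>
      message match {
        case NewJob(requestId) =>
          val worker = context.spawn(
            Worker(PersistenceId("Worker", requestId)), s"worker-$requestId")
          // Remove the map entry again when the worker stops (e.g. after passivating itself).
          context.watchWith(worker, WorkerStopped(requestId))
          running(workers + (requestId -> worker))
        case WorkerStopped(requestId) =>
          running(workers - requestId)
      }
    }
}
```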
The Akka Platform Guide tutorial works an example using cluster sharding and persistence (in this case, it's event sourced, but the durable state APIs are basically the same, especially if you ignore the CQRS bits).

When should Akka BackoffSupervisor OnStop be used instead of OnFailure?

We have been using Akka's BackoffSupervisor for a few cluster singleton actors that are critical for the system and should always be present. These actors are persistent, so in case of a DB failure they need to be restarted. Our code used the OnFailure condition. However, when the child's persistence failed due to a database error, both the child and the backoff supervisor were stopped and not restarted.
Should we use OnStop instead of OnFailure? It's not explained when each of them should be used, and since the child actor fails in case of a database error, our assumption was that we could use OnFailure. But perhaps, since an actor is unconditionally stopped on a persistence error, OnFailure is never triggered and OnStop should be used instead?
And the answer is "yes": when the supervised actor is stopped unconditionally, that triggers OnStop, which can be used to start a new incarnation of it.
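A sketch of the OnStop variant (CriticalChild stands in for your persistent singleton actor); it re-creates the child with backoff whenever it stops, which is exactly what a persistence failure causes:

```scala
import scala.concurrent.duration._
import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.{BackoffOpts, BackoffSupervisor}

// Stand-in for your persistent singleton actor.
class CriticalChild extends Actor {
  def receive: Receive = { case _ => () }
}

object OnStopSupervision extends App {
  val system = ActorSystem("app")

  val supervisorProps = BackoffSupervisor.props(
    BackoffOpts.onStop(                 // re-create the child whenever it *stops*,
      Props[CriticalChild](),           // which is what a persistence failure causes
      childName = "critical-child",
      minBackoff = 3.seconds,
      maxBackoff = 30.seconds,
      randomFactor = 0.2
    )
  )

  system.actorOf(supervisorProps, "critical-child-supervisor")
}
```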

Is there a way to achieve service downgrade in akka cluster sharding?

I'm trying to build an Akka Cluster ShardRegion that might need to be downgraded in the production environment when a bug occurs. However, instead of unregistering it by calling
ClusterClientReceptionist.get(nodeActorSystem).unregisterService(shardRegion)
which would terminate the ShardRegion and its child actors once all messages are consumed before the PoisonPill, my sharded child actors have internal state and purposes that still need to be accomplished. I need an elegant way to slowly wind the ShardRegion down so that any session in flight can finish, e.g. any new message with a different EntityId will be sent elsewhere.
I haven't yet found any means to downgrade it, or to simply stop new sharded actors from popping up on the ShardRegion. Is this even achievable with an Akka Cluster ShardRegion?
You can accomplish part of this by specifying a custom stopMessage. The shard region will send this command to the entity actors when they are to be passivated or rebalanced. The default is PoisonPill, but a custom one allows the entity actors to do whatever they need to do to shut down (they do need to eventually stop themselves in this scenario).
If you're triggering a rebalance, the messages to the shard will be buffered until all the active entities in that shard have stopped, which may qualify as "any new message with a different entity ID will be sent elsewhere". Note that messages which are being sent outside of cluster sharding (i.e. directly between entity actors) will still be delivered normally (until said entity actors stop).
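With the typed sharding API, the custom stop message is set when the entity is initialized; here is a rough sketch with assumed names (Session, Handle, GracefulStop). Classic sharding has an equivalent handOffStopMessage parameter on ClusterSharding.start:

```scala
import akka.actor.typed.{ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors
import akka.cluster.sharding.typed.scaladsl.{ClusterSharding, Entity, EntityTypeKey}

// Hypothetical sharded session entity with a custom stop message.
object Session {
  sealed trait Command
  final case class Handle(payload: String) extends Command
  case object GracefulStop extends Command

  def apply(entityId: String): Behavior[Command] =
    Behaviors.receiveMessage {
      case Handle(payload) =>
        // ... do the session's work ...
        Behaviors.same
      case GracefulStop =>
        // finish any in-flight work, then stop (the entity must eventually stop itself)
        Behaviors.stopped
    }
}

object SessionSharding {
  val TypeKey: EntityTypeKey[Session.Command] = EntityTypeKey[Session.Command]("Session")

  def init(system: ActorSystem[_]): Unit = {
    ClusterSharding(system).init(
      Entity(TypeKey)(entityContext => Session(entityContext.entityId))
        // Sent instead of PoisonPill on passivation or rebalance.
        .withStopMessage(Session.GracefulStop)
    )
  }
}
```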

Akka actor read state from database

I would like to ask about a pattern for when an actor's state must be initialized from a database. I have a DAO which returns a Future[...], and normally, to stay non-blocking, a message would be sent to the actor when the future completes. But this case is different: I cannot process any message from the actor's mailbox before initialization completes. Is the only way to block the actor's thread while waiting for the database future to complete?
The most trivial approach would be to define two receive methods, e.g. initializing and initialized, starting with def receive = initializing. Inside your initializing state, you could simply send back a message like InitializationNotReady to tell the other actor that it should try again later. After your actor is initialized, you switch to the new state with context become initialized, where you can operate normally.
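A rough classic-actors sketch of that approach (Dao, DbState and the message names are assumptions); it also shows the usual non-blocking trick of piping the Future result back to the actor as a message:

```scala
import akka.actor.Actor
import akka.pattern.pipe
import scala.concurrent.Future

// Assumed types: the DAO returns the persisted state asynchronously.
final case class DbState(data: Map[String, String])
trait Dao { def loadState(): Future[DbState] }

case object InitializationNotReady

class StatefulActor(dao: Dao) extends Actor {
  import context.dispatcher

  // Kick off the non-blocking load and deliver the result to ourselves as a message.
  override def preStart(): Unit = dao.loadState().pipeTo(self)

  def receive: Receive = initializing

  def initializing: Receive = {
    case state: DbState => context.become(initialized(state))
    case _              => sender() ! InitializationNotReady // caller should retry later
  }

  def initialized(state: DbState): Receive = {
    case msg => // normal processing, using the loaded state
  }
}
```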
Beyond that, another good approach could be to have a look at Akka Persistence. It enables stateful actors to persist their internal state so that it can be recovered when an actor is started, restarted after a JVM crash or by a supervisor, or migrated in a cluster.
In your case, you can restore your state from a database, since there are multiple storage options for Akka Persistence; you can find them here. After that recovery, you can receive messages as usual.

Does a new instance of an actor get created when there are too many messages?

I recently learned about Akka, but there are some ideas I can't grasp.
My question is: if there are too many messages in the queue, will a new actor be created?
In many frameworks, for example, when an HTTP request message comes in and the framework finds that the current "worker" is busy, the framework will create another "worker" to process the new message in another thread.
But it seems Akka doesn't work this way; there is only one actor instance.
So I think the "busy actor" will block the queue, which will hurt throughput and performance. Am I correct?
Each Actor stores its messages in a Mailbox.
http://doc.akka.io/docs/akka/current/scala/mailboxes.html
The default mailbox is unbounded and non-blocking. If your actor cannot process messages quickly enough, its mailbox balloons in size and consumes an increasing amount of RAM. You can configure Akka to use a bounded, blocking mailbox which will block the sender when over capacity.
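A sketch of what that bounded-mailbox configuration can look like (the mailbox name and SlowConsumer are made up):

```scala
import akka.actor.{Actor, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

// Hypothetical consumer, only here so the sketch compiles.
class SlowConsumer extends Actor {
  def receive: Receive = { case _ => Thread.sleep(100) }
}

object BoundedMailboxExample extends App {
  // A bounded mailbox: local senders block for up to the push timeout
  // when 1000 messages are already queued.
  val config = ConfigFactory.parseString(
    """
    bounded-mailbox {
      mailbox-type = "akka.dispatch.BoundedMailbox"
      mailbox-capacity = 1000
      mailbox-push-timeout-time = 10s
    }
    """).withFallback(ConfigFactory.load())

  val system = ActorSystem("demo", config)
  val consumer = system.actorOf(Props[SlowConsumer]().withMailbox("bounded-mailbox"), "slow-consumer")
}
```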
If you would like to dynamically manage a pool of actors, look into Routing strategies.
http://doc.akka.io/docs/akka/2.4.1/scala/routing.html
You can create a Router Actor that receives messages and passes them to routee actors. The Router also manages the routee pool and can dynamically generate routees as needed.
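A sketch of a dynamically resized pool (PoolWorker is a made-up routee class):

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.{DefaultResizer, RoundRobinPool}

// Hypothetical routee, only here so the sketch compiles.
class PoolWorker extends Actor {
  def receive: Receive = { case job => /* process the job */ () }
}

object RouterExample extends App {
  val system = ActorSystem("demo")

  // Pool that grows from 2 up to 10 routees when mailboxes back up, and shrinks again later.
  val resizer = DefaultResizer(lowerBound = 2, upperBound = 10)
  val workers = system.actorOf(
    RoundRobinPool(nrOfInstances = 2, resizer = Some(resizer)).props(Props[PoolWorker]()),
    "workers")

  workers ! "some job" // the router picks a routee for each message
}
```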
Also, if you use Futures and callback-based asynchronous execution, your actors will not block on HTTP requests.
TL;DR:
If you send messages faster than your Actor can process them, you will eventually exhaust memory (with the default unbounded mailbox) or start dropping messages (with a bounded one).
Longer answer:
As I understand it, every Akka actor has a queue (its mailbox) associated with it, which holds all the messages it receives.
If you send messages to this actor faster than it can process them, the queue keeps growing, since the queued messages are kept in RAM.
Akka does not spawn another actor on the fly to help out, because the messages in the mailbox are processed in order, and that ordering guarantee would be broken if there were more than one consumer.
I would suggest you take a look at Akka Streams, this is a higher level API built on top of actors, and guards you against this kind of thing by providing backpressure throughout your system. This means that if the actor you're sending messages to is slower than whoever is producing the messages, the consumer will ask the producer to slow down, and will not overflow your Actor's queue.
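A tiny illustration of that backpressure, where slowProcess is a made-up slow stage; the source is only pulled as fast as the downstream can keep up:

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

object BackpressureExample extends App {
  implicit val system: ActorSystem = ActorSystem("demo")
  import system.dispatcher

  // Hypothetical slow step; mapAsync limits it to 4 in-flight calls at a time.
  def slowProcess(n: Int): Future[Int] = Future { Thread.sleep(10); n * 2 }

  // The fast source is only pulled as quickly as the downstream stages can cope,
  // so no unbounded queue builds up anywhere.
  Source(1 to 1000000)
    .mapAsync(parallelism = 4)(slowProcess)
    .runWith(Sink.foreach(println))
}
```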