I have the following config
akka {
  actor {
    deployment {
      /my-router {
        dispatcher = akka.actor.my-dispatcher
        router = round-robin-pool
        nr-of-instances = 100
        cluster {
          enabled = on
          max-nr-of-instances-per-node = 30
        }
      }
    }
    my-dispatcher {
      type = Dispatcher
      executor = "fork-join-executor"
      fork-join-executor {
        parallelism-min = 4
        parallelism-factor = 2.0
        parallelism-max = 20
      }
    }
  }
}
I've found (with the help of VisualVM) that no threads of my-dispatcher are used. However, if I specify my-dispatcher via .withDispatcher("akka.actor.my-dispatcher") when I create the Props for my router via FromConfig, I can observe those threads. I can state that I observe them because I see threads with names like actorSystemName-akka.actor.my-dispatcher-8.
So my questions are:
How do I set a dispatcher for a router via config?
Will this dispatcher be used for the routees (which are obviously children of the router)?
What are the differences between specifying the dispatcher via config and via withDispatcher?
I've also tried surrounding the dispatcher setting in the config with quotes (""), but still didn't observe threads with the dispatcher's name in VisualVM, so do thread names follow the pattern {actorSystemName}-{dispatcher}-{number}?
EDIT
I've found that the pool-dispatcher property can be used to set the router's dispatcher for its children (routees). But FromConfig, which extends Pool, does not override the usePoolDispatcher method. So one more question: is this (usePoolDispatcher not being overridden in FromConfig) intentional, or is FromConfig simply not designed for such usage?
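For reference, the inline pool-dispatcher form I'm referring to looks roughly like this (following the routing documentation; the parallelism values are just placeholders):

akka.actor.deployment {
  /my-router {
    router = round-robin-pool
    nr-of-instances = 100
    # dispatcher to be used by the routees created by this pool
    pool-dispatcher {
      fork-join-executor.parallelism-min = 5
      fork-join-executor.parallelism-max = 5
    }
  }
}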
Related
I have an actor that represents a worker for a heavy, long-running job:
class Worker extends Actor {
  override def receive: Receive = {
    case "doJob" =>
      Thread.sleep(999999)
      sender ! "JobResult"
  }
}
I would like to limit the job queue and explicitly reject the user if the queue is full. What is the best practice for implementing this logic? Should I use bounded mailboxes or some dispatcher actor that monitors the job queue? Something like this:
class Dispatcher(worker: ActorRef) extends Actor {
  val MAX_JOBS = 10
  var jobs = 0

  override def receive: Receive = {
    case "newJob" =>
      if (jobs >= MAX_JOBS) sender ! "Try later"
      else {
        jobs += 1
        worker ! "doJob"
      }
    case "JobResult" =>
      jobs -= 1
  }
}
Also, I'm not sure how to properly handle failures in that case...
I think the best practice is to use a bounded mailbox for the worker actor.
You can then configure the bounded mailbox in configuration like this:
bounded-mailbox {
  mailbox-type = "akka.dispatch.BoundedMailbox"
  mailbox-capacity = 1000
  mailbox-push-timeout-time = 10s
}

akka.actor.mailbox.requirements {
  "akka.dispatch.BoundedMessageQueueSemantics" = bounded-mailbox
}
You can also specify the type of mailbox you want (from the built-in types) or create your own custom mailbox with its own traits and specification (which messages are processed first, etc.).
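The akka.actor.mailbox.requirements mapping takes effect when something (the actor or its dispatcher) declares which message-queue semantics it needs; one way to do that is the RequiresMessageQueue marker trait on the actor. A minimal sketch of the worker doing that, assuming the bounded-mailbox config above, could look like this:

import akka.actor.Actor
import akka.dispatch.{BoundedMessageQueueSemantics, RequiresMessageQueue}

// The RequiresMessageQueue marker makes Akka pick the mailbox mapped to
// BoundedMessageQueueSemantics in akka.actor.mailbox.requirements,
// i.e. the bounded-mailbox defined above.
class Worker extends Actor with RequiresMessageQueue[BoundedMessageQueueSemantics] {
  override def receive: Receive = {
    case "doJob" =>
      Thread.sleep(999999) // simulate a long-running job
      sender ! "JobResult"
  }
}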
I think that in your scenario it is best:
1. To create your own mailbox with its own error mechanism and custom cleanup/bounding limitations, based on the bounded mailbox traits (a sketch follows below).
2. To attach your dispatcher to the custom mailbox you created, through configuration.
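As a rough sketch of point 1 (my own illustration following the custom-mailbox pattern from the Akka docs; the class names, the capacity handling, and the mailbox-capacity key are all just examples):

import java.util.concurrent.ConcurrentLinkedQueue

import akka.actor.{ActorRef, ActorSystem}
import akka.dispatch.{Envelope, MailboxType, MessageQueue, ProducesMessageQueue}
import com.typesafe.config.Config

// Message queue with a hard cap: messages beyond the cap are dropped in this sketch;
// a real implementation would route them to dead letters or notify the sender.
class RejectingMessageQueue(capacity: Int) extends MessageQueue {
  private val queue = new ConcurrentLinkedQueue[Envelope]()

  def enqueue(receiver: ActorRef, handle: Envelope): Unit =
    if (queue.size < capacity) queue.offer(handle)
    // else: over capacity, dropped in this sketch

  def dequeue(): Envelope = queue.poll()
  def numberOfMessages: Int = queue.size
  def hasMessages: Boolean = !queue.isEmpty

  def cleanUp(owner: ActorRef, deadLetters: MessageQueue): Unit =
    while (hasMessages) deadLetters.enqueue(owner, dequeue())
}

// MailboxType referenced from configuration via its fully qualified class name.
class RejectingMailbox(settings: ActorSystem.Settings, config: Config)
    extends MailboxType with ProducesMessageQueue[RejectingMessageQueue] {

  override def create(owner: Option[ActorRef], system: Option[ActorSystem]): MessageQueue =
    new RejectingMessageQueue(config.getInt("mailbox-capacity"))
}

You would then reference it from a mailbox configuration block by setting mailbox-type to the fully qualified class name of RejectingMailbox, in the same way bounded-mailbox above points at akka.dispatch.BoundedMailbox.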
In my system I want an actor A to send the same messages to actors B, C, and D.
Instead of creating the three actors, I was thinking of just combining their behaviors with the And behavior combinator, and then passing that combined behavior to A.
If I do this, how many actors will get created? Will I get just one actor with three behaviors in it, or three actors with separate behaviors?
Here is my real code using the non-And approach, for concreteness (see how ReplyGenerator gets passed the references to other actors):
object Foobar {
  def foobar(): Behavior[Request] =
    ContextAware[Request] { context =>
      val foo1 = context.spawn(Props(Foo1.behavior()), "foo1")
      val foo2 = context.spawn(Props(Foo2.behavior()), "foo2")
      val foo3 = context.spawn(Props(Foo3.behavior()), "foo3")
      val generator = context.spawn(Props(ReplyGenerator.behavior(List(foo1, foo2, foo3))), "generator")
      Static {
        case request: Request =>
          generator ! request
      }
    }
}
and here is the ReplyGenerator behavior that sends the same message to all subscribers:
object ReplyGenerator {
  def behavior(subscribers: List[ActorRef[Reply]]): Behavior[Request] =
    Static {
      case request: Request =>
        subscribers.foreach(_ ! Reply.empty)
    }
}
Considering that I want the actors foo1, foo2, and foo3 to run in parallel, can the And combinator be used here instead?
Thank you.
If you mean execution parallelism then you’ll have to create separate Actors (by separate spawn calls) as you do in the example code—using And will only create a single Actor that runs the contained behaviors one after the other.
Is there a way to set configuration for remote actor selection, similar to remote actor creation as specified in the Akka docs:
akka {
  actor {
    deployment {
      /sampleActor {
        remote = "akka.tcp://sampleActorSystem@127.0.0.1:2553"
      }
    }
  }
}
I would prefer not to define a custom variable for that.
system.actorSelection("sampleActor")
There are only two forms of the actor selection method, from the docs:
def actorSelection(path: ActorPath): ActorSelection

Construct an akka.actor.ActorSelection from the given path, which is parsed for wildcards (these are replaced by regular expressions internally). No attempt is made to verify the existence of any part of the supplied path; it is recommended to send a message and gather the replies in order to resolve the matching set of actors.

def actorSelection(path: String): ActorSelection

Construct an akka.actor.ActorSelection from the given path, which is parsed for wildcards (these are replaced by regular expressions internally). No attempt is made to verify the existence of any part of the supplied path; it is recommended to send a message and gather the replies in order to resolve the matching set of actors.

And an ActorPath is just created from a string anyway:

def fromString(s: String): ActorPath

Parse string as actor path; throws java.net.MalformedURLException if unable to do so.
So there isn't a direct way to do actor selection just by setting a particular value in config. However it is quite easy to pull a value from config and use it for actor selection. Given the config:
akka {
  actor {
    selections {
      sampleActor {
        path = "akka.tcp://sampleActorSystem@127.0.0.1:2553/user/sampleActor"
      }
    }
  }
}
You could use:
val sampleActorSelection =
  system.actorSelection(
    system.settings.config.getString("akka.actor.selections.sampleActor.path"))
If this was a method you found yourself using frequently, you could use an implicit class to add a helper method to system:
implicit class ActorSystemExtension(system: ActorSystem) {
  def actorSelectionFromConfig(actorName: String): ActorSelection =
    system.actorSelection(
      system.settings.config.getString(s"akka.actor.selections.${actorName}.path"))
}
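With that implicit class in scope, usage against the sample config above would look roughly like this:

val sampleActorSelection = system.actorSelectionFromConfig("sampleActor")
sampleActorSelection ! "ping" // hypothetical message, just to show the call site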
I am brand new to Akka, but my understanding of the Stop directive is that it is used inside SupervisorStrategies when the child should be considered permanently out of service, yet there is still some way to handle that total outage.
If that understanding is correct, then what I would like to do is have some kind of a “backup actor” that should be engaged after the normal/primary child is stopped and used from that point forward as a fallback. For example, say I have a parent actor who has a child actor - Notifier - whose job it is to send emails. If the Notifier truly dies (say, the underlying mail server goes offline), a backup to this actor might be another actor, say, QueueClient, that sends the notification request to a message broker, where the message will be queued up and replayed at a later time.
How can I define such a SupervisorStrategy to have this built-in fault tolerance/actor backup inside of it? Please show code examples, it's the only way I will learn!
Overriding Supervisor Strategies beyond the default directives is not commonly done, and not really necessary in your case. A solution would be to watch the child actor from the parent, and when the parent finds that the child is stopped, engage the backup actor.
import akka.actor.SupervisorStrategy.Stop
import akka.actor._

class Parent extends Actor {
  var child: ActorRef = context.actorOf(Props[DefaultChild])
  context.watch(child)

  def receive = {
    case Terminated(actor) if actor == child =>
      child = context.actorOf(Props[BackupChild])
  }

  override def supervisorStrategy = OneForOneStrategy() {
    case ex: IllegalStateException => Stop
  }
}

class DefaultChild extends Actor {
  def receive = { case _ => throw new IllegalStateException("whatever") }
}

class BackupChild extends Actor {
  def receive = { case _ => }
}
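If the parent is also the entry point for notification requests, a variant of the same idea (my own extension of the code above, reusing the same imports and child actors) could forward messages to whichever child is currently alive:

class ForwardingParent extends Actor {
  var child: ActorRef = context.actorOf(Props[DefaultChild])
  context.watch(child)

  def receive = {
    case Terminated(actor) if actor == child =>
      // the primary child is gone for good: switch to the backup and watch it too
      child = context.actorOf(Props[BackupChild])
      context.watch(child)
    case msg =>
      // callers never need to know which implementation is currently active
      child forward msg
  }

  override def supervisorStrategy = OneForOneStrategy() {
    case _: IllegalStateException => Stop
  }
}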
I'm learning about remote actors in Akka 2.1 and I tried to adapt the counter example provided by Typesafe.
I implemented a quick'n'dirty console UI to send ticks, and to quit while asking for (and showing) the current count.
The idea is to start a master node that will run the Counter actor, and some client nodes that will send messages to it through remoting. However, I'd like to achieve this through configuration and minimal changes to code, so that by changing the configuration local actors could be used.
I found this blog entry about a similar problem, where it was necessary that all API calls go through one actor even though there are many instances running.
I wrote a similar configuration but I can't get it to work. My current code does use remoting, but it creates a new actor on the master for each new node, and I can't get it to connect to the existing actor without explicitly giving it the path (defeating the point of the configuration). This is not what I want, since state cannot be shared between JVMs this way.
Full runnable code is available through a git repo.
This is my config file:
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
    deployment {
      /counter {
        remote = "akka://ticker@127.0.0.1:2552"
      }
    }
  }
  remote {
    transport = "akka.remote.netty.NettyRemoteTransport"
    log-sent-messages = on
    netty {
      hostname = "127.0.0.1"
    }
  }
}
And full source
import akka.actor._
import akka.pattern.ask
import scala.concurrent.duration._
import akka.util.Timeout
import scala.util._

case object Tick
case object Get

class Counter extends Actor {
  var count = 0
  val id = math.random.toString.substring(2)
  println(s"\nmy name is $id\ni'm at ${self.path}\n")

  def log(s: String) = println(s"$id: $s")

  def receive = {
    case Tick =>
      count += 1
      log(s"got a tick, now at $count")
    case Get =>
      sender ! count
      log(s"asked for count, replied with $count")
  }
}

object AkkaProjectInScala extends App {
  val system = ActorSystem("ticker")
  implicit val ec = system.dispatcher

  val counter = system.actorOf(Props[Counter], "counter")

  def step {
    print("tick or quit? ")
    readLine() match {
      case "tick" => counter ! Tick
      case "quit" => return
      case _ =>
    }
    step
  }

  step

  implicit val timeout = Timeout(5.seconds)
  val f = counter ? Get
  f onComplete {
    case Failure(e) => throw e
    case Success(count) => println("Count is " + count)
  }

  system.shutdown()
}
I used sbt run and, in another window, sbt run -Dakka.remote.netty.port=0 to run it.
I found out I can use some sort of pattern. Akka remoting only allows deploying on remote systems (I can't find a way to make it look up an actor on a remote system just through configuration... am I mistaken here?).
So I can deploy a "scout" that will pass back the ActorRef. Runnable code is available on the original repo under the branch "scout-hack", because this feels like a hack. I will still appreciate a configuration-based solution.
The actor
case object Fetch

class Scout extends Actor {
  def receive = {
    case Fetch => sender ! AkkaProjectInScala._counter
  }
}
Creating the Counter actor is now lazy:
lazy val _counter = system.actorOf(Props[Counter], "counter")
So it is only created on the master (determined by the port) and can be fetched like this:
val counter: ActorRef = {
  val scout = system.actorOf(Props[Scout], "scout")
  val ref = Await.result(scout ? Fetch, timeout.duration) match {
    case r: ActorRef => r
  }
  scout ! PoisonPill
  ref
}
And full config
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
    deployment {
      /scout {
        remote = "akka://ticker@127.0.0.1:2552"
      }
    }
  }
  remote {
    transport = "akka.remote.netty.NettyRemoteTransport"
    log-sent-messages = on
    netty {
      hostname = "127.0.0.1"
    }
  }
}
EDIT: I also found a clean-ish way: check the configuration for "counterPath" and, if present, use actorFor(path), else create the actor. It's nice, since you can inject the master when running, and the code is much cleaner than with the "scout", but it still has to decide whether to look up or create an actor. I guess this cannot be avoided.
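A rough sketch of that approach (my own illustration; counterPath is the config key mentioned above, and actorFor is the Akka 2.1-era lookup):

val config = system.settings.config
val counter: ActorRef =
  if (config.hasPath("counterPath"))
    system.actorFor(config.getString("counterPath")) // look up the existing remote counter
  else
    system.actorOf(Props[Counter], "counter")        // we are the master: create it locally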
I tried your git project and it actually works fine, aside from a compilation error, and the fact that you must pass the -Dakka.remote.netty.port=0 parameter to the JVM when starting the sbt session, not as a parameter to run.
You should also understand that you don't have to start the Counter actor in both processes. In this example it's intended to be created from the client and deployed on the server (port 2552). You don't have to start it on the server. It should be enough to create the actor system on the server for this example.
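In other words, the server-side process could be as small as this (my sketch; TickerServer is a made-up name, and the port comes from -Dakka.remote.netty.port=2552 or the config):

import akka.actor.ActorSystem

// Minimal "server": just start the actor system and keep the JVM alive.
// The client's deployment config deploys /counter onto this system remotely.
object TickerServer extends App {
  val system = ActorSystem("ticker")
}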