Akka Actor Creation Issue - akka

I am trying to create an actor that can be accessed remotely and locally.
The actor created below throws an exception. Any idea?
val myLocalActor2 = system.actorOf(Props[ActorNodes], name =
"akka://JtsSystem#127.0.0.1:2552/MyOwnRef")
Is this the best and only way to programmatically create an actor that is both local and remote?

The "name" in actorOf is just a path segment that will identify the newly created actor. See here for more information on that: http://doc.akka.io/docs/akka/2.0.1/general/addressing.html
Also, having something which is both local and remote doesn't make sense, but I assume you mean that it should be local and accessible from some remote node? If so, just create it with actorOf, and other nodes can look it up using actorFor:
val remoteActor = system.actorFor("akka://CalculatorApplication@127.0.0.1:2552/user/simpleCalculator")
Read more about it here: http://doc.akka.io/docs/akka/2.0.1/scala/remoting.html
In general, please read the documentation; we've poured many hours into it, and it feels wasteful when people don't read it.

Got it working (apart from the global registry):
case class ActorMsg(msg: String)

sealed class ActorNodes extends Actor {
  override def receive = {
    case ActorMsg(msg) => println("Actor Msg " + msg)
    case _             => println("Everything else")
  }
}

object JtsListener extends App {
  val sys = "JtsSystem"
  val system = ActorSystem(sys)
  // println("System: " + system.settings)

  val myLocalActor1 = system.actorOf(Props[ActorNodes], "MyLocalRef")
  println("MyLocalActor 1: " + myLocalActor1 + " has path " + myLocalActor1.path)
  myLocalActor1 ! ActorMsg("Hello")

  val myLocalActor2 = system.actorFor("akka://" + sys + "@127.0.0.1:2552/user/MyLocalRef")
  println("MyLocalActor 2: " + myLocalActor2 + " has path " + myLocalActor2.path)
  myLocalActor2 ! ActorMsg("Hello Again")
}
with application.conf (I am looking at doing this programmatically on the ActorSystem).
Thx.
The next step would be to broadcast that information so that all services are aware of where each actor is, probably using the idea outlined here:
http://blog.vasilrem.com/even-simpler-scalability-with-akka-through-re

Related

How to restart an Akka actor from within itself?

When a child actor receives a custom RESTART message, the actor should restart itself.
(The purpose is to reset the actor's member variables and reload external state from the DB, but not to clear the actor's internal message queue.)
To implement the restart, one workaround is for the child actor to throw a custom exception, and for the parent actor to configure its OneForOneStrategy to restart the child for that specific exception type.
I'm wondering if there's a more straightforward way to do the restart?
The purpose is to reset the actor member variables, reload external state from db
I guess this is probably the biggest issue, because loading external state might take time and may be a blocking operation, so the result of the operation is (or should be) a Future. While that future is loading, your actor should ignore all other messages until the state from the DB has been received.
I think the context.become method might help you here: you can switch the receive method to another one that defers every message except the one carrying the DB state, and then switch back to the regular receive.
Please see the code example below:
import akka.actor.Actor
import akka.pattern._
import scala.concurrent.Future
import scala.collection.mutable

// Database API and external state model example
case class DbExternalState()

trait Database {
  def loadExternalState: Future[DbExternalState]
}

import RestartActor._

class RestartActor(database: Database) extends Actor {
  private var state = ActorState()
  private val suspendedMessages = mutable.Queue[Any]()

  override def receive: Receive = defaultReceive

  private def defaultReceive: Receive = {
    case Restart => restartActorStart()
  }

  /**
   * Wait until the message with the external state is received; buffer all
   * other messages in a queue so they can be replayed afterwards.
   */
  private def suspendedReceive: Receive = {
    case ExternalStateLoaded(state) => restartActorFinish(state)
    case message                    => suspendedMessages.enqueue(message)
  }

  private def restartActorStart(): Unit = {
    import context.dispatcher
    context.become(suspendedReceive)
    database.loadExternalState.map(ExternalStateLoaded) pipeTo self
  }

  private def restartActorFinish(dbExternalState: DbExternalState): Unit = {
    state = ActorState.initial(dbExternalState)
    context.become(defaultReceive) // Return to the normal message handling flow
    suspendedMessages.foreach(message => self ! message)
    suspendedMessages.clear()
  }
}

object RestartActor {
  // Restart command and the message carrying the freshly loaded state
  case object Restart
  case class ExternalStateLoaded(state: DbExternalState)

  case class ActorState(internalState: List[String] = Nil, externalState: DbExternalState = DbExternalState())

  object ActorState {
    def initial(externalState: DbExternalState): ActorState = ActorState(externalState = externalState)
  }
}
Please let me know if these suggestions work for you. I hope this helps!

Akka persistence receiveRecover receives snapshots that are from other actor instances

I am experiencing unexpected behaviour when using Akka persistence. I am fairly new to Akka so apologies in advance if I have missed something obvious.
I have an actor called PCNProcessor. I create an actor instance for every PCN id I have. The problem I experience is that when I create the first actor instance, all works fine and I receive the Processed response. However, when I create further PCNProcessor instances using different PCN ids, I get the Already processed PCN response.
Essentially, for some reason the snapshot stored as part of the first PCN id processor is reapplied to the subsequent PCN id instances even though it does not relate to that PCN and the PCN id is different. To confirm this behaviour, I printed out a log in the receiveRecover, and every subsequent PCNProcessor instance receives snapshots that do not belong to it.
My question is:
Should I be storing the snapshots in a specific way so that they are keyed against the PCN id, and then filtering out snapshots that are not related to the PCN in context?
Or should the Akka framework take care of this behind the scenes, so that I should not have to worry about storing snapshots against the PCN id?
Source code for the actor is below. I do use sharding.
package com.abc.pcn.core.actors

import java.util.UUID
import akka.actor._
import akka.persistence.{AtLeastOnceDelivery, PersistentActor, SnapshotOffer}
import com.abc.common.AutoPassivation
import com.abc.pcn.core.events.{PCNNotProcessedEvt, PCNProcessedEvt}

object PCNProcessor {
  import akka.contrib.pattern.ShardRegion
  import com.abc.pcn.core.PCN

  val shardName = "pcn"

  val idExtractor: ShardRegion.IdExtractor = {
    case ProcessPCN(pcn) => (pcn.id.toString, ProcessPCN(pcn))
  }

  val shardResolver: ShardRegion.ShardResolver = {
    case ProcessPCN(pcn) => pcn.id.toString
  }

  // shard settings
  def props = Props(classOf[PCNProcessor])

  // command and response
  case class ProcessPCN(pcn: PCN)
  case class NotProcessed(reason: String)
  case object Processed
}

class PCNProcessor
  extends PersistentActor
  with AtLeastOnceDelivery
  with AutoPassivation
  with ActorLogging {

  import com.abc.pcn.core.actors.PCNProcessor._
  import scala.concurrent.duration._

  context.setReceiveTimeout(10.seconds)

  private val pcnId = UUID.fromString(self.path.name)
  private var state: String = "not started"

  override def persistenceId: String = "pcn-processor-${pcnId.toString}"

  override def receiveRecover: Receive = {
    case SnapshotOffer(_, s: String) =>
      log.info("Recovering. PCN ID: " + pcnId + ", State to restore: " + s)
      state = s
  }

  def receiveCommand: Receive = withPassivation {
    case ProcessPCN(pcn) if state == "processed" =>
      sender ! Left(NotProcessed("Already processed PCN"))
    case ProcessPCN(pcn) if pcn.name.isEmpty =>
      val error: String = "Name is invalid"
      persist(PCNNotProcessedEvt(pcn.id, error)) { evt =>
        state = "invalid"
        saveSnapshot(state)
        sender ! Left(NotProcessed(error))
      }
    case ProcessPCN(pcn) =>
      persist(PCNProcessedEvt(pcn.id)) { evt =>
        state = "processed"
        saveSnapshot(state)
        sender ! Right(Processed)
      }
  }
}
Update:
After logging the metadata of the received snapshot, I can see the problem is that the snapshotterId is not resolving properly: it is always set to the literal string pcn-processor-${pcnId.toString}, without the ${pcnId.toString} part being interpolated.
[INFO] [06/06/2015 09:10:00.329] [ECP-akka.actor.default-dispatcher-16] [akka.tcp://ECP#127.0.0.1:2551/user/sharding/pcn/16b3d4dd-9e0b-45de-8e32-de799d21e7c5] Recovering. PCN ID: 16b3d4dd-9e0b-45de-8e32-de799d21e7c5, Metadata of snapshot SnapshotMetadata(pcn-processor-${pcnId.toString},1,1433577553585)
I think you are misusing the Scala string interpolation feature.
Try in the following way:
override def persistenceId: String = s"pcn-processor-${pcnId.toString}"
Please note the use of s before the string literal.
Ok fixed this by changing the persistence id to the following line:
override def persistenceId: String = "pcn-processor-" + pcnId.toString
The original version, with the placeholder inside a plain (non-interpolated) string:
override def persistenceId: String = "pcn-processor-${pcnId.toString}"
only worked for persisting to the journal, not for snapshots.
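The difference between the two literals can be checked in plain Scala, independently of Akka persistence (the UUID value below is just an example):

```scala
// Hypothetical UUID value, for illustration only.
val pcnId = java.util.UUID.fromString("16b3d4dd-9e0b-45de-8e32-de799d21e7c5")

// Without the s prefix, ${...} is NOT evaluated: the literal characters survive.
val literal = "pcn-processor-${pcnId.toString}"

// With the s prefix, ${...} is evaluated and spliced into the string.
val interpolated = s"pcn-processor-${pcnId.toString}"

println(literal)      // pcn-processor-${pcnId.toString}
println(interpolated) // pcn-processor-16b3d4dd-9e0b-45de-8e32-de799d21e7c5
```

So the un-prefixed literal yields the same broken snapshotterId seen in the log above.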

Using Thread.sleep() inside an foreach in scala

I have a list of URLs in a List.
I want to fetch the data for each by calling WS.url(currurl).get(). However, I want to add a delay between requests. Can I use Thread.sleep(), or is there another way of doing this?
import play.api.libs.ws.WS
import scala.util.{Failure, Success}

// An implicit ExecutionContext must be in scope for onComplete.
one.foreach { currurl =>
  println("using " + currurl)
  val p = WS.url(currurl).get()
  p.onComplete {
    case Success(s) =>
      // do something
    case Failure(f) =>
      println("failed")
  }
}
Sure, you can call Thread.sleep inside your foreach function, and it will do what you expect.
That will tie up a thread, though. If this is just some utility that you need to run occasionally, then who cares; but if it's part of a server you are trying to write and you might tie up many threads, then you probably want to do better. One way to do better is to use Akka (it looks like you are using Play, so you are already using Akka) to implement the delay: write an actor that uses scheduler.schedule to arrange to receive a message periodically, and then handle one request each time a message is read. Note that Akka's scheduler itself ties up a thread, but it can then send periodic messages to an arbitrary number of actors.
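The same non-blocking idea can be sketched without Akka, using the JDK's ScheduledExecutorService; the URLs and the `delayed` helper below are illustrative assumptions, and the returned string stands in for a real WS.url(url).get() call:

```scala
import java.util.concurrent.{Executors, TimeUnit}
import scala.concurrent.{Future, Promise}
import scala.util.Try

// A single scheduler thread hands out timed work; nothing sleeps in between.
val scheduler = Executors.newSingleThreadScheduledExecutor()

// Run `work` after `delayMillis`, exposing the result as a Future.
def delayed[A](delayMillis: Long)(work: => A): Future[A] = {
  val p = Promise[A]()
  scheduler.schedule(new Runnable {
    def run(): Unit = { p.complete(Try(work)); () }
  }, delayMillis, TimeUnit.MILLISECONDS)
  p.future
}

val urls = List("http://a.example", "http://b.example", "http://c.example")

// Schedule each "request" at its own offset, 200 ms apart.
val results = urls.zipWithIndex.map { case (url, i) =>
  delayed(i * 200L) { "fetched " + url } // stands in for WS.url(url).get()
}
```

Each request is scheduled at a fixed offset rather than sleeping between calls, so no worker thread is blocked while waiting.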
You can do it with scalaz-stream:
import org.joda.time.format.DateTimeFormat
import scala.concurrent.duration._
import scalaz.stream._
import scalaz.stream.io._
import scalaz.concurrent.Task

type URL = String
type Fetched = String

val format = DateTimeFormat.mediumTime()

val urls: Seq[URL] =
  "http://google.com" :: "http://amazon.com" :: "http://yahoo.com" :: Nil

val fetchUrl = channel[URL, Fetched] { url =>
  Task.delay(s"Fetched " +
    s"url:$url " +
    s"at: ${format.print(System.currentTimeMillis())}")
}

val P = Process

val process =
  (P.awakeEvery(1.second) zipWith P.emitAll(urls))((b, url) => url).
    through(fetchUrl)

val fetched = process.runLog.run
fetched.foreach(println)
Output:
Fetched url:http://google.com at: 1:04:25 PM
Fetched url:http://amazon.com at: 1:04:26 PM
Fetched url:http://yahoo.com at: 1:04:27 PM

Creating a lazy actor router in Akka with timeout

I have spent the last two days learning about actors, and I want to create an expiring cache. We use a tenant model, so I want each tenant to be represented by an actor. I would like these actors to be created on demand and to time out after a period of being idle.
Since I am unaware of a provided solution, I have mocked up the following, and am looking for any critique or validation of the approach.
//A simple message carrying just the name of the actor
case class Message(name: String)

//Actor that will expire after a timeout period and stop itself
class ExpireActor extends Actor {
  val id = Random.nextInt(1000)
  context.setReceiveTimeout(100.milliseconds)

  def receive = {
    case Message(_) => println("Message: " + id + " " + System.currentTimeMillis())
    case ReceiveTimeout =>
      println("Timeout: " + id + " " + System.currentTimeMillis())
      self ! PoisonPill
  }
}

//Router for creating actors on demand
case class LazyRouter() extends RouterConfig {
  def routerDispatcher: String = Dispatchers.DefaultDispatcherId
  def supervisorStrategy: SupervisorStrategy = SupervisorStrategy.defaultStrategy

  def createRoute(routeeProvider: RouteeProvider): Route = {
    case (sender, Message(name)) =>
      routeeProvider.context
        .child(name)
        .map(a => List(Destination(sender, a)))
        .getOrElse {
          synchronized {
            routeeProvider.context
              .child(name) // Don't want to synchronize until I have to, so check existence again
              .map(a => List(Destination(sender, a)))
              .getOrElse {
                val ref = routeeProvider.context.actorOf(Props[ExpireActor], name)
                routeeProvider.registerRoutees(List(ref))
                List(Destination(sender, ref))
              }
          }
        }
  }
}
I'm not sure I'm fully on board with your approach. Actors are very lightweight; when they are not doing anything, they cost you nothing CPU-wise. Why not just pre-create all of the cache actors (one per possible tenant) up front, so the router does not need that scary synchronized block around routee creation? Then, instead of stopping actors that have been idle for a specific amount of time, just clear out their internal state (which I assume is the cached data) if you want to free up that memory. This will greatly simplify your code and make it more reliable (and probably faster to boot).
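As a minimal, actor-free sketch of that state-clearing idea (TenantCache and the injected clock are hypothetical names for illustration, not Akka API):

```scala
// Keep the per-tenant object alive, but drop its cached data when idle.
// The clock is injected so the idle behaviour can be tested deterministically.
final class TenantCache(idleTimeoutMillis: Long, now: () => Long) {
  private var data = Map.empty[String, String]
  private var lastAccess = now()

  private def expireIfIdle(): Unit = {
    if (now() - lastAccess > idleTimeoutMillis) data = Map.empty // clear state, keep object
    lastAccess = now()
  }

  def put(key: String, value: String): Unit = { expireIfIdle(); data += (key -> value) }
  def get(key: String): Option[String] = { expireIfIdle(); data.get(key) }
}
```

Inside an actor, the same check could live in the ReceiveTimeout handler, replacing the PoisonPill with a state reset.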

Akka 2.1 Remote: sharing actor across systems

I'm learning about remote actors in Akka 2.1, and I tried to adapt the counter example provided by Typesafe.
I implemented a quick'n'dirty console UI to send ticks, and to quit while asking for (and showing) the current count.
The idea is to start a master node that will run the Counter actor, and some client nodes that will send messages to it through remoting. However, I'd like to achieve this through configuration and minimal changes to code, so that by changing the configuration alone, local actors could be used.
I found this blog entry about a similar problem, where it was necessary for all API calls to go through one actor even though many instances were running.
I wrote a similar configuration but I can't get it to work. My current code does use remoting, but it creates a new actor on the master for each new node, and I can't get it to connect to the existing actor without explicitly giving it the path (defeating the point of the configuration). This is not what I want, since state cannot be shared between JVMs this way.
Full runnable code available through a git repo
This is my config file
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
    deployment {
      /counter {
        remote = "akka://ticker@127.0.0.1:2552"
      }
    }
  }
  remote {
    transport = "akka.remote.netty.NettyRemoteTransport"
    log-sent-messages = on
    netty {
      hostname = "127.0.0.1"
    }
  }
}
And full source
import akka.actor._
import akka.pattern.ask
import scala.concurrent.duration._
import akka.util.Timeout
import scala.util._

case object Tick
case object Get

class Counter extends Actor {
  var count = 0
  val id = math.random.toString.substring(2)

  println(s"\nmy name is $id\ni'm at ${self.path}\n")

  def log(s: String) = println(s"$id: $s")

  def receive = {
    case Tick =>
      count += 1
      log(s"got a tick, now at $count")
    case Get =>
      sender ! count
      log(s"asked for count, replied with $count")
  }
}

object AkkaProjectInScala extends App {
  val system = ActorSystem("ticker")
  implicit val ec = system.dispatcher

  val counter = system.actorOf(Props[Counter], "counter")

  def step {
    print("tick or quit? ")
    readLine() match {
      case "tick" => counter ! Tick
      case "quit" => return
      case _      =>
    }
    step
  }
  step

  implicit val timeout = Timeout(5.seconds)
  val f = counter ? Get
  f onComplete {
    case Failure(e)     => throw e
    case Success(count) => println("Count is " + count)
  }

  system.shutdown()
}
I used sbt run and in another window sbt run -Dakka.remote.netty.port=0 to run it.
I found out I can use some sort of pattern. Akka remoting only allows deploying on remote systems (I can't find a way to make it look up an actor on a remote system purely through configuration; am I mistaken here?).
So I can deploy a "scout" that will pass back the ActorRef. Runnable code is available in the original repo under the branch "scout-hack", because this feels like a hack. I would still appreciate a configuration-based solution.
The actor
case object Fetch

class Scout extends Actor {
  def receive = {
    case Fetch => sender ! AkkaProjectInScala._counter
  }
}
The Counter actor is now created lazily:
lazy val _counter = system.actorOf(Props[Counter], "counter")
So it only executes on the master(determined by the port) and can be fetched like this
val counter: ActorRef = {
  val scout = system.actorOf(Props[Scout], "scout")
  val ref = Await.result(scout ? Fetch, timeout.duration) match {
    case r: ActorRef => r
  }
  scout ! PoisonPill
  ref
}
And full config
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
    deployment {
      /scout {
        remote = "akka://ticker@127.0.0.1:2552"
      }
    }
  }
  remote {
    transport = "akka.remote.netty.NettyRemoteTransport"
    log-sent-messages = on
    netty {
      hostname = "127.0.0.1"
    }
  }
}
EDIT: I also found a clean-ish way: check the configuration for "counterPath" and, if present, use actorFor(path); otherwise create the actor. Nice, and you can inject the master when running; the code is much cleaner than with the "scout", but it still has to decide whether to look up or create the actor. I guess this cannot be avoided.
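The configuration side of that clean-ish way might look like the fragment below; counterPath is an assumed application-level key (read from code via system.settings.config, e.g. hasPath("counterPath") / getString("counterPath")), not a built-in Akka setting:

```
# Assumed custom key: when present, the client looks the counter up
# instead of creating it. Omit it on the master to create the actor there.
counterPath = "akka://ticker@127.0.0.1:2552/user/counter"

akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    transport = "akka.remote.netty.NettyRemoteTransport"
    netty {
      hostname = "127.0.0.1"
    }
  }
}
```

The code then only needs one branch: actorFor the configured path if the key exists, else actorOf locally.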
I tried your git project and it actually works fine, aside from a compilation error, and the fact that you must start the sbt session with -Dakka.remote.netty.port=0 as a parameter to the JVM, not as a parameter to run.
You should also understand that you don't have to start the Counter actor in both processes. In this example it is intended to be created from the client and deployed on the server (port 2552). You don't have to start it on the server; it should be enough to create the actor system there for this example.