I noticed that the actor first sends the notification message about the state change and only afterwards actually changes its state. Is that correct?
import akka.actor.{Actor, ActorRef}
import akka.pattern.pipe
import scala.concurrent.Future

class MyActor extends Actor {
  import context.dispatcher // ExecutionContext for map/pipeTo

  def receive = idle(Set.empty)

  def idle(isInSet: Set[String]): Receive = {
    case Add(key) =>
      // sending the result as a message back to our actor
      validate(key).map(Validated(key, _)).pipeTo(self)
      // waiting for validation
      context.become(waitForValidation(isInSet, sender()))
  }

  def waitForValidation(set: Set[String], source: ActorRef): Receive = {
    case Validated(key, isValid) =>
      val newSet = if (isValid) set + key else set
      // sending acknowledgement of completion
      source ! Continue // <-- here the notification is sent
      // go back to idle, accepting new requests
      context.become(idle(newSet)) // <-- and only here the state is changed
    case Add(key) =>
      sender() ! Rejected
  }

  def validate(key: String): Future[Boolean] = ???
}
// Messages
case class Add(key: String)
case class Validated(key: String, isValid: Boolean)
case object Continue
case object Rejected
You should probably consider moving become() before pipeTo(self) if you want the actor to receive the message in the waitForValidation state:
context.become(waitForValidation(isInSet, sender()))
validate(key).map(Validated(key, _)).pipeTo(self)
I agree that piping the message will put it in the actor's mailbox, and by the time the actor gets to processing it, it should already be in the new state, but most of the examples I have seen call become before piping, just to be on the safe side.
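For reference, a minimal sketch of the reordered idle handler (same types as in the question; either order works, because the piped Validated message is only processed after the current Add message has been handled):

def idle(isInSet: Set[String]): Receive = {
  case Add(key) =>
    // switch behavior first, then pipe the eventual validation result to self
    context.become(waitForValidation(isInSet, sender()))
    validate(key).map(Validated(key, _)).pipeTo(self)
}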
Following the Akka documentation, specifically the section 'The actor lifecycle', the output I get is not as documented. What I get is:
first started
second started
second stopped
Code is:
import akka.actor.typed.{ActorSystem, Behavior, PostStop, Signal}
import akka.actor.typed.scaladsl.{AbstractBehavior, ActorContext, Behaviors}

object StartStopActorMain extends App {
  val first = ActorSystem(StartStopActor1(), "firstActor")
  first ! "stop"
}

object StartStopActor1 {
  def apply(): Behavior[String] =
    Behaviors.setup(context => new StartStopActor1(context))
}

class StartStopActor1(context: ActorContext[String])
    extends AbstractBehavior[String](context) {
  println("first started")
  context.spawn(StartStopActor2(), "second")

  override def onMessage(msg: String): Behavior[String] =
    msg match {
      case "stop" => Behaviors.stopped
    }

  override def onSignal: PartialFunction[Signal, Behavior[String]] = {
    case PostStop =>
      println("first stopped")
      this
  }
}

object StartStopActor2 {
  def apply(): Behavior[String] =
    Behaviors.setup(context => new StartStopActor2(context))
}

class StartStopActor2(context: ActorContext[String])
    extends AbstractBehavior[String](context) {
  println("second started")

  override def onMessage(msg: String): Behavior[String] = Behaviors.unhandled

  override def onSignal: PartialFunction[Signal, Behavior[String]] = {
    case PostStop =>
      println("second stopped")
      this
  }
}
Am I missing anything here? I copied the code straight from the documentation.
With the amount of information you provide it is impossible to answer your question with certainty. But my best guess is that your JVM exits before the first actor has a chance to print its stop message.
Edit
It may also be that the Akka documentation is wrong: the first actor replaces its behavior with Behaviors.stopped, thus the PostStop signal is not delivered to the StartStopActor1 behavior but to the stopped behavior. I remember implementing it this way a few years back, with the rationale that the PostStop hook is not necessary when the actor voluntarily terminates: any code that you would want to run for PostStop can also be run before returning Behaviors.stopped.
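If the documentation is indeed wrong on this point, the workaround the edit implies is straightforward: run the cleanup before returning Behaviors.stopped instead of relying on the PostStop handler. A minimal sketch, using the StartStopActor1 from the question:

override def onMessage(msg: String): Behavior[String] =
  msg match {
    case "stop" =>
      println("first stopped") // cleanup that would otherwise live in the PostStop handler
      Behaviors.stopped
  }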
What is the best way to get the current value of an infinite stream which aggregates values and by definition never completes?
Source.repeat(1)
  .scan(0)(_ + _)
  .to(Sink.ignore)
I would like to query the current counter value from Akka HTTP. Should I use a dynamic stream? A BroadcastHub, and then have Akka HTTP subscribe to the infinite stream on each GET request?
One solution could be to use an actor to keep the state you need. Sink.actorRef will wrap an existing actor ref in a sink, e.g.
class Keeper extends Actor {
  var i: Int = 0

  override def receive: Receive = {
    case n: Int     ⇒ i = n
    case Keeper.Get ⇒ sender ! i
  }
}

object Keeper {
  case object Get
}

val actorRef = system.actorOf(Props(classOf[Keeper]))

val q = Source.repeat(1)
  .scan(0)(_ + _)
  .runWith(Sink.actorRef(actorRef, PoisonPill))

val result = (actorRef ? Keeper.Get).mapTo[Int]
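The ask (?) on the last line needs an implicit timeout in scope (and an ExecutionContext if you go on to transform the resulting Future), roughly:

import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._

implicit val timeout: Timeout = Timeout(3.seconds)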
Note that backpressure is not preserved when using Sink.actorRef. This can be improved by using Sink.actorRefWithAck. More about this can be found in the docs.
One possibility is using Sink.actorRefWithBackpressure.
Imagine having the following actor to store the state coming from a stream:
import akka.actor.{Actor, ActorLogging}
import scala.reflect.ClassTag

object StremState {
  case object Ack

  sealed trait Protocol extends Product with Serializable
  case object StreamInitialized extends Protocol
  case object StreamCompleted extends Protocol
  final case class WriteState[A](value: A) extends Protocol
  final case class StreamFailure(ex: Throwable) extends Protocol
  final case object GetState extends Protocol
}

class StremState[A](implicit A: ClassTag[A]) extends Actor with ActorLogging {
  import StremState._

  var state: Option[A] = None

  def receive: Receive = {
    case StreamInitialized =>
      log.info("Stream initialized!")
      sender() ! Ack // ack to allow the stream to proceed sending more elements
    case StreamCompleted =>
      log.info("Stream completed!")
    case StreamFailure(ex) =>
      log.error(ex, "Stream failed!")
    case WriteState(A(value)) =>
      log.info("Received element: {}", value)
      state = Some(value)
      sender() ! Ack // ack to allow the stream to proceed sending more elements
    case GetState =>
      log.info("Fetching state: {}", state)
      sender() ! state
    case other =>
      log.warning("Unexpected message '{}'", other)
  }
}
This actor can then be used in a Sink of a stream as follows:
implicit val tm: Timeout = Timeout(1.second)
val stream: Source[Int, NotUsed] = Source.repeat(1).scan(0)(_+_)
val receiver = system.actorOf(Props(new StremState[Int]))
val sink = Sink.actorRefWithBackpressure(
  receiver,
  onInitMessage = StremState.StreamInitialized,
  ackMessage = StremState.Ack,
  onCompleteMessage = StremState.StreamCompleted,
  onFailureMessage = (ex: Throwable) => StremState.StreamFailure(ex)
)

stream.runWith(sink)

// ask the receiver actor for the stream's current state
val futureState = receiver ? StremState.GetState
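The actor stores an Option, so the reply is an Option[Int] (None until the first element has been written). Assuming an implicit ExecutionContext is in scope, the result can be consumed roughly like this:

futureState.mapTo[Option[Int]].foreach {
  case Some(n) => println(s"Current counter value: $n")
  case None    => println("No element received yet")
}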
I'm trying to listen to SQS using Akka Streams, and I get messages from its queue using the code snippet below. Of course, this snippet gets messages one by one and then acks each of them:
implicit val system = ActorSystem()
implicit val mat = ActorMaterializer()
implicit val ec = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(ioThreadPoolSize))

val awsSqsClient: AmazonSQSAsync = AmazonSQSAsyncClientBuilder
  .standard()
  .withCredentials(new ClasspathPropertiesFileCredentialsProvider())
  .withEndpointConfiguration(new EndpointConfiguration(sqsEndpoint, configuration.regionName))
  .build()

val future = SqsSource(sqsEndpoint)(awsSqsClient)
  .takeWhile(_ => true)
  .mapAsync(parallelism = 2)(m => {
    val msgBody = SqsMessage.deserializeJson(m.getBody)
    msgBody match {
      case Right(body) =>
        val id = getId(body) // do some stuff with the message, may save state according to the id
    }
    Future(m, Ack())
  })
  .to(SqsAckSink(sqsEndpoint)(awsSqsClient))
  .run()
My question is: can I collect several messages and save them, for example in a stateful map, for later use?
For example, after receiving 5 messages (all of them saved as state), if a specific condition holds I would ack them all, and if not they would return to the queue (which will happen anyway because of the visibility timeout)?
Thanks.
It could be that you're looking for the grouped (or groupedWithin) combinator. These allow you to batch messages and process them in groups. groupedWithin additionally allows you to release a batch after a certain time in case it hasn't yet reached your desired size. Docs reference here.
In a subsequent check flow you can perform any logic you need, and emit the sequence in case you want the messages to be acked, or not emit them otherwise.
Example:
val yourCheck: Flow[Seq[MessageActionPair], Seq[MessageActionPair], NotUsed] = ???
val future = SqsSource(sqsEndpoint)(awsSqsClient)
  .takeWhile(_ => true)
  .mapAsync(parallelism = 2){ ... }
  .grouped(5)
  .via(yourCheck)
  .mapConcat(identity)
  .to(SqsAckSink(sqsEndpoint)(awsSqsClient))
  .run()
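A sketch of what yourCheck could look like (batchIsValid is a hypothetical predicate standing in for whatever condition decides whether the batch should be acked):

// hypothetical predicate deciding whether a whole batch should be acked
def batchIsValid(batch: Seq[MessageActionPair]): Boolean = ???

val yourCheck: Flow[Seq[MessageActionPair], Seq[MessageActionPair], NotUsed] =
  Flow[Seq[MessageActionPair]].filter(batchIsValid)

Batches that pass the filter are flattened by mapConcat and acked by the sink; batches that don't are simply not emitted, so their messages reappear on the queue once the visibility timeout expires.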
I have the following actor setup, using Akka actors (2.10)
A -spawn-> B -spawn-> C
A -sendWork-> B -sendWork-> C
C -sendResults-> A (repeatedly)
However, at some point A notices that it should change the workload sent to B/C because C is sending a large number of messages that turn out to be useless. In such situations C's inbox seems to be very full, and/or C may be blocked.
How can A tell B to shutdown C immediately? Losing the state and messages of both B and C is acceptable, so destroying them and spawning new ones is an option.
Given the actors are started the way you described, using stop in the right way will do what you require. According to the docs, calling stop will both:
1) stop additional messages from going into the mailbox (they are sent to deadletters)
2) take the current contents of the mailbox and also ship that to deadletters (this depends on the mailbox implementation, but the point is they won't be processed)
Now, the actor will need to completely finish the message it's currently processing before it's all the way stopped, so if it's "stuck", stopping (or anything else, for that matter) won't fix that, but I don't think that's the situation you are describing.
I pulled a little code sample together to demonstrate. Basically, A will send a message to B to start sending work to C. B will flood C with some work and C will send the results of that work back to A. When a certain number of responses have been received by A, it will trigger a stop of B and C by stopping B. When B is completely stopped, A will then restart the process again, up to 2 total times, after which it stops itself. The code looks like this:
import akka.actor._

case object StartWork
case class DoWork(i: Int, a: ActorRef)
case class WorkResults(i: Int)

class ActorA extends Actor {
  import context._

  var responseCount = 0
  var restarts = 0

  def receive = startingWork

  def startingWork: Receive = {
    case sw @ StartWork =>
      val myb = actorOf(Props[ActorB])
      myb ! sw
      become(waitingForResponses(myb))
  }

  def waitingForResponses(myb: ActorRef): Receive = {
    case WorkResults(i) =>
      println(s"Got back work results: $i")
      responseCount += 1
      if (responseCount > 200) {
        println("Got too many responses, terminating children and starting again")
        watch(myb)
        stop(myb)
        become(waitingForDeath)
      }
  }

  def waitingForDeath: Receive = {
    case Terminated(ref) =>
      restarts += 1
      if (restarts <= 2) {
        println("children terminated, starting work again")
        responseCount = 0
        become(startingWork)
        self ! StartWork
      }
      else {
        println("too many restarts, stopping self")
        context.stop(self)
      }
  }
}

class ActorB extends Actor {
  import concurrent.duration._
  import context._

  var sched: Option[Cancellable] = None

  override def postStop = {
    println("stopping b")
    sched foreach (_.cancel)
  }

  def receive = starting

  def starting: Receive = {
    case sw @ StartWork =>
      val myc = context.actorOf(Props[ActorC])
      sched = Some(context.system.scheduler.schedule(1 second, 1 second, self, "tick"))
      become(sendingWork(myc, sender))
  }

  def sendingWork(myc: ActorRef, a: ActorRef): Receive = {
    case "tick" =>
      for (j <- 1 until 1000) myc ! DoWork(j, a)
  }
}

class ActorC extends Actor {
  override def postStop = {
    println("stopping c")
  }

  def receive = {
    case DoWork(i, a) =>
      a ! WorkResults(i)
  }
}
It's a little rough around the edges, but it should show the point that cascading the stop from B through to C will stop C from sending responses back to A even though it still had messages in the mailbox. I hope this is what you were looking for.
As I understand it, an actor can be sent a message "fire and forget" style with the ! operator, or "Send-And-Receive-Future" style with the ? operator. An actor that is passed a message via ? must call self.reply or the sender will receive a timeout exception. On the other hand, an actor that is passed a message via ! cannot call self.reply if the message was not sent from another actor.
My question is: is the actor supposed to know at compile time whether it will be invoked with ! or with ?? Or, if the necessity of self.reply can be determined at runtime, how can it be determined? Perhaps self.tryReply is involved, but the Akka documentation seems to imply that a failed attempt to reply is an error case, whereas if the sender is not an actor, it is not really an error to fail to reply to a message passed with !.
Edit:
Here's some code:
package akTest

import akka.actor.Actor

object Main1 {
  val worker = Actor.actorOf[ak].start()

  def main(args: Array[String]) {
    val resp = worker ? "Hi"
    resp.get
    println(resp)
  }
}

class ak extends Actor {
  def receive = {
    case msg: String => {
      val response = "Received: " + msg
      println(response)
      response
    }
  }
}
This gets
Exception in thread "main" akka.dispatch.FutureTimeoutException: Futures timed out after [4995] milliseconds
So I add a self.reply to the actor:
class ak extends Actor {
  def receive = {
    case msg: String => {
      val response = "Received: " + msg
      println(response)
      self.reply(response)
      response
    }
  }
}
This change fixes the timeout error. But now I have a Main2 which sends a fire-and-forget message:
object Main2 {
  val worker = Actor.actorOf[ak].start()

  def main(args: Array[String]) {
    val resp = worker ! "Hi"
    println(resp)
  }
}
A new error is produced:

[ERROR] [2/1/12 2:04 PM] [akka:event-driven:dispatcher:global-1] [LocalActorRef]
No sender in scope, can't reply.
How can I write my actor to eliminate the coupling between its manner of response and the sender's method of invocation? I don't want to have one version of the actor for ! and a second version for ?.
If senderFuture.isDefined, then you have a future to reply to.
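A sketch of what that could look like against the old Akka 1.x API the question uses (reply only when a future is actually waiting for the result):

class ak extends Actor {
  def receive = {
    case msg: String => {
      val response = "Received: " + msg
      println(response)
      // reply only if the message was sent with ?, i.e. a future is waiting for the result
      if (self.senderFuture.isDefined) self.reply(response)
      response
    }
  }
}

This way the same actor works for both Main1 (ask) and Main2 (fire-and-forget), without a timeout and without the "No sender in scope" error.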