I completed the first Akka assignment for the Coursera Reactive Programming Class (week five - binary trees).
My question is about Akka itself.
My app runs correctly, but I notice a lot of non-fatal dead letter warnings. Here is one:
[INFO] [01/16/2014 15:09:41.668] [PostponeSpec-akka.actor.default-dispatcher-23] [akka://PostponeSpec/user/$c/$b/$a/$b/$a/$a/$b/$b/$a/$a] Message [akka.dispatch.sysmsg.Terminate] from Actor[akka://PostponeSpec/user/$c/$b/$a/$b/$a/$a/$b/$b/$a/$a#570299303] to Actor[akka://PostponeSpec/user/$c/$b/$a/$b/$a/$a/$b/$b/$a/$a#570299303] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
I notice others have asked about this, and the official answer is that this isn't a problem; it's just verbose information that can be ignored and hidden by adjusting the logging settings.
I understand the advice to simply ignore this, but it still seems like a sloppy flaw on Akka's part. In this simple learning exercise, I am confident that my actors are never sent a message after they initiate a graceful shutdown, so Akka should not be putting anything in the dead letter queue under these idealized circumstances. What is the justification for these dead letters? I also notice that the dead-letter message isn't one my app explicitly sends, but an internal Akka message.
As someone who also took the course and asked questions in the course forum, I recall the following: a child actor may decide to stop itself, but its parent may decide to do the same thing at the same time. At that point there is an inherent race condition between the parent's termination and the delivery of the Terminate(child) message; if the parent manages to stop itself before receiving the message, the message ends up in the dead letters queue.
Can I restart a concurrency::agent object after it has done its work?
The short answer is no.
If you look at the life cycle described here, you'll see the following:
Agents have a set life cycle. The concurrency::agent_status enumeration defines the various states of an agent. The following illustration is a state diagram that shows how agents progress from one state to another. In this illustration, solid lines represent methods that you call from your application; dotted lines represent methods that are called from the runtime.
This shows clearly that once your agent has entered the done or cancelled state, there's no way back.
Also, if you look at the agent::start documentation, you see this:
Moves an agent from the agent_created state to the agent_runnable state, and schedules it for execution.
and this:
An agent that has been canceled cannot be started.
Although this doesn't mention the done state, I've found from experience that once it's done, it's done. The state sequence diagram shows a one-way trip for all paths.
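To illustrate, here is a minimal sketch assuming the Asynchronous Agents Library from the Microsoft Concurrency Runtime (<agents.h>); worker_agent and the printed strings are invented for the example, and the behaviour of the second start() call follows from the life cycle above rather than from anything the documentation spells out:

#include <agents.h>
#include <iostream>

// A trivial agent: run() does its work once and then calls done(), which
// moves the agent into the agent_done state.
class worker_agent : public concurrency::agent
{
protected:
    void run() override
    {
        std::cout << "working...\n";
        done();
    }
};

int main()
{
    worker_agent a;
    a.start();                        // agent_created -> agent_runnable
    concurrency::agent::wait(&a);     // block until the agent reaches agent_done

    // Once the agent is done there is no way back: calling start() again
    // does not make run() execute a second time. To repeat the work you
    // have to construct a fresh agent object.
    bool restarted = a.start();
    std::cout << "second start() returned: " << std::boolalpha << restarted << "\n";

    worker_agent b;                   // a new object goes through the life cycle again
    b.start();
    concurrency::agent::wait(&b);
}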
I am working on a system that uses multiple worker threads inside of a JavaFX Task. The Callable objects on these threads inside of the Task use PropertyChangeSupport to communicate certain state change information back to listeners (e.g. intermediate results). I am using PropertyChangeListeners to monitor these changes and create derivative objects off of them that are accessed by other objects. Once the Task is finished I am using JavaFX to display information, some of which is gleaned from the PropertyChange events that are emitted.
My question is: is there a potential race condition between the Task finishing and the PropertyChange events getting processed (which I assume happens on the JavaFX application thread, though I'm not completely sure)?
As a concrete example, consider an image that is getting split into chunks for processing in multiple steps. At each step, an intermediate image is generated and a propertyChange event is getting fired for that intermediate image. At the end of processing, I want to be able to display the final image as well as all the images generated in the meantime in a JavaFX Scene. Will the propertyChange events all get processed before the FX thread repaints/refreshes?
I realize that the JavaFX documentation for Task has an example discussing how to return intermediate results (see the JavaFX Task API documentation). That example uses the JavaFX Observable* objects. I would think that the PropertyChangeEvents run on the same thread as the FX observable objects, and as such there should not be a race condition between a non-FX thread finishing and the results arriving on the FX thread, but I thought I would check whether there is anything I might not be thinking of.
Thanks in advance for any discussion or thoughts.
Chooks
You are correct that PropertyChangeEvents will run on the same thread as the FX observable objects. This is, however, a different thread from the Task itself.
However, you don't have any guarantee that all of the propertyChange events will be processed before the FX thread repaints/refreshes. In fact, various parts of the display could be repainted multiple times between different propertyChange events, depending on how long they take and on the specific timing involved, and other FX events could be interspersed between the propertyChange events and the repainting as well. What you are guaranteed is that any UI element updated by a given propertyChange event will eventually be repainted at some point after it is updated. So the display will eventually "catch up" with any changes made by your propertyChange handlers and repaint the areas that were changed.
I'm working on a robot control program that is based on a state machine. While the program uses Qt State Machine Framework, I also attempted to implement it using Boost.statechart (BS) as a theoretical exercise and a way to learn / evaluate the library.
In Qt version I used the following pattern in several places: a compound state has parallel nested sub-graphs, each of which eventually reaches a final state. When all parallel sub-states finish, the parent state emits "finished()" signal, which causes the machine to transit to the next top-level state. E.g. (Beware: pseudo diagram):
Idle -calibRequest-> Calibrate( calibrate_camera | calibrate_arm ) -finished-> Idle
and the calibrate_* states in turn have nested states inside them, like S -trigger[calibrated?]-> F, where F is a final state. When both calibrate_* states reach their respective F states, the finished signal causes the state machine to transit into Idle.
Qt's parallel child states are analogous to BS's orthogonal nested states. At first I thought "termination" was BS's analogue of final states, but in fact it isn't. It's more like "terminate the state machine unless there is still some orthogonal activity going on somewhere" - once you terminate all the orthogonal states, the parent state terminates as well, without any chance to transit. Posting events upon termination doesn't help either, since there is no state left that these events could be delivered to.
I ended up implementing "final states" that post a notification event when reached, and reacting to this event in the parent state - checking whether all orthogonal states have reached their final states and transiting when they have. This is basically a reimplementation of Qt State Machine's approach, but it has to be redone each time I need this pattern. Or maybe I'm just so used to one way of achieving this effect that I don't see an alternative?
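To make the pattern concrete, here is a condensed sketch of that workaround (all names - Machine, Calibrate, EvCamDone, EvRegionDone and so on - are made up for illustration): each region's hand-rolled "final" state is reached via a reaction that posts an EvRegionDone notification, and the parent Calibrate state counts those notifications and transits to Idle once both regions are done.

#include <boost/statechart/state_machine.hpp>
#include <boost/statechart/simple_state.hpp>
#include <boost/statechart/event.hpp>
#include <boost/statechart/transition.hpp>
#include <boost/statechart/custom_reaction.hpp>
#include <boost/mpl/list.hpp>

namespace sc = boost::statechart;
namespace mpl = boost::mpl;

struct EvCalibRequest : sc::event< EvCalibRequest > {};
struct EvCamDone      : sc::event< EvCamDone > {};      // "calibrated" trigger for region 0
struct EvArmDone      : sc::event< EvArmDone > {};      // "calibrated" trigger for region 1
struct EvRegionDone   : sc::event< EvRegionDone > {};   // posted by the hand-rolled final states

struct Idle;
struct Machine : sc::state_machine< Machine, Idle > {};

struct Calibrate;
struct Idle : sc::simple_state< Idle, Machine >
{
    typedef sc::transition< EvCalibRequest, Calibrate > reactions;
};

// Calibrate has two orthogonal regions, one per calibration task.
struct CalibCamera;
struct CalibArm;
struct Calibrate : sc::simple_state< Calibrate, Machine,
                                     mpl::list< CalibCamera, CalibArm > >
{
    typedef sc::custom_reaction< EvRegionDone > reactions;
    int regionsDone = 0;
    sc::result react( const EvRegionDone & )
    {
        // Transit only once every orthogonal region has reported completion.
        return ++regionsDone == 2 ? transit< Idle >() : discard_event();
    }
};

// Region 0: CamFinal plays the role of a final state.
struct CamFinal : sc::simple_state< CamFinal, Calibrate::orthogonal< 0 > > {};
struct CalibCamera : sc::simple_state< CalibCamera, Calibrate::orthogonal< 0 > >
{
    typedef sc::custom_reaction< EvCamDone > reactions;
    sc::result react( const EvCamDone & )
    {
        post_event( EvRegionDone() );   // notify the parent instead of calling terminate()
        return transit< CamFinal >();
    }
};

// Region 1: same pattern for the arm.
struct ArmFinal : sc::simple_state< ArmFinal, Calibrate::orthogonal< 1 > > {};
struct CalibArm : sc::simple_state< CalibArm, Calibrate::orthogonal< 1 > >
{
    typedef sc::custom_reaction< EvArmDone > reactions;
    sc::result react( const EvArmDone & )
    {
        post_event( EvRegionDone() );
        return transit< ArmFinal >();
    }
};

int main()
{
    Machine m;
    m.initiate();
    m.process_event( EvCalibRequest() );
    m.process_event( EvCamDone() );
    m.process_event( EvArmDone() );   // after this the machine is back in Idle
}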
I'm using the Qt framework, which by default uses non-blocking I/O, to develop an application that navigates through several web pages (online stores) and carries out different actions on these pages. I'm "mapping" each specific web page to a state machine, which I use to navigate through that page.
This state machine has these transitions:
Connect, LogIn, Query, LogOut, Disconnect
and these states:
Start, Connecting, Connected, LoggingIn, LoggedIn, Querying, QueryDone, LoggingOut, LoggedOut, Disconnecting, Disconnected
Transitions from *ing to *ed states (Connecting -> Connected) are triggered by asynchronous LoadFinished network events, received from the network object when the currently requested url has loaded. Transitions from *ed to *ing states (Connected -> LoggingIn) are triggered by events sent by me.
I want to be able to send several events (commands) to this machine (like Connect, LogIn, Query("productA"), Query("productB"), LogOut, LogIn, Query("productC"), LogOut, Disconnect) at once and have it process them. I don't want to block while waiting for the machine to finish processing all the events I sent to it. The problem is that they have to be interleaved with the above-mentioned network events informing the machine that a url has been downloaded. Without this interleaving the machine can't advance its state (and process my events), because advancing from an *ing to an *ed state happens only after receiving a network event.
How can I achieve my design goal?
EDIT
The state machine I'm using has its own event loop, and events are not queued in it, so they can be missed by the machine if they arrive while the machine is busy.
Network I/O events are not posted directly to either the state machine or the event queue I'm using. They are posted to my code (a handler), and I have to handle them. I can forward them however I wish, but please keep remark no. 1 in mind.
Take a look at my answer to this question, where I described my current design in detail. The question is whether and how I can improve this design by making it:
More robust
Simpler
Sounds like you want the state machine to have an event queue. Queue up the events, start processing the first one, and when that completes pull the next event off the queue and start on that. So instead of the state machine being driven by the client code directly, it's driven by the queue.
This means that any logic which involves using the result of one transition in the next one has to be in the machine. For example, if the "login complete" page tells you where to go next. If that's not possible, then the event could perhaps include a callback which the machine can call, to return whatever it needs to know.
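For illustration, a minimal sketch of the idea in plain C++ (all names - PageMachine, Command, onLoadFinished - are invented): the machine owns the queue and pulls the next command itself when the asynchronous step for the current one finishes, so the client code only ever enqueues and never blocks.

#include <iostream>
#include <queue>

enum class Command { Connect, LogIn, Query, LogOut, Disconnect };

class PageMachine
{
public:
    // Client code only enqueues; it never waits for the machine.
    void enqueue(Command c)
    {
        commands.push(c);
        if (idle)
            startNext();
    }

    // Called by the network layer when the asynchronous step for the
    // current command has finished (e.g. the requested url has loaded).
    void onLoadFinished()
    {
        std::cout << "step finished\n";
        idle = true;
        startNext();                // pull the next command, if any
    }

private:
    void startNext()
    {
        if (commands.empty())
            return;                 // nothing queued: stay in the current *ed state
        Command next = commands.front();
        commands.pop();
        idle = false;
        std::cout << "issuing request for command " << static_cast<int>(next) << "\n";
        // ...fire the actual asynchronous request here and return immediately...
    }

    std::queue<Command> commands;
    bool idle = true;
};

int main()
{
    PageMachine m;
    m.enqueue(Command::Connect);
    m.enqueue(Command::LogIn);
    m.enqueue(Command::Query);
    // In the real program onLoadFinished() would be driven by network events.
    m.onLoadFinished();
    m.onLoadFinished();
    m.onLoadFinished();
}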
When I asked this question I already had a working design, which I didn't want to describe so as not to skew the answers in any direction :) In this pseudo-answer I'm going to describe the design I have.
In addition to the state machine I have a queue of events. Instead of posting events directly to the machine, I place them in the queue. There is, however, a problem with network events, which are asynchronous and can arrive at any moment. If the queue is not empty and a network event arrives, I can't place it in the queue, because the machine would be stuck waiting for it before processing the events already in the queue - and it would wait forever, because that network event would be sitting behind all the events placed in the queue earlier.
To overcome this problem I have two types of messages: normal ones and priority ones. The normal ones are those sent by me, and the priority ones are all the network ones. When I get a network event I don't place it in the queue; instead I send it directly to the machine. This way the machine can finish its current task and progress to the next state before pulling the next event from the queue.
This works only because my events and the network events interleave exactly 1:1. Because of this, when the machine is waiting for a network event it isn't busy doing anything else (so it's ready to accept the event and doesn't miss it), and vice versa - when the machine is waiting for one of my events it's waiting only for that, not for another network event.
I asked this question in the hope of finding a simpler design than the one I have now.
Strictly speaking, you can't. Because you only have the state "Connecting", you don't know whether you need to log in afterwards. You'd have to introduce a state "ConnectingWithIntentToLogin" to represent the result of a "Connect, then Login" event from the Start state.
Naturally there will be a lot of overlap between the "Connecting" and the "ConnectingWithIntentToLogin" states. This is most easily achieved by a state machine architecture that supports state hierarchies.
--- edit ---
Reading your later reactions, it's now clear what your actual problem is.
You do need extra state, obviously, whether that's ingrained in the FSM or kept outside it in a separate queue. Let's follow the model you prefer, with the extra events in a queue. The trick here is that you're wondering how to "interleave" those queued events with the real-time events. You don't - events from the queue are actively extracted when entering specific states. In your case, those would be the "*ed" states like "Connected". Only when the queue is empty would you stay in the "Connected" state.
If you don't want to block, that means you don't care about the network replies. If on the other hand the replies interest you, you have to block waiting for them. Trying to design your FSM otherwise will quickly lead to your automaton's size reaching infinity.
How about moving the state machine to a different thread, i.e. a QThread? I would implement an input queue in the state machine so you can send queries without blocking, and an output queue for reading the results of the queries. You could even have it call back a slot in your main thread via connect(...) when the result of a query arrives; Qt is thread-safe in this regard.
This way your state machine can block for as long as it needs to without blocking your main program.
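One way to sketch this in Qt is the standard worker-object pattern, where the queued signal/slot connections across threads play the role of the input and output queues. This is only an illustration, not your actual machine: PageWorker, runQuery and queryFinished are invented names, and the query itself is faked.

#include <QCoreApplication>
#include <QObject>
#include <QString>
#include <QThread>
#include <QDebug>

// Worker that owns the (possibly blocking) page state machine.
// It lives in its own thread, so it may block for as long as it needs.
class PageWorker : public QObject
{
    Q_OBJECT
public slots:
    void runQuery(const QString &product)
    {
        // ...drive the state machine here: connect, log in, query, ... (may block)...
        emit queryFinished(product, QStringLiteral("<result for ") + product + ">");
    }
signals:
    void queryFinished(const QString &product, const QString &result);
};

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QThread workerThread;
    PageWorker worker;
    worker.moveToThread(&workerThread);
    workerThread.start();

    // Cross-thread connections are queued automatically, so this acts as the
    // output queue: results arrive back in the main thread's event loop.
    QObject::connect(&worker, &PageWorker::queryFinished,
                     &app, [](const QString &product, const QString &result) {
        qDebug() << "got result for" << product << ":" << result;
    });

    // Invoking the slot from the main thread goes through the worker thread's
    // event queue - this is the non-blocking "input queue".
    QMetaObject::invokeMethod(&worker, "runQuery", Qt::QueuedConnection,
                              Q_ARG(QString, QStringLiteral("productA")));
    QMetaObject::invokeMethod(&worker, "runQuery", Qt::QueuedConnection,
                              Q_ARG(QString, QStringLiteral("productB")));

    return app.exec();   // run until terminated; worker results arrive asynchronously
}

// #include "main.moc"   // needed if this sketch lives in a single .cpp with qmake/CMake automoc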
Sounds like you just want to execute a list of blocking I/O operations in the background.
So have a thread execute:
while( !commands.empty() )
{
    Command command = commands.front();   // process commands in the order they were queued
    commands.pop_front();
    switch( command )
    {
    case Connect:
        DoBlockingConnect();
        break;
    // ... handle LogIn, Query, LogOut, Disconnect the same way ...
    }
}
NotifySenderDone();