When is Akka's default system ready in Play?

I was writing an application in Play 2.3.7, and when I try to create an actor (using Play's default Akka.system()) inside the overridden beforeStart method of the Global object, the application crashes in an infinite recursion of beforeStart calls, ultimately throwing an exception because the Global object has not been initialized. If I create the actor inside the onStart method instead, everything works.
My "intuition" was: "ok, this actor must be ready before the application receives the first request, so it must be created on beforeStart, not in onStart".
When is Akka.system() ready to use?

Akka.system returns an ActorSystem held by the AkkaPlugin. Therefore, if you want to use it, you must do so after the AkkaPlugin has been initialized. The AkkaPlugin is given priority 1000, which means it's started after most other internal plugins (database, evolutions, ...). The Global plugin has priority 10000, which means the AkkaPlugin is available there (as it is for any plugin with priority > 1000).
Note the warning in the docs about beforeStart:
Called before the application starts.
Resources managed by plugins, such as database connections, are likely not available at this point.

You have to start this in onStart() because beforeStart() is called too early - well before anything like Akka (which is actually a plugin) or any database connections are created. In fact, the documentation for GlobalSettings states:
Resources managed by plugins, such as database connections, are likely not available at this point.
The general guidance (confirmed by this thread) is that onStart() is the place to create your actors. And in practice, that has worked for me as well.
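To make this concrete, here is a minimal sketch of wiring an actor up in onStart() for Play 2.3. MyActor and the actor name "my-actor" are placeholders, not part of the original question:

import akka.actor.{Actor, Props}
import play.api.{Application, GlobalSettings}
import play.api.libs.concurrent.Akka

// Placeholder actor for illustration.
class MyActor extends Actor {
  def receive = { case msg => println(s"got: $msg") }
}

object Global extends GlobalSettings {
  override def onStart(app: Application): Unit = {
    // By the time onStart runs, the AkkaPlugin (priority 1000) has been
    // initialized, so Akka.system is safe to use here - in beforeStart it is not.
    Akka.system(app).actorOf(Props[MyActor], "my-actor")
  }
}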


Akka equivalent of Spring InitializingBean

I have written some actor classes, and I find that I need a hook into the lifecycle of these entities. For example, whenever my actor is initialized I would like a method to be called so that I can set up some listeners on message queues (or open DB connections, etc.).
Is there an equivalent of this in Akka? The closest thing I can think of is Spring's InitializingBean and DisposableBean.
This is a typical scenario where you would override methods like preStart(), postStop(), etc. I don't see anything wrong with this.
Of course you have to be aware of the details - for example, postStop() is called asynchronously after actor.stop() is invoked, while preStart() is called when an Actor is started. This means that potentially slow or blocking work like DB interaction should be kept to a minimum.
You can also use the Actor's constructor for initialization of data.
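As a rough sketch of this lifecycle wiring - the Connection trait and its open() helper below are hypothetical stand-ins for a real DB or message-queue handle:

import akka.actor.Actor

// Hypothetical resource handle - a stand-in for a real DB or
// message-queue connection.
trait Connection { def close(): Unit }
object Connection {
  def open(): Connection = new Connection { def close(): Unit = () }
}

class QueueConsumer extends Actor {
  private var connection: Option[Connection] = None

  // Called when the actor is started, before it processes any message.
  override def preStart(): Unit = {
    connection = Some(Connection.open())
  }

  // Called asynchronously after the actor has been stopped.
  override def postStop(): Unit = {
    connection.foreach(_.close())
    connection = None
  }

  def receive = {
    case _ => () // handle messages using `connection`
  }
}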
As Matthew mentioned, supervision plays a big part in Akka - so you can instruct the supervisor to perform specific actions on events. For example, the so-called DeathWatch - you can be notified if one of the actors you are watching dies:
context.watch(child)
...
def receive = {
  case Terminated(`child`) => lastSender ! "finished"
}
An Actor is basically two methods -- a constructor, and onMessage(Object): void.
There's nothing in its lifecycle that naturally provides for "wiring" behavior, which leaves you with a few options.
Use a Supervisor actor to create your other actors. A Supervisor is responsible for watching, starting, and restarting Actors on failure - and therefore it is often valuable to have a Supervisor that understands the state of integrated systems, to avoid continuously restarting. This Supervisor would create and manage Service objects (possibly via Spring) and pass them to Actors.
Use your preferred initialization technique at the time of Actor construction. It's tricky, but you can certainly combine Spring with Actors. Just be aware that should a Supervisor restart your actor, you'll need to be able to resurrect its desired state from whatever content you placed in the Props object you used to start it in the first place (see the sketch after this list).
Wire everything on-demand. Open connections on demand when an Actor starts (and cache them as necessary). I find I do this fairly often -- and I let the Actor fail when its connections no longer work. The supervisor will restart the Actor, which will recreate all connections.
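Here is a minimal sketch of the Props-based approach from the second option - all names (Worker, dbUrl, the connection string) are illustrative only:

import akka.actor.{Actor, ActorSystem, Props}

// Hypothetical worker whose only dependency (a connection string)
// travels in its Props, so a supervisor restart rebuilds identical state.
class Worker(dbUrl: String) extends Actor {
  def receive = {
    case query: String => sender() ! s"would run '$query' against $dbUrl"
  }
}

object Worker {
  def props(dbUrl: String): Props = Props(new Worker(dbUrl))
}

object PropsExample extends App {
  val system = ActorSystem("example")
  // If a supervisor later restarts "worker", the same Props recreate it.
  val worker = system.actorOf(Worker.props("jdbc:h2:mem:test"), "worker")
}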
Some important things to remember:
The intent of the Actor model is that Actors don't run continuously - they only run when messages are provided to them. If you add a message listener to an Actor, you are essentially adding new threads that can access that actor. This can be a problem if you use supervision - a restarted actor may leak that thread, which may in turn prevent the actor from being garbage collected. It can also be a problem because it introduces a race condition, and part of the value of actors is avoiding that.
An Actor that does I/O is, from the perspective of the actor system, blocking. If you have too many Actors doing I/O at the same time, you will exhaust your Dispatcher's thread pool and lock up the system.
A given Actor instance can operate on many different threads over its lifetime, but will only operate on one thread at a time. This can be confusing to some messaging systems - for example, the JMS spec asserts that a Session not be used on multiple threads, and many JMS implementations interpret this as "can only run on the thread on which it was started." You may see warnings, or even exceptions, resulting from this.
For these reasons, I prefer to use non-actor code to do some of my I/O. For example, I'll have an incoming message listener object whose responsibility is to take JMS messages off a queue, use them to create POJO messages, and send tells to the Actor system. Alternatively, I'll use an Actor, but place that actor on a custom dispatcher that has thread pinning enabled. This ensures that the Actor will only run on a specific thread and won't block up the system that other non-I/O actors are using.
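For the pinned-dispatcher variant, a sketch along these lines should work with Akka's classic configuration (JmsActor and the dispatcher name are placeholders):

import akka.actor.{Actor, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

// Placeholder actor whose receive would perform blocking I/O.
class JmsActor extends Actor {
  def receive = { case _ => () /* blocking JMS call would go here */ }
}

object PinnedExample extends App {
  // A PinnedDispatcher dedicates one thread to each actor that uses it,
  // so the blocking actor cannot starve the default dispatcher.
  val config = ConfigFactory.parseString(
    """
    pinned-io-dispatcher {
      type = PinnedDispatcher
      executor = "thread-pool-executor"
    }
    """)
  val system = ActorSystem("io-example", config)
  val jmsActor = system.actorOf(
    Props[JmsActor].withDispatcher("pinned-io-dispatcher"), "jms-actor")
}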

MFC UI Automation graceful shutdown

Our MFC app hangs during shutdown if any UI Automation client is active (such as Inspect, Windows Eyes, UI Spy, etc.).
The reason is that BOOL AFXAPI AfxOleCanExitApp() returns FALSE if any OLE objects exist. The app then goes into hidden server mode.
I have seen similar posts dealing with Document objects. The general solution is to set the object count to 0, close normally, then set the count back in the OnClose of the main frame.
This is a poor solution for UI Automation. It causes memory leaks and invalid objects in the client app (Inspect actually crashes after a while).
Has anyone seen a proper way to tell UI clients this server is going away and release all objects?
There is no really good way to shut down gracefully. There is no graceful way to stop a server while it is still in use; you can only do the necessary cleanup.
You have connections to your objects - what is graceful about cutting them? You can call CoDisconnectObject for every object, but the result is no different from simply terminating the application. Also, using this function doesn't reduce the object's lock count! It does, however, let you delete the object without a crash caused by a later access from another COM client.
The drawback: CoDisconnectObject only works for external links. Internal COM pointers to the object are not affected, so those may still use your object...
If you really track down every object that has an external connection, you can destroy it. And if you have no internal COM pointers, you can delete your objects even with a usage count != 0. But in many cases I have other dependent COM objects that are linked...
The only really good way to terminate gracefully is to first stop all applications that use your application as a server, and exit after that is done... ;)
So if you want to force a shutdown: disconnect what you can, free as many resources as you know about, then ignore the application's lock count and exit. The memory is freed anyway, even if the debug version reports a leak. The only problematic things are other resources (files, mutexes, system objects...) that may need better handling than just closing the application...

POCO C++: Can you have multiple logging hierarchies?

I am looking for a unique logging solution using the POCO C++ library.
I will try to explain our design and the issue we are facing.
We have a TCPServerConnectionFactory that spawns a new environment (a new set of objects) with each new connection. The spawned environment gets a new socket and has a listening thread. If a message comes in on an established connection, a pthread will handle it. Each useful message that comes in contains an identifier that identifies all actions that happen until the process is completed by closing the connection and destroying the set of objects that were created for it.
Many connections may happen simultaneously. Before moving to a pthreaded environment we were able to use Thread::setName along with the %T identifier to clearly see which log messages were coming from which connection. Now that we are multi-threaded we need a new solution.
I have been unable to find a clean way to make each object that gets spawned through the life of a connection aware of our unique identifier. A global would get overwritten by a new transaction. Passing the ID to each new object would be messy.
My next attempt was to use the POCO Logger channel framework to save each connection's logs to a new file named by the unique identifier we would receive in a message. The issue here is that if a new connection comes in during the life of another, it will overwrite the channel properties and start pointing logs to a different file.
Using the Logger framework, is there a way for me to have a new Logger hierarchy per connection? Basically, we need the set of objects spawned by a new connection to all use the same logging properties, without affecting any other set of objects' logging properties.
Any insight as to a proper way to share the identifier among all objects created during the life of a connection would be useful as well.
Thanks!
If you only want to store tiny amounts of information, use a singleton instance of your logger guarded by a mutex. But if you're expecting lots of connections, blocking on the mutex will slow things down, so you should consider using one logger instance per connection instead.
If you do go the singleton route, be aware that std::mutex itself has no built-in deadlock protection; if you ever need to acquire multiple mutexes at once, std::lock provides a deadlock-avoidance algorithm.

akka actor failure effect on the os process

If, say, the code that my actor uses (code I have no control over) throws an unhandled exception, could that cause the whole actor-system process to crash, or does each actor run in some kind of special container?
To clarify, in my use case I want each actor to load (at run time) some user-written code/library and call some interface methods on it. These libs may be buggy and can potentially cause my actor system's OS process to die or hang. What if the code the actor calls does something that hangs (like accessing a remote resource through a buggy client, or an infinite loop), or even calls Environment.exit() or something of that nature?
If my requirement is to allow each actor to load code that I do not have control over, how can I guard my actor system against it? Do I even have to?
One way I can think of for the actor system's OS process to guard itself against such third-party code is to run each actor inside some kind of container, or even to have one actor system per actor on the local machine, controlled by my actor. Do I have to go that far, or does Akka already take care of this for me, so that a failure at the actor level would not jeopardize the whole actor system and its process?
Ordinary exceptions thrown inside an actor are caught by Akka and handled through the parent's supervision strategy, so they don't bring down the process on their own. But supervision cannot protect you from code that calls System.exit, exhausts memory, or blocks a thread forever - if the JVM process dies, the JVM process dies. You get around that by using Akka Cluster so you can observe and react to node failures.
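To illustrate the boundary, here is a sketch of supervision containing ordinary exceptions from untrusted code (all names here are hypothetical). Anything that kills or wedges the JVM itself still takes the whole process down:

import akka.actor.{Actor, ActorSystem, OneForOneStrategy, Props, SupervisorStrategy}
import scala.concurrent.duration._

// Hypothetical actor that invokes untrusted third-party code.
class UntrustedRunner(run: () => Unit) extends Actor {
  def receive = { case "run" => run() }
}

class Guardian extends Actor {
  // Ordinary exceptions from the child are handled here and only restart
  // the child - they never escape to kill the JVM. Supervision cannot
  // contain System.exit, OutOfMemoryError, or a thread stuck in a loop.
  override val supervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 1.minute) {
      case _: Exception => SupervisorStrategy.Restart
    }

  private val worker = context.actorOf(
    Props(new UntrustedRunner(() => throw new RuntimeException("buggy lib"))),
    "worker")

  def receive = { case msg => worker forward msg }
}

object ContainmentExample extends App {
  val system = ActorSystem("guarded")
  system.actorOf(Props[Guardian], "guardian") ! "run"
}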

XmlHttpRequest bug?

I'm writing a program that, among other things, needs to download a file given its URL. I'm too lazy to implement the HTTP/HTTPS protocols manually, so I needed some library/object/function that'll do the job.
Critical requirement: the download must be asynchronous. That is, the thread that issued the download must be able to do something else "while" the file downloads, and the download must be abortable at any time without any barbaric side effects (such as an internal call to TerminateThread).
Nice-to-have requirements:
Should be able to download the file "into memory" - that is, read the contents of the file as they arrive, not necessarily save them into some file-system file.
It'd be nice to have some convenient Win32 progress notification mechanism (waitable event, semaphore, completion port, etc.), rather than just periodically polling the download status.
I've chosen the XmlHttpRequest COM object to do the work. It seemed to work fine enough, plus it supported asynchronous mode.
However I noticed that after some period it just stops working.
That is, after several successful file downloads it stops downloading anything.
I periodically poll it to get its status, and it reports "in progress", but nothing actually happens, and there's no network activity. Moreover, when the same process creates another instance of the XmlHttpRequest object to perform new downloads, the effect is the same. The object reports "in progress", whereas it doesn't even try to connect to the server (according to network sniffers and system TCP state).
The only way to make this object work back is to restart the process. This makes me suspect that there's a sort of a bug (sorry, I meant undocumented feature) in the object. Also it's not a bug at the level of an individual object, since the problem persists when the object is destroyed and another one is created. It's probably some global state of the DLL that implements this object.
Does anyone know something about this? Is this a known bug?
I'm pretty sure there's not another bug in my own code that merely makes it look like the bug is in XmlHttpRequest. I've done enough tests and spent enough time with the debugger to conclude beyond reasonable doubt that the object just stops working.
BTW, while the object is supposed to be working, I do all the waiting via MsgWaitXXXX API calls. So if this object needs a message loop to work properly (for instance, it may create a hidden notification window and bind it to a socket via WSAAsyncSelect), I give it the opportunity.
I know from my own experience that the Microsoft implementation of XmlHttpRequest falls short of full compliance with the draft standard. In particular, the standard mandates that streamed data should be extractable in ready state '3' (Receiving), which IE deliberately ignores.
Unfortunately I have not seen what you are describing despite using XmlHttpRequest objects extensively for long polling purposes.