What is the simplest approach we can use to get notified of power status changes (for instance when the computer goes to sleep, hibernates, etc.) on Linux-based systems?
I need this mainly for persisting some state before the machine sleeps and, of course, restoring that state once it wakes up.
You can get all of these events by simply configuring acpid to deliver them over its socket, for example.
There's an official specification document that describes all possible events and circumstances, though it's an extensive read.
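As a concrete illustration, here's a minimal sketch that reads events from acpid's UNIX socket. The default socket path /var/run/acpid.socket and the event text shown in the comment are assumptions based on a typical acpid setup; check your distribution's acpid configuration.

```cpp
// Minimal sketch: read ACPI events from acpid's UNIX socket.
// Assumption: acpid is running and exposes its default socket at
// /var/run/acpid.socket (the path may differ per distribution).
#include <cstdio>
#include <cstring>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main() {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    sockaddr_un addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, "/var/run/acpid.socket", sizeof(addr.sun_path) - 1);

    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    // acpid writes one text line per event, e.g. "button/sleep SBTN 00000080 00000001".
    // React to the events you care about (sleep button, lid close, ...).
    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        std::printf("ACPI event: %s", buf);
        // persist your state here, before the machine actually suspends
    }
    close(fd);
    return 0;
}
```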
I am evaluating whether to use Akka and Akka Persistence as the key toolkit for a project that involves a complex background process (which might be triggered by Quartz at a fixed time each day).
The background process will communicate with many different external services over HTTP, generate many encrypted files locally, and transfer them via SFTP.
From a business perspective:
The service is mission-critical: roughly speaking, it will automatically charge N million users' bank cards and help them purchase fund products.
From a technical perspective:
Each external service might be unavailable for any number of reasons, such as network issues, or because the external service has run out of resources (e.g. JDBC connections).
Our service might be killed, restarted, or redeployed for urgent reasons, or crash with unexpected errors.
When the process is restarted with an incomplete job, it needs to complete that job gracefully using different strategies, such as redoing the work, confirming the business state in the external system, or resuming from a certain checkpoint.
I have been reading the official AkkaScala.PDF and some YouTube conference videos, and all of them mention that an actor's state can be restored by replaying the events from the journal after a JVM crash.
But there is one question I could not find discussed anywhere (it may be a stupid one):
Imagine there are 1000 persistent actors living in the service, and the service's JVM crashes and restarts. Who is in charge of re-creating those 1000 persistent actors in the newly created actor system, in both single-process and clustered mode? And how? Or what articles should I read first?
You should read the basics of Akka Persistence and Akka Persistence Query. The first thing that comes to my mind is to use Akka Persistence Query's AllPersistenceIdsQuery or CurrentPersistenceIdsQuery. It will give you all persistence ids, which you can use to re-create your persistent actors. A persistent actor with a specific persistence id will replay all of its events from the event store journal; you can take snapshots to speed up recovery. Your event store will probably be some kind of database (e.g. Cassandra). Since your persistent actor has specific mutable state, it will be brought back to its last state after the recovery. Recovery might take some time.
I am looking to build a system that can process a stream of requests, each of which needs a long processing time, say 5 minutes. My goal is to speed up request processing with a minimal resource footprint, even when there is a burst of messages.
I could use something like a service bus to queue the requests and have multiple processes (actors, in Akka terms) that subscribe for messages and start processing. I could also have a watchdog that looks at the queue length in the service bus and creates more actors/actor systems, or stops a few.
If I want to do the same in an actor system like Akka.NET, how can this be done? Say something like this:
I may want to spin up/stop new Remote Actor systems based on my request queue length
Send the message to any one of the available actors that can start processing, without having to check on the sender side who has the bandwidth to process it.
Messages should not be lost, and if an actor fails, the message should be passed to the next available actor.
Can this be done with Akka.NET, or is this not a valid use case for the actor model? Can someone please share some thoughts or point me to resources where I can get more details?
I may want to spin up/stop new Remote Actor systems based on my request queue length
This is not supported out of the box by Akka.Cluster. You would have to build something custom for it.
However, Akka.NET has pool routers which are able to resize automatically according to configurable parameters. You may be able to build something around them.
Send the message to any one of the available actors that can start processing, without having to check on the sender side who has the bandwidth to process it.
If you look at Akka.NET routers, there are various strategies that can be used to assign work. SmallestMailbox is probably the closest to what you're after.
Messages should not be lost, and if an actor fails, the message should be passed to the next available actor.
Akka.NET supports at-least-once delivery. Read more about it in the docs or on the Petabridge blog.
While you may achieve some of your goals with Akka.Cluster, I wouldn't advise it. Your requirements make it clear that your concerns revolve around:
Reliable message delivery (where service buses and message queues are the better option). There are a lot of solutions here, depending on your needs, e.g. MassTransit, NServiceBus, or queues such as RabbitMQ.
Scaling workers (which is an infrastructure problem and is not solved by actor frameworks themselves). From what you've said, you don't even need a cluster.
You could use Akka for building the message-processing logic, i.e. the workers. But as I said, you don't need it if your goal is to replace an existing service bus.
I'm trying to monitor the network sessions on a server with event-driven programming (and not by polling /proc/net/tcp or /proc/net/udp).
I was able to find this article, but it only provides a one-time look at the current state, not an event on each change (LISTEN, ESTABLISHED, ...).
Is it possible to use this, as in the article that monitors process changes, but for network connections?
If not, is there any other API that I can use to achieve this without polling /proc/net/* at an interval?
Suppose we have systems that are speed-critical (for example statistics/analytics, socket programming, etc.). How do we design the traces and logs?
To be more specific, logs and traces generally reduce performance (even if we have a switch-off mechanism or a verbosity-extension mechanism). In such scenarios, is there any reference guideline on how to place logs/traces so that when an issue occurs (especially at a production site) the developer/post-production team is able to pinpoint the actual issue?
PS: I come from a background where such applications are developed in C/C++ (and run on Linux).
You can accumulate log records in a buffer, which you can describe and implement using Google Protocol Buffers. A separate thread can periodically (say, every 5 minutes) empty this buffer to disk, or send it through a UNIX domain socket (or another Linux IPC mechanism) to a daemon that listens for the records and writes them to a persistent DB, or simply writes them to disk.
If you don't want to hit the disk on the machine that produces logs, you can send them to a different machine through a regular socket and write them to disk on that machine.
If you are aggregating logs from multiple machines, consider using 0MQ or CrossRoads as message queues to pass your logs through the network to a machine where they are stored persistently. You can find some information about using 0MQ in conjunction with Google Protocol Buffers here.
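As a rough illustration of the buffering approach above, here's a minimal C++ sketch of an in-memory log buffer flushed by a background thread. The class name, the plain-text serialization, and the flush-to-file target are simplifying assumptions; the answer suggests Protocol Buffers for the record format, and the flush step could just as well write to a UNIX domain socket.

```cpp
// Minimal sketch: log records accumulate in memory and a background thread
// flushes them periodically, keeping disk I/O off the hot path.
#include <chrono>
#include <condition_variable>
#include <fstream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

class BufferedLogger {
public:
    BufferedLogger(std::string path, std::chrono::seconds interval)
        : path_(std::move(path)), interval_(interval),
          flusher_(&BufferedLogger::flushLoop, this) {}

    ~BufferedLogger() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;
        }
        cv_.notify_one();
        flusher_.join();
        flush();                        // write whatever is left on shutdown
    }

    // Hot path: only appends to the in-memory buffer under a brief lock.
    void log(std::string line) {
        std::lock_guard<std::mutex> lock(mutex_);
        buffer_.push_back(std::move(line));
    }

private:
    void flushLoop() {
        std::unique_lock<std::mutex> lock(mutex_);
        while (!stopping_) {
            cv_.wait_for(lock, interval_);   // e.g. every 5 minutes
            if (stopping_) break;
            lock.unlock();
            flush();                         // periodic flush, off the hot path
            lock.lock();
        }
    }

    void flush() {
        std::vector<std::string> pending;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            pending.swap(buffer_);           // grab the batch, release the lock fast
        }
        if (pending.empty()) return;
        // Simplified: append plain text to a file. A protobuf-encoded batch sent
        // over a UNIX domain socket to a logging daemon would slot in here instead.
        std::ofstream out(path_, std::ios::app);
        for (const auto& line : pending) out << line << '\n';
    }

    std::string path_;
    std::chrono::seconds interval_;
    std::mutex mutex_;
    std::condition_variable cv_;
    std::vector<std::string> buffer_;
    bool stopping_ = false;
    std::thread flusher_;
};
```

The point of the design is that the producer threads never touch the disk: log() only appends to a vector under a short lock, and all the expensive I/O happens on the flusher thread.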
(Edited to try to explain better)
We have an agent, written in C++ for Win32. It needs to periodically post information to a server. It must support disconnected operation. That is: the client doesn't always have a connection to the server.
Note: This is for communication between an agent running on desktop PCs, to communicate with a server running somewhere in the enterprise.
This means that the messages to be sent to the server must be queued (so that they can be sent once the connection is available).
We currently use an in-house system that queues messages as individual files on disk, and uses HTTP POST to send them to the server when it's available.
It's starting to show its age, and I'd like to investigate alternatives before I consider updating it.
It must be available by default on Windows XP SP2, Windows Vista and Windows 7, or must be simple to include in our installer.
This product will be installed (by administrators) on a couple of hundred thousand PCs. They'll probably use something like Microsoft SMS or ConfigMgr. In this scenario, "frivolous" prerequisites are frowned upon. This means that, unless the client-side code (or a redistributable) can be included in our installer, the administrator won't be happy. This makes MSMQ a particularly hard sell, because it's not installed by default with XP.
It must be relatively simple to use from C++ on Win32.
Our client is an unmanaged C++ Win32 application. No .NET or Java on the client.
The transport should be HTTP or HTTPS. That is: it must go through firewalls easily; no RPC or DCOM.
It should be relatively reliable, with retries, etc. Protection against replays is a must-have.
It must be scalable -- there's a lot of traffic. Per-message impact on the server should be minimal.
The server end is C#, currently using ASP.NET to implement a simple HTTP POST mechanism.
(The slightly odd one). It must support client-side in-memory queues, so that we can avoid spinning up the hard disk. It must allow flushing to disk periodically.
It must be suitable for use in a proprietary product (i.e. no GPL, etc.).
How is your current solution showing its age?
I would push the logic onto the back end and make the clients extremely simple.
Messages are simply stored in the file system. Have the client write to c:/queue/{uuid}.tmp. When the file is written, rename it to c:/queue/{uuid}.msg. This makes writing messages to the queue on the client "atomic".
A C++ thread wakes up, scans c:\queue for "*.msg" files, and if it finds one, it checks for the server and HTTP POSTs the message to it. When it receives the 200 status back from the server (i.e. the server has got the message), it can delete the file. It only scans for *.msg files; the *.tmp files may still be being written to, and you'd have a race condition trying to send a message file that was still being written. That's what the rename from .tmp is for. I'd also suggest scanning by creation date so the earliest messages go first.
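To make the client side concrete, here's a minimal sketch of the write-then-rename queue and the sender pass described above. It assumes C++17 std::filesystem for brevity (an XP-era agent would use the corresponding Win32 calls, and sort by real creation time rather than last-write time), and post_to_server is a hypothetical stand-in for the agent's existing HTTP POST code.

```cpp
// Minimal sketch of the file-based client queue: write to .tmp, rename to .msg,
// and a sender pass that POSTs .msg files oldest-first and deletes on success.
#include <algorithm>
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

const fs::path kQueueDir = "c:/queue";

// Write the message to a .tmp file first, then rename it to .msg so the
// scanner never sees a half-written message (the rename is the "atomic" step).
void enqueue(const std::string& uuid, const std::string& body) {
    fs::create_directories(kQueueDir);
    const fs::path tmp = kQueueDir / (uuid + ".tmp");
    const fs::path msg = kQueueDir / (uuid + ".msg");
    {
        std::ofstream out(tmp, std::ios::binary);
        out << body;
    }   // file is closed (and flushed) before the rename
    fs::rename(tmp, msg);
}

// Placeholder for the agent's real HTTP POST (WinHTTP/WinINet in practice).
// Should return true only when the server replied 200 OK.
bool post_to_server(const fs::path& /*file*/) {
    return false;   // stub: wire this up to the existing HTTP POST code
}

// One pass of the sender thread: only *.msg files, oldest first, delete on ack.
void drain_queue_once() {
    if (!fs::exists(kQueueDir)) return;

    std::vector<fs::path> pending;
    for (const auto& entry : fs::directory_iterator(kQueueDir))
        if (entry.path().extension() == ".msg")
            pending.push_back(entry.path());

    std::sort(pending.begin(), pending.end(),
              [](const fs::path& a, const fs::path& b) {
                  return fs::last_write_time(a) < fs::last_write_time(b);
              });

    for (const auto& file : pending) {
        if (post_to_server(file))
            fs::remove(file);   // only delete once the server acknowledged it
        // on failure, leave the file in place and retry on the next pass
    }
}
```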
Your server receives the message, and here it can do any necessary dupe checking. Push this burden onto the server to centralize it. You could simply record every uuid for every message to eliminate duplicates. If that list gets too long (I don't know your traffic volume), perhaps you can cull items older than 30 days (I also don't know how long your clients can remain offline).
This system is simple, but pretty robust. If the file-sending thread gets an error, it will simply try to send the file next time. The only time you should get a duplicate message is in the window between when the client gets the 200 ack from the server and when it deletes the file. If the client shuts down or crashes at that point, you will have a file that has been sent but not removed from the queue.
If your clients are stable, this is a pretty low risk. With dupe checking based on the message ID, you can mitigate that at the cost of some bookkeeping; maintaining a list of uuids isn't spectacularly daunting, but again it depends on your message volume and other performance requirements.
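For illustration, here's a small sketch of the kind of UUID bookkeeping this implies, written in C++ for consistency with the rest of this page even though the poster's server end is C#. The class name and the in-memory map are assumptions; a real server would likely keep this list in its database, with the 30-day cull mentioned above.

```cpp
// Sketch of server-side duplicate detection keyed on the per-message UUID.
#include <chrono>
#include <string>
#include <unordered_map>

class DupeRegistry {
    using Clock = std::chrono::system_clock;
public:
    // Returns true if this uuid was already seen (i.e. the message is a duplicate).
    bool seenBefore(const std::string& uuid) {
        const auto now = Clock::now();
        cullOlderThan(now - std::chrono::hours(24 * 30));   // drop entries older than ~30 days
        const auto inserted = seen_.emplace(uuid, now);
        return !inserted.second;   // emplace fails if the uuid was already recorded
    }

private:
    void cullOlderThan(Clock::time_point cutoff) {
        for (auto it = seen_.begin(); it != seen_.end(); ) {
            if (it->second < cutoff) it = seen_.erase(it);
            else ++it;
        }
    }

    std::unordered_map<std::string, Clock::time_point> seen_;
};
```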
The fact that you are allowed to work "offline" suggests you have some "slack" in your absolute messaging performance.
To be honest, the requirements listed don't make a lot of sense and show you have a long way to go in your MQ learning. Given that, if you don't want to use MSMQ (probably the easiest overall on Windows -- but with [IMO severe] limitations), then you should look into:
qpid - a decent implementation of the AMQP standard
zeromq - (technically the best, IMO, but it also requires the most familiarity with MQ technologies)
I'd recommend rabbitmq too, but that's an Erlang server and, last I looked, it didn't have usable C or C++ libraries. Still, if you are shopping for an MQ, take a look at it...
[EDIT]
I've gone back and reread your requirements as well as some of your comments and think that, for you, perhaps client MQ -> server is not your best option. I would consider letting your client -> server operations be HTTP POST or SOAP, and having the HTTP endpoint in turn queue messages on your MQ backend. In other words, abstract away the MQ client into an architecture you have more control over. Then your C++ client would simply speak HTTP (easy), and your HTTP service (likely C#/.NET, from reading your comments) can interact with any MQ backend of your choice. If all your HTTP endpoint does is spawn MQ messages, it'll be pretty darned lightweight and can scale through all the traditional load-balancing techniques.
The last time I wanted to do any messaging, I used C# and MSMQ. There are MSMQ libraries available that make using MSMQ very easy. It's free to install on both your servers, and it has never lost a message to this day. It handles reboots etc. all by itself. It's a thing of beauty, and hundreds of thousands of messages are processed daily.
I'm not sure why you ruled out MSMQ and I didn't get point 2.
Quite often for queues we just dump record data into a database table and another process lifts rows out of the table periodically.
How about using the Asynchronous Agents Library from Visual Studio 2010 (part of the C++ Concurrency Runtime, not the .NET Framework)? It is still in beta, though.
http://msdn.microsoft.com/en-us/library/dd492627(VS.100).aspx