I have an AMQP application with a persistent RabbitMQ queue on the client side and a RabbitMQ queue on the server side. The client always writes to the local persistent queue, and those messages are transmitted to the server using the Shovel plugin.
Producer -> Local Queue --------- SHOVEL ---------- Remote queue -> Consumer
Even if the server is not available, the app still works and Shovel sends the messages when possible. On the other hand, the server doesn't need to know the location of the clients because it always consumes from local queues. I would like to migrate this topology to Akka using the file-based durable mailbox. Is it even possible? Is there something like the Federation or Shovel plugins in the Akka core libraries?
PS: What I want to achieve is to replace AMQP completely and get rid of RabbitMQ. It works fine, but it is another piece of software to install, configure and maintain. I would like to provide all of this functionality from my application using just libraries, not another server like RabbitMQ.
Just to clarify a little more, what I'm looking to achieve is something like this:
Actor1 -> DurableMailBox1 ----Shovel? Federation?---- DurableMailbox2 <- Actor2
[EDIT]
It looks like there's no way to communicate directly mailbox to mailbox. The possible topologies that can be implemented with Akka are these:
Remote Actor1 -> [DurableMailBox1 <- Actor2]
Here the arrow can be secured in order to ensure message delivery, but it is not possible to copy messages from one mailbox to another automatically.
Take a look at Akka Remoting and the Reliable Proxy Pattern.
Sending via a ReliableProxy makes the message send exactly as reliable
as if the represented target were to live within the same JVM,
provided that the remote actor system does not terminate. In effect,
both ends (i.e. JVM and actor system) must be considered as one when
evaluating the reliability of this communication channel. The benefit
is that the network in-between is taken out of that equation.
See this enhancement to the ReliableProxy that mitigates the problem with the remote actor system terminating.
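For reference, here is a minimal sketch of what the sending side could look like with the ReliableProxy from the akka-contrib module (assuming Akka 2.3.x; the system names, host, port and actor path below are placeholders):

    import scala.concurrent.duration._
    import akka.actor.{ ActorPath, ActorSystem }
    import akka.contrib.pattern.ReliableProxy

    object ReliableSendExample extends App {
      val system = ActorSystem("ClientSystem")

      // Path of the remote consumer actor (placeholder host, port and names).
      val target = ActorPath.fromString(
        "akka.tcp://ServerSystem@remotehost:2552/user/consumer")

      // The proxy resends unacknowledged messages every 100 ms and preserves ordering.
      val proxy = system.actorOf(ReliableProxy.props(target, 100.millis), "proxy")

      // Messages are tunnelled through the proxy instead of being sent directly.
      proxy ! "Hello"
    }

The proxy pair acknowledges and retransmits over the network, so delivery is as reliable as a local send for as long as both actor systems stay up.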
Related
I am looking to build a system that can process a stream of requests, each needing a long processing time (say 5 minutes each). My goal is to speed up request processing with a minimal resource footprint, given that the stream can at times be a burst of messages.
I could use something like a service bus to queue the requests and have multiple processes (a.k.a. actors in Akka) that subscribe for a message and start processing. I could also have a watchdog that looks at the queue length in the service bus and creates more actors/actor systems or stops a few.
If I want to do the same in an actor system like Akka.NET, how can this be done? Say something like this:
I may want to spin up/stop new Remote Actor systems based on my request queue length
Send the message to any one of the available actors that can start processing, without the sender having to check who has the bandwidth to process.
Messages should not be lost, and if an actor fails, the message should be passed to the next available actor.
Can this be done with Akka.NET, or is this not a valid use case for the actor system? Can someone please share some thoughts or point me to resources where I can get more details?
I may want to spin up/stop new Remote Actor systems based on my request queue length
This is not supported out of the box by Akka.Cluster. You would have to build something custom for it.
However, Akka.NET has pool routers which are able to resize automatically according to configurable parameters. You may be able to build something around them.
Send the message to any one of the available actors that can start processing, without the sender having to check who has the bandwidth to process.
If you look at Akka.NET routers, there are various strategies that can be used to assign work. SmallestMailbox is probably the closest to what you're after.
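As an illustration, here is a minimal sketch of a resizable SmallestMailbox pool using the JVM Akka Scala API (Akka.NET exposes the same SmallestMailboxPool and DefaultResizer with analogous C#/HOCON configuration; the actor and pool names are made up):

    import akka.actor.{ Actor, ActorSystem, Props }
    import akka.routing.{ DefaultResizer, SmallestMailboxPool }

    class Worker extends Actor {
      def receive = {
        case request => // the long-running (~5 min) processing would happen here
      }
    }

    object RouterExample extends App {
      val system = ActorSystem("processing")

      // The resizer grows/shrinks the pool between 2 and 15 routees under load;
      // SmallestMailbox routes each request to the routee with the fewest queued messages.
      val resizer = DefaultResizer(lowerBound = 2, upperBound = 15)
      val workers = system.actorOf(
        SmallestMailboxPool(5, Some(resizer)).props(Props[Worker]), "workers")

      workers ! "request-1"
    }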
Messages should not be lost, and if an actor fails, the message should be passed to the next available actor.
Akka.NET supports at-least-once delivery. Read more about it in the docs or on the Petabridge blog.
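A rough sketch of the at-least-once delivery pattern, shown here with the JVM Akka Persistence Scala API (Akka.NET offers an equivalent AtLeastOnceDeliveryActor base class); the message types, persistence id and destination are illustrative:

    import akka.actor.ActorPath
    import akka.persistence.{ AtLeastOnceDelivery, PersistentActor }

    case class Job(payload: String)                 // command from the outside
    case class Confirm(deliveryId: Long)            // ack sent back by the worker
    case class JobSent(payload: String)             // persisted events
    case class JobConfirmed(deliveryId: Long)

    class ReliableSender(worker: ActorPath) extends PersistentActor with AtLeastOnceDelivery {
      override def persistenceId = "reliable-sender"

      def receiveCommand = {
        case Job(payload)        => persist(JobSent(payload))(updateState)
        case Confirm(deliveryId) => persist(JobConfirmed(deliveryId))(updateState)
      }

      def receiveRecover = {
        case evt: JobSent      => updateState(evt)
        case evt: JobConfirmed => updateState(evt)
      }

      def updateState(evt: Any): Unit = evt match {
        case JobSent(payload) =>
          // deliver keeps redelivering until confirmDelivery is called for this id
          deliver(worker)(deliveryId => (deliveryId, payload))
        case JobConfirmed(deliveryId) =>
          confirmDelivery(deliveryId)
      }
    }

If the sender crashes and restarts, unconfirmed deliveries are replayed from the journal, which (combined with a router as the destination) gives you the "not lost, handed to the next available worker" behaviour.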
While you may achieve some of your goals with Akka Cluster, I wouldn't advise it. Your requirements make it clear that your concerns are oriented around:
Reliable message delivery (where service buses and message queues are a better option). There are a lot of solutions here depending on your needs, e.g. MassTransit, NServiceBus or queues (RabbitMQ).
Scaling workers (which is an infrastructure problem and is not solved by actor frameworks themselves). From what you've said, you don't even need a cluster.
You could use Akka for building the message-processing logic, like workers. But as I said, you don't need it if your goal is to replace an existing service bus.
I want to connect a single queue to multiple queue managers
(at least 2 qmgrs). Let's say I have qmgrA and qmgrB as queue managers, and each is connected to queueName. I will put a message "Hello" to queueName via qmgrA as well as another message "World" via qmgrB, so queueName is supposed to contain both "Hello" and "World".
The question is: how can I get those messages simultaneously? Can you give me an example code fragment/snippet so I can at least have an overview of how to start coding with that design?
Note: I am asking for this because, for example, if qmgrA gets disconnected/goes down for an unknown reason, at least qmgrB is still active
and will get messages on queueName even though qmgrA is disconnected.
By the way, I'm using Websphere MQ v7 C++.
Thanks in advance! :)
The question appears to be asking how to do something that IBM MQ does not do. QMgrs do not "connect" to queues, they host them. Applications do not "connect" to queues except in the abstract JMS sense. They connect to QMgrs and then open queues. However, the requirement that is described of keeping MQ highly available can be met with MQ Clusters, hardware clusters, Multi-Instance QMgrs, shared queues (on z/OS only), or some combination of these.
The two queue managers in your example each host a local copy of queueName. Here are several scenarios as to how messages are distributed in this situation.
PUTting messages
A message put to queueName by an application connected to qmgrA will by default result in the message landing in the local instance of that queue.
When there is a local instance and an MQ cluster with at least one other instance, the QMgr can be configured to allow messages to go to non-local instances.
When qmgrA, qmgrB, and qmgrC are in a cluster, and qmgrC does not host a local instance of the queue, messages put to that queue name will round-robin between the instances on qmgrA and qmgrB.
GETting messages
A message landing in queueName on qmgrA can only be retrieved from that queue by an application connected to qmgrA.
A message landing in queueName on qmgrB can only be retrieved from that queue by an application connected to qmgrB.
An application connected to qmgrC cannot retrieve messages from the queues hosted on qmgrA or qmgrB.
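To make the connect-then-open flow concrete, here is a rough sketch using the IBM MQ classes for Java, written in Scala (the WebSphere MQ C++ ImqQueueManager/ImqQueue classes follow the same pattern). The qmgrA and queueName names come from the question; everything else is illustrative:

    import com.ibm.mq.{ MQGetMessageOptions, MQMessage, MQQueueManager }
    import com.ibm.mq.constants.CMQC

    object QueueManagerExample extends App {
      // An application connects to a queue manager, then opens queues that it hosts.
      val qmgrA = new MQQueueManager("qmgrA") // bindings-mode connection

      // PUT: this message lands on qmgrA's local instance of queueName.
      val outQ = qmgrA.accessQueue("queueName", CMQC.MQOO_OUTPUT)
      val hello = new MQMessage()
      hello.writeString("Hello")
      outQ.put(hello)
      outQ.close()

      // GET: only messages on qmgrA's instance are visible here; reading "World"
      // from qmgrB's instance requires a second connection to qmgrB.
      val inQ = qmgrA.accessQueue("queueName", CMQC.MQOO_INPUT_AS_Q_DEF)
      val msg = new MQMessage()
      inQ.get(msg, new MQGetMessageOptions())
      println(msg.readString(msg.getDataLength))
      inQ.close()
      qmgrA.disconnect()
    }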
MQ HA
The requirement to make a queue highly available can be achieved in several ways. Some of these provide recoverability of the service provided by MQ. Some of them provide recoverability of messages that would be stranded in-flight. In addition to reading the topology design section in the manuals, please see the MQ HA presentation from IBM Interconnect 2015.
Your question doesn't make very much sense. Go look up MQ Clustering and MQ Multi-Instance in the MQ Knowledge Center: http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.con.doc/q017820_.htm
I have a specific use-case for an Akka implementation.
I have a set of agents who send heartbeats to Akka. Akka takes this heartbeat, and assigns actors to send them to my meta-data server (a separate server). This part is done.
Now my meta-data server also needs to send action information to the agents. However, since these agents may be behind firewalls, Akka cannot communicate with them directly, so it needs to send the action as a response to the heartbeat. Thus, when the meta-data server sends an action, Akka stores it in a DurableMessageQueue (a separate one for each agent ID) and keeps the mapping of agent ID to DurableMessageQueue in a HashMap. Then, whenever a heartbeat comes in, before responding it checks this queue and piggybacks the action onto the response.
The issue with this is that the HashMap lives in a single JVM and therefore I cannot scale this. Am I missing something, or is there a better way to do it?
I have Akka running behind a Mina server, which receives and sends messages.
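For reference, the single-JVM version of the design described above looks roughly like the sketch below; all message and actor names are illustrative, and the in-memory map is exactly the piece that ties it to one JVM:

    import akka.actor.Actor
    import scala.collection.mutable

    case class Heartbeat(agentId: String)
    case class HeartbeatAck(agentId: String, pendingActions: List[String])
    case class ActionFor(agentId: String, action: String)

    // One in-memory queue of pending actions per agent, drained and piggybacked
    // onto the next heartbeat response.
    class HeartbeatHandler extends Actor {
      private val pending = mutable.Map.empty[String, mutable.Queue[String]]

      def receive = {
        case ActionFor(agentId, action) =>
          pending.getOrElseUpdate(agentId, mutable.Queue.empty[String]).enqueue(action)

        case Heartbeat(agentId) =>
          val actions = pending.remove(agentId).map(_.toList).getOrElse(Nil)
          sender() ! HeartbeatAck(agentId, actions)
      }
    }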
Right now I have:
a multithreaded Windows service written in C++ which uses common static libraries as well as dynamic DLLs;
each thread performs different tasks and produces different errors (DB errors, function invocation errors, etc.). Each thread will further act as a logger client (and will send all messages to a logger server);
a separate thread which has no body yet, but which will act as a logger server for handling all log messages from the logger clients.
I need good advice on how I should turn the following idea into a working solution. The idea is to add a server-client logging architecture to my multithreaded server with the following requirements (though some parts I will need to implement myself, please consider just the basic idea of the logger client and logger server):
there should be a lot of log clients (as I already mentioned, a log client is just an existing working thread); each should register an entity with a unique name and/or ID and the following behavior:
if the logger server is up and working, the log client starts to send log messages,
otherwise (the logger server is down), the log client endlessly tries to register itself with the log server using a small timeout.
there should be a logger server, with the following behavior:
the log server registers all log clients with their unique name and/or ID and continuously checks whether a new log client has appeared that needs to be registered
the log server handles all messages from the different log clients and writes them to a DB, file, etc.
it should be possible to establish a connection to the log server from an external application (for example, MySuperThreadViewerProgram to monitor all thread activity/errors/etc.). On connection, the log server should treat the external application as one more log client. This is the most important requirement.
Summing up, there are three architecture parts to be implemented:
Server-client logger architecture;
A message-queue facility between the log clients and the log server, with the log server periodically checking whether there are any available log clients to be registered;
Inter-process communication between log server and external application, where the latter acts as a new log client.
Please note that I consider the logger server to be a kind of log-message router.
So, the main question is:
Is there any solution (software framework) which has all of the features described above (which would be much preferable), or should I use different libraries for the different parts?
If the answer is "there is no such solution", can you review the choices I made:
For #1: using Pantheios logger framework;
For #2: using any kind of register-subscribe library with server-client architecture and message-queue support (update: ipc library);
For #3: using Boost.Interprocess with shared memory.
UPDATE:
A good example of #2 is this ipc library. Maybe I was a bit imprecise in describing the logger client / logger server relations, but what I really mean is similar to the approach fully described and implemented in the ipc library: one entity (thread) subscribes to another to receive its messages (the "publish-subscribe" model).
I want to use this kind of technique to implement my logging architecture. But in what way?
UPDATE2:
The OS is Windows. Yeah, I know, under Linux there is a bunch of useful tools and frameworks (D-Bus, syslog). Maybe some of you could provide a helpful link to a cross-platform library that could be useful? Maybe there is a logger framework over D-Bus under Windows?
Any comments are highly appreciated.
Thanks a lot!
ØMQ (ZeroMQ) might be a viable alternative to the ipc library you mentioned, as it has a lot of features along the lines of your requirements.
It fully supports the PUB/SUB model and allows you to work between threads, between processes and even between machines. It provides a client-server architecture and a message queue, and works as IPC, too.
Of course, you need a specific way of encoding and decoding messages; Protocol Buffers are indeed a great idea here.
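To give a feel for the API, here is a minimal PUB/SUB sketch for requirement #3 (the external viewer) using the JVM binding JeroMQ from Scala; the C++ binding exposes the same socket types and bind/connect/subscribe calls, and the endpoint and topic prefix below are placeholders:

    import org.zeromq.ZMQ

    // Log-server side: a PUB socket fans log lines out to any connected subscriber.
    object LogServer extends App {
      val ctx = ZMQ.context(1)
      val pub = ctx.socket(ZMQ.PUB)
      pub.bind("tcp://*:5556")                 // port is a placeholder
      pub.send("log.dbworker DB connection lost")
    }

    // Viewer / external application side: a SUB socket filtered on a topic prefix.
    object LogViewer extends App {
      val ctx = ZMQ.context(1)
      val sub = ctx.socket(ZMQ.SUB)
      sub.connect("tcp://localhost:5556")
      sub.subscribe("log.".getBytes)           // receive only messages starting with "log."
      println(sub.recvStr())
    }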
As far as I know, the logging backend Pantheios uses (i.e. the log sink: DB, file or whatever) is specified at link time. The severity of logs going to the backend can be specified at launch time and, with some simple tweaks, also at runtime.
If I got you right, you have one process (let's forget about the external application just for a minute) with multiple worker threads running. Some of these threads should log to a common backend (e.g. a DB) and some to another. Because Pantheios cannot do this out of the box, you'll need to write a custom backend that can route the logs to the correct sink.
If memory consumption is not an issue and you don't need the fastest logging performance, then you might want to look into log4cxx, because it is highly configurable and could possibly spare you from implementing a client-server architecture with all the synchronization problems it brings.
About the external application: if you can guarantee that there is only one external client, then you could use a pipe mechanism to communicate with the service. The service process would then have a separate thread, corresponding to your server thread, that opens a named pipe and can also be specified as a log sink, so your worker threads can log to it as well as to other log sinks (DB, file, etc.).
There are some syslog servers for Windows as well. WinSyslog, for example, comes from the producers of the famous rsyslog. Once you have a syslog daemon running on Windows, there are plenty of OS-independent syslog clients, such as Syslog4j if you're using Java, or the SysLogHandler in the standard Python logging module.
I'm new to Akka and intend to use it in my new project as a data replication mechanism.
In this scenario, there is a master server and a replica data server. The replica should contain the same data as the master. Each time a data change occurs on the master, it sends an update message to the replica server. Here the master server is the Sender, and the replica server is the Receiver.
But after digging the docs I'm still not sure how to satisfy the following use cases:
When the receiver crashes, the sender should pile up messages to send; no messages should be lost. It should be able to reconnect to the receiver later and continue from the last successful message.
When the sender crashes, it should restart, and no messages are lost across the restart.
Messages are processed in the same order they were sent.
So my question is: how do I configure Akka to create a sender and a receiver that can do this?
I'm not sure an actor with a DurableMessageBox could solve this. If it could, how can I simulate the above situations for testing?
Update:
After reading the docs Victor pointed to, I now understand that what I wanted was the once-and-only-once pattern, which is extremely costly.
In the akka docs it says
Actual transports may provide stronger semantics, but at-most-once is the semantics you should expect. The alternatives would be once-and-only-once, which is extremely costly, or at-least-once which essentially requires idempotency of message processing, which is a user-level concern.
So in order to achieve guaranteed delivery, I may need to turn to some other MQ solution (for example Kafka), or try to implement once-and-only-once with a DurableMessageBox and see whether the complexity could be relieved for my specific use case.
You'd need to write your own remoting that utilizes the durable subscriber pattern, as Akka message send guarantees are less strict than what you are going for: http://doc.akka.io/docs/akka/2.0/general/message-send-semantics.html
Cheers,
√