ActiveMQ CPP producer - can one session use multiple queues? - c++

In a question and response here:
ActiveMQ Producer Multiple Queues One Session
The topic of a single producer sending messages to more than one destination is covered there, with a solution in Java.
Can the same thing be done in CPP/CMS?
I've tried to replicate that code using the cms/activemq API, but when I try to send a message to a different queue (destination), I get error messages stating that the producer can only send to its original destination.
Without writing the exact code, here is the flow...
Create new Factory
Set broker URI
Create Connection
Connection start
Create Session
Create MessageProducer with a temporary queue
Create a new queue
Use session to create message
MessageProducer send using new queue and message

It is unclear what your code has done since you didn't include it, but given the minimal input my guess is that you are creating a fixed-destination producer by calling session->createProducer with some destination (it sounds like a temp queue). That creates a producer married to that destination for life, and the send overloads that take a destination are required to throw. If you want to reuse one producer to send to many different destinations, you need to create it with a NULL destination, as in the sketch below.
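A minimal sketch of an anonymous producer with the ActiveMQ-CPP (CMS) API; the broker URI and queue names are placeholders:

```cpp
#include <activemq/library/ActiveMQCPP.h>
#include <activemq/core/ActiveMQConnectionFactory.h>
#include <cms/Connection.h>
#include <cms/Session.h>
#include <cms/Queue.h>
#include <cms/MessageProducer.h>
#include <cms/TextMessage.h>
#include <memory>

int main() {
    activemq::library::ActiveMQCPP::initializeLibrary();
    {
        activemq::core::ActiveMQConnectionFactory factory("tcp://localhost:61616");
        std::unique_ptr<cms::Connection> connection(factory.createConnection());
        connection->start();

        std::unique_ptr<cms::Session> session(
            connection->createSession(cms::Session::AUTO_ACKNOWLEDGE));

        // Pass NULL so the producer is not bound to any destination;
        // each send must then name its destination explicitly.
        std::unique_ptr<cms::MessageProducer> producer(session->createProducer(NULL));

        std::unique_ptr<cms::Queue> queueA(session->createQueue("QUEUE.A"));
        std::unique_ptr<cms::Queue> queueB(session->createQueue("QUEUE.B"));

        std::unique_ptr<cms::TextMessage> message(session->createTextMessage("hello"));

        // The same producer can now target different queues per send.
        producer->send(queueA.get(), message.get());
        producer->send(queueB.get(), message.get());

        connection->close();
    }
    activemq::library::ActiveMQCPP::shutdownLibrary();
}
```

Had the producer been created with a destination instead of NULL, both calls to the destination-taking send would throw, which matches the errors described in the question.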

Related

ZeroMQ pub-sub send last message to new subscribers

Can ZeroMQ Publisher/Subscriber sockets be configured so that a newly-connected client always receives the last published message (if any)?
What am I trying to do: my message is a kind of system state, so each new one supersedes the previous one. All clients have to hold the current state. This works for already-connected subscribers, but a newly-appearing subscriber has to wait for the next state update before it gets anything. Can I configure the Pub/Sub model to send the state to a client immediately after it connects, or do I have to use a different model?
There is an example in the ZMQ guide called Last Value Caching. The idea is to put a proxy in between that caches the last message for each topic and forwards it to new subscribers. It uses an XPUB instead of a PUB socket so it can react to new connections.
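A sketch of such a proxy using cppzmq; the endpoints, and the assumption that each update is a two-frame [topic, payload] message, are mine rather than from the question:

```cpp
#include <chrono>
#include <map>
#include <string>
#include <zmq.hpp>

int main() {
    zmq::context_t ctx(1);

    // Upstream: subscribe to everything the real publisher sends.
    zmq::socket_t frontend(ctx, zmq::socket_type::sub);
    frontend.connect("tcp://localhost:5557");   // assumed publisher endpoint
    frontend.set(zmq::sockopt::subscribe, "");

    // Downstream: XPUB surfaces subscription events as messages.
    zmq::socket_t backend(ctx, zmq::socket_type::xpub);
    backend.bind("tcp://*:5558");               // assumed subscriber endpoint

    std::map<std::string, std::string> cache;   // topic -> last payload

    zmq::pollitem_t items[] = {
        { frontend.handle(), 0, ZMQ_POLLIN, 0 },
        { backend.handle(),  0, ZMQ_POLLIN, 0 },
    };

    while (true) {
        zmq::poll(items, 2, std::chrono::milliseconds(1000));

        if (items[0].revents & ZMQ_POLLIN) {
            // New state update: cache it, then pass it through.
            zmq::message_t topic, payload;
            (void)frontend.recv(topic, zmq::recv_flags::none);
            (void)frontend.recv(payload, zmq::recv_flags::none);
            cache[topic.to_string()] = payload.to_string();
            backend.send(topic, zmq::send_flags::sndmore);
            backend.send(payload, zmq::send_flags::none);
        }

        if (items[1].revents & ZMQ_POLLIN) {
            // XPUB delivers a subscription as "\x01" + topic.
            zmq::message_t event;
            (void)backend.recv(event, zmq::recv_flags::none);
            if (event.size() > 0 && *event.data<char>() == 1) {
                std::string topic(event.data<char>() + 1, event.size() - 1);
                auto it = cache.find(topic);
                if (it != cache.end()) {
                    // Replay the cached last value to the new subscriber.
                    backend.send(zmq::buffer(it->first), zmq::send_flags::sndmore);
                    backend.send(zmq::buffer(it->second), zmq::send_flags::none);
                }
            }
        }
    }
}
```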

Single SQS Queue vs Multiple SQS Queue while creating a Async Model

I have to develop a component whose APIs are async in nature. To build this async model, I am going to use AWS SQS queues: I publish messages, and the client reads from the queue and sends the response back into a queue. There are currently 10 APIs that I have to expose.
One option I can think of is a single request queue and a single response queue (which I will poll) shared by all the APIs, with the payload identifying the API by some Operation field.
The other way is to use a separate queue for each API. The advantage I can see for multiple queues is that each API can have different traffic, and having multiple queues can help the consumers of the queues scale effectively.
What are the other pros and cons of the two approaches?
Separate your use-case into 2 distinct problems:
Problem 1: APIs to Workers, one queue or multiple?
If your workers do different types of work, then a single queue will require them to inspect and then discard messages they don't care about. If that is the case, you should have one queue per message type; that way, a worker can handle any message it receives from its queue.
If workers start ignoring messages, then other workers, who may be idle, may wait a while for the messages they care about.
Problem 2: Using a return queue for the "results"
If your clients will be polling for results, then on each poll your API will need to poll the queue. Again, it will be "searching" for the right response, discarding those it doesn't care about and starving other clients.
Recommendation:
Use multiple queues, one per "worker type". Workers should be able to process any message they receive from their queue.
Then use something other than SQS to store the result. One option is to use S3 (see the sketch after this list):
When your API "creates" the task, create an object in S3 and put a reference to that S3 object on your SQS queue.
Your worker will do the work, then put the result where it was told to.
When your client polls your API for the result, your API will check S3 and return the status/results.
Instead of S3, other data stores could be used if appropriate: RDS, DynamoDB, etc.
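A rough sketch of the "create task" side of that pattern using the AWS SDK for C++; the bucket name, queue URL, and task-id scheme are all assumptions:

```cpp
#include <aws/core/Aws.h>
#include <aws/s3/S3Client.h>
#include <aws/s3/model/PutObjectRequest.h>
#include <aws/sqs/SQSClient.h>
#include <aws/sqs/model/SendMessageRequest.h>

int main() {
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::S3::S3Client s3;
        Aws::SQS::SQSClient sqs;

        const Aws::String bucket = "my-task-results";  // assumed bucket
        const Aws::String queueUrl =
            "https://sqs.us-east-1.amazonaws.com/123456789012/api-tasks";  // assumed
        const Aws::String taskId = "task-0001";        // assumed id scheme
        const Aws::String resultKey = "results/" + taskId + ".json";

        // 1. Create a placeholder result object in S3.
        Aws::S3::Model::PutObjectRequest put;
        put.SetBucket(bucket);
        put.SetKey(resultKey);
        auto body = Aws::MakeShared<Aws::StringStream>("task");
        *body << R"({"status":"PENDING"})";
        put.SetBody(body);
        s3.PutObject(put);

        // 2. Tell the worker where to write the result.
        Aws::SQS::Model::SendMessageRequest send;
        send.SetQueueUrl(queueUrl);
        send.SetMessageBody("{\"taskId\":\"" + taskId +
                            "\",\"resultKey\":\"" + resultKey + "\"}");
        sqs.SendMessage(send);
    }
    Aws::ShutdownAPI(options);
}
```

The worker overwrites the S3 object with the real result when it finishes, and the polling API only ever reads S3, so no client is draining response messages meant for another client.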

another reliable way to do PULL-PUSH sync in ZeroMQ

If you're using PUSH sockets, you'll find that the first PULL socket to connect will grab an unfair share of messages. The accurate rotation of messages only happens when all PULL sockets are successfully connected, which can take some milliseconds. As an alternative to PUSH/PULL, for lower data rates, consider using ROUTER/DEALER and the load balancing pattern.
So one way to do sync in PUSH/PULL is using the load balancing pattern.
For this specific case below, I wonder whether there is another way to do sync:
I could make each worker's PULL endpoint block until its connection is successfully set up, and then have the worker send a special message to the sink. After the sink has received one special message per worker, the sink sends a message over REQ-REP to the ventilator to notify it that all workers are ready, and the ventilator starts distributing jobs to the workers.
Is it reliable?
Yes, so long as the Sink knows how many Workers to wait for before telling the Ventilator that it's OK to start sending messages. There's the question of whether the special messages from the Workers get through if they start up before the Sink connects, but you could solve that by having them keep re-sending their special message until they start receiving data from the Ventilator. If you do this, the Sink would of course simply ignore any duplicates it receives.
Of course, that's not quite the same as the Workers having a live, working connection to the Ventilator; but the Ventilator could itself send out special do-nothing messages that the Workers receive, and when a Worker receives one of those, that's when it can start sending its special message to the Sink.
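For concreteness, here is a sketch of the sink side of that handshake using cppzmq; the worker count, endpoints, and the "READY"/"GO" markers are all my inventions:

```cpp
#include <string>
#include <zmq.hpp>

int main() {
    const int NUM_WORKERS = 4;  // assumption: the sink knows the worker count
    zmq::context_t ctx(1);

    // PULL socket where workers send READY markers (and later, results).
    zmq::socket_t results(ctx, zmq::socket_type::pull);
    results.bind("tcp://*:5558");               // assumed sink endpoint

    // REQ socket used once to tell the ventilator everyone is connected.
    zmq::socket_t control(ctx, zmq::socket_type::req);
    control.connect("tcp://localhost:5560");    // assumed control endpoint

    int ready = 0;
    while (ready < NUM_WORKERS) {
        zmq::message_t msg;
        (void)results.recv(msg, zmq::recv_flags::none);
        // Count READY markers; a real setup would de-duplicate re-sent
        // markers by worker id, per the note about duplicates above.
        if (msg.to_string() == "READY")
            ++ready;
    }

    // Signal the ventilator that it can start distributing jobs.
    control.send(zmq::str_buffer("GO"), zmq::send_flags::none);
    zmq::message_t ack;
    (void)control.recv(ack, zmq::recv_flags::none);

    // ...continue with the normal result-collection loop...
}
```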

How to stream a queue across multiple subscriber?

What I am trying to accomplish on higher level:
I have a function that does I/O and generates messages. I have multiple subscriber clients that can subscribe or leave at any time. When a new client subscribes, it should get the last x outputs before streaming new messages (much like Unix "tail -f").
My idea was to send the messages off to an agent holding a ring buffer. New clients would read the agent's state and then add-watch the agent. The problem is: how can I ensure no new message arrives between the read and the add-watch?
My next idea was to create two refs, one for the list of clients and one for the ring buffer, so I can add clients or post messages in transactions. The problem is that when I add a client, I have to read the ring buffer and send its contents to the client (I/O), which is a side effect inside a transaction that may be retried.
Last idea is to use locks, but that can't be the only way?

Using Amazon SQS with multiple consumers

I have a service-based application that uses Amazon SQS with multiple queues and multiple consumers. I am doing this so that I can implement an event-based architecture and decouple all the services, where the different services react to changes in state of other systems. For example:
Registration Service:
Emits event 'registration-new' when a new user registers.
User Service:
Emits event 'user-updated' when user is updated.
Search Service:
Reads from queue 'registration-new' and indexes user in search.
Reads from queue 'user-updated' and updates user in search.
Metrics Service:
Reads from 'registration-new' queue and sends to Mixpanel.
Reads from queue 'user-updated' and sends to Mixpanel.
I'm having a number of issues:
A message can be received multiple times when polling. I can design many of the systems to be idempotent, but for some services (such as the metrics service) that would be much more difficult.
A message needs to be manually deleted from the queue in SQS. I have thought of implementing a "message-handling-service" that handles the deletion of messages when all the services have received them (each service would emit a 'message-acknowledged' event after handling a message).
I guess my question is this: what patterns should I use to ensure that I can have multiple consumers for a single queue in SQS, while ensuring that the messages also get delivered and deleted reliably. Thank you for your help.
I think you are doing it wrong.
It looks to me like you are using the same queue to do multiple different things. You are better off using a single queue for a single purpose.
Don't put an event into the 'registration-new' queue and then have two different services poll that queue, with BOTH needing to read the message and each doing something different with it (and then needing a third process whose job is to delete the message after the other two have processed it).
One queue should be used for one purpose.
Create an 'index-user-search' queue and a 'send-to-mixpanel' queue. The search service reads from the search queue, indexes the user, and immediately deletes the message. The mixpanel service reads from the mixpanel queue, processes the message, and deletes it.
The registration service, instead of emitting 'registration-new' to a single queue, now emits it to two queues.
To take it one step further, add SNS into the mix: have the registration service publish an SNS message to a 'registration-new' topic (not queue), and then subscribe both of the queues mentioned above to that topic in a 'fan-out' pattern.
https://aws.amazon.com/blogs/aws/queues-and-notifications-now-best-friends/
Both queues will receive the message, but you only publish it to SNS once. If, down the road, a third unrelated service also needs to process 'registration-new' events, you create another queue and subscribe it to the topic as well; it can run with no dependencies on, or knowledge of, what the other services are doing. That is the goal. A sketch of the publish side follows.
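For concreteness, the publish side using the AWS SDK for C++; the topic ARN and message body are placeholders, and subscribing the queues to the topic is one-time wiring done elsewhere (console or API):

```cpp
#include <aws/core/Aws.h>
#include <aws/sns/SNSClient.h>
#include <aws/sns/model/PublishRequest.h>

int main() {
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::SNS::SNSClient sns;

        Aws::SNS::Model::PublishRequest pub;
        // Assumed topic ARN; substitute your own.
        pub.SetTopicArn("arn:aws:sns:us-east-1:123456789012:registration-new");
        pub.SetMessage(R"({"event":"registration-new","userId":"42"})");

        // One publish; SNS delivers a copy to every subscribed queue
        // (e.g. 'index-user-search' and 'send-to-mixpanel').
        auto outcome = sns.Publish(pub);
        if (!outcome.IsSuccess()) {
            // Handle/log the error as appropriate.
        }
    }
    Aws::ShutdownAPI(options);
}
```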
The primary use-case for multiple consumers of a queue is scaling-out.
The mechanism that allows for multiple consumers is the Visibility Timeout, which gives a consumer time to process and delete a message without it being consumed concurrently by another consumer.
To address the "At-Least-Once Delivery" property of Standard Queues, the consuming service should be idempotent.
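A sketch of one receive/process/delete cycle with the AWS SDK for C++; the queue URL and timeout values are assumptions:

```cpp
#include <aws/core/Aws.h>
#include <aws/sqs/SQSClient.h>
#include <aws/sqs/model/ReceiveMessageRequest.h>
#include <aws/sqs/model/DeleteMessageRequest.h>

int main() {
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::SQS::SQSClient sqs;
        const Aws::String queueUrl =
            "https://sqs.us-east-1.amazonaws.com/123456789012/registration-new";  // assumed

        Aws::SQS::Model::ReceiveMessageRequest recv;
        recv.SetQueueUrl(queueUrl);
        recv.SetMaxNumberOfMessages(1);
        recv.SetWaitTimeSeconds(20);    // long polling
        recv.SetVisibilityTimeout(30);  // seconds this consumer "owns" the message

        auto outcome = sqs.ReceiveMessage(recv);
        if (outcome.IsSuccess() && !outcome.GetResult().GetMessages().empty()) {
            const auto& msg = outcome.GetResult().GetMessages()[0];

            // ...process msg.GetBody() idempotently...

            // Delete only after successful processing; if we crash first,
            // the message reappears after the visibility timeout and is
            // redelivered (hence at-least-once).
            Aws::SQS::Model::DeleteMessageRequest del;
            del.SetQueueUrl(queueUrl);
            del.SetReceiptHandle(msg.GetReceiptHandle());
            sqs.DeleteMessage(del);
        }
    }
    Aws::ShutdownAPI(options);
}
```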
If that isn't possible, one option is to use FIFO queues, but that mode has a limited message throughput and is not compatible with SNS subscriptions.
They even have a tutorial on how to create a fanout scenario using the combo SNS+SQS.
https://aws.amazon.com/getting-started/tutorials/send-fanout-event-notifications/
Too bad it does not support FIFO queues, so you have to be careful about handling out-of-order messages.
It would be nice if they offered a consistent-hashing option, so that multiple competing consumers could share the load while still respecting message order.
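For illustration only, since SQS offers nothing like this natively: the idea would be to shard one logical stream across several queues by hashing a grouping key, so each key's messages stay on one queue and remain ordered as long as that queue has a single consumer. (True consistent hashing would additionally minimize remapping when the queue count changes.)

```cpp
#include <functional>
#include <string>
#include <vector>

// Pick a queue shard for a grouping key; all messages for one key
// land on the same queue and therefore stay in order.
std::size_t pickQueue(const std::string& groupKey, std::size_t numQueues) {
    return std::hash<std::string>{}(groupKey) % numQueues;
}

int main() {
    std::vector<std::string> queueUrls = {
        "https://sqs.us-east-1.amazonaws.com/123456789012/events-0",  // assumed
        "https://sqs.us-east-1.amazonaws.com/123456789012/events-1",  // assumed
    };
    // All events for user 42 route to the same queue.
    const std::string& target = queueUrls[pickQueue("user-42", queueUrls.size())];
    (void)target;  // send to `target` with the SQS client as shown earlier
    return 0;
}
```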