I have a WSO2 ESB cluster (ESB1 and ESB2 workers) and I'm configuring a WSO2 MB cluster (MB1 and MB2 brokers) with a shared MSSQL database. The ESB servers will write and read messages from the brokers in the WSO2 MB cluster.
What I want to achieve is that ESB1 will read/write messages to broker MB1 and ESB2 will read/write messages to broker MB2. In case of a failure, for example of MB2, both ESB servers should read/write messages to MB1. In the documentation I only see a round-robin failover strategy, which as I understand it means the ESB servers will randomly connect to the MB brokers. There is a singlebroker strategy, but is that applicable in my situation, or do I have to implement my own FailoverMethod interface? I need a priority- or weight-based failover strategy, and I only see that in ActiveMQ.
Thanks for any reply.
Round-robin is NOT a random algorithm for connecting to brokers. It prioritises brokers in the order given in the broker list, from first to last. With the configurable "cycle count", "retries" and "connection delay" properties you can also minimise retries to low-priority brokers. Even though WSO2 MB doesn't have a weighted failover strategy at the moment, you can approximate that behaviour through the above configuration.
As far as I understand, in a 2-node broker cluster (your use case), prioritising a broker (a weighted failover strategy) is not really meaningful. For example, if MB1 is down the only available failover option is MB2, and vice versa. If you do not want ESB1 to connect to MB2 (while MB1 is unavailable), simply remove MB2's connection URL from the broker list in ESB1's "jndi.properties" file. Further, you need to vary "cycle count", "retries" and "connection delay" in the MB1 broker URL so that the client retries until MB1 is available again. Therefore this use case can be achieved easily with the "round-robin" strategy.
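As a sketch, a matching jndi.properties entry for ESB1 could list only MB1 and tune the retry properties. The hostname, credentials and exact property values here are assumptions; check them against your WSO2 MB version's AMQP connection URL format:

```properties
# Hypothetical connection factory entry for ESB1: only MB1 in the broker list,
# with retries/connectdelay raised so the client keeps retrying MB1.
connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientid/carbon?brokerlist='tcp://mb1.example.com:5672?retries='10'&connectdelay='100''&failover='roundrobin?cyclecount='2''
```

With MB2 absent from ESB1's broker list, the client simply keeps retrying MB1, which is the behaviour described above.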
We have a system that uses Redis pub/sub features to communicate between different parts of the system. To keep it simple, we used pub/sub channels to implement different things. On both ends (producer and consumer) we have servers containing code that I see no way to convert into Lambda functions.
We are migrating to AWS and, among other changes, we are trying to replace the use of Redis with a managed pub/sub solution. The required solution is fairly simple: a managed broker that allows publishing a message from one node and subscribing for its reception from 0 or more other nodes.
It seems impossible to achieve this with any of the available solutions:
Kinesis - It is a streaming solution for data ingestion (similar to Apache Pulsar)
SNS - From the documentation it looks like exactly what we need, until we realize that there is no way to connect a server (not a Lambda) other than via a custom HTTP endpoint.
EventBridge - Same issue as with SNS
SQS - It is a queue, not a pub/sub.
Amazon MQ / RabbitMQ - It is a queue, not a pub/sub. It is also not a SaaS solution but rather an installation on a node we own.
I see no reason to remove a feature such as subscribing from a server, which is why I was sure it would be present in one or more of the available solutions. But we went through the documentation and attempted to consume from SNS and EventBridge without success. Are we missing something? How can we achieve what we need?
Example
Assume we have an API server layer, deployed on ECS with a load balancer in front. The API has 2 endpoints: a PUT to update a document, and an SSE endpoint to listen for updates on documents.
Assuming a simple round-robin load balancer, an update for document1 may occur on node1 while a client has an ongoing SSE request for the same document on node2. This can be done with a Redis backbone: node1 publishes on the document1 topic and node2 is subscribed to the same topic. This solution is fast and efficient (in this case at-most-once delivery is perfectly acceptable).
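The pattern itself is tiny. Here is a minimal in-memory sketch of it; the Broker class is a stand-in for Redis pub/sub (at-most-once, fan-out to current subscribers), and all names are made up for illustration:

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """In-memory stand-in for Redis pub/sub: at-most-once, fan-out delivery."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: str) -> int:
        handlers = self._subs.get(topic, [])
        for h in handlers:
            h(message)
        # like Redis PUBLISH, return the number of subscribers notified
        return len(handlers)

broker = Broker()
received = []

# node2: an ongoing SSE request subscribes to updates for document1
broker.subscribe("document1", received.append)

# node1: the PUT handler publishes the update
broker.publish("document1", "document1 was updated")
```

Messages published to a topic with no subscribers are simply dropped, matching the at-most-once semantics the example calls acceptable.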
Since this is just an example, we will not consider WebSocket pub/sub APIs or other ready-made solutions for this specific use case.
Lambda
The subscriber side cannot be a Lambda. Since two distinct contexts are involved (the SSE HTTP request and the SNS event), two distinct Lambdas would fire, with no way to 'stitch' them together.
SNS + SQS
We hesitate to use SQS in conjunction with SNS, as that solution would add a lot of unneeded complexity:
The number of nodes is not known in advance and they scale, requiring an automated system to create and remove SQS queues.
Persistence is not required
Additional latency is introduced
Additional infrastructure cost
HTTP Endpoint
This is the closest thing to a programmatic subscription, but it suffers from issues similar to the SNS+SQS solution:
The number of nodes is unknown, requiring endpoint subscriptions to be added automatically.
Either we expose one endpoint for each node or configure the Load Balancer specially to route each message to the appropriate node.
Additional API endpoints must be exposed, maintained, and secured.
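On the "secured" point above: an HTTP(S) endpoint subscribed to SNS also has to handle SNS's confirmation handshake. SNS POSTs a JSON body whose Type field is SubscriptionConfirmation (containing a SubscribeURL to visit) before it delivers Notification messages. A framework-free sketch of the dispatch logic (the function name and return strings are made up for illustration):

```python
import json

def handle_sns_post(body: str) -> str:
    """Dispatch an SNS HTTP(S) delivery by its Type field."""
    msg = json.loads(body)
    msg_type = msg.get("Type")
    if msg_type == "SubscriptionConfirmation":
        # A real handler would GET msg["SubscribeURL"] to confirm the subscription.
        return "confirm:" + msg["SubscribeURL"]
    if msg_type == "Notification":
        # The payload published to the topic arrives in the Message field.
        return "notify:" + msg["Message"]
    return "ignored"
```

A production handler would additionally verify the message signature before acting on it.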
I need to notify all machines behind a load balancer when something happens.
For example, I have machines behind a load balancer which cache data, and if the data changes I want to notify the machines so they can dump their caches.
I feel as if I'm missing something; it seems I might be overcomplicating how I talk to all the machines behind my load balancer.
--
Options I've considered
SNS
The problem with this is that each individual machine would need to be publicly accessible over HTTPS.
SNS Straight to Machines
Machines would subscribe themselves, with their EC2 URL, to SNS on startup. To achieve this I'd need to either
open those machines up to HTTP from anywhere (not just the load balancer), or
create a security group which lets the SNS IP ranges into the machines over HTTPS.
This security group could be static (the IPs don't appear to have changed since ~2014, from what I can gather).
I could create a scheduled Lambda which updates this security group from the JSON file provided by AWS, if I wanted to ensure this list was always up to date.
SNS via LB with fanout
The load balancer URL would be subscribed to SNS. When a notification arrives, one of the machines would receive it.
That machine would use the AWS API to look at the auto-scaling group it belongs to, find the other machines attached to the same load balancer, and then send them the same message using their internal URLs.
SQS with fanout
Each machine would be a queue worker; one would receive the message and forward it on to the other machines in the same way as in the SNS fanout described above.
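The fanout step in either variant boils down to: the node that receives the message lists its peers and re-sends the message to each internal URL. A sketch with the AWS lookup and the HTTP send injected as plain functions, so only the wiring is shown (everything here is illustrative; in real code list_peer_urls would query the auto-scaling group and send would POST to the instance's internal URL):

```python
from typing import Callable, Iterable

def fan_out(message: str,
            list_peer_urls: Callable[[], Iterable[str]],
            send: Callable[[str, str], None],
            self_url: str) -> int:
    """Forward `message` to every peer except ourselves; return how many were notified."""
    count = 0
    for url in list_peer_urls():
        if url == self_url:
            continue  # don't echo the message back to the receiving node
        send(url, message)
        count += 1
    return count

sent = []
peers = ["http://10.0.0.1", "http://10.0.0.2", "http://10.0.0.3"]
n = fan_out("clear-cache",
            lambda: peers,
            lambda url, m: sent.append((url, m)),
            "http://10.0.0.2")
```

Injecting the two side-effecting calls keeps the forwarding logic trivially testable, which matters here because a bug would silently leave some caches stale.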
Redis PubSub
I could set up a Redis cluster which each node subscribes to and receives the updates from. This seems a costly option given the task at hand (especially given I'm operating in many regions and AZs).
Websocket MQTT Topics
Each node would subscribe to an MQTT topic and receive the updates this way. Not every region I use supports IoT Core yet, so I'd need to either host my own broker in each region or have every region connect to its nearest supported (or even a single) region. I'm not sure about the stability of this, but it seems like it might be a good option.
I suppose a 3rd party websocket service like Pusher or something could be used for this purpose.
Polling for updates
Each node contains x cached items; I would have to poll for each item individually, or build some means of determining which items have changed so they can be fetched in a bulk request.
This seems excessive though. Hypothetically, with 50 items at a polling interval of 10 seconds:
6 requests per item per minute
6 * 50 * 60 * 24 = 432,000 requests per day to some web service/Lambda etc. That just seems a bad option for this use case when most of those requests will say nothing has changed. A push/subscribe model seems better than a pull/get model.
I could also use long polling perhaps?
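The back-of-envelope polling cost above can be parameterised to compare intervals (function and variable names are made up for illustration):

```python
def requests_per_day(items: int, poll_interval_seconds: int) -> int:
    """Total polls per day if every item is polled individually."""
    polls_per_item_per_minute = 60 // poll_interval_seconds
    return polls_per_item_per_minute * items * 60 * 24

# 50 items polled every 10 seconds -> 432,000 requests/day, as in the text above
daily = requests_per_day(50, 10)
```

Stretching the interval to 60 seconds still leaves 72,000 mostly empty requests per day, which is why the push model looks better.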
Dynamodb streams
The change which would cause a cache clear is made in a global DynamoDB table (not owned by or known to this service), so I could perhaps allow read access to that table's stream in every region and listen for changes via that route. That couples the two services pretty tightly, though, which I'm not keen on.
I want to inspect the logs of an Amazon MQ broker for information about when a message was enqueued and with which parameters.
AWS provides two options, called general and audit, for logging activity from Amazon MQ brokers to CloudWatch, but neither includes log entries for enqueued messages.
The official ActiveMQ documentation describes an option called loggingBrokerPlugin, which can be set to log everything, including events when a message is enqueued or dequeued. However, Amazon MQ does not support this option in its configuration; I tried to add it, but AWS sanitizes the configuration file and removes the entry.
Is there a way around this problem?
It doesn't seem to be possible. An alternative is CloudWatch, where for a particular broker and queue/topic pair you can observe a number of metrics. Among them you can find EnqueueCount and DequeueCount (see "Monitoring Amazon MQ brokers using Amazon CloudWatch"). It is less than ideal, but it allows some reasoning about the messages going through.
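As a sketch, the CloudWatch query for those metrics could be built like this. The request is shown as a plain parameter dict you would pass to boto3's cloudwatch.get_metric_statistics; the broker and queue names are placeholders, and Amazon MQ publishes these metrics under the AWS/AmazonMQ namespace with Broker and Queue (or Topic) dimensions:

```python
from datetime import datetime, timedelta, timezone

def enqueue_count_params(broker: str, queue: str, hours: int = 1) -> dict:
    """Build get_metric_statistics kwargs for Amazon MQ's EnqueueCount metric."""
    end = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/AmazonMQ",
        "MetricName": "EnqueueCount",
        "Dimensions": [
            {"Name": "Broker", "Value": broker},
            {"Name": "Queue", "Value": queue},
        ],
        "StartTime": end - timedelta(hours=hours),
        "EndTime": end,
        "Period": 300,            # 5-minute buckets
        "Statistics": ["Sum"],
    }

# In real code:
#   boto3.client("cloudwatch").get_metric_statistics(**enqueue_count_params("my-broker", "my-queue"))
params = enqueue_count_params("my-broker", "my-queue")
```

This gives you message counts over time rather than per-message log entries, so it only partially replaces what loggingBrokerPlugin would have provided.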
I'm looking for JMS beans for monitoring the queue status of the WSO2 Traffic Manager in a cluster environment with one Traffic Manager and multiple gateways.
Our understanding is that the gateways send API request counts to the Traffic Manager as binary/Thrift messages, and at the same time they are subscribed to Traffic Manager decisions, sent as JMS topics.
Our assumption is that multiple queues are used for this bidirectional communication, and we want to measure the state of those queues: whether they are getting full, etc.
Looking at the WSO2 list of beans (curl http://localhost:9404/metrics) I saw many AMQP- and MB-related parameters, but playing with them I did not find anything meaningful.
If anyone is aware of any relevant parameters I'll be happy to give it a try and share the findings.
There is a DB called WSO2MB_DB.mv.db that stores the JMS-topic-related data in the Traffic Manager. It is located in the <APIM_HOME>/repository/database folder. It will help you find some details about the created queues and topics.
I'm using Spring JMS to communicate with Amazon SQS queues. I set up a handful of queues and wired up the listeners, but the app isn't sending any messages through them currently. AWS allows 1 million requests per month for free, which I thought would be no problem, but after a month I was billed a small amount for going over that limit.
Is there a way to tune SQS or Spring JMS to keep the requests down?
I'm assuming a request is counted whenever my app polls the queue to check for new messages. Some queues don't need to be near-realtime, so I could definitely reduce those requests. I'd appreciate any insight you can offer into how SQS and Spring JMS communicate.
"Normal" JMS clients, when polled for messages, don't poll the server - the server pushes messages to the client and the poll is just done locally.
If the SQS client polls the server, that would be unusual, to say the least, but if it's using REST calls, I can see why it would happen.
Increasing the container's receiveTimeout (default 1 second) might help, but without knowing what the client is doing under the covers, it's hard to tell.
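The suggestion above, sketched as a Spring XML bean definition (the bean names and connection factory wiring are placeholders; whether this actually reduces billable requests depends on how the SQS JMS client maps receive timeouts to ReceiveMessage calls):

```xml
<bean id="jmsContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="sqsConnectionFactory"/>
    <property name="destinationName" value="my-queue"/>
    <property name="messageListener" ref="myListener"/>
    <!-- default is 1000 ms; a larger value means each local receive blocks
         longer, so the container loops (and potentially polls) less often -->
    <property name="receiveTimeout" value="20000"/>
</bean>
```

Independently of the container setting, SQS itself supports long polling: setting the queue's ReceiveMessageWaitTimeSeconds (up to 20 seconds) makes empty ReceiveMessage calls wait for messages instead of returning immediately, which cuts down the number of requests for idle queues.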