Publishing and subscribing through a RabbitMQ server - python-2.7

I need a scenario where one node sends a message and another node waits to receive it.
After sending a message, each node turns into a listener; after receiving a message, it turns into a publisher again.

Look at this RPC example. You will have to modify it slightly to exit listening and start publishing.
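A minimal sketch of that ping-pong pattern using pika, the Python RabbitMQ client. The queue names, the localhost broker address, and the `run_node`/`next_role` helpers are illustrative assumptions, not code from the RPC example:

```python
# Hypothetical sketch: two "ping-pong" nodes over RabbitMQ using pika.
# Each node publishes to its peer's queue, then listens on its own
# queue until a message arrives, then publishes again.

def next_role(role):
    """After sending, a node becomes a listener; after receiving,
    it becomes a publisher again."""
    return "listener" if role == "publisher" else "publisher"

def run_node(my_queue, peer_queue, start_as_publisher):
    # Imported here so the role-swapping logic above can be exercised
    # without a running RabbitMQ broker.
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=my_queue)
    channel.queue_declare(queue=peer_queue)

    role = "publisher" if start_as_publisher else "listener"
    while True:
        if role == "publisher":
            channel.basic_publish(exchange="", routing_key=peer_queue,
                                  body="ping")
            role = next_role(role)
        else:
            # Poll until a message arrives, then switch back to publishing.
            method, properties, body = channel.basic_get(queue=my_queue,
                                                         auto_ack=True)
            if body is not None:
                print("received %s" % body)
                role = next_role(role)

# Node A would run: run_node("queue_a", "queue_b", start_as_publisher=True)
# Node B would run: run_node("queue_b", "queue_a", start_as_publisher=False)
```

A `basic_consume` callback with `stop_consuming()` would also work, but polling with `basic_get` makes the role switch explicit.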

Related

WSO2 Micro Integrator - Message Processors are Stuck After Do Server Stop and Start

I am running WSO2 MI 4.1 in a cluster with two nodes. After I re-enable, in the Dashboard, all of the message forwarding processors that forward messages from RabbitMQ's message store to an endpoint, each queue says it is running. When I stop the server on one node, wait a short period of time, start the same node back up, and then repeat this on the second node, the message processors look enabled and all show an enabled state. But in RabbitMQ, some of the queues are idle: if I send a message to one of these queues, the message just sits there. If I stop and start the message processor for that queue, the queue starts processing messages again. This happens with both empty queues and queues that already contain messages. Is this a bug, or is there a better way to do a system restart?
Removing the _meta_MSMP* files in the _system/governance/repository/components/org.wso2.carbon.tasks/definitions/-1234/ESB_TASK/ folder resolved this issue.

C++/TCP Server - Convenient way of event notification

I have created a simple C++ TCP server application.
A client connects and receives back, as a simple echo, everything it sends to the server. There is no purpose to this other than testing the communication.
So far so good. My next task is to decide how the client should notify the server that a specific event has occurred.
Some event examples:
Player wrote a message - the server accepts the data sent from the client, recognizes that it is a chat message, and sends data back to all connected clients announcing the new message. The clients recognize that a new message is incoming.
Player is casting spell.
Player has died.
Many more examples but you get the main idea.
I was thinking of sending all the data in JSON format, where every message contains an identifier like:
0x01 is message event.
0x02 is casting spell event.
0x03 is player dead event.
Once the identifier is sent, the server can recognize which event the client is asking about or reporting, and can apply the needed logic.
My question is: isn't there a better approach for identifying which event the server is being notified of?
I am searching for a better approach before I take this road.
You can take a look at the standard ISO 8583 message format. It is a financial message format, but every message carries a processing code that determines what action should be taken for each incoming message.
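For illustration, the identifier-plus-dispatch idea from the question can be sketched as follows (in Python for brevity, though the server is C++; the event codes match the question, while the handler names and payload fields are assumptions):

```python
import json

# Event identifiers as described in the question.
EVENT_CHAT, EVENT_SPELL, EVENT_DEATH = 0x01, 0x02, 0x03

# Hypothetical handlers; in the real server these would broadcast to
# connected clients, update game state, etc.
def on_chat(payload):  return "broadcast chat: %s" % payload["text"]
def on_spell(payload): return "spell %s cast" % payload["spell_id"]
def on_death(payload): return "player %s died" % payload["player_id"]

# Dispatch table: event code -> handler.
HANDLERS = {EVENT_CHAT: on_chat, EVENT_SPELL: on_spell, EVENT_DEATH: on_death}

def handle(raw):
    """Parse one JSON message and route it by its 'type' field."""
    msg = json.loads(raw)
    handler = HANDLERS.get(msg["type"])
    if handler is None:
        return "unknown event %r" % msg["type"]
    return handler(msg["payload"])

print(handle(json.dumps({"type": 1, "payload": {"text": "hello"}})))
# → broadcast chat: hello
```

A dispatch table like this (or a switch over an enum in C++) keeps the event-code approach manageable; the unknown-code branch matters because clients and servers can disagree on versions.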

WSO2: MQTT input event adapter is not listening

I am using WSO2 CEP 4.2.0 and have created an MQTT input event adapter. I have also created a receiver that receives data from an external topic; then, using streams, I add some logic, and the same message is published via a publisher to another external topic.
Now, when I restart the application, I get the two messages below:
INFO {org.wso2.carbon.event.input.adapter.core.internal.InputAdapterRuntime} - Connecting receiver mqttreceiver_test
INFO {org.wso2.carbon.event.input.adapter.mqtt.internal.util.MQTTAdapterListener} - MQTT Connection successful
Then, when I publish a message from an external MQTT client, I can see that the message arrives at the event receiver and, after stream processing, goes to the output event publisher.
But after approximately 5 minutes, messages are no longer received by the event receiver. I do not get any error message in the logs either; my sense is that the input adapter may no longer be listening.
Any suggestions or any guidance will help.
Thanks
A few things I could suggest to debug this issue:
Perhaps the flow is broken so that the event does not reach the output event publisher? You could use the logger event publisher [1] and log the stream that is generated by the MQTT input event adapter.
Enable debug logs for the package org.wso2.carbon.event.input.adapter.mqtt.internal.util so that you will see a log when the MQTTAdapterListener receives a message (see [2]). You can follow [3] to enable debug logs.
When the issue happens, take a thread dump and see whether the MQTTAdapterListener thread is running.
Hope these will help you to narrow down the issue.
[1] https://docs.wso2.com/display/CEP420/Logger+Event+Publisher
[2] https://github.com/wso2/carbon-analytics-common/blob/v5.1.3/components/event-receiver/event-input-adapters/org.wso2.carbon.event.input.adapter.mqtt/src/main/java/org/wso2/carbon/event/input/adapter/mqtt/internal/util/MQTTAdapterListener.java#L150
[3] https://docs.wso2.com/display/CEP420/Logging

WSO2 ESB - How to process messages one by one (in series) from a message store

I tried using the sampling processor and the scheduled processor to do this, but they work at fixed intervals and do not wait for the previous message to finish.
A forwarding message processor (class ScheduledMessageForwardingProcessor) waits for the HTTP response before dequeueing the next message in the store, provided the response is OK.
In case of an error, a 404 for example, it rolls back the JMS transaction and retries the same message again and again.
The interval in the ScheduledMessageForwardingProcessor's definition is the interval the message processor uses to dequeue the next message after a response.
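That dequeue-after-response behavior can be sketched as a toy simulation (plain Python, not WSO2 code; the function and parameter names are illustrative):

```python
from collections import deque

# Toy model of a forwarding message processor: peek at the head of the
# store, forward it, and only dequeue once the endpoint responds OK.
# On an error response, the same message is retried.
def forward_all(store, send, max_retries=3):
    delivered = []
    while store:
        message = store[0]              # peek; do not dequeue yet
        for attempt in range(max_retries):
            if send(message):           # endpoint replied with success
                store.popleft()         # dequeue only after success
                delivered.append(message)
                break
        else:
            # Repeated failures (e.g. 404s): stop here. A real processor
            # would deactivate or keep blocking on the same message.
            return delivered
    return delivered

# Example: the second message fails once, then succeeds on retry.
failures = {"m2": 1}
def flaky_send(msg):
    if failures.get(msg, 0) > 0:
        failures[msg] -= 1
        return False
    return True

print(forward_all(deque(["m1", "m2", "m3"]), flaky_send))
# → ['m1', 'm2', 'm3']
```

The key property is strict serialization: "m3" is never attempted until "m2" has been delivered, which is exactly why a failing message blocks the whole store.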

Amazon Message Queue Service (SQS) sending and confirmation

Scenario:
An Elastic Beanstalk environment has a few web servers which send requests to a worker environment for tasks such as user sign-up.
Problem:
When one of the worker machines has finished a task, I also want it to send a confirmation back to the web server.
It seems that SQS does not have a "confirmation" message.
For example, we offload a job to send an email, but I also want the web server to know that sending the email was successful.
One solution would be to implement another queue that the web servers poll. However, many servers poll the same queue, so the confirmation intended for Server 1 could be received by Server 2; we would then have to wait for the message's visibility timeout to expire, at which point Server 3 might intercept it. It could take a long while for Server 1 to actually get its confirmation.
The way your worker machines "confirm" they processed a message is by deleting it from the queue. The lifecycle of a queue message is:
Web server sends a message with a request.
Worker receives the message.
Worker processes the message successfully.
Worker deletes the message.
A message being deleted is the confirmation that it was processed successfully.
If the worker does not delete the message in step 4 then the message will reappear in the queue after a specified timeout, and the next worker to check for messages will get it and process it.
If you are concerned that a message might be impossible to process and keep reappearing in the queue forever, you can set up a "dead letter" queue. After a message has been received a specified number of times but never deleted, the message is transferred to the dead letter queue, where you can have a process for dealing with these unusual situations.
Amazon's Simple Queue Service Developer Guide has a good, clear explanation of this lifecycle and how you can be sure that every message is processed or moved to a dead letter queue for special handling.
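The lifecycle above can be sketched with a toy in-memory model (this is not the real SQS or boto API; class and method names are illustrative, and the visibility timeout is simplified to "reappears on the next receive"):

```python
# Toy model of the SQS message lifecycle: receiving a message does not
# remove it; only an explicit delete confirms processing. A message
# received too many times without deletion moves to a dead-letter queue.
class ToyQueue:
    def __init__(self, max_receive_count=3):
        self.messages = []          # list of [body, receive_count]
        self.dead_letter = []
        self.max_receive_count = max_receive_count

    def send(self, body):
        self.messages.append([body, 0])

    def receive(self):
        while self.messages:
            entry = self.messages[0]
            entry[1] += 1
            if entry[1] > self.max_receive_count:
                # Never deleted after repeated receives: dead-letter it.
                self.dead_letter.append(self.messages.pop(0)[0])
                continue
            return entry[0]
        return None

    def delete(self, body):
        # Deleting the message is the "confirmation" it was processed.
        self.messages = [m for m in self.messages if m[0] != body]

q = ToyQueue(max_receive_count=2)
q.send("signup-email")
q.receive()   # worker 1 gets it but crashes before deleting
q.receive()   # message reappears; worker 2 gets it, crashes too
print(q.receive(), q.dead_letter)
# → None ['signup-email']
```

In real SQS the "reappears" step is governed by the visibility timeout and the dead-letter transfer by the queue's redrive policy, but the contract is the same: delete means processed, anything else eventually comes back.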