In a ZMQ proxy, we have two types of sockets, DEALER and ROUTER. I've also tried to use the capture socket, but it didn't do what I was looking for.
I'm looking for a way to log the messages my proxy server receives.
Q: a way to log the messages my proxy server receives.
The simplest way is to use the logging that API v4+ directly supports via a man-in-the-middle "capture" socket:
int zmq_proxy (const void *frontend, const void *backend, const void *capture);
// frontend ... [ROUTER]
// backend .... [DEALER]
// capture .... [ ? ] ( any of the socket types listed below )
Where the capture socket ought to be any one of { ZMQ_PUB | ZMQ_DEALER | ZMQ_PUSH | ZMQ_PAIR }.
If the capture socket is not NULL, the proxy shall send all messages, received on both frontend and backend, to the capture socket.
If this ZeroMQ API-granted capability does not meet your expectations, feel free to express those expectations in as much detail as needed. You can then either implement an "external" filtering of the capture-socket payload (based on message content or on socket_monitor()), or design a brand new, user-defined logging proxy, where the features you need are implemented in your own application-specific code, re-using just the clean and plain ZeroMQ API for the DEALER-inbound/outbound-ROUTER message passing and the log-filtering/processing logic.
There is no other way I can imagine to solve the task.
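As a minimal sketch of the capture wiring in Python (pyzmq assumed; the endpoint URLs and the PUSH choice are illustrative, not taken from the question):
import zmq

ctx = zmq.Context()
frontend = ctx.socket(zmq.ROUTER)
frontend.bind("tcp://*:5570")          # clients connect here
backend = ctx.socket(zmq.DEALER)
backend.bind("tcp://*:5571")           # workers connect here
capture = ctx.socket(zmq.PUSH)         # any of PUB | DEALER | PUSH | PAIR
capture.bind("inproc://capture")       # start a PULL logger thread on this endpoint before traffic flows
zmq.proxy(frontend, backend, capture)  # blocks; copies every frame it relays to capture
A PULL socket connected to inproc://capture then simply recv()s a copy of every message for logging.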
It also works with a pair of PAIR sockets: as soon as one end of the pair is connected as the capture socket, messages are sent both to the capture socket AND to the other side of the proxy.
http://zguide.zeromq.org/page:all#ZeroMQ-s-Built-In-Proxy-Function
and
http://api.zeromq.org/3-2:zmq-proxy
and
http://zguide.zeromq.org/page:all#Pub-Sub-Tracing-Espresso-Pattern
helped me.
This Python code demonstrates it:
import zmq, threading, time

def peer_run(ctx):
    """ this is the run method of the PAIR thread that logs the messages
        going through the broker """
    sock = ctx.socket(zmq.PAIR)
    sock.connect("inproc://peer")  # connect to the caller
    sock.send(b"")  # signal the caller that we are ready
    while True:
        try:
            topic = sock.recv_string()
            obj = sock.recv_pyobj()
        except Exception:
            topic = None
            obj = sock.recv()
        print(f"\n !!! peer_run captured message with topic {topic}, obj {obj}. !!!\n")

def proxyrun():
    """ zmq broker run method in a separate thread because zmq.proxy blocks """
    xpub = ctx.socket(zmq.XPUB)
    xpub.bind(xpub_url)
    xsub = ctx.socket(zmq.XSUB)
    xsub.bind(xsub_url)
    zmq.proxy(xpub, xsub, cap)

def pubrun():
    """ publisher run method in a separate thread, publishes 5 messages with topic 'Hello' """
    socket = ctx.socket(zmq.PUB)
    socket.connect(xsub_url)
    for i in range(5):
        socket.send_string(f"Hello {i}", zmq.SNDMORE)
        socket.send_pyobj({'a': 123})
        time.sleep(0.01)

ctx = zmq.Context()
xpub_url = "ipc://xpub"
xsub_url = "ipc://xsub"
# xpub_url = "tcp://127.0.0.1:5567"
# xsub_url = "tcp://127.0.0.1:5568"

# set up the capture socket pair
cap = ctx.socket(zmq.PAIR)
cap.bind("inproc://peer")
cap_th = threading.Thread(target=peer_run, args=(ctx,), daemon=True)
cap_th.start()
cap.recv()  # wait for signal from the peer thread
print("cap received message from peer, proceeding.")

# start the proxy
th_proxy = threading.Thread(target=proxyrun, daemon=True)
th_proxy.start()

# create a REQ/REP socket just to prove that pub/sub can run alongside it
zmq_rep_sock = ctx.socket(zmq.REP)
zmq_rep_sock.bind("ipc://ghi")

# create a SUB socket and connect it to the proxy's pub socket
zmq_sub_sock = ctx.socket(zmq.SUB)
zmq_sub_sock.connect(xpub_url)
zmq_sub_sock.setsockopt(zmq.SUBSCRIBE, b"Hello")

# create the poller
poller = zmq.Poller()
poller.register(zmq_rep_sock, zmq.POLLIN)
poller.register(zmq_sub_sock, zmq.POLLIN)

# create the publisher thread and start it
th_pub = threading.Thread(target=pubrun, daemon=True)
th_pub.start()

# receive the publisher's messages ordinarily
while True:
    events = dict(poller.poll())
    print(f"received events: {events}")
    if zmq_rep_sock in events:
        message = zmq_rep_sock.recv_pyobj()
        print(f"received zmq_rep_sock {message}")
    elif zmq_sub_sock in events:
        topic = zmq_sub_sock.recv_string()
        message = zmq_sub_sock.recv_pyobj()
        print(f"received zmq_sub_sock {topic} , {message}")
Output:
cap received message from peer, proceeding.
!!! peer_run captured message with topic None, obj b'\x80\x03}q\x00X\x01\x00\x00\x00aq\x01K{s.'. !!!
received events: {<zmq.sugar.socket.Socket object at 0x76310f70>: 1}
received zmq_sub_sock Hello 1 , {'a': 123}
!!! peer_run captured message with topic Hello 2, obj {'a': 123}. !!!
received events: {<zmq.sugar.socket.Socket object at 0x76310f70>: 1}
received zmq_sub_sock Hello 2 , {'a': 123}
!!! peer_run captured message with topic Hello 3, obj {'a': 123}. !!!
received events: {<zmq.sugar.socket.Socket object at 0x76310f70>: 1}
received zmq_sub_sock Hello 3 , {'a': 123}
!!! peer_run captured message with topic Hello 4, obj {'a': 123}. !!!
received events: {<zmq.sugar.socket.Socket object at 0x76310f70>: 1}
received zmq_sub_sock Hello 4 , {'a': 123}
Be aware of the slow joiner problem, hence the sleep call in the publisher.
I use Django Channels 3.0.0 and WebSockets with Angular.
Users connect to the Django WebSocket and each of them is in their own room.
When I want to send an event to connected users from outside of the consumers, I use the following code:
all_users = list(Member.objects.filter(is_active=True).values_list('id', flat=True))
for user_id in all_users:
    async_to_sync(channel_layer.group_send)(
        "chat_{}".format(user_id),
        {"type": "tweet_send", "message": "tweet created"}
    )
And in consumers.py, my consumer class's tweet_send handler:
async def tweet_send(self, event):
    content = event['message']
    # Send message to WebSocket
    await self.send(text_data=json.dumps({
        "type": "MESSAGE",
        "data": content
    }))
And this "self.send" function is meant to be sent to all connected users respectively, but when I run the code, the all data are sent to only one user who has connected the last time.
I don't know why. If anyone knows the reason, please help me.
This is indeed a bug in version 3.0.0. It was reported and fixed in 3.0.1.
I'm new to RabbitMQ, and I'm trying to build an application with three roles: two producers and one consumer. The consumer is attached to two queues, one per producer. Each producer sends messages to its queue at a different frequency. What I need is for the consumer to read alternately from the two producers.
For example:
Producer 1: Send "Hello" every 2 seconds
Producer 2: Send "World" every 5 seconds
Consumer: Print whatever it receives
So the consumer is expected to print:
hello world hello world hello world ...
Since producer 1 sends messages more frequently than producer 2, after the consumer has read from producer 1 it needs to wait a little for the arrival of the next message from producer 2 (that's the problem).
I tried declaring two queues for the producers and linking them to the consumer, but the consumer only prints something like:
hello hello world hello hello world
Thanks for the help!
Update: Here's my code
Producer 1:
import pika
import sys

message = 'hello'
credentials = pika.PlainCredentials('xxxx', 'xxxx')
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', 5672, '/', credentials))
channel = connection.channel()
channel.queue_declare(queue='hello')
while True:
    channel.basic_publish(exchange='', routing_key='hello', body=message)
    print('Sent message: {}'.format(message))
    connection.sleep(2)
connection.close()
Producer 2:
import pika
import sys

message = 'world'
credentials = pika.PlainCredentials('xxxx', 'xxxx')
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', 5672, '/', credentials))
channel = connection.channel()
channel.queue_declare(queue='world')
while True:
    channel.basic_publish(exchange='', routing_key='world', body=message)
    print('Sent message: {}'.format(message))
    connection.sleep(4)
connection.close()
Consumer 1:
import pika

def callback(ch, method, properties, body):
    print('Receive: {}'.format(body))

credentials = pika.PlainCredentials('xxxx', 'xxxx')
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', 5672, '/', credentials))
channel = connection.channel()
channel.basic_qos(prefetch_count=1)
channel.queue_declare(queue='hello')
channel.queue_declare(queue='world')
channel.basic_consume(on_message_callback=callback, queue='hello', auto_ack=True)
channel.basic_consume(on_message_callback=callback, queue='world', auto_ack=True)
print('Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Since a consumer can only consume from a single queue, you will have to make sure that all messages are routed to this one queue.
It is then up to the consumer to handle the messages. It would have to use the polling API to get a single message at a time. Depending on which producer published each message, the consumer would have to act differently. It could keep a local store of messages coming from producer 1 that arrived before a message coming from producer 2 has been acted upon. The consumer would delay acting on messages in this store until a message coming from producer 2 has been handled. Only then would it take the first message from the store and act on it.
Edit:
In the code you've added to your question, you have a single channel (that's good) but two consumers, one for each call to channel.basic_consume. Both consumers use the same callback method, callback. It is this method that would have to implement the logic described above.
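A minimal sketch of that buffering logic, reusing the asker's queue names, credential placeholders, and auto_ack setup (the hold-'hello'-until-a-'world'-arrives policy is just one illustrative choice):
import collections
import pika

hello_buffer = collections.deque()  # 'hello' messages waiting their turn

def callback(ch, method, properties, body):
    if method.routing_key == 'hello':
        # hold 'hello' messages until the slower producer catches up
        hello_buffer.append(body)
        return
    # a 'world' message arrived: emit one buffered 'hello', then the 'world'
    if hello_buffer:
        print('Receive: {}'.format(hello_buffer.popleft()))
    print('Receive: {}'.format(body))

credentials = pika.PlainCredentials('xxxx', 'xxxx')
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', 5672, '/', credentials))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.queue_declare(queue='world')
channel.basic_consume(on_message_callback=callback, queue='hello', auto_ack=True)
channel.basic_consume(on_message_callback=callback, queue='world', auto_ack=True)
channel.start_consuming()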
I am trying to send a message through a direct exchange. I have not declared a queue, as per the official tutorial. Below is my code:
import sys
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))  # connect to AMQP

def setup():
    channel = connection.channel()
    channel.exchange_declare(exchange='direct_logs', type='direct')
    return channel

def log_emitter(message, severity):
    channel = setup()
    channel.basic_publish(exchange='direct_logs',
                          routing_key=severity,
                          body=message)

def logger():
    severity = sys.argv[1] if len(sys.argv) > 1 else 'info'
    print(severity)
    message = ' '.join(sys.argv[2:]) or "Hello World!"
    log_emitter(message, severity)
    print(" [x] Sent %r:%r" % (severity, message))
    connection.close()

logger()
I am executing it with:
python direct_log_publisher.py info "Info testing"
It creates the direct_logs exchange, but I cannot see any "info" queue created in the admin console. As per my understanding, no queue binding is required on the publisher side.
Thanks in advance.
Why would you want the queue to be created automatically? Since you don't bind a queue anywhere, the message is basically dropped (it has nowhere to go). Either your consumer or your producer has to declare a queue and bind it to the exchange (depending on what makes sense for your setup).
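A minimal consumer-side sketch of that declare-and-bind step, assuming pika 1.x (the server-named exclusive queue mirrors the official tutorial and is an illustrative choice):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='direct_logs', exchange_type='direct')

# declare a server-named queue and bind it for the 'info' routing key,
# so published messages have somewhere to go
result = channel.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue
channel.queue_bind(exchange='direct_logs', queue=queue_name, routing_key='info')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
channel.start_consuming()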
I'm using Akka 2.2.3 and developing a simple TCP server application.
The workflow is:
1. client connects to server
2. server accepts connection,
3. server sends to client message "Hello!"
On the page http://doc.akka.io/docs/akka/2.2.3/scala/io-tcp.html I can see how to send a response message to a request. But how can I send a message before any data has been received?
How can I send a message to a client without first receiving an init.Event?
Code from the documentation page:
class AkkaSslHandler(init: Init[WithinActorContext, String, String])
    extends Actor with ActorLogging {
  def receive = {
    case init.Event(data) ⇒
      val input = data.dropRight(1)
      log.debug("akka-io Server received {} from {}", input, sender)
      val response = serverResponse(input)
      sender ! init.Command(response)
      log.debug("akka-io Server sent: {}", response.dropRight(1))
    case _: Tcp.ConnectionClosed ⇒ context.stop(self)
  }
}
You use the Init for creating the TcpPipelineHandler as well, and you can of course always send commands to that actor, without waiting for an event first. For this you will need to pass its ActorRef to your handler actor besides the Init, and then send it an init.Command whenever you want to push data to the client.
So I was reading this article on how to create a proxy/broker for (X)PUB/(X)SUB messaging in ZMQ, which has a nice picture of what the architecture should look like.
But when I look at the XSUB socket description, I do not get how to forward all subscriptions via it, given that its outgoing routing strategy is N/A.
So how shall one implement (un)subscription forwarding in ZeroMQ, and what is the minimal user code for such a forwarding application (one that can be inserted between the simple Publisher and Subscriber samples)?
XPUB does receive messages - the only messages it receives are subscriptions from connected subscribers, and these messages should be forwarded upstream as-is via XSUB.
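For concreteness: a subscription as seen on the XPUB side is a single frame whose first byte is 1 for subscribe or 0 for unsubscribe, followed by the topic. A tiny sketch of inspecting and relaying one such frame (assuming xpub/xsub sockets set up as in the snippets below):
frame = xpub.recv()          # e.g. b'\x01Hello' when a subscriber subscribes to "Hello"
subscribe = (frame[0] == 1)  # first byte: 1 = subscribe, 0 = unsubscribe
topic = frame[1:]            # the topic being (un)subscribed
xsub.send(frame)             # forward upstream unchanged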
The very simplest way to relay messages is with zmq_proxy:
xpub = ctx.socket(zmq.XPUB)
xpub.bind(xpub_url)
xsub = ctx.socket(zmq.XSUB)
xsub.bind(xsub_url)
pub = ctx.socket(zmq.PUB)
pub.bind(pub_url)
zmq.proxy(xpub, xsub, pub)
which will relay messages to/from xpub and xsub. Optionally, you can add a PUB socket to monitor the traffic that passes through in either direction.
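A sketch of such a monitor, subscribing to everything the capture PUB socket publishes (pub_url and ctx as in the snippet above):
mon = ctx.socket(zmq.SUB)
mon.connect(pub_url)
mon.setsockopt(zmq.SUBSCRIBE, b"")  # empty prefix: receive all captured traffic
while True:
    print("captured:", mon.recv_multipart())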
If you want user code in the middle to implement extra routing logic, you would do something like the following, which re-implements the inner loop of zmq_proxy:
def broker(ctx):
    xpub = ctx.socket(zmq.XPUB)
    xpub.bind(xpub_url)
    xsub = ctx.socket(zmq.XSUB)
    xsub.bind(xsub_url)
    poller = zmq.Poller()
    poller.register(xpub, zmq.POLLIN)
    poller.register(xsub, zmq.POLLIN)
    while True:
        events = dict(poller.poll(1000))
        if xpub in events:
            message = xpub.recv_multipart()
            print("[BROKER] subscription message: %r" % message[0])
            xsub.send_multipart(message)
        if xsub in events:
            message = xsub.recv_multipart()
            # print("publishing message: %r" % message)
            xpub.send_multipart(message)
            # insert user code here
For a full working (Python) example, see the Espresso pattern in the zguide (linked earlier on this page).