RabbitMQ direct exchange queue not created - python-2.7

I am trying to send a message through a direct exchange. I have not declared the queue, as mentioned in the official page tutorial. Below is my code:
import sys
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))  # connect to AMQP

def setup():
    channel = connection.channel()
    channel.exchange_declare(exchange='direct_logs', type='direct')
    return channel

def log_emitter(message, severity):
    channel = setup()
    channel.basic_publish(exchange='direct_logs',
                          routing_key=severity,
                          body=message)

def logger():
    severity = sys.argv[1] if len(sys.argv) > 2 else 'info'
    print severity
    exit = 'N'
    message = ' '.join(sys.argv[2:]) or "Hello World!"
    log_emitter(message, severity)
    print(" [x] Sent %r:%r" % (severity, message))
    connection.close()

logger()
I am executing with
python direct_log_publisher.py info "Info testing"
It creates the direct_logs exchange, but I cannot see any "info" queue created in the admin console. As per my understanding, no queue binding is required on the publisher side.
Thanks in advance.

Why would you want the queue to be created automatically? Since you don't bind a queue anywhere, the message is basically dropped (because it has nowhere to go). Either your consumer or your producer has to declare the queue and bind it to the exchange (depending on what makes sense to you).
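For example, a minimal consumer-side sketch (the temporary exclusive queue follows the official tutorial; recent pika versions use the exchange_type keyword, while older releases used type as in the question):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# must match the publisher's declaration of the exchange
channel.exchange_declare(exchange='direct_logs', exchange_type='direct')

# declare a queue and bind it, so messages routed with 'info' have somewhere to go
result = channel.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue
channel.queue_bind(exchange='direct_logs', queue=queue_name, routing_key='info')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
channel.start_consuming()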

Related

AWS SQS python sqs.receive_message returning only one message output

I am using the Amazon SQS Python SDK (boto3) to see how many messages there are in the queue for a given queue URL. In the Amazon GUI console I can see there are 3 messages in the queue for that queue URL. However, I do not get more than 1 message as output every time I run the command. Below is my code:
import boto3
import json
from botocore.exceptions import ClientError

def GetSecretKeyAndAccesskey():
    # code to pull secret key and access key
    return (aws_access_key, aws_secret_key)

# Create SQS client
aws_access_key_id, aws_secret_access_key = GetSecretKeyAndAccesskey()
sqs = boto3.client('sqs',
                   aws_access_key_id=str(aws_access_key_id),
                   aws_secret_access_key=str(aws_secret_access_key),
                   region_name='eu-west-1')

response = sqs.receive_message(
    QueueUrl='my_queue_url',
    AttributeNames=[
        'All',
    ],
    MaxNumberOfMessages=10,
)

print(response["Messages"][0])
Every time I run the code I get a different message ID, and if I change my print statement to check for the next list element I get a "list index out of range" error, meaning that there is only one message:
print(response["Messages"][1])
C:\>python testing.py
d4e57e1d-db62-4fc5-8233-c5576cb2603d
C:\>python testing.py
857858e9-55dc-4d23-aead-3c6622feccc5
First, you need to add "WaitTimeSeconds" to turn on long polling and collect more messages during a single connection.
The other issue is that if you only put 3 messages on the queue, they get spread across backend systems as part of the redundancy of the AWS SQS service. So when you call SQS, it connects you to one of those systems and delivers only the messages available there. If you increase the total number of messages, you'll get more messages per request.
I wrote this code to demonstrate the functionality of SQS and to let you play around with the concept and test it.
import json

import boto3

session = boto3.Session(region_name="us-east-2", profile_name="dev")
sqs = session.client('sqs')

def get_message():
    response = sqs.receive_message(QueueUrl='test-queue', MaxNumberOfMessages=10, WaitTimeSeconds=10)
    return len(response["Messages"])

def put_messages(seed):
    for message_number in range(seed):
        body = {"test": "message {}".format(message_number)}
        sqs.send_message(QueueUrl='test-queue', MessageBody=json.dumps(body))

if __name__ == '__main__':
    put_messages(2)
    print(get_message())
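To actually see every message that is currently visible, one approach is to keep calling receive_message until an empty response comes back, deleting each message as you go so it is not redelivered after its visibility timeout. A sketch under the same assumptions as above ('test-queue' again stands in for the real queue URL):

def drain_queue():
    """Poll until SQS stops returning messages; return how many were seen."""
    total = 0
    while True:
        response = sqs.receive_message(QueueUrl='test-queue',
                                       MaxNumberOfMessages=10,
                                       WaitTimeSeconds=10)
        messages = response.get("Messages", [])
        if not messages:
            break
        total += len(messages)
        for message in messages:
            # delete so the message is not delivered again later
            sqs.delete_message(QueueUrl='test-queue',
                               ReceiptHandle=message["ReceiptHandle"])
    return total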

Flask - Need to return immediate "server busy" reply when processing current request

Currently my Flask app only processes one request at a time. Any request has to wait for the previous request to finish before being processed, which is not a good user experience.
While I do not want to increase the number of requests the Flask app can process at one time, how can I return a "Server busy" message immediately when the next request comes in before the previous request finishes?
I have tried the threading approach below, but I only get both the "Server busy message" and the "Proper return message" after 10 seconds.
import time
import threading
from contextlib import ExitStack

busy = threading.Lock()

@app.route("/hello")  # app is the existing Flask application object
def hello():
    if busy.acquire(timeout=1):
        return 'Server busy message'
    with ExitStack() as stack:
        stack.callback(busy.release)
        # simulate heavy processing
        time.sleep(10)
        return "Proper return message"

How to Log Received Messages in a ZMQ Proxy?

In a ZMQ proxy, we have two types of sockets: DEALER and ROUTER. I've also tried to use the capture socket, but it didn't do exactly what I was looking for.
I'm looking for a way to log what message my proxy server receives.
Q : a way to log what message my proxy server receives.
The simplest way is to make use of the logging directly supported by the API v4+ via a ManInTheMiddle-"capture" socket:
// [ROUTER]--------------------------------------+++++++
// |||||||
// [DEALER]---------------*vvvvvvvv *vvvvvvv
int zmq_proxy (const void *frontend, const void *backend, const void *capture);
// [?]---------------------------------------------------------------*^^^^^^^
where the capture socket ought to be one of { ZMQ_PUB | ZMQ_DEALER | ZMQ_PUSH | ZMQ_PAIR }.
If the capture socket is not NULL, the proxy shall send all messages, received on both frontend and backend, to the capture socket.
If this ZeroMQ-API-granted capture does not meet your expectations, feel free to express your expectations in as sufficiently detailed a manner as needed. You can then either implement an "external" capture-socket filtering, based on { message-content | socket_monitor() }, or design a brand new, user-defined logging-proxy, where your expressed features get implemented in your application-specific code, re-using the clean and plain ZeroMQ API for all the DEALER-inbound/outbound-ROUTER message-passing and the log-filtering/processing logic.
There is no other way I can imagine to solve the task.
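In pyzmq terms, a minimal sketch of such a capture setup might look like this (the addresses and the PUB-type capture socket are illustrative choices):

import zmq

ctx = zmq.Context()

frontend = ctx.socket(zmq.ROUTER)
frontend.bind("tcp://*:5559")

backend = ctx.socket(zmq.DEALER)
backend.bind("tcp://*:5560")

# every message forwarded in either direction is copied to this socket
capture = ctx.socket(zmq.PUB)
capture.bind("inproc://capture")

# a logger would connect a SUB socket to inproc://capture, subscribe to b"",
# and print or persist whatever frames it receives
zmq.proxy(frontend, backend, capture)  # blocks forever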
It also works with a pair of PAIR sockets. As soon as one end of the pair is connected as the capture socket, messages are sent both to the capture socket and to the other end of the proxy.
http://zguide.zeromq.org/page:all#ZeroMQ-s-Built-In-Proxy-Function
and
http://api.zeromq.org/3-2:zmq-proxy
and
http://zguide.zeromq.org/page:all#Pub-Sub-Tracing-Espresso-Pattern
helped me.
This Python code demonstrates it:
import zmq, threading, time

def peer_run(ctx):
    """ this is the run method of the PAIR thread that logs the messages
    going through the broker """
    sock = ctx.socket(zmq.PAIR)
    sock.connect("inproc://peer")  # connect to the caller
    sock.send(b"")  # signal the caller that we are ready
    while True:
        try:
            topic = sock.recv_string()
            obj = sock.recv_pyobj()
        except Exception:
            topic = None
            obj = sock.recv()
        print(f"\n !!! peer_run captured message with topic {topic}, obj {obj}. !!!\n")

def proxyrun():
    """ zmq broker run method in a separate thread because zmq.proxy blocks """
    xpub = ctx.socket(zmq.XPUB)
    xpub.bind(xpub_url)
    xsub = ctx.socket(zmq.XSUB)
    xsub.bind(xsub_url)
    zmq.proxy(xpub, xsub, cap)

def pubrun():
    """ publisher run method in a separate thread, publishes 5 messages with topic 'Hello' """
    socket = ctx.socket(zmq.PUB)
    socket.connect(xsub_url)
    for i in range(5):
        socket.send_string(f"Hello {i}", zmq.SNDMORE)
        socket.send_pyobj({'a': 123})
        time.sleep(0.01)

ctx = zmq.Context()
xpub_url = "ipc://xpub"
xsub_url = "ipc://xsub"
#xpub_url = "tcp://127.0.0.1:5567"
#xsub_url = "tcp://127.0.0.1:5568"

# set up the capture socket pair
cap = ctx.socket(zmq.PAIR)
cap.bind("inproc://peer")
cap_th = threading.Thread(target=peer_run, args=(ctx,), daemon=True)
cap_th.start()
cap.recv()  # wait for signal from peer thread
print("cap received message from peer, proceeding.")

# start the proxy
th_proxy = threading.Thread(target=proxyrun, daemon=True)
th_proxy.start()

# create req/rep socket just to prove that pub/sub can run alongside it
zmq_rep_sock = ctx.socket(zmq.REP)
zmq_rep_sock.bind("ipc://ghi")

# create sub socket and connect it to the proxy's pub socket
zmq_sub_sock = ctx.socket(zmq.SUB)
zmq_sub_sock.connect(xpub_url)
zmq_sub_sock.setsockopt(zmq.SUBSCRIBE, b"Hello")

# create the poller
poller = zmq.Poller()
poller.register(zmq_rep_sock, zmq.POLLIN)
poller.register(zmq_sub_sock, zmq.POLLIN)

# create publisher thread and start it
th_pub = threading.Thread(target=pubrun, daemon=True)
th_pub.start()

# receive publisher's messages ordinarily
while True:
    events = dict(poller.poll())
    print(f"received events: {events}")
    if zmq_rep_sock in events:
        message = zmq_rep_sock.recv_pyobj()
        print(f"received zmq_rep_sock {message}")
    elif zmq_sub_sock in events:
        topic = zmq_sub_sock.recv_string()
        message = zmq_sub_sock.recv_pyobj()
        print(f"received zmq_sub_sock {topic} , {message}")
Output:
cap received message from peer, proceeding.
!!! peer_run captured message with topic None, obj b'\x80\x03}q\x00X\x01\x00\x00\x00aq\x01K{s.'. !!!
received events: {<zmq.sugar.socket.Socket object at 0x76310f70>: 1}
received zmq_sub_sock Hello 1 , {'a': 123}
!!! peer_run captured message with topic Hello 2, obj {'a': 123}. !!!
received events: {<zmq.sugar.socket.Socket object at 0x76310f70>: 1}
received zmq_sub_sock Hello 2 , {'a': 123}
!!! peer_run captured message with topic Hello 3, obj {'a': 123}. !!!
received events: {<zmq.sugar.socket.Socket object at 0x76310f70>: 1}
received zmq_sub_sock Hello 3 , {'a': 123}
!!! peer_run captured message with topic Hello 4, obj {'a': 123}. !!!
received events: {<zmq.sugar.socket.Socket object at 0x76310f70>: 1}
received zmq_sub_sock Hello 4 , {'a': 123}
Be aware of the slow-joiner problem, hence the sleep call in the publisher; it is presumably why "Hello 0" never appears in the output above.

Single consumer reading alternately from multiple queues

I'm new to RabbitMQ, and I'm trying to build an application with three roles: two producers and one consumer. The consumer is connected to two queues, each fed by one of the producers. Each producer sends messages to its queue at a different frequency. What I need is for the consumer to read alternately from the two producers.
For example:
Producer 1: Send "Hello" every 2 seconds
Producer 2: Send "World" every 5 seconds
Consumer: Print whatever it receives
So the consumer is expected to print:
hello world hello world hello world ...
Since producer 1 sends messages more frequently than producer 2, after the consumer has read from producer 1 it needs to wait a little for the arrival of the message from producer 2 (that's the problem).
I tried declaring two queues for the producers and linking them to the consumer, but the consumer only prints something like:
hello hello world hello hello world
Thanks for the help!
Update: Here's my code
Producer 1:
import pika
import sys

message = 'hello'
credentials = pika.PlainCredentials('xxxx', 'xxxx')
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', 5672, '/', credentials))
channel = connection.channel()
channel.queue_declare(queue='hello')

while True:
    channel.basic_publish(exchange='', routing_key='hello', body=message)
    print('Sent message: {}'.format(message))
    connection.sleep(2)

connection.close()
Producer 2:
import pika
import sys

message = 'world'
credentials = pika.PlainCredentials('xxxx', 'xxxx')
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', 5672, '/', credentials))
channel = connection.channel()
channel.queue_declare(queue='world')

while True:
    channel.basic_publish(exchange='', routing_key='world', body=message)
    print('Sent message: {}'.format(message))
    connection.sleep(4)

connection.close()
Consumer 1:
import pika

def callback(ch, method, properties, body):
    print('Receive: {}'.format(body))

credentials = pika.PlainCredentials('xxxx', 'xxxx')
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', 5672, '/', credentials))
channel = connection.channel()
channel.basic_qos(prefetch_count=1)
channel.queue_declare(queue='hello')
channel.queue_declare(queue='world')
channel.basic_consume(on_message_callback=callback, queue='hello', auto_ack=True)
channel.basic_consume(on_message_callback=callback, queue='world', auto_ack=True)
print('Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Since a consumer can only consume from a single queue, you will have to make sure that all messages are routed to this queue.
It is then up to the consumer to handle the messages. It would have to use the polling API to get single messages. Depending on which producer published each message, the consumer would have to act differently. It could keep a local store of messages coming from producer 1 that arrived before a message from producer 2 has been acted upon. The consumer would delay acting on messages in this store until a message from producer 2 has been acted upon; only then would it take the first message from the store and act on it.
Edit:
In the code you've added to your question, you have a single channel (that's good) but two consumers, one for each call to channel.basic_consume. Both consumers use the same callback method, callback. It is this method that would have to implement the logic described above.
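A rough sketch of that buffering logic inside callback, keeping the two-queue setup from the question (the deque and the flag are illustrative; with auto_ack=True, any buffered messages are lost if the consumer dies):

from collections import deque

pending_hello = deque()   # 'hello' messages held back for their turn
awaiting_world = False    # True once a 'hello' has been printed

def callback(ch, method, properties, body):
    global awaiting_world
    if method.routing_key == 'hello':
        if awaiting_world:
            # a 'hello' was already printed; hold this one until a 'world' arrives
            pending_hello.append(body)
            return
        print('Receive: {}'.format(body))
        awaiting_world = True
    else:  # routed via the 'world' queue
        print('Receive: {}'.format(body))
        awaiting_world = False
        if pending_hello:
            # release exactly one buffered 'hello' to keep the alternation going
            print('Receive: {}'.format(pending_hello.popleft()))
            awaiting_world = True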

How to implement proxy/broker for (X)PUB/(X)SUB messaging in ZMQ?

So I was reading this article on how to create a proxy/broker for (X)PUB/(X)SUB messaging in ZMQ. There is a nice picture there of what the architecture should look like.
But when I look at the XSUB socket description, I do not understand how to forward all subscriptions through it, given that its outgoing routing strategy is N/A.
So how shall one implement (un)subscription forwarding in ZeroMQ, and what is the minimal user code for such a forwarding application (one that can be inserted between the simple Publisher and Subscriber samples)?
XPUB does receive messages: the only messages it receives are subscriptions from connected subscribers, and these messages should be forwarded upstream as-is via XSUB.
The very simplest way to relay messages is with zmq_proxy:
xpub = ctx.socket(zmq.XPUB)
xpub.bind(xpub_url)
xsub = ctx.socket(zmq.XSUB)
xsub.bind(xsub_url)
pub = ctx.socket(zmq.PUB)
pub.bind(pub_url)
zmq.proxy(xpub, xsub, pub)
which will relay messages to/from xpub and xsub. Optionally, you can add a PUB socket to monitor the traffic that passes through in either direction.
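A monitor process (or thread) could then watch that PUB socket; a minimal sketch, assuming pub_url is the address the PUB socket above was bound to:

monitor = ctx.socket(zmq.SUB)
monitor.connect(pub_url)
monitor.setsockopt(zmq.SUBSCRIBE, b"")  # subscribe to everything captured

while True:
    # every frame the proxy forwards, in either direction, shows up here
    frames = monitor.recv_multipart()
    print("[CAPTURE] %r" % (frames,))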
If you want user code in the middle to implement extra routing logic, you would do something like this,
which re-implements the inner loop of zmq_proxy:
def broker(ctx):
    xpub = ctx.socket(zmq.XPUB)
    xpub.bind(xpub_url)
    xsub = ctx.socket(zmq.XSUB)
    xsub.bind(xsub_url)
    poller = zmq.Poller()
    poller.register(xpub, zmq.POLLIN)
    poller.register(xsub, zmq.POLLIN)
    while True:
        events = dict(poller.poll(1000))
        if xpub in events:
            message = xpub.recv_multipart()
            print("[BROKER] subscription message: %r" % message[0])
            xsub.send_multipart(message)
        if xsub in events:
            message = xsub.recv_multipart()
            # print("publishing message: %r" % message)
            xpub.send_multipart(message)
            # insert user code here
full working (Python) example