Transfer data from one RabbitMQ consumer to another consumer - Flask

I'm presently working on an Ubuntu (14.04) system. My project has three servers apart from the broker. The core is the Flask server; the other two are a Scrapyd server and a Sentiment Analysis (SA) server.
Using the 'Work Queues' tutorial, I have written consumer and producer code for the broker between Scrapyd (via its pipeline) and Flask, and similarly, using the 'RPC' part of the tutorial, I have written code for the SA server and the Flask server.
The problem is that the Flask server has become a consumer at two ends: it is waiting for responses from both the Scrapyd server and the SA server. The whole idea is to take data from the scraper, transfer it to the SA server, take back the response, and pass it on to the front-end. The only way I can think of to get data from the 'consumer' part of the code into the code running in the Flask 'view' function is via the 'callback' function of the RabbitMQ consumer.
Presently, I am trying this approach:
Once the data from the scraper arrives at the Flask end, we create an object of the other 'consumer' (the one that interacts with the SA server) and transfer data through that object. This happens in the callback function on the consumer side of the broker between the scraper and the Flask server. Up to this point everything works.
The problem arises when the data from the SA server arrives: I don't know how to get the data from the callback function of the consumer part of the broker code into the 'view' function of the Flask app.
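One common pattern (sketched below; this is not the asker's actual code) is to run the pika consumer in a background thread and hand results to the Flask view through a thread-safe queue. The queue name, routing, and 30-second timeout here are assumptions for illustration:

    import threading
    import queue

    import pika
    from flask import Flask, jsonify

    app = Flask(__name__)
    sa_results = queue.Queue()  # consumer thread -> view function

    def sa_callback(ch, method, properties, body):
        # Called by pika when the SA server's reply arrives; push the
        # payload somewhere the view function can reach it.
        sa_results.put(body)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    def run_sa_consumer():
        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = connection.channel()
        channel.queue_declare(queue="sa_responses")  # hypothetical queue name
        channel.basic_consume(queue="sa_responses", on_message_callback=sa_callback)
        channel.start_consuming()

    threading.Thread(target=run_sa_consumer, daemon=True).start()

    @app.route("/result")
    def result():
        # Block (up to 30 s) until the consumer thread delivers a response;
        # queue.Empty is raised on timeout.
        body = sa_results.get(timeout=30)
        return jsonify({"sentiment": body.decode()})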

Related

Kafka Python producer integration with a Django web app

I have a question about integrating a Kafka producer with a front-end web app that produces data every minute or second. Can the web app pass a JSON object to an already-running producer each time one is created, or do we need to initialize the Kafka client each time we get a JSON object?
You would probably want to open a new Producer for every session rather than opening and closing one for each and every request, and this would be done on the backend, not the frontend.
Underneath the HTTP layer, a web server embedding a Kafka client is no different from a regular console app: you accept an incoming request, deserialize it, optionally parse it, serialize it again for Kafka output, then optionally render something back to the user.
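As a rough sketch of that flow with kafka-python (the broker address and topic name are placeholders, not anything from the question):

    import json

    from django.http import JsonResponse
    from kafka import KafkaProducer

    # One long-lived producer created at module load, reused across requests.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def track_event(request):
        # Accept the incoming request, deserialize it, forward it to Kafka,
        # then render something back to the user.
        event = json.loads(request.body)
        producer.send("web-events", event)  # hypothetical topic name
        return JsonResponse({"status": "queued"})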
If you're really asking, "is Kafka with HTTP requests possible", regardless of language and platform, then sure: the Confluent REST Proxy operates similarly, only written in Java.
As far as web-app tracking goes, I would suggest looking into Divolte Collector.

Use Redis for communication between nodes

I am working on an application that consists of two layers: a GUI built in Electron, and a "backend" built in C++ running in the background. The GUI needs to be able (amongst other things, such as streaming data) to send and request data to and from the backend for configuration purposes. Redis is used for communication, mainly for its pub/sub capability.
What would be the preferred way to request and send data from/to the backend? I came up with the following ideas but I'm not sure if any of these are the way to go.
Publish a value on a configuration channel and handle the request via a switch case, e.g. configuration.set_sensor_frequency is handled by a set_sensor_frequency(value) function in the backend (see the sketch after this list).
Write the configuration to configuration.sensor_frequency on the Redis server, listen for the set event on the backend, and react accordingly. But this seems like method 1, only more complicated.
Like method 2, write the config to the Redis server, but periodically check in the backend (every few cycles or so) whether the value has been updated.
Something else. Please elaborate.
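For what it's worth, here is a minimal sketch of method 1. It is written in Python for brevity (the same pattern maps onto any C++ Redis client), and the channel name and message format are made up for illustration:

    import json

    import redis

    r = redis.Redis()
    pubsub = r.pubsub()
    pubsub.subscribe("configuration")  # hypothetical channel name

    # Dispatch table playing the role of the switch case in the backend.
    handlers = {
        "set_sensor_frequency": lambda value: print("frequency ->", value),
    }

    for message in pubsub.listen():
        if message["type"] != "message":
            continue
        # e.g. {"op": "set_sensor_frequency", "value": 50}
        cmd = json.loads(message["data"])
        handler = handlers.get(cmd["op"])
        if handler:
            handler(cmd["value"])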

Client-Server functional tests

I am writing a client library and a server library for local IPC. Both the client and the server have classes that use named pipes to send data between the two processes. I want to write functional tests for these client-server libraries.
My idea is to create the client in the functional test, mock a server in a separate executable, launch the server using CreateProcess, and send data to it. But in that case I won't have any control over the mock server, and the data sent by the client cannot be validated on the server.
Can anyone suggest how to write a client-server functional test so that I can validate the functionality of both modules?
Here are a couple of tests I'm thinking of:
1. Client connects to server.
2. Client disconnects gracefully from server.
3. Client sends some data to server.
4. Server disconnects client connection selectively.
5. Server shutdown/client shutdown
6. etc.
For testing, run the client and server code in the same process using separate threads.
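To illustrate the idea (in Python for brevity, with socketpair standing in for the named-pipe classes; a real test would drive the actual client and server objects the same way):

    import socket
    import threading
    import unittest

    class ClientServerTest(unittest.TestCase):
        def test_client_sends_data(self):
            # Both endpoints live in one process, so the test can assert
            # on what the server actually received.
            server, client = socket.socketpair()
            received = []

            def server_loop():
                received.append(server.recv(1024))

            t = threading.Thread(target=server_loop)
            t.start()
            client.sendall(b"hello")
            t.join(timeout=5)
            self.assertEqual(received, [b"hello"])

    if __name__ == "__main__":
        unittest.main()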

Remote Django application sending messages to RabbitMQ

I'm starting to get familiar with the RabbitMQ lingo, so I'll try my best to explain. I'll be going into a public beta test in a few weeks, and this is the setup I am hoping to achieve. I would like Django to be the producer, producing messages to a remote RabbitMQ box, with another Celery box listening on the RabbitMQ queue for tasks. So in total there would be three boxes: Django, RabbitMQ and Celery. So far, from the Celery docs, I have successfully been able to run Django and Celery together, with RabbitMQ on another machine. Django simply calls the task in the view:
add.delay(3, 3)
And the message is sent over to RabbitMQ. RabbitMQ sends it back to the same machine the task was sent from (since Django and Celery share the same box), and Celery processes the task.
This is great for development purposes. However, having Django and Celery run on the same box isn't a great idea, since both will have to compete for memory and CPU. The whole goal here is to get clients in and out of the HTTP request cycle and have Celery workers process the tasks. But the machine will slow down considerably if it is accepting HTTP requests and also processing tasks.
So I was wondering whether there is a way to keep all of this separate: have Django send the tasks, RabbitMQ forward them, and Celery process them (producer, broker, consumer).
How can I go about doing this? Really simple examples would help!
What you need is to deploy your application's code to the third machine as well, and on that machine run only the command that starts the workers handling the tasks.
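A minimal sketch of what that looks like (host names and credentials are placeholders, and the setting name assumes the Celery app is configured from Django settings with the CELERY namespace):

    # settings.py -- identical codebase deployed to both the Django box
    # and the worker box, both pointing at the remote broker.
    CELERY_BROKER_URL = "amqp://user:password@rabbitmq-host:5672//"

    # On the worker machine only, start the consumer process:
    #   celery -A myproject worker --loglevel=info
    # The Django machine never runs a worker; add.delay(3, 3) just
    # publishes a message to rabbitmq-host, and the worker box consumes it.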

How to integrate web sockets with a Django WSGI application

We have a significantly complex Django application currently served by apache/mod_wsgi and deployed on multiple AWS EC2 instances behind an AWS ELB load balancer. Client applications interact with the server using AJAX. They also periodically poll the server to retrieve notifications and updates to their state. We wish to remove the polling and replace it with "push", using web sockets.
Because arbitrary instances handle web socket requests from clients and hold onto those web sockets, and because we wish to push data to clients who may not be on the same instance that provides the source data for the push, we need a way to route data to the appropriate instance and then from that instance to the appropriate client web socket.
We realize that apache/mod_wsgi does not play well with web sockets and plan to replace these components with nginx/gunicorn and use the gevent-websocket worker. However, if one of several worker processes receives requests from clients to establish a web socket, and if the lifetime of worker processes is controlled by the main gunicorn process, it isn't clear how other worker processes, or in fact non-gunicorn processes, can send data to these web sockets.
A specific case is this one: a user who issues an HTTP request is directed to one EC2 instance (host), and the desired behavior is that data is to be sent to another user who has a web socket open on a completely different instance. One can easily envision a system where a message broker (e.g. RabbitMQ) running on each instance can be sent a message containing the data to be sent via web sockets to the client connected to that instance. But how can the handler of these messages access the web socket, which was received in a worker process of gunicorn?
The high-level Python web socket objects created by gevent-websocket and made available to a worker cannot be pickled (they hold instance methods, which have no pickling support), so they cannot easily be shared by a worker process with some long-running, external process.
In fact, the root of this question comes down to how web sockets, which are initiated by HTTP requests from clients and handled by WSGI handlers in servers such as gunicorn, can be accessed by external processes. It doesn't seem right that gunicorn worker processes, which are intended to handle HTTP requests, would spawn long-running threads to hang onto web sockets and handle messages from other processes destined for the web sockets attached through those worker processes.
Can anyone explain how web sockets and WSGI-based HTTP request handlers can interplay in the environment I've described?
I think you've made the correct assessment that mod_wsgi + websockets is a nasty combination.
You would find all of your WSGI workers hogged by the web sockets, and an attempt to (massively) increase the size of the worker pool would probably choke the server with memory usage and context switching.
If you'd like to stick with the synchronous WSGI worker architecture (as opposed to the reactive approach implemented by gevent, twisted, tornado, etc.), I would suggest looking into uWSGI as an application server. Recent versions can handle some URLs the old way (i.e. your existing Django views would still work the same as before) and route other URLs to an async websocket handler. This might be a relatively smooth migration path for you.
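As a rough sketch of what the websocket side looks like with uWSGI's native websocket API (the routing between the Django views and this handler is done in the uWSGI config; the echo behavior is just for illustration):

    import uwsgi  # only importable when running under uWSGI

    def application(env, start_response):
        # Complete the websocket handshake, then serve the connection.
        uwsgi.websocket_handshake(
            env["HTTP_SEC_WEBSOCKET_KEY"], env.get("HTTP_ORIGIN", "")
        )
        while True:
            # Echo loop: block for a client frame, then send it back.
            msg = uwsgi.websocket_recv()
            uwsgi.websocket_send(msg)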
It doesn't seem right that gunicorn worker processes, which are intended to handle HTTP requests, would spawn long-running threads to hang onto web sockets and support handling messages from other processes to send messages to the web sockets that have been attached through those worker processes.
Why not? This is a long-running connection, after all. A long-running thread to take care of such a connection would seem... absolutely natural to me.
Often in these evented situations, writing is handled separately from reading.
A worker that is currently handling a websocket connection would wait for a relevant message to come down from a messaging server and then pass it down the websocket.
You can also use gevent's async-friendly Queues to handle in-code message passing, if you like.
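One way to wire that up, sketched with gevent (the subscriber registry, the user lookup, and get_next_broker_message are simplified stand-ins, not a real broker consumer):

    import gevent
    from gevent.queue import Queue

    subscribers = {}  # user_id -> Queue, one per open websocket

    def broker_listener():
        # Stand-in for a loop consuming from RabbitMQ/Redis on this instance.
        while True:
            user_id, payload = get_next_broker_message()  # hypothetical helper
            q = subscribers.get(user_id)
            if q is not None:
                q.put(payload)

    def websocket_app(environ, start_response):
        ws = environ["wsgi.websocket"]  # provided by the gevent-websocket worker
        user_id = environ["QUERY_STRING"]  # simplification: identify the user somehow
        q = subscribers[user_id] = Queue()
        try:
            for payload in q:  # blocks only this greenlet, not the whole worker
                ws.send(payload)
        finally:
            del subscribers[user_id]

    gevent.spawn(broker_listener)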