Which is the main concept of Django ASGI?
When there are multiple tasks to be done inside a view, handling those tasks concurrently and thus reducing the view's response time?
When there are multiple requests from multiple users at the same time, handling those requests concurrently so users wait less in the queue?
Channels? WebSockets?
I'm trying to understand and use the ASGI concept but I'm feeling so lost.
Thanks.
ASGI provides an asynchronous/synchronous interface for Python applications to interact with front-end web elements (HTML and scripts). In a sense, because the interface itself handles requests concurrently, it does work to reduce response time; it is a reason that Django web servers respond notably quickly. Multiple tasks from multiple users are handled quickly and efficiently, but that is not the main concept.
Most importantly, ASGI provides a method for Python (as well as the Django library) to interact with the frontend HTML page we are showing the user. That was the original triumph of WSGI; ASGI is the upgrade that allows Python to communicate with the web client actively (listening) and then start asynchronous tasks that let us begin work or change values outside of the direct scope of what the application is doing. Thus, we can start those tasks, serve different values to the user, and let those tasks continue in the background uninterrupted.
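To make the first interpretation concrete, here is a minimal sketch of an async view under ASGI. The fetch_profile/fetch_ranking coroutines are hypothetical stand-ins for slow I/O-bound calls, and it assumes Django 3.1+ running behind an ASGI server such as Daphne or Uvicorn:

import asyncio
from django.http import JsonResponse

async def fetch_profile():
    # Hypothetical stand-in for a slow I/O-bound call (e.g. an external API).
    await asyncio.sleep(1)
    return {"profile": "..."}

async def fetch_ranking():
    # Another independent slow call.
    await asyncio.sleep(1)
    return {"ranking": "..."}

async def dashboard(request):
    # Both coroutines run concurrently, so the view takes ~1s instead of ~2s.
    profile, ranking = await asyncio.gather(fetch_profile(), fetch_ranking())
    return JsonResponse({**profile, **ranking})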
I have written an API with Django whose purpose is to operate as a bridge between a website back-end and the external services we use, so that the website doesn't have to handle many requests to external APIs (CRM, calendar events, email providers, etc.).
The API mainly polls other services, parses the results and forwards them to the website backend.
I initially went for a Celery-based task queue, as it seemed to me like the right tool to offload that processing to another instance, but I'm starting to think it doesn't really fit the purpose.
As the website expects synchronous responses, my code contains a lot of:
results = my_task.delay().get()
or
results = chain(fetch_results.s(), parse_results.s()).delay().get()
Neither of which feels like the proper way to use Celery tasks.
It is efficient when firing off dozens of requests and processing the results in parallel (a periodic refresh task, for example), but it adds a lot of overhead for simple requests (fetch, parse, forward), which represent most of the traffic.
Should I go fully synchronous for those "simple requests" and keep Celery tasks for specific scenarios? Is there an alternative design (maybe involving asyncio) that would better suit the purpose of my API?
Using Django, Celery (w/ Amazon SQS) on an EBS EC2 instance.
You could consider using Gevent with your Django webserver to allow it to operate efficiently for the "simple requests" you've mentioned without being blocked. If you proceed with this approach, be sure to pool database connections with PgBouncer or Pgpool-II or a Python library since each greenlet will make its own connection.
Once you've implemented that, it's also possible to use Gevent instead of Celery to handle asynchronous processing by joining on multiple greenlets that each make an external API request, rather than incurring the overhead of passing messages to an external Celery worker.
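As a rough sketch of that idea (the URLs are hypothetical; assumes the gevent and requests libraries, and note that monkey-patching must happen before the other imports):

from gevent import monkey
monkey.patch_all()

import gevent
import requests

def fetch(url):
    return requests.get(url, timeout=5).json()

# Hypothetical external endpoints, purely for illustration.
urls = ["https://crm.example.com/api/leads",
        "https://calendar.example.com/api/events"]

# Spawn one greenlet per external API call and join on all of them;
# the patched socket module lets the requests run concurrently.
jobs = [gevent.spawn(fetch, url) for url in urls]
gevent.joinall(jobs, timeout=10)
results = [job.value for job in jobs]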
Your implementation is similar to what we've done at Kloudless, which provides a single API to access multiple other APIs, including CRM, calendar, storage, etc.
I'm trying to build a Django application with an asynchronous part: WebSockets. Just as a little challenge, I want to mount everything in the same process. I tried Socket.IO but couldn't manage to actually use websockets instead of long polling (which killed my browser several times, until I gave up).
What I then tried was a not-so-maintained library based on gevent-websocket. However, it had many errors and was not easy to debug.
Now I am trying a Tornado approach, but AFAIK (please correct me if I'm wrong) integrating async with a regular Django app wrapped by WSGIContainer (websockets would go through Tornado, regular connections through Django) will be a true server killer if a resource is heavy or the Django ORM somehow gets slow on heavy operations.
I was thinking of moving to Twisted/Cyclone. Before I move from one architecture with such an issue to ANOTHER architecture with the same issue, I'd like to ask:
Does Tornado (and/or Twisted) have an architecture for scheduling tasks the way Gevent does? (That is: when certain greenlets "block", other greenlets get scheduled, at least until the operation finishes.) I'm asking this because (please correct me if I'm wrong) a regular Django view will not be suitable for stuff like @inlineCallbacks, and will cause the whole server to be blocked (including the websockets).
I'm new to async programming in Python, so there's a huge chance I'm misinformed about more than one concept. Please help me clarify this before I switch.
Neither Tornado nor Twisted has anything like gevent's magic to run (some) blocking code with the performance characteristics of asynchronous code. Idiomatic use of either Tornado or Twisted will be visible throughout your app in the form of callbacks and/or Futures/Deferreds.
In general, since you'll need to run multiple Python processes anyway due to the GIL, it's usually best to dedicate some processes to websockets with Tornado/Twisted and other processes to Django with the WSGI container of your choice (and then put nginx or haproxy in front so it looks like a single service to the outside world).
If you still want to combine django and an asynchronous service in the same process, the next best solution is to use threads. If you want the two to share one listening port, the listener must be a websocket-aware HTTP server that can spawn other threads for WSGI requests. Tornado does not yet have a solution for this, although one is planned for version 4.1 (https://github.com/tornadoweb/tornado/pull/1075). I believe Twisted's WSGI container does support running the WSGI workers in threads, but I don't have any experience with it myself. If you need them in the same process but do not need to share the same port, then you can simply run the IOLoop or Reactor in one thread and the WSGI container of your choice in another (with its associated worker threads).
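As a minimal sketch of that last arrangement (separate ports, one process), assuming Tornado 4.x with DJANGO_SETTINGS_MODULE already set; wsgiref is used only for brevity, not as a production server:

import threading
from wsgiref.simple_server import make_server

import tornado.web
import tornado.websocket
from tornado.ioloop import IOLoop
from django.core.wsgi import get_wsgi_application

class EchoSocket(tornado.websocket.WebSocketHandler):
    def on_message(self, message):
        self.write_message(message)

def run_wsgi():
    # Django serves the regular views on port 8000 (port is arbitrary).
    make_server("127.0.0.1", 8000, get_wsgi_application()).serve_forever()

wsgi_thread = threading.Thread(target=run_wsgi)
wsgi_thread.daemon = True
wsgi_thread.start()

# Tornado serves the websockets on port 8888; the IOLoop runs in the main thread.
tornado.web.Application([(r"/ws", EchoSocket)]).listen(8888)
IOLoop.instance().start()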
So I have an application that could theoretically be used by multiple people at the same time, filling positions in a list of lists. The client app is in Angular and uses a grid to display the lists of lists; the backend is in Django.
I'm having a hard time coming up with a way to signal client A that client B did something. Multi-threading would let me do some long polling with locks and signals, but multi-processing makes this much more difficult.
How do I keep both (could be more than 2) of the clients up to date with the state/content of the list of lists as it is on the server?
Right now I'm restricting the number of users to 1, but this is not optimal.
You can make the clients poll the server for changes every X seconds; the server then responds with a list of changes or with no change. This is not true realtime, but if you make the polling interval low enough it will look almost like real time.
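A minimal sketch of such a polling endpoint; the Cell model and updated_at field are hypothetical, and it assumes the client sends the ISO timestamp of its last successful poll:

from django.http import JsonResponse
from myapp.models import Cell  # hypothetical model with an updated_at field

def poll_changes(request):
    since = request.GET.get("since")  # ISO timestamp of the client's last poll
    qs = Cell.objects.all()
    if since:
        qs = qs.filter(updated_at__gt=since)
    changes = list(qs.values("row", "col", "value", "updated_at"))
    return JsonResponse({"changes": changes})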
Alternatively, you can use websockets, but AFAIK this is not supported by Django, so you need to manage the websockets using Tornado or node.js or something like that.
I would suggest using Redis or another memory-based datastore to communicate and propagate the changes between the different servers.
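For the propagation part, a rough sketch with redis-py (the channel name is hypothetical):

import json
import redis

r = redis.StrictRedis(host="localhost", port=6379)

# In the Django process: publish whenever a client changes a cell.
def notify_change(row, col, value):
    r.publish("board_updates", json.dumps({"row": row, "col": col, "value": value}))

# In the websocket process (e.g. Tornado): relay messages to connected clients.
def listen_for_changes(broadcast):
    pubsub = r.pubsub()
    pubsub.subscribe("board_updates")
    for message in pubsub.listen():
        if message["type"] == "message":
            broadcast(json.loads(message["data"]))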
Hello Stackoverflowers,
We're developing an online board-game (think online monopoly) site using Python for the backend.
We use Django for the non-realtime stuff (authenticating, player profiles, ranking...). The chat server is implemented using socket.io and Tornado. The game server part is what caused us problems.
We currently (that could change) also use Tornado and socket.io; each Tornado instance is located at a gameX.site.com address on a (maybe) different server and hosts several games simultaneously (much like a chat server in fact, except that messages would not go to all users but only to the ones involved in the same game).
What causes us trouble is how we update the Django instance (game log, score, and so on) as games progress. We'd also like to use Django for authentication: each player would ask the Django server to join a game and be given a disposable id/password pair just for it. Obviously we would have to communicate those to the game server in some way.
At first the chosen solution was to use something like Redis as a bidirectional message queue: Django would post the id/password to Redis, and Tornado would then query Redis on each incoming connection. A Django cron would also run every minute or so to deal with the waiting messages. But we fear that a frequent and possibly long-running cron would impede the main site, since the PostgreSQL database is hosted on the same server as Django (and some game servers may also run on the same machine).
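A rough sketch of that handoff (the key layout and TTL are hypothetical):

import redis

r = redis.StrictRedis()

# Django side: store the disposable credential when a player joins,
# letting it expire after five minutes (hypothetical TTL).
def register_player(game_id, player_id, password):
    r.setex("game:%s:player:%s" % (game_id, player_id), 300, password)

# Tornado side: validate the credential on the incoming connection.
def check_player(game_id, player_id, password):
    stored = r.get("game:%s:player:%s" % (game_id, player_id))
    return stored is not None and stored.decode() == password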
We could alternatively wait for a player to request a ranking update before processing past game results, but we fear such an indefinite delay would skew the overall ranking (and experience) and possibly cause data loss.
We could use Celery/RabbitMQ to update the main database through the Django ORM from the Tornado processes, but would it be possible to use the same solution to communicate the temporary id/password to the game server? It doesn't look like you can post a message to Celery and retrieve it on the other side.
Thanks for your insight.
I have a RESTful web service with a C++ API at the back-end. I am using the FastCGI library to facilitate the REST interface. My C++ API has multiple functions that may be used independently. I am looking for a way to make it as fast as possible. Here are a few ideas I got:
Have one FastCGI application that gets the function to be executed, executes that function and returns the output. This way the API calls keep waiting until one 'function' is complete, even though the next call is for a different independent function.
Have multiple FastCGI applications, each having access to only one function from the API, each getting inputs for that particular app and returning outputs of that particular app alone.
This way I could have concurrent calls made to all the different functions, with a separate process queue for each function, instead of one generic process queue to the FastCGI application consisting of calls to different independent functions.
While this looks like it would perform better, I am not sure if it is possible to implement a system such as this - i.e. having many FastCGI apps running in parallel from the same server. If it is possible, can someone tell me how to implement this?
Each FastCGI application is a separate program, running in a loop, and communicating with Apache in a binary protocol defined by the FastCGI specification. The only possible concurrency problems are the same concurrency problems you would experience if you were running concurrent CGI or PHP requests, with just one exception: since FastCGI processes do not terminate, any limited resources will have to be carefully managed. For example, if you only have a ten-client licence to a database server, you can't have eleven FastCGI processes using the database unless you manage connections better than the "open at start, let it close at the end" method often used in CGI or PHP.