Django asynchronous task running synchronously with asgiref - django

I'm receiving notifications in my Django app via an API, and I should return HTTP 200 within 500 milliseconds. To achieve that I should run the related task asynchronously. I'm using the asgiref library for this; everything runs OK, but I don't think it is actually running asynchronously.
Main view
In this view I receive the notifications. I set two print statements to check the timing in the server log.
@csrf_exempt
@api_view(('POST',))
@renderer_classes((TemplateHTMLRenderer, JSONRenderer))
def IncomingMeliNotifications(request):
    print('--------------------------- Received ---------------------')
    notificacion = json.loads(request.body)
    call_resource = async_to_sync(CallResource(notificacion))
    print('--------------------------- Answered ---------------------')
    return Response({}, template_name='assessments.html', status=status.HTTP_200_OK)
Secondary view
After receiving the notification I call the secondary view CallResource, which I expect to run asynchronously.
def CallResource(notificacion):
    # do things inside....
    print('--------------------------- Secondary view ---------------------')
    return 'ok'
Log results
When I check the log, I always get the prints in the following order:
print('--------------------------- Received ---------------------')
print('--------------------------- Secondary view ---------------------')
print('--------------------------- Answered ---------------------')
But I would expect the Secondary view to be the last to print, as:
print('--------------------------- Received ---------------------')
print('--------------------------- Answered ---------------------')
print('--------------------------- Secondary view ---------------------')
What am I missing here?
I'm reading the documentation at github.com/django/asgiref, and as far as I can tell this is the case I'm working in:
If the outermost layer of your program is synchronous, then all async
code run through AsyncToSync will run in a per-call event loop in
arbitrary sub-threads, while all thread_sensitive code will run in the
main thread.
Any clues welcome. Thanks in advance.

async_to_sync lets a sync thread stop and wait for an async function, not run a sync function asynchronously.
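As a quick illustration (the fetch_data coroutine is made up for the example), async_to_sync wraps an async callable so that synchronous code can call it and block until it completes:

from asgiref.sync import async_to_sync


async def fetch_data():
    # stand-in for some awaited I/O
    return 'data'


# async_to_sync returns a synchronous callable; calling it blocks until fetch_data finishes.
result = async_to_sync(fetch_data)()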
Running an asynchronous task
Since you didn't mention an ASGI server, it appears you are running the sync development server python manage.py runserver.
If so, install and run Daphne (from Django) instead.
pip install daphne
daphne myproject.asgi:application
If you're using Django 3.x and above, there should already be an asgi.py file in the same directory as your wsgi.py file. See docs.djangoproject.com/en/3.2/howto/deployment/asgi/.
If you're using Django 2.x and below, upgrade to Django 3.x and above and create the asgi.py file. See How to generate asgi.py for existent project?.
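For reference, a minimal asgi.py sketch (the project name myproject is a placeholder):

# myproject/asgi.py
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

application = get_asgi_application()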
You can create an async task when running an ASGI server.
# call_resource = async_to_sync(CallResource(notificacion))
loop = asyncio.get_event_loop()
task = loop.create_task(CallResource(notificacion))
An async function should be defined with async def.
# def CallResource(notificacion):
async def CallResource(notificacion):
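Putting it together, a minimal sketch of the rewritten flow under an ASGI server (the DRF decorators and template response are left out here for simplicity, and JsonResponse stands in for the original Response):

import asyncio
import json

from django.http import JsonResponse


async def CallResource(notificacion):
    # do things inside....
    print('--------------------------- Secondary view ---------------------')
    return 'ok'


async def IncomingMeliNotifications(request):
    print('--------------------------- Received ---------------------')
    notificacion = json.loads(request.body)
    # Schedule CallResource on the running event loop without awaiting it,
    # so the 200 response is returned immediately.
    asyncio.create_task(CallResource(notificacion))
    print('--------------------------- Answered ---------------------')
    return JsonResponse({}, status=200)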

Related

Multiple concurrent requests in Django async views

Since version 3.1, Django supports async views. I have a Django app running on uvicorn, and I'm trying to write an async view which can handle multiple requests to itself concurrently, but I have had no success.
Common examples I've seen involve making multiple slow I/O calls from inside the view:
async def slow_io(n, result):
    await asyncio.sleep(n)
    return result

async def my_view(request, *args, **kwargs):
    task_1 = asyncio.create_task(slow_io(3, 'Hello'))
    task_2 = asyncio.create_task(slow_io(5, 'World'))
    result = await task_1 + await task_2
    return HttpResponse(result)
This will yield "HelloWorld" after 5 seconds instead of 8, because the two calls run concurrently.
What I want is to concurrently handle multiple requests TO my_view. E.g. I expect this code to handle 2 simultaneous requests in 5 seconds, but it takes 10.
async def slow_io(n, result):
    await asyncio.sleep(n)
    return result

async def my_view(request, *args, **kwargs):
    result = await slow_io(5, 'result')
    return HttpResponse(result)
I run uvicorn with this command:
uvicorn --host 0.0.0.0 --port 8000 main.asgi:application --reload
Django doc says:
The main benefits are the ability to service hundreds of connections without using Python threads.
So it's possible.
What am I missing?
UPD:
It seems, my testing setup was wrong. I was opening multiple tabs in browser and refreshing them all at once. See my answer for details.
Here is a sample project on Django 3.2 with multiple async views and tests. I tested it multiple ways:
Requests from Django's test client are handled simultaneously, as expected.
Requests to different views from a single client are handled simultaneously, as expected.
Requests to the same view from different clients are handled simultaneously, as expected.
What doesn't work as expected?
Requests to the same view from a single client are handled one at a time, and I didn't expect that.
There is a warning in Django doc:
You will only get the benefits of a fully-asynchronous request stack if you have no synchronous middleware loaded into your site. If there is a piece of synchronous middleware, then Django must use a thread per request to safely emulate a synchronous environment for it.
Middleware can be built to support both sync and async contexts. Some of Django’s middleware is built like this, but not all. To see what middleware Django has to adapt, you can turn on debug logging for the django.request logger and look for log messages about “Synchronous middleware … adapted”.
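For reference, a sketch of turning on that debug logging in settings.py (the console handler name is arbitrary):

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        # Shows "Synchronous middleware ... adapted" messages mentioned in the docs.
        'django.request': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}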
So it might be some sync middleware causing the problem even in bare Django. But the docs also state that Django should use threads in that case, and I didn't add any sync middleware, only the standard ones.
My best guess is that it's related to the client-server connection and not to the sync or async stack. Although Google says that browsers create new connections for each tab for security reasons, I think:
The browser might keep the same connection for multiple tabs if the URL is the same, to save resources.
Django creates async tasks or threads per connection, and not per request as it states.
I need to check this out.
Your problem is that you write the code as if it were synchronous: you await each function's result, and only after that do you await the next function.
You simply need to use asyncio functions like gather to run all the tasks concurrently:
import asyncio

from django.http import HttpResponse

async def slow_io(n, result):
    await asyncio.sleep(n)
    return result

async def my_view(request, *args, **kwargs):
    all_tasks_result = await asyncio.gather(slow_io(3, 'Hello '), slow_io(5, 'World'))
    result = "".join(all_tasks_result)
    return HttpResponse(result)

Celery asynchronous in django does not work

I am trying to create an asynchronous task using Celery, but I am not having any success.
I have a task that sends emails:
@shared_task()
def send_email_example(email_id):
    ...
I call it using the delay() method:
class SomeModelExample:
    ...

    def example(self):
        ...
        send_email_example.delay(self.id)
Locally, I run Celery and can use it. However, on my server, when I use the method that calls the task it takes more than 30 seconds and I receive a 502 status code.
I assume that my Celery setup is OK, because my periodic tasks work.
Solution
Testing it myself, I saw that Celery really is OK. The problem was with my tests on the server.
At this stage I would try:
Using the decorator without brackets: @shared_task
Increasing the harakiri time to more than 60 seconds on your server, to see whether it is a task problem or a server problem (see the sketch after this list).
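For the second point, assuming the 502 comes from a proxy in front of uWSGI (just a guess based on the symptoms), the timeout can be raised in the uWSGI config:

; uwsgi.ini (sketch): raise the request timeout so a slow request isn't killed at ~30 seconds
[uwsgi]
harakiri = 120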

How to start celery task after django request finished

I need to run celery task only when django request finished.
Is it possible?
I've found that the best way to make sure your task happens after the request is finished is to write a custom middleware. In the process_response method, you can handle any quick actions that don't impact page load time or performance too much. Anything else, you can hand off to Celery. Any saving or database transactions are completed by the time process_response is called (AFAICT).
Try something like this:
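A sketch of such a middleware (the class name and the my_task Celery task are hypothetical):

from django.utils.deprecation import MiddlewareMixin

from app.tasks import my_task  # hypothetical Celery task


class HandOffToCeleryMiddleware(MiddlewareMixin):
    def process_response(self, request, response):
        # By this point the view has run and its database writes are done;
        # queue anything slow instead of blocking the response.
        my_task.delay()
        return response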
Django sends request_finished at the end of every request.
You can inspect the sender argument to see which handler triggered the signal:
from django.dispatch import receiver
from django.core.signals import request_finished

from app.tasks import my_task


@receiver(request_finished)
def add_celery_task(sender, **kwargs):
    if sender.__name__ != 'StaticFilesHandler':
        my_task.delay()
If you are running the server in a development environment, it's good to check the sender's name to avoid queuing a Celery task for every static file you serve.
You can run the task in the background using Celery's delay method. I mean, just before returning the response you can call the delay method to put the task in the background.
Something like this:
task_name.delay(arg1, arg2, ...)
By doing this your task is put into the background and runs asynchronously, so it does not block the request-response cycle.
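For example, a sketch inside a view (the send_email_task task and its module are assumptions):

from django.http import JsonResponse

from app.tasks import send_email_task  # hypothetical Celery task


def notify_view(request):
    # Queue the slow work and return right away; the worker runs it in the background.
    send_email_task.delay(request.user.id)
    return JsonResponse({'queued': True})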

Running gevent on apache server + mod_wsgi + bottle

I read http://bottlepy.org/docs/dev/tutorial_app.html#server-setup
and running Apache + Bottle + Python
and Bottle + Apache + WSGI + Sessions
and I would like to know whether one can make asynchronous REST API calls to Bottle on a mod_wsgi server, to a Python function that does not return anything (it's backend logic) and is non-blocking. So I looked up gevent, but I haven't found a solution where you can run mod_wsgi with gevent.
Is there any solution for running async calls on an Apache server using mod_wsgi, or any other alternative?
UPDATE
As per andreans' answer below:
I ran a simple my-IP-address return with Bottle + Celery. So one has to declare the task with @celery.task and then call run(host='localhost', port=8080, debug=True)? Does it also require starting a Celery worker in the terminal? I've never used Celery before [running locally]. Also, running Bottle with the decorator @route('/something') works, but app.route doesn't, where app = Bottle(), possibly due to some .wsgi file error?
Sorry, can't fit into the comment box. Every request must get a response eventually (or fail/time out). If you really don't need to return any data to the client, send back just an empty response with a status code. If the processing of the request takes time, it should run asynchronously, and that's where celery comes in. So a blocking implementation of your request handler:
def request_processor_long_running_blocking_func(request_data):
    # process request data, which takes a lot of time
    # result is probably written into db
    pass

def request_handler_func(request):
    request_processor_long_running_blocking_func(request.data)
    return HttpResponse(status=200)
If I understood correctly, this is what you're trying to avoid: you want request_processor_long_running_blocking_func to run asynchronously so that request_handler_func won't block. With Celery this would be solved like this:
from celery.task import task

@task
def request_processor_long_running_blocking_func(request_data):
    # the task decorator wraps your blocking function with celery's Task class,
    # which has a delay method available for you to call, which will run your function
    # asynchronously on one of your celery background worker processes
    pass

def request_handler_func(request):
    request_processor_long_running_blocking_func.delay(request.data)
    # calling the function with delay won't block; it returns immediately
    # and your response is sent back instantly
    return HttpResponse(status=200)
One more thing: send these task requests with AJAX, so your web interface won't be reloaded or anything, and the user can continue using your app after sending the request.

HTTP call after celery task have changed state

I need a scheduler for my next project, and since I'm coding using Django I went for Celery.
What I am looking for is a way for a task to tell Django when it is done, so I can update the database and use SSE to tell the user. All this can be done fairly simply by putting all the logic into the task. But what do I do when I plan to have several Celery workers?
I found a bunch of info online covering the single-worker case, but not much covering the problem when you have more than one worker.
What I thought about was using HTTP callbacks from the workers to the web server to let it know that the task is done. Looking at celery.task.http looked promising, but it didn't do what I needed.
Is the solution to use signals and hook up manual http calls? Or am I on the wrong path? Isn't this a common problem? How can this be solved more elegantly?
So, what do you mean by "tell Django"? If I understand you correctly, the Django request which initialized the Celery task is still alive at the time this task finishes? In that case you can check some storage (database, memcached, etc.) and send your SSE.
Look, there is one way to do that.
1. Your Django view sends the task to Celery, then goes into an infinite loop (or a loop with a 60-second timeout?) and waits for the result in memcached.
2. Celery picks up the task, executes it, and puts the result into memcached.
3. The Django view sees the new result, exits the loop, and sends your SSE.
The next variant (sketched below) is:
1. The Django view sends the task to Celery and returns.
2. Celery executes the task and, when done, makes a simple HTTP request to your Django app.
3. Django receives the HTTP request from Celery, parses the params, and sends the SSE to your user.
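A sketch of the second variant, assuming the requests library and a hypothetical callback URL on the Django side:

import requests
from celery import shared_task


@shared_task(bind=True)
def long_job(self, payload):
    result = {'processed': payload}  # stand-in for the real work
    # Call back into the Django app so it can update the DB and push the SSE.
    requests.post(
        'https://example.com/task_handler/%s/' % self.request.id,  # hypothetical endpoint
        json=result,
        timeout=10,
    )
    return result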
Here is some code that seems to do what I want:
In django settings:
CELERY_ANNOTATIONS = {
    "*": {
        "on_failure": celery_handlers.on_failure,
        "on_success": celery_handlers.on_success
    }
}
In the included celery_handlers.py file:
def on_failure(self, exc, task_id, *args, **kwargs):
    # Use urllib or similar to poke e.g. api-int.mysite.com/task_handler/TASK_ID
    pass

def on_success(self, retval, task_id, *args, **kwargs):
    # Use urllib or similar to poke e.g. api-int.mysite.com/task_handler/TASK_ID
    pass
And then you can just set up api-int to use something like:
from celery.result import AsyncResult
task_obj = AsyncResult(task_id)
# Logic to handle task_obj.result and related goes here....
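For context, a sketch of what such an internal handler view might look like (the view name, URL pattern, and returned fields are hypothetical):

from celery.result import AsyncResult
from django.http import JsonResponse


def task_handler(request, task_id):
    task_obj = AsyncResult(task_id)
    # Logic to handle task_obj.result and related goes here....
    return JsonResponse({'state': task_obj.state, 'result': str(task_obj.result)})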