Testing asynchronous capabilities of a django + tornado application

I'm using django-on-tornado to build an application similar to the proposed chat application. All the tutorials focus on how to run a Django application over the Tornado server, but how can I test an asynchronous feature that depends on Tornado?
My current test does the following:
Starts a thread that sleeps for some time, then sends a chat message
Makes a request asking for messages
When the request ends, checks that the message arrived and that the elapsed time is consistent with the thread's sleep time
When I run the test (with manage.py test), I get "AttributeError: 'WSGIRequest' object has no attribute '_tornado_handler'", which is expected, since the _tornado_handler property of the request is set in the runtornado command.
Is there a way to make this setup so that I can test the asynchronous feature? I use nose with django_nose plugin for tests.

Actually, django-on-tornado does not change Django's manage.py test command in any way, so Tornado is only invoked via runtornado. You will need to add a command called something like "testtornado", with an implementation similar to https://github.com/koblas/django-on-tornado/blob/master/myproject/django_tornado/management/commands/runtornado.py; it should set up _tornado_handler and then proceed with launching your test code.
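For illustration, a rough skeleton of such a command might look like the following; the actual Tornado wiring should be copied from runtornado, and setup_tornado_handler is a hypothetical stand-in for that code:

# myproject/django_tornado/management/commands/testtornado.py (sketch)
from django.core.management import call_command
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Set up the Tornado handler, then run the test suite"

    def handle(self, *args, **options):
        # Hypothetical helper: replicate the part of runtornado that
        # sets request._tornado_handler.
        setup_tornado_handler()
        # Hand off to Django's normal test machinery.
        call_command("test", *args)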


Why does Celery task autodiscovery never trigger under Django?

Everything is working except task autodiscovery under Django. (Celery 5.2.6, Django 3.0.6)
If I run celery worker, the worker triggers autodiscovery, finds all my tasks, and displays them in a list as part of its startup process. However, if I run my Django app or a Django shell, this never happens.
Additionally, even though the docs promise that accessing the task registry will trigger autodiscovery, it does not. I print app.tasks, I call app.tasks.keys(), and still no autodiscovery - it only shows the tasks that are built-in or were registered when the module containing them happened to be imported for other reasons.
What do I need to do to trigger task autodiscovery?
PS - If I try adding force=True to app.autodiscover_tasks(), it fails because the Django app registry hasn't finished loading at that time.
I dug into the code and tried a bunch of things. Here's what I learned.
How Autodiscovery is Performed
Autodiscovery is actually performed by app._autodiscover_tasks() (src), which is invoked by the celery.signals.import_modules signal (src), which is sent by app.loader.import_default_modules() (src). The signal is connected to app._autodiscover_tasks() in app.autodiscover_tasks (src).
In other words, the only way to get autodiscovery to trigger is to:
call app.autodiscover_tasks() to register app._autodiscover_tasks as a receiver for signal celery.signals.import_modules
call app.loader.import_default_modules() to send the signal
The docs instruct me to do the first, but not the second. Thus, in my Django app, autodiscovery does not happen.
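As a quick sanity check, performing both steps manually makes the tasks appear. This is a sketch run in a Django shell; "myproj" matches the project name used in the snippet further below:

from myproj.celery import app

app.autodiscover_tasks()             # step 1: connect the receiver
app.loader.import_default_modules()  # step 2: send the signal
print(sorted(app.tasks.keys()))      # the discovered tasks now show up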
How Autodiscovery is Performed in celery
If you run celery with the commands beat, report, shell, or worker, the resulting code path eventually calls app.loader.import_default_modules() directly. Thus, autodiscovery happens under those conditions - but not when you run your Django app, or open a Django shell, or any other context but those four commands, unless you explicitly call app.loader.import_default_modules().
On Finalize
The docs promise that accessing app.tasks will cause the app to finalize, which I assumed triggered autodiscovery, but it has nothing to do with that. All that finalize does is send an app.on_after_finalize signal, which does nothing.
My Solution
After much frustration and experimentation, I settled on this snippet in my celery.py:
app = Celery("myproj")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()
@app.on_after_configure.connect()
def trigger_autodiscovery(sender, **kwargs):
    app.loader.import_default_modules()
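With the decorator active, trigger_autodiscovery runs once the app is configured, and its call to import_default_modules() sends the signal that actually performs the discovery, per the mechanism described above.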

Return JSONResponse before updating database in django, python2

I have a project running on Python 2.7. The project is old, but it's still necessary to update the database when a request is received. However, the update process takes time and ends up with a timeout. Is there any way to return a JsonResponse/HttpResponse before updating the database so that the timeout doesn't occur? I know it's not logical to do so, but it's a temporary fix.
Also, I can't use async since it's Python 2.
Use multiprocessing or multithreading; this will execute your task in another process or thread and send the HTTP response to the client quickly.
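For example, a minimal sketch using the standard threading module (Python 2 compatible; do_slow_update is a hypothetical stand-in for the actual update logic):

import threading

from django.http import JsonResponse

from myapp.services import do_slow_update  # hypothetical update function

def my_view(request):
    # Run the slow database update in a background thread.
    t = threading.Thread(target=do_slow_update)
    t.daemon = True  # don't keep the process alive just for this thread
    t.start()
    # Respond immediately; the update continues in the background.
    return JsonResponse({"status": "accepted"})

Keep in mind that if the worker process is recycled before the thread finishes, the update is lost, which is part of why this can only be a temporary fix.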

Execute code after Response using Django's Async Views

I'm trying to execute a long-running function (e.g. sleep(30)) after a Django view returns a response. I've tried implementing the solutions suggested in similar questions:
How to execute code in Django after response has been sent
Execute code in Django after response has been sent to the client
However, when using a WSGI server like gunicorn, the client's page load only completes after the long-running function finishes.
Now that Django supports asynchronous views, is it possible to run a long-running query asynchronously?
Obviously, I am looking for a solution to the same issue: a view that starts a background task and sends a response to the client without waiting until the started task is finished.
As far as I understand, this is not one of the objectives of async views in Django. The problem is that all executed code is tied to the worker that was started to handle the HTTP request. Once the response is sent back to the client, that worker can no longer run any code or task started asynchronously in the view. This is why all async functions require an "await" in front of them; consequently, the view will only send its response to the client once the awaited function has finished.
As I understand it, all background tasks must be pushed into a task queue from which another worker can pick up each new task. There are several solutions for this, like Django Channels or Django Q; however, I am not sure which is the most lightweight.
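For example, with Django Q the view only enqueues the job. This is a sketch: it assumes django_q is installed and configured, a worker cluster is running via python manage.py qcluster, and myapp.tasks.long_running_job is a hypothetical stand-in for the real task:

from django.http import JsonResponse
from django_q.tasks import async_task

def start_job(request):
    # Enqueue by dotted path; a qcluster worker picks the task up,
    # so the view returns without waiting for it to finish.
    async_task("myapp.tasks.long_running_job")
    return JsonResponse({"status": "queued"})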

Django running long sql process in background

I have a web page where I need to run a long sql process (up to 20 mins or so) when the user clicks on a certain button. The script runs, but the user is then unable to continue browsing the rest of the website.
I would like to have it so that when the button is clicked, it goes into a queue that runs in the background.
I have looked into django-background-tasks, but the problem is that it does not seem possible to start the queued tasks without running python manage.py process_tasks.
I have heard of Celery, but I am using a Windows system and it does not seem to be suitable.
I am new to Django and website infrastructure, and am not sure if this is feasible. I have also seen in older responses that the threading package can do this, but I am unsure if that approach is outdated.
You can use create_task, provided by asyncio, to run a background task without blocking the view for clients.
Python 3.7+
Asyncio create_task
Disclaimer: I'm not sure myfunc() needs to be async unless you are performing a task genuinely worth awaiting.
You could also have a while loop in myfunc() for periodic repeated operations.
import asyncio

async def myfunc():
    # Simulate a long-running operation.
    await asyncio.sleep(5)
    print("Hi, after 5 seconds.")

# Schedule myfunc() to run concurrently; requires a running event loop.
task = asyncio.create_task(myfunc())
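Note that create_task() needs a running event loop, so in Django this only applies inside an async view served over ASGI. Continuing from the snippet above, a sketch of such a view (the view name and response payload are illustrative):

from django.http import JsonResponse

_background_tasks = set()  # hold references so pending tasks aren't garbage-collected

async def start_view(request):
    # Fire and forget: the ASGI server's event loop keeps running myfunc()
    # after the response is returned.
    task = asyncio.create_task(myfunc())
    _background_tasks.add(task)
    task.add_done_callback(_background_tasks.discard)
    return JsonResponse({"status": "started"})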

Why does apscheduler not work in uwsgi mode?

I have a Flask application. This application mimics the routing of vehicles in a city, and when a vehicle reaches a designated point, I have to wait 30-180 seconds before starting it again. I am trying to use apscheduler for this.
When the vehicle arrives I start an apscheduler job (with the 'date' trigger for X seconds). When the job fires, I do my processing.
This works well on my dev machine when I run the Flask app standalone. But when I try to run it on my production server (where the app runs under uwsgi) the job never fires. I have already set --enable-threads=true for the app, so that doesn't seem to be the problem.
My relevant code is as follows.
At app initialization:
scheduler = BackgroundScheduler()
scheduler.start()
Whenever the trigger happens:
scheduler.add_job(func=myfunc, trigger='date',
                  run_date=datetime.datetime.now() + datetime.timedelta(seconds=value))
Am I missing anything in using apscheduler under uwsgi? Or are there other options in Flask to achieve what I want?