Django async view - how do I know it's running async?

I can create an async view and check whether the method thinks it's a coroutine, and it is.
However, looking at the stack, it looks like my view is being called by asgiref's AsyncToSync.
I am using Gunicorn with the Uvicorn worker and an asgi.py script that starts with:
from channels.routing import get_default_application
channel_layer = get_default_application()
Plus, I am trying to discover whether any of my middleware is running sync; even with a (forced) sync middleware I never hit logger.debug('Synchronous %s adapted.', name).
This makes no sense to me.
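For what it's worth, a minimal diagnostic view (a sketch, not from the question) can report a couple of hints. An async view always executes on an event loop even under WSGI, because Django adapts it, so the more telling signal is whether the request object carries an ASGI scope, which only the ASGI/Channels handlers attach:

import asyncio
import threading

from django.http import JsonResponse


async def probe(request):
    # An ASGI-handled request carries the raw ASGI scope; a WSGI request does not.
    served_via_asgi = hasattr(request, "scope")
    return JsonResponse({
        "is_coroutine_view": asyncio.iscoroutinefunction(probe),
        "served_via_asgi": served_via_asgi,
        "thread": threading.current_thread().name,
    })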

Related

Webhook from Django 4.1 to python-telegram-bot 20.0a2

I use the python-telegram-bot 20.0a2 library and Django 4.1.
The bot is run by a main.py script:
if __name__ == "__main__":
    asyncio.run(main())
Inside the main script I also run uvicorn in the same asynchronous context as the Application instance:
# Run application and webserver together
async with application_tg:
    await application_tg.start()
    await server.serve()  # uvicorn
    await application_tg.stop()
What is the problem?
I use a webhook for my bot.
Django's urls.py calls an async view, but the view can't get the initialized Application instance of the bot.
So the question is:
How can I rearrange the interaction between python-telegram-bot 20 and Django 4.1 so that I can access the Application instance from the Django webhook view?
Addition:
It's easy to achieve with other frameworks such as Starlette, as mentioned in the official example from the PTB library: https://docs.python-telegram-bot.org/en/v20.0a2/examples.customwebhookbot.html
My main script:
https://gist.github.com/SergSm/6843fadf505b826f83a10bf7eebc3fa0
my view:
import json

from django.views import View
from django.http import JsonResponse, HttpResponse
from django.views.decorators.csrf import csrf_exempt

from telegram import Update

from bot.tgbot.main import application_tg


async def telegram_handle(request):
    if request.method == 'POST':
        await application_tg.update_queue.put(
            Update.de_json(data=json.loads(request.body), bot=application_tg.bot)
        )
        return JsonResponse({"ok": "POST processed"})
    else:
        return JsonResponse({"ok": "GET processed"})
UPDATE 1
I was desperate to make it run this way.
I tried to use the contextvars module and read a lot of asyncio-related material.
In the end I made what felt like an awful assumption: what if I just put my python-telegram-bot code inside the Django async view function? And it does work!
Now I will try to wrap it in middleware to make my code cleaner.
UPDATE 2
If you want to use the Django ORM from sync functions you need to use @sync_to_async(thread_sensitive=False).
The thread_sensitive=False parameter is important in this case; otherwise you will never get the result of your awaitables.
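For illustration, a minimal sketch of that pattern (TelegramUser, get_or_create_user and handler are made-up names, not from the original code):

from asgiref.sync import sync_to_async

from myapp.models import TelegramUser  # hypothetical model


@sync_to_async(thread_sensitive=False)
def get_or_create_user(chat_id):
    # Plain synchronous ORM code, executed in a worker thread.
    return TelegramUser.objects.get_or_create(chat_id=chat_id)


async def handler(update, context):
    user, created = await get_or_create_user(update.effective_chat.id)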

Can a streamlit app be run within a flask app?

No code here, just a question. I have tried various means to get a Streamlit app to run within a Flask app. The main reason? Using Flask for user authentication into the Streamlit app. I cannot get it to work. Is it perhaps just not possible?
Streamlit uses Tornado to serve HTTP and WebSocket data to its frontend. That is, it’s already its own web server, and is written in an existing web framework; it wouldn’t be trivial to wrap it inside another web framework.
Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.
Flask is a synchronous web framework and not ideal for WebSockets etc.
Serving an interactive Streamlit app via flask.render_template isn't feasible, because Streamlit apps are not static; when you interact with your Streamlit app, it re-runs your Python code to generate new results dynamically.
Follow these discussions for more info:
Integration with flask app
Serve streamlit within flask
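One workaround along these lines is to not embed Streamlit in Flask at all and instead put everything behind one ASGI process. The snippet below takes that approach with FastAPI: it starts Streamlit as a subprocess on startup, kills it on shutdown, and mounts an MLflow WSGI app alongside it (the host/port names come from a local config module):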
import asyncio
import subprocess
from mlflow.server import app as mlflow_app
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.wsgi import WSGIMiddleware
import uvicorn
from fastapi.logger import logger
import uuid
from config import *

streamlit_app_process = None
streamlit_app_stdout = None
streamlit_app_stderr = None


async def registry_subprocess() -> None:
    """Start Streamlit as a subprocess when the FastAPI app starts."""
    logger.debug("registry distance_matrix")
    global streamlit_app_process
    global streamlit_app_stdout
    global streamlit_app_stderr

    id = str(uuid.uuid1())
    streamlit_app_stdout = open(f"/tmp/subprocess_stdout_{''.join(id.split('-'))}", 'w+b')
    streamlit_app_stderr = open(f"/tmp/subprocess_stderr_{''.join(id.split('-'))}", 'w+b')

    cmd = ['streamlit', 'run', f'{app_dir}/Home.py', f'--server.port={streamlit_app_port}', f'--server.address={streamlit_app_host}']
    logger.info(f"subprocess start cmd {cmd}")
    streamlit_app_process = subprocess.Popen(cmd, stdout=streamlit_app_stdout.fileno(), stderr=streamlit_app_stderr.fileno())
    logger.info(f"subprocess start success {streamlit_app_process.pid} uid:{id}")

    await asyncio.sleep(1)
    streamlit_app_stdout.flush()
    streamlit_app_stderr.flush()
    [logger.info(i) for i in streamlit_app_stdout.readlines()]
    [logger.info(i) for i in streamlit_app_stderr.readlines()]


async def close_subprocess() -> None:
    """Kill the Streamlit subprocess and close its log files on shutdown."""
    logger.debug("close subprocess")
    try:
        streamlit_app_process.kill()
        streamlit_app_stdout.flush()
        streamlit_app_stderr.flush()
        streamlit_app_stdout.close()
        streamlit_app_stderr.close()
    except Exception as error:
        logger.error(error)


application = FastAPI()
application.add_event_handler("startup", registry_subprocess)
application.add_event_handler("shutdown", close_subprocess)
application.add_middleware(
    CORSMiddleware,
    allow_origins='*',
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
# Mount the MLflow WSGI app under a sub-path of the FastAPI application.
application.mount(f"/{mlflow_app_prefix.strip('/')}", WSGIMiddleware(mlflow_app))

if __name__ == "__main__":
    uvicorn.run(application, host=mlflow_app_host, port=int(mlflow_app_port))

Async Django 3.1 with aiohttp client

Is it possible now, after the start of porting Django to async, to use the aiohttp client with Django?
My case:
the server receives a request;
the server sends another request to another server;
the server decodes the response, saves it into the DB, and returns that response to the client.
Andrew Svetlov mentioned that aiohttp "was never intended to work with synchronous WSGI" two years ago. (https://github.com/aio-libs/aiohttp/issues/3357)
But how the situation looks now? Django seems to almost support async views. Can we use aiohttp with asgi django?
I know I can create an aiohttp server that handles requests, then populates some queue, plus a queue handler that saves responses into the database, but then I am missing a lot from Django: the ORM, the admin, etc.
You can implement this scenario in Django 3.1 like this:
from aiohttp import ClientSession


async def my_proxy(request):
    async with ClientSession() as session:
        async with session.get('https://google.com') as resp:
            print(resp.status)
            print(await resp.text())
But the biggest question for me is how to share one aiohttp session across the Django project, because it is strongly discouraged to spawn a new ClientSession per request.
Important note:
Of course, you should run your Django application in ASGI mode with a compatible application server (for example, uvicorn):
uvicorn my_project.asgi:application --reload
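For completeness, the my_project/asgi.py module that the uvicorn command points at is just the standard file Django generates (a sketch; my_project is a placeholder for the actual project package):

# my_project/asgi.py
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_project.settings')

application = get_asgi_application()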
UPDATE: I found a workaround. You can create a module (a *.py file) with shared global objects and populate it with a ClientSession instance from the project settings at project startup:
shared.py:
from typing import Optional
from aiohttp import ClientSession
AIOHTTP_SESSION: Optional[ClientSession] = None
settings.py:
from aiohttp import ClientSession
from my_project import shared
...
shared.AIOHTTP_SESSION = ClientSession()
views.py:
from my_project import shared
async def my_proxy(request):
    async with shared.AIOHTTP_SESSION.get('https://google.com') as resp:
        print(resp.status, resp._session)
        await resp.text()
Important - the imports should be EXACTLY the same. When I change them to the form "from my_project.shared import AIOHTTP_SESSION" my test breaks :( That is because "from ... import" copies the current value (still None) into the importing module at import time, whereas "from my_project import shared" looks the attribute up on the module at call time, so it sees the session assigned later.
tests.py:
from asyncio import gather
from aiohttp import ClientSession
from django.test import TestCase
from my_project import shared
class ViewTests(TestCase):
    async def test_async_view(self):
        shared.AIOHTTP_SESSION = ClientSession()
        await gather(*[self.async_client.get('/async_view_url') for _ in range(3)])
Run the tests with ./manage.py test
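An alternative to populating the module from settings.py is to create the session lazily on first use (a sketch, not part of the original workaround; get_aiohttp_session is a made-up helper), which also avoids constructing a ClientSession outside a running event loop:

# my_project/shared.py
from typing import Optional

from aiohttp import ClientSession

_session: Optional[ClientSession] = None


def get_aiohttp_session() -> ClientSession:
    # Created on first use, inside a running event loop, and reused afterwards.
    global _session
    if _session is None or _session.closed:
        _session = ClientSession()
    return _session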

How do I get django to execute a remote celery task? Seems to ignore BROKER_URL in settings.py

I've got a Django app that's trying to call a Celery task that will eventually be executed on some remote hosts. The task codebase is completely separate from the Django project, so I'm using celery.execute.send_task and calling it from a post_delete model signal. The code looks a bit like this:
class MyModel(models.Model):
    @staticmethod
    def do_async_thing(sender, instance, **kwargs):
        celery.execute.send_task("tasks.do_my_thing", args=[instance.name])

signals.post_delete.connect(MyModel.do_async_thing, sender=MyModel)
I'm using the latest Django (1.6.1) and Celery 3.1.7, so I understand that I don't need any extra module or app in my Django project for it to be able to talk to Celery. I've set BROKER_URL inside my settings.py to the right URL, amqp://user:password@host/vhost.
When this method fires, I get a Connection Refused error. There's no indication on the celery broker that any connection was attempted - I guess it's not seeing the BROKER_URL configuration and is trying to connect to localhost.
How do I make this work? What extra configuration does send_task need to know where the broker is?
So I discovered the answer, and it was to do with not reading the tutorial (http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html) closely enough.
Specifically, I had the correct celery.py in place, which I would have thought should have loaded the settings, but I'd missed the necessary changes to __init__.py in the Django project, so nothing was hooking everything together.
My celery.py should be:
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
app = Celery('myproject')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
and the __init__.py should be simply:
from __future__ import absolute_import
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app

Flask context and python-rq

I have a Flask application with blueprints that is structured like this:
application.py
project/
    form_emailer.py
    blueprints/
        example_form.py
    wtforms-models/
        example_form_model.py
    templates/
        example_form_template.html
I'm trying to use RQ to send emails in the background (using Flask-Mail) because our SMTP uses the Gmail servers, which can take a few seconds to complete. My function in form_emailer.py looks like this:
from flask import Flask
from flask.ext.mail import Mail, Message
from application import app, q
mail = Mail(app)
def _queue_message(message):
    mail.send(message)


def sendemail(recipients, subject, body):
    """
    This function gets called in a Flask blueprint.
    """
    message = Message(recipients=recipients, subject=subject, body=body)
    q.enqueue(_queue_message, message)
My (simplified) application.py looks like this. I'm breaking convention by using "import *" in order to simplify additions there (the __init__.py files in those packages dynamically import all modules):
from flask import Flask
from redis import Redis
from rq import Queue
app = Flask(__name__)
q = Queue(connection=Redis())
from project.blueprints import *
from project.forms import *
if __name__ == "__main__":
    app.run()
I have an rqworker running in the same virtual environment where my application is running, and the worker detects the task. However, I'm getting the following traceback and can't figure out how to fix this:
16:41:29 *** Listening on high, normal, low...
16:43:26 low: project.form_emailer._queue_message(<flask_mail.Message object at 0x299d690>) (bd913b3a-4e7f-4efb-b51c-8ae11d37ac00)
16:43:27 ImportError: cannot import name sendemail
Traceback (most recent call last):
...
File "./project/blueprints/example_form.py", line 4, in <module>
from project.form_emailer import sendemail
ImportError: cannot import name sendemail
I suspect this has to do with Flask's application context, but my initial attempts to use with app.app_context(): are failing; the worker is not even able to import the function I want to use. What am I doing wrong here?
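For reference, the app-context attempt mentioned above would look roughly like this (a sketch of the attempted variant, not a fix; note that the ImportError in the traceback points at a circular import between application.py, the blueprint and form_emailer.py, which happens before any context handling runs):

def _queue_message(message):
    # Attempted variant: push an application context so Flask-Mail can read
    # the app's configuration when the RQ worker executes this function.
    with app.app_context():
        mail.send(message)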