Async Django 3.1 with aiohttp client - django

Is it possible now, after the start of porting Django to async, to use the aiohttp client with Django?
My case:
server receives a request;
server sends another request to another server;
server decodes the response, saves it into DB, and returns that response to the client;
Andrew Svetlov mentioned that aiohttp "was never intended to work with synchronous WSGI" two years ago. (https://github.com/aio-libs/aiohttp/issues/3357)
But how does the situation look now? Django seems to almost support async views. Can we use aiohttp with ASGI Django?
I know I can create an aiohttp server that handles requests, then populates some queue, and a queue handler that saves responses into the database, but then I am missing a lot from Django: the ORM, the admin, etc.

You can implement this scenario in Django 3.1 like this:

from aiohttp import ClientSession

async def my_proxy(request):
    async with ClientSession() as session:
        async with session.get('https://google.com') as resp:
            print(resp.status)
            print(await resp.text())
But the biggest question for me is how to share one session of aiohttp within django project because it is strongly not recommended to spawn a new ClientSession per request.
Important note:
Of course you should run your Django application in ASGI mode with a compatible application server (for example uvicorn):
uvicorn my_project.asgi:application --reload
UPDATE: I found a workaround. You can create a module (a *.py file) with shared global objects and populate it with a ClientSession instance from the project settings at startup:
shared.py:

from typing import Optional
from aiohttp import ClientSession

AIOHTTP_SESSION: Optional[ClientSession] = None
settings.py:

from aiohttp import ClientSession
from my_project import shared
...
shared.AIOHTTP_SESSION = ClientSession()
views.py:

from my_project import shared

async def my_proxy(request):
    async with shared.AIOHTTP_SESSION.get('https://google.com') as resp:
        print(resp.status, resp._session)
        await resp.text()
Important - the imports should be EXACTLY the same. When I change them to the form "from my_project.shared import AIOHTTP_SESSION", my test breaks :( This is because "from ... import NAME" binds the name to the current object at import time, so the later reassignment of shared.AIOHTTP_SESSION in settings.py is never seen by modules that copied the name.
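A minimal, Django-free sketch of that binding behavior (the module and names here are stand-ins, not part of the project):

```python
import types

# Simulate a "shared" module with a placeholder global.
shared = types.ModuleType("shared")
shared.VALUE = None

# "from shared import VALUE" copies the binding once, at import time.
VALUE = shared.VALUE

# Startup code later rebinds the module attribute...
shared.VALUE = "real session"

# ...attribute access sees the new object; the copied name does not.
print(shared.VALUE)  # real session
print(VALUE)         # None
```

This is why accessing the session as shared.AIOHTTP_SESSION works while the direct from-import sees only the initial None.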
tests.py:

from asyncio import gather
from aiohttp import ClientSession
from django.test import TestCase
from my_project import shared

class ViewTests(TestCase):
    async def test_async_view(self):
        shared.AIOHTTP_SESSION = ClientSession()
        await gather(*[self.async_client.get('/async_view_url') for _ in range(3)])

Run the test with ./manage.py test.

Related

Django async view - how do I know its running async?

I can create an async view and inquire as to whether the method thinks it's a coroutine, and it is.
However, looking at the stack it looks like my view is being called by asgiref's AsyncToSync.
I am using Gunicorn with a Uvicorn worker and an asgi.py script that starts with:
from channels.routing import get_default_application
channel_layer = get_default_application
Also, I am trying to discover whether any of my middleware is running sync, but even with a (forced) sync middleware I am not hitting logger.debug('Synchronous %s adapted.', name).
Makes no sense.
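As a starting point for debugging, here is a small, Django-independent sketch that reports whether the current call site is executing inside a running event loop (the function names are invented for illustration; note that a coroutine view adapted through AsyncToSync will still see some loop, so the interesting question is usually which thread and loop it is):

```python
import asyncio

def running_loop_name():
    """Return the running event loop's class name, or None outside a loop."""
    try:
        return type(asyncio.get_running_loop()).__name__
    except RuntimeError:
        return None

async def fake_view():
    # Inside an async call path this reports the live loop.
    return running_loop_name()

print(running_loop_name())       # None: called from sync code
print(asyncio.run(fake_view()))  # the loop class name, platform-dependent
```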

Webhook from Django 4.1 to python-telegram-bot 20.0a2

I use the python-telegram-bot 20.0a2 library and Django 4.1.
The bot is run by a main.py script:

if __name__ == "__main__":
    asyncio.run(main())
Inside the main script I also run uvicorn in the same asynchronous context as the Application instance:

# Run application and webserver together
async with application_tg:
    await application_tg.start()
    await server.serve()  # uvicorn
    await application_tg.stop()
What is the problem?
I use a webhook for my bot.
Django's urls.py calls an async view, but the view can't get the initialized Application instance of the bot.
So the question is:
How can I rearrange the scheme of interaction between python-telegram-bot 20 and Django 4.1 so that I can access the Application instance from a Django hook?
Addition:
It's easy to achieve with other frameworks such as Starlette, as mentioned on the official wiki page of the PTB library: https://docs.python-telegram-bot.org/en/v20.0a2/examples.customwebhookbot.html
My main script:
https://gist.github.com/SergSm/6843fadf505b826f83a10bf7eebc3fa0
my view:

import json

from django.views import View
from django.http import JsonResponse, HttpResponse
from django.views.decorators.csrf import csrf_exempt

from telegram import Update

from bot.tgbot.main import application_tg

async def telegram_handle(request):
    if request.method == 'POST':
        await application_tg.update_queue.put(
            Update.de_json(data=json.loads(request.body), bot=application_tg.bot)
        )
        return JsonResponse({"ok": "POST processed"})
    else:
        return JsonResponse({"ok": "GET processed"})
UPDATE 1
I was desperate to make it run this way. I tried to use the contextvars module and read a lot of asyncio-related material. In the end I made an awful assumption: if I put my python-telegram-bot code into the Django async view function, it might just work. And it does work!
Now I will try to wrap it in middleware to make my code cleaner.
UPDATE 2
If you want to use the Django ORM from sync functions, you need to use @sync_to_async(thread_sensitive=False).
The thread_sensitive=False parameter is important in this case; otherwise you will never get the result of the awaitables.

How to use KafkaConsumer with Django4

I have a Django 4 project and I am using KafkaConsumer from kafka-python. I want to update Django models after receiving a Kafka message. The goal is to have a Kafka worker running and consuming messages; it should also be able to access the models in the existing Django ASGI app. Is this possible, or should this worker be a separate Django project?
Yes, this is possible.
You can simply write a Python script that initializes Django before importing any models:

import os
import django

# Point at your project's settings and set up Django first
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "PROJECT_PATH.settings")
django.setup()

# Import your django model
from SOME_APP.models import SOME_MODEL

# Import the Kafka consumer
from kafka import KafkaConsumer, TopicPartition

# Create the Kafka consumer
consumer = KafkaConsumer("topicName", bootstrap_servers='<BOOTSTRAP_SERVER>')
for msg in consumer:
    # play with the message and use the Django model here
    ...
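The loop body typically decodes msg.value before touching the ORM; here is a broker-free sketch of just that step (the fake message and the handle function are illustrative stand-ins, not part of kafka-python):

```python
import json
from types import SimpleNamespace

def handle(msg):
    # kafka-python delivers the raw payload as bytes in msg.value;
    # decode it to a dict before using it with a Django model.
    payload = json.loads(msg.value.decode("utf-8"))
    # Here you would call e.g. SOME_MODEL.objects.update_or_create(...)
    return payload

# Stand-in for a ConsumerRecord, since no broker is running here.
fake_msg = SimpleNamespace(value=b'{"id": 1, "status": "done"}')
print(handle(fake_msg))  # {'id': 1, 'status': 'done'}
```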

Can a streamlit app be run within a flask app?

No code here, just a question. I have tried various ways to get a Streamlit app to run within a Flask app. The main reason? Using Flask for user authentication in front of the Streamlit app. I cannot get it to work. Is it perhaps not possible?
Streamlit uses Tornado to serve HTTP and WebSocket data to its frontend. That is, it’s already its own web server, and is written in an existing web framework; it wouldn’t be trivial to wrap it inside another web framework.
Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.
Flask is a synchronous web framework and not ideal for WebSockets etc.
Serving an interactive Streamlit app via flask.render_template isn't feasible, because Streamlit apps are not static; when you interact with your Streamlit app, it re-runs your Python code to generate new results dynamically.
Follow these discussions for more info
Integration with flask app
Serve streamlit within flask
For reference, here is an example that runs Streamlit as a subprocess next to a FastAPI application (which also mounts mlflow):

import asyncio
import subprocess
import uuid

from mlflow.server import app as mlflow_app
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.wsgi import WSGIMiddleware
from fastapi.logger import logger
import uvicorn

from config import *

streamlit_app_process = None
streamlit_app_stdout = None
streamlit_app_stderr = None

async def registry_subprocess() -> None:
    logger.debug("registry distance_matrix")
    global streamlit_app_process
    global streamlit_app_stdout
    global streamlit_app_stderr

    id = str(uuid.uuid1())
    streamlit_app_stdout = open(f"/tmp/subprocess_stdout_{''.join(id.split('-'))}", 'w+b')
    streamlit_app_stderr = open(f"/tmp/subprocess_stderr_{''.join(id.split('-'))}", 'w+b')
    cmd = ['streamlit', 'run', f'{app_dir}/Home.py', f'--server.port={streamlit_app_port}', f'--server.address={streamlit_app_host}']
    logger.info(f"subprocess start cmd {cmd}")
    streamlit_app_process = subprocess.Popen(cmd, stdout=streamlit_app_stdout.fileno(), stderr=streamlit_app_stderr.fileno())
    logger.info(f"subprocess start success {streamlit_app_process.pid} uid:{id}")

    await asyncio.sleep(1)
    streamlit_app_stdout.flush()
    streamlit_app_stderr.flush()
    [logger.info(i) for i in streamlit_app_stdout.readlines()]
    [logger.info(i) for i in streamlit_app_stderr.readlines()]

async def close_subprocess() -> None:
    logger.debug("close subprocess")
    try:
        streamlit_app_process.kill()
        streamlit_app_stdout.flush()
        streamlit_app_stderr.flush()
        streamlit_app_stdout.close()
        streamlit_app_stderr.close()
    except Exception as error:
        logger.error(error)

application = FastAPI()
application.add_event_handler("startup", registry_subprocess)
application.add_event_handler("shutdown", close_subprocess)
application.add_middleware(
    CORSMiddleware,
    allow_origins='*',
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
application.mount(f"/{mlflow_app_prefix.strip('/')}", WSGIMiddleware(mlflow_app))

if __name__ == "__main__":
    uvicorn.run(application, host=mlflow_app_host, port=int(mlflow_app_port))

How to understand tornado response request cycle in Django

I want to create a real-time Twitter streaming application using Tornado and Django. The problem is that I am not able to understand the role of Tornado here, and how I will use Django's views.py and models.py in the Tornado web server.
Below is the request-response cycle of Django; could anybody explain how the Tornado web server plays its role here?
A few questions:
1. What will be the role of the urls.py file in Django, since we will be routing all the URLs from Tornado itself?
2. How will I connect to models.py to fetch rows for my Tornado application?
I am looking into this github project link
Tornado fits roughly in the "web server" and "wsgi" parts of this diagram, and adds another section for Tornado RequestHandlers attached to the web server. When you create your tornado.web.Application, you will send some URLs to Tornado RequestHandlers and some to the Django WSGIContainer (which will in turn use the Django urls.py).
Using Django models from Tornado code is more challenging; my code from the last time I tried doing this is at https://gist.github.com/bdarnell/654157 (but note that this is quite old and I don't know if it will work any more)
This is tornado_main.py, stored at the same level as manage.py. I've tested it with Django 1.8:
# coding=utf-8
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_project_dir.settings")

import django
django.setup()

from django.core.urlresolvers import reverse_lazy
from django.contrib.auth.models import User
from tornado.options import options, define, parse_command_line

import logging
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.websocket
import tornado.wsgi

define('port', type=int, default=8004)
# tornado.options.options['log_file_prefix'].set(
#     '/var/www/myapp/logs/tornado_server.log')
tornado.options.parse_command_line()

class SomeHandler(tornado.websocket.WebSocketHandler):
    pass

def main():
    logger = logging.getLogger(__name__)
    tornado_app = tornado.web.Application(
        [
            (r'/some_url/(?P<user_id>[0-9]+)', SomeHandler),
        ],
        debug=True
    )
    logger.info("Tornado server starting...")
    server = tornado.httpserver.HTTPServer(tornado_app)
    server.listen(options.port)
    try:
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
        logger.info("Tornado server has stopped")

if __name__ == '__main__':
    main()