How to understand the Tornado request-response cycle with Django

I want to create a real-time Twitter streaming application using Tornado and Django. The problem is that I am not able to understand the role of Tornado here, and how I would use Django's views.py and models.py in the Tornado web server.
Below is the request-response cycle of Django; could anybody explain to me how the Tornado web server plays its role here?
A few questions:
1. What will be the role of the urls.py file in Django, since we will be routing all the URLs from Tornado itself?
2. How will I connect to models.py to fetch rows for my Tornado application?
I am looking into this GitHub project link

Tornado fits roughly in the "web server" and "wsgi" parts of this diagram, and adds another section for Tornado RequestHandlers attached to the web server. When you create your tornado.web.Application, you will route some URLs to Tornado RequestHandlers and some to the Django WSGIContainer (which will in turn use the Django urls.py).
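A minimal sketch of that split (the handler, URL pattern, and project name here are my own illustration, not from any linked code):

import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # hypothetical project

import django.core.wsgi
import tornado.web
import tornado.wsgi

class RealtimeHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("served directly by Tornado")

# Wrap Django's WSGI application so Tornado can serve it.
django_app = tornado.wsgi.WSGIContainer(django.core.wsgi.get_wsgi_application())

application = tornado.web.Application([
    (r"/realtime/.*", RealtimeHandler),  # handled by a Tornado RequestHandler
    (r".*", tornado.web.FallbackHandler, dict(fallback=django_app)),  # falls through to Django's urls.py
])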
Using Django models from Tornado code is more challenging; my code from the last time I tried this is at https://gist.github.com/bdarnell/654157 (but note that it is quite old and I don't know whether it still works).
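If you do call the ORM from Tornado, one common workaround (my assumption, not what the gist does) is to push the blocking queries onto a thread pool so they don't block the IOLoop; this assumes DJANGO_SETTINGS_MODULE is set and django.setup() has already run:

from concurrent.futures import ThreadPoolExecutor
import tornado.ioloop
import tornado.web
from django.contrib.auth.models import User

executor = ThreadPoolExecutor(max_workers=4)

class UserCountHandler(tornado.web.RequestHandler):
    async def get(self):
        # The ORM call blocks, so run it on the executor instead of the IOLoop.
        count = await tornado.ioloop.IOLoop.current().run_in_executor(
            executor, User.objects.count)
        self.write({"user_count": count})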

This is tornado_main.py, stored at the same level as manage.py. I've tested it with Django 1.8.
# coding=utf-8
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_project_dir.settings")

import django
django.setup()

from django.core.urlresolvers import reverse_lazy
from django.contrib.auth.models import User

from tornado.options import options, define, parse_command_line
import logging
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.websocket
import tornado.wsgi

define('port', type=int, default=8004)
# tornado.options.options['log_file_prefix'].set(
#     '/var/www/myapp/logs/tornado_server.log')
tornado.options.parse_command_line()


class SomeHandler(tornado.websocket.WebSocketHandler):
    pass


def main():
    logger = logging.getLogger(__name__)
    tornado_app = tornado.web.Application(
        [
            (r'/some_url/(?P<user_id>[0-9]+)', SomeHandler),
        ],
        debug=True
    )
    logger.info("Tornado server starting...")
    server = tornado.httpserver.HTTPServer(tornado_app)
    server.listen(options.port)
    try:
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
        logger.info("Tornado server has stopped")


if __name__ == '__main__':
    main()
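Assuming the layout above, you would start this as its own process alongside your normal Django server, e.g.:
python tornado_main.py --port=8004
(the --port flag comes from the define() call; it defaults to 8004).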

Related

Webhook from Django 4.1 to python-telegram-bot 20.0a2

I use the python-telegram-bot 20.0a2 library and Django 4.1
The bot is run by a main.py script:
if __name__ == "__main__":
    asyncio.run(main())
Inside the main script I also run uvicorn in the same asynchronous context as the Application instance:
# Run the application and webserver together
async with application_tg:
    await application_tg.start()
    await server.serve()  # uvicorn
    await application_tg.stop()
What is the problem?
I use a webhook for my bot.
Django's urls.py calls an async view, but the view can't get the initialized Application instance of the bot.
So the question is:
How can I rearrange the scheme of interaction between python-telegram-bot 20 and Django 4.1 so that I can access the Application instance from a Django view?
Addition:
This is easy to achieve with other frameworks such as Starlette, as mentioned on the official wiki page of the PTB library: https://docs.python-telegram-bot.org/en/v20.0a2/examples.customwebhookbot.html
My main script:
https://gist.github.com/SergSm/6843fadf505b826f83a10bf7eebc3fa0
my view:
import json
from django.views import View
from django.http import JsonResponse, HttpResponse
from django.views.decorators.csrf import csrf_exempt
from telegram import Update
from bot.tgbot.main import application_tg

async def telegram_handle(request):
    if request.method == 'POST':
        await application_tg.update_queue.put(
            Update.de_json(data=json.loads(request.body), bot=application_tg.bot)
        )
        return JsonResponse({"ok": "POST processed"})
    else:
        return JsonResponse({"ok": "GET processed"})
UPDATE 1
I was desperate to make it run this way.
I tried to use the contextvars module and read a lot of asyncio-related material.
In the end I made an awful assumption: that if I put my python-telegram-bot code into the Django async view function, it would work. And it does work!
Now I will try to wrap it using middleware to make my code cleaner.
UPDATE 2
If you want to use the Django ORM from sync functions you need to use @sync_to_async(thread_sensitive=False).
The thread_sensitive=False parameter is important in this case; otherwise you will never get the result of the awaitables.
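A minimal sketch of that approach (the app, model, and function names are my own illustration):

from asgiref.sync import sync_to_async
from myapp.models import Subscriber  # hypothetical app and model

@sync_to_async(thread_sensitive=False)
def count_subscribers():
    # Blocking ORM call, executed in a worker thread.
    return Subscriber.objects.count()

async def some_bot_handler(update, context):
    count = await count_subscribers()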

How to use KafkaConsumer with Django 4

I have a Django 4 project and I am using KafkaConsumer from kafka-python. I want to update Django models after receiving a Kafka message. The goal is to have a Kafka worker running and consuming messages; it should also have access to the models in the existing Django ASGI app. Is this possible, or should the worker be a separate Django project?
Yes, this is possible.
You can simply write a Python script that bootstraps Django and imports a model, like this:
import os
import django

# Point at your project's settings and initialise Django
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "PROJECT_PATH.settings")
django.setup()

# Import your Django model
from SOME_APP.models import SOME_MODEL

# Import the Kafka consumer
from kafka import KafkaConsumer, TopicPartition

# Create the Kafka consumer
consumer = KafkaConsumer("topicName", bootstrap_servers='<BOOTSTRAP_SERVER>')
for msg in consumer:
    # Process the message and use the Django model here
    ...
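An alternative (my suggestion, not from the answer above) is to run the consumer as a custom management command, which takes care of the settings bootstrap for you; the path and names below are hypothetical:

# SOME_APP/management/commands/consume_kafka.py
from django.core.management.base import BaseCommand
from kafka import KafkaConsumer

from SOME_APP.models import SOME_MODEL

class Command(BaseCommand):
    help = "Consume Kafka messages and update Django models"

    def handle(self, *args, **options):
        consumer = KafkaConsumer("topicName", bootstrap_servers="<BOOTSTRAP_SERVER>")
        for msg in consumer:
            # msg.value is raw bytes; parse it and touch the ORM as needed,
            # e.g. SOME_MODEL.objects.create(...)
            self.stdout.write(f"received {msg.value!r}")

You would then start the worker with python manage.py consume_kafka.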

Async Django 3.1 with aiohttp client

Now that the porting of Django to async has started, is it possible to use the aiohttp client with Django?
My case:
server receives a request;
server sends another request to another server;
server decodes the response, saves it into DB, and returns that response to the client;
Andrew Svetlov mentioned that aiohttp "was never intended to work with synchronous WSGI" two years ago. (https://github.com/aio-libs/aiohttp/issues/3357)
But how does the situation look now? Django seems to almost support async views. Can we use aiohttp with ASGI Django?
I know, I can create an aiohttp server that handles requests, then populates some queue, and queue handler that saves responses into database, but here I am missing a lot from django: ORM, admin, etc.
You can implement this scenario in Django 3.1 like this:
from aiohttp import ClientSession

async def my_proxy(request):
    async with ClientSession() as session:
        async with session.get('https://google.com') as resp:
            print(resp.status)
            print(await resp.text())
But the biggest question for me is how to share one aiohttp session across the Django project, because spawning a new ClientSession per request is strongly discouraged.
Important note:
Of course you should run your Django application in ASGI mode with a compatible application server (for example uvicorn):
uvicorn my_project.asgi:application --reload
UPDATE: I found a workaround. You can create a module (a *.py file) holding shared global objects and populate it with a ClientSession instance from the project settings at startup:
shared.py:
from typing import Optional
from aiohttp import ClientSession
AIOHTTP_SESSION: Optional[ClientSession] = None
settings.py:
from aiohttp import ClientSession
from my_project import shared
...
shared.AIOHTTP_SESSION = ClientSession()
views.py:
from my_project import shared

async def my_proxy(request):
    async with shared.AIOHTTP_SESSION.get('https://google.com') as resp:
        print(resp.status, resp._session)
        await resp.text()
Important: the imports must be EXACTLY the same. When I change them to the form "from my_project.shared import AIOHTTP_SESSION", my test breaks :( This makes sense: from ... import binds the name once at import time (while it is still None), whereas shared.AIOHTTP_SESSION looks the attribute up at call time, after settings.py has replaced it.
tests.py:
from asyncio import gather

from aiohttp import ClientSession
from django.test import TestCase

from my_project import shared

class ViewTests(TestCase):
    async def test_async_view(self):
        shared.AIOHTTP_SESSION = ClientSession()
        await gather(*[self.async_client.get('/async_view_url') for _ in range(3)])
Run the tests with ./manage.py test

Django ASGI + New Relic getting a CORS error on Heroku

I am running a Django application using Django Channels and a Daphne server (ASGI) instead of the typical gunicorn (WSGI) server. So I had to modify my application to this:
# asgi.py
import os
import django
from channels.routing import get_default_application
from asgiref.wsgi import WsgiToAsgi
from django.core.wsgi import get_wsgi_application
from newrelic import agent
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "server.settings")
django.setup()
application = agent.WSGIApplicationWrapper(get_wsgi_application())
application = WsgiToAsgi(application)
To my surprise this actually works. When I access my django api from a browser or postman it works properly and the data shows up in New Relic. However, I also have a client-side Angular web app which makes REST API calls to the django server and I am getting CORS errors.
Please note that this is not a regular CORS issue: when I remove the New Relic wrapper, I am able to access my API properly from Angular.
Failed to load https://my-app.herokuapp.com/api/: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin https://frontend.com is therefore not allowed access.
Change your asgi.py file to something like this:
import os

from django.core.asgi import get_asgi_application
import newrelic.agent  # import the New Relic agent

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "server.settings")
django_asgi_app = get_asgi_application()

from django.conf import settings

# Location of your newrelic.ini file (join rather than concatenate, since BASE_DIR may be a Path)
newrelic_config_file = os.path.join(settings.BASE_DIR, "newrelic.ini")
newrelic.agent.initialize(newrelic_config_file)  # initialise the agent
application = newrelic.agent.ASGIApplicationWrapper(django_asgi_app)
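With that in place, serve the ASGI application as before, e.g. with Daphne (assuming the project module is called server, as in the question):
daphne server.asgi:application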

How do I get django to execute a remote celery task? Seems to ignore BROKER_URL in settings.py

I've got a django app that's trying to call a celery task that will eventually be executed on some remote hosts. The task codebase is completely separate from the django project, so I'm using celery.execute.send_task and calling it from a post_delete model signal. The code looks a bit like this:
import celery.execute
from django.db import models
from django.db.models import signals

class MyModel(models.Model):
    @staticmethod
    def do_async_thing(sender, instance, **kwargs):
        celery.execute.send_task("tasks.do_my_thing", args=[instance.name])

signals.post_delete.connect(MyModel.do_async_thing, sender=MyModel)
I'm using the latest Django (1.6.1) and celery 3.1.7, so I understand that I don't need any extra module or app in my django project for it to be able to talk to celery. I've set BROKER_URL inside my settings.py to the right URL, amqp://user:password@host/vhost.
When this method fires, I get a Connection Refused error. There's no indication on the celery broker that any connection was attempted; I guess it's not seeing the BROKER_URL configuration and is trying to connect to localhost.
How do I make this work? What extra configuration does send_task need to know where the broker is?
So I discovered the answer, and it was to do with not reading the tutorial (http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html) closely enough.
Specifically, I had the correct celery.py in place which I would have thought should have loaded the settings, but I'd missed the necessary changes to __init__.py in the django project, which wasn't hooking everything together.
My celery.py should be:
from __future__ import absolute_import

import os

from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
and the __init__.py should be simply:
from __future__ import absolute_import

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
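With the app wired up like this, send_task picks up BROKER_URL from settings.py. Equivalently (a short sketch, reusing the hypothetical names above), you can send the task through the app instance directly:

from myproject.celery import app

# Routes the message to the broker configured in settings.py,
# even though the task body lives in a separate codebase.
app.send_task("tasks.do_my_thing", args=["some-name"])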