Webhook from Django 4.1 to python-telegram-bot 20.0a2 - django

I use the python-telegram-bot 20.0a2 library and Django 4.1.
The bot is run by a main.py script:
if __name__ == "__main__":
    asyncio.run(main())
Inside the main script I also run uvicorn in the same asynchronous context as the Application instance:
# Run application and webserver together
async with application_tg:
    await application_tg.start()
    await server.serve()  # uvicorn
    await application_tg.stop()
What is the problem?
I use a webhook for my bot.
Django's urls.py calls an async view, but the view can't get the initialized Application instance of the bot.
So the question is:
How can I rearrange the interaction between python-telegram-bot 20 and Django 4.1 so that I can access the Application instance from a Django view?
Addition:
It's easy to achieve with other frameworks such as Starlette, as mentioned on the official wiki page of the PTB library: https://docs.python-telegram-bot.org/en/v20.0a2/examples.customwebhookbot.html
My main script:
https://gist.github.com/SergSm/6843fadf505b826f83a10bf7eebc3fa0
My view:
import json

from django.views import View
from django.http import JsonResponse, HttpResponse
from django.views.decorators.csrf import csrf_exempt

from telegram import Update

from bot.tgbot.main import application_tg


async def telegram_handle(request):
    if request.method == 'POST':
        await application_tg.update_queue.put(
            Update.de_json(data=json.loads(request.body), bot=application_tg.bot)
        )
        return JsonResponse({"ok": "POST processed"})
    else:
        return JsonResponse({"ok": "GET processed"})
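For context, a minimal urls.py wiring for this view might look like the sketch below. The module path bot.views and the URL pattern are assumptions, not taken from the question. Telegram posts updates without a CSRF token, so the webhook view has to be exempted from CSRF checks; the csrf_exempt() decorator only gained async-view support in later Django versions, so on 4.1 one pragmatic option is to set the csrf_exempt attribute on the view directly.
# urls.py -- hypothetical wiring for the async webhook view above
from django.urls import path

from bot.views import telegram_handle  # assumed module path

# Let CsrfViewMiddleware skip this view; Telegram's POSTs carry no CSRF token.
telegram_handle.csrf_exempt = True

urlpatterns = [
    # Assumed URL; it must match the path registered with bot.set_webhook().
    path("telegram/webhook/", telegram_handle),
]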
UPDATE 1
I was desperate to make it run this way.
I tried to use the contextvars module and read a lot of asyncio-related material.
In the end I made what felt like an awful assumption: that if I put my python-telegram-bot code inside the Django async view function, it would work. And it does work!
And now I will try to wrap it using middleware to make my code cleaner.
UPDATE 2
If you want to use the Django ORM from sync functions, you need to decorate them with @sync_to_async(thread_sensitive=False).
The thread_sensitive=False parameter is important in this case; otherwise you will never get the results of the awaitables.
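A minimal sketch of what UPDATE 2 describes, i.e. calling the Django ORM from async handler code through sync_to_async(thread_sensitive=False). The Profile model and the handler names are hypothetical, used only for illustration:
# Hypothetical example: Django ORM access from an async python-telegram-bot handler.
from asgiref.sync import sync_to_async

from bot.models import Profile  # assumed model


@sync_to_async(thread_sensitive=False)
def get_profile(telegram_id):
    # Plain synchronous ORM query, executed in a worker thread by sync_to_async.
    return Profile.objects.filter(telegram_id=telegram_id).first()


async def handle_start(update, context):
    # Awaiting the wrapped function does not block the event loop.
    profile = await get_profile(update.effective_user.id)
    await update.message.reply_text(f"Hello, {profile}!")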

Related

Async Django 3.1 with aiohttp client

Is it possible now, after the start of porting Django to async, to use an aiohttp client with Django?
My case:
server receives a request;
server sends another request to another server;
server decodes the response, saves it into DB, and returns that response to the client;
Andrew Svetlov mentioned that aiohttp "was never intended to work with synchronous WSGI" two years ago. (https://github.com/aio-libs/aiohttp/issues/3357)
But how does the situation look now? Django seems to almost support async views. Can we use aiohttp with ASGI Django?
I know I can create an aiohttp server that handles requests and populates some queue, plus a queue handler that saves responses into the database, but then I am missing a lot from Django: the ORM, the admin, etc.
You can implement this scenario in Django 3.1 like this:
from aiohttp import ClientSession

async def my_proxy(request):
    async with ClientSession() as session:
        async with session.get('https://google.com') as resp:
            print(resp.status)
            print(await resp.text())
But the biggest question for me is how to share one aiohttp session across the Django project, because it is strongly discouraged to spawn a new ClientSession per request.
Important note:
Of course you should run your Django application in ASGI mode with a compatible application server (for example uvicorn):
uvicorn my_project.asgi:application --reload
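For reference, the asgi.py that uvicorn loads in this command is the standard Django boilerplate; my_project is an assumed project name:
# my_project/asgi.py
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_project.settings')

# ASGI callable served by uvicorn.
application = get_asgi_application()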
UPDATE: I found a workaround. You can create a module (a *.py file) holding shared global objects and populate it with a ClientSession instance from the project settings at project startup:
shared.py:
from typing import Optional
from aiohttp import ClientSession
AIOHTTP_SESSION: Optional[ClientSession] = None
settings.py:
from aiohttp import ClientSession
from my_project import shared
...
shared.AIOHTTP_SESSION = ClientSession()
views.py:
from my_project import shared

async def my_proxy(request):
    async with shared.AIOHTTP_SESSION.get('https://google.com') as resp:
        print(resp.status, resp._session)
        await resp.text()
Important: the imports should be EXACTLY the same. When I change them to the form "from my_project.shared import AIOHTTP_SESSION", my test breaks :( This is because the from-import binds the name to the value it has at import time (None), so the later reassignment of shared.AIOHTTP_SESSION is not visible through that name.
tests.py:
from asyncio import gather

from aiohttp import ClientSession
from django.test import TestCase

from my_project import shared


class ViewTests(TestCase):
    async def test_async_view(self):
        shared.AIOHTTP_SESSION = ClientSession()
        await gather(*[self.async_client.get('/async_view_url') for _ in range(3)])
Run the tests with ./manage.py test.

Python webhook called multiple times from Facebook Chatbot - Api.ai

I am developing a Facebook chatbot where, for specific intents, webhooks are fired and processed via Python. The Python app is hosted on Heroku. I'm facing a problem: whenever any webhook is fired, it keeps being fired in an infinite loop until the next query from the chat is triggered.
#!/usr/bin/env python
from __future__ import print_function

from future import standard_library
standard_library.install_aliases()

import urllib.request, urllib.parse, urllib.error
import json
import os
import psycopg2
import urlparse

from flask import Flask
from flask import request, render_template
from flask import make_response

# Flask should start in global layout
context = Flask(__name__)


# Webhook requests are coming to this method
@context.route('/webhook', methods=['POST'])
def webhook():
    reqContext = request.get_json(silent=True, force=True)

    if reqContext.get("result").get("action") == "input.welcome":
        return welcome()
    elif reqContext.get("result").get("action") == "yahooWeatherForecast":
        return weatherhook(reqContext)
    elif reqContext.get("result").get("action") == "GoogleSearch":
        return searchhook()
    else:
        print("Good Bye")
I have enabled the webhook for 3 intents only. The other intents in api.ai do not have fulfillment (webhook or webhook slot filling) enabled.
Can anybody help me with this?
Two things to look for in such a case:
We need to send an HTTP 200 response back to Facebook to end the request (see the sketch below).
Check whether the message delivery response is enabled on the Facebook subscription. If it is enabled, Facebook will send delivery responses as well, which should also be caught by the webhook.
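As a rough illustration of the first point, the handler can acknowledge every webhook call with an explicit HTTP 200 response; the JSON body below is only illustrative, not the exact payload api.ai expects:
# Sketch: always return HTTP 200 so the platform stops re-delivering the event.
import json

from flask import Flask, request, make_response

context = Flask(__name__)

@context.route('/webhook', methods=['POST'])
def webhook():
    reqContext = request.get_json(silent=True, force=True)
    # ... dispatch on reqContext["result"]["action"] as in the question ...
    res = make_response(json.dumps({"speech": "Processed", "displayText": "Processed"}))
    res.headers['Content-Type'] = 'application/json'
    return res, 200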

Using Python Flask-restful with mod-wsgi

I am trying to use mod_wsgi with Apache 2.2.
I have the following directory structure:
scheduling-algos
  -lib
  -common
  -config
    -config.json
  -resources
    -Optimization.py
  optimization.wsgi
  optimization_app.py
My optimization_app.py is the following:
from flask import Flask
from flask_restful import Api
from resources.Optimization import OptimizationAlgo


def optimizeInstances():
    optimization_app = Flask(__name__)
    api = Api(optimization_app)
    api.add_resource(OptimizationAlgo, '/instances')


if __name__ == '__main__':
    optimizeInstances()
    optimization_app.run(host='0.0.0.0', debug=True)
My Optimization.py code looks like the following:
from flask_restful import Resource


class OptimizationAlgo(Resource):
    def post(self):
        return "success"
When I make a POST request to the URL http://<host>:5000/instances, it works just as expected. I want to make this work using WSGI. I have mod_wsgi installed with Apache 2.2.
My optimization.wsgi file looks like the following:
import sys
sys.path.insert(0, '<path to app>')
from optimization_app import optimizeInstances as application
I get the following error: TypeError: optimizeInstances() takes no arguments (2 given). Apparently this is not the correct way to use WSGI. What is the correct way to use WSGI?
Apparently, this is not the correct way to use WSGI.
As I told you in your other question, you should perhaps go back and read the Flask documentation again so that you learn and understand it properly. By ignoring advice and expecting others to tell you, you only annoy people and they will stop helping you. I would suggest you take heed of that rather than leave a trail of separate questions hoping someone will solve your problems for you.
That said, I can't see how the code you give can even work with the Flask development server as you claim. The problem is that optimization_app = Flask(__name__) sets a local variable within function scope; it isn't setting a global variable. As a result, the call optimization_app.run(host='0.0.0.0', debug=True) should fail with a NameError, as it will not see a variable called optimization_app. I'm not even sure why you are bothering with the function.
If you go look at the Flask documentation, the pattern it would likely use is:
# optimization.wsgi
import sys
sys.path.insert(0, '<path to app>')

# We alias 'app' to 'application' here as mod_wsgi expects it to be called 'application'.
from optimization_app import app as application

# optimization_app.py
from flask import Flask
from flask_restful import Api
from resources.Optimization import OptimizationAlgo

app = Flask(__name__)
api = Api(app)
api.add_resource(OptimizationAlgo, '/instances')

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)

How to understand tornado response request cycle in Django

I want to create a real-time Twitter streaming application using Tornado and Django. The problem is that I am not able to understand the role of Tornado here, and how I will use Django's views.py and models.py in the Tornado web server.
Below is the request-response cycle of Django; could anybody explain how the Tornado web server will play its role here?
A few questions:
1- What will be the role of Django's urls.py file, since we will be routing all the URLs from Tornado itself?
2- How will I connect to models.py to fetch rows for my Tornado application?
I am looking into this GitHub project link.
Tornado fits roughly in the "web server" and "wsgi" parts of this diagram, and adds another section for Tornado RequestHandlers attached to the web server. When you create your tornado.web.Application, you will send some URLs to Tornado RequestHandlers and some to the Django WSGIContainer (which will in turn use the Django urls.py).
Using Django models from Tornado code is more challenging; my code from the last time I tried doing this is at https://gist.github.com/bdarnell/654157 (but note that this is quite old and I don't know if it will work any more)
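A rough sketch of that kind of split routing, assuming a standard Django project layout (the project name, URL pattern, and handler below are made up for illustration):
# Sketch: route some URLs to Tornado handlers, everything else to Django's urls.py.
import os

import tornado.web
import tornado.wsgi
from tornado.ioloop import IOLoop

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_project.settings")
from django.core.wsgi import get_wsgi_application

# Wrap the Django WSGI application so Tornado can serve it.
django_app = tornado.wsgi.WSGIContainer(get_wsgi_application())


class StreamHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("handled natively by Tornado")


application = tornado.web.Application([
    (r"/stream/.*", StreamHandler),  # handled by a Tornado RequestHandler
    (r".*", tornado.web.FallbackHandler, dict(fallback=django_app)),  # falls through to Django's urls.py
])

if __name__ == "__main__":
    application.listen(8888)
    IOLoop.current().start()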
This is tornado_main.py, stored at the same level as manage.py ... I've tested it with Django 1.8 ...
# coding=utf-8
import os

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_project_dir.settings")

import django
django.setup()

from django.core.urlresolvers import reverse_lazy
from django.contrib.auth.models import User

from tornado.options import options, define, parse_command_line
import logging
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.websocket
import tornado.wsgi

define('port', type=int, default=8004)

# tornado.options.options['log_file_prefix'].set(
#     '/var/www/myapp/logs/tornado_server.log')
tornado.options.parse_command_line()


class SomeHandler(tornado.websocket.WebSocketHandler):
    pass


def main():
    logger = logging.getLogger(__name__)
    tornado_app = tornado.web.Application(
        [
            (r'/some_url/(?P<user_id>[0-9]+)', SomeHandler),
        ],
        debug=True
    )
    logger.info("Tornado server starting...")
    server = tornado.httpserver.HTTPServer(tornado_app)
    server.listen(options.port)
    try:
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
        logger.info("Tornado server has stopped")


if __name__ == '__main__':
    main()

django: debugging code in the view layer

I am developing my first django website.
I have written code in my view layer (the handlers that return an HttpResponse object to the view template; I hope I am using the correct terminology).
In any case, I want to put print statements in my views.py file so that I can debug it. However, it looks like stdout has been redirected to another stream, so I am not seeing anything printed to my console (or even the browser).
What is the recommended way (best practice) for debugging django view layer scripts?
There are more advanced ways of doing it, but I find dropping
import pdb
pdb.set_trace()
does the job.
Use the Python logging module. Then use the Django debug toolbar, which will catch and display all the things you send to the log.
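A minimal sketch of that setup, assuming a standard settings.py/views.py layout; the logger usage and view are illustrative:
# settings.py -- send DEBUG and above to the console
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "root": {"handlers": ["console"], "level": "DEBUG"},
}

# views.py
import logging

from django.http import HttpResponse

logger = logging.getLogger(__name__)

def my_view(request):
    # Shows up on the runserver console and in the debug toolbar's logging panel.
    logger.debug("my_view called with GET params: %s", request.GET)
    return HttpResponse("ok")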
I'd upvote dysmsyd, but I don't have the reputation.
pdb is good because it lets you step through your procedure and follow the control flow.
If you are using the Django runserver, you can print to stdout or stderr.
If you are using mod_wsgi, you can print to stderr.
The pprint module is also useful:
import sys
from pprint import pprint

def myview(request):
    pprint(request, sys.stderr)
Try django-sentry, especially if your project is in the production stage.
Configure the Django debug toolbar: pip install django-debug-toolbar and follow the configuration instructions at https://github.com/django-debug-toolbar/django-debug-toolbar
Then import logging and use it to debug: logging.debug('My DEBUG message')
Here is how it works in my class-based view:
from django.shortcuts import render
from django.views.generic import TemplateView
import logging


class ProfileView(TemplateView):
    template_name = 'profile.html'

    def get(self, request, *args, **kwargs):
        logging.debug(kwargs)
        return render(request, self.template_name)