How to use aiogram + flask (or only aiogram) for payment processing in telegram bot?

I have a Telegram bot written in Python (using the aiogram library) that runs on a webhook. I need to process payments for a paid subscription to the bot (I use YooMoney as the payment provider).
It's clear how to do this with Flask: use its request handling to catch the HTTP notifications that YooMoney sends (in YooMoney you can specify a notification URL, to which payment statuses like "payment.succeeded" are delivered).
In short, Flask can check the status of a payment. The catch is that the bot is written with aiogram and is launched like this:
if __name__ == '__main__':
    try:
        start_webhook(
            dispatcher=dp,
            webhook_path=WEBHOOK_PATH,
            on_startup=on_startup,
            on_shutdown=on_shutdown,
            skip_updates=True,
            host=WEBAPP_HOST,
            port=WEBAPP_PORT,
        )
    except (KeyboardInterrupt, SystemExit):
        logger.error("Bot stopped!")
And if you simply add the Flask app's startup to this code so that it listens for replies from YooMoney, then EITHER the bot's own aiogram handlers run OR Flask runs, depending on which comes first in the code.
In effect, it is impossible to use Flask and aiogram at the same time without multithreading. Is it possible, without Flask, to track in aiogram what another server (YooMoney) sends to my server? Or how can the aiogram + Flask combination be used more competently?
I tried running Flask and the aiogram bot in separate threads, but then an error occurs because the same port cannot be bound by different processes (which is logical).
Does that mean I have to use different ports, or run the processes on different servers?
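One way around this needs no separate Flask process at all: aiogram's webhook mode is built on aiohttp, so the YooMoney notification endpoint can be registered as an extra route on the same aiohttp application that serves the bot webhook, on the same port. A minimal sketch, assuming aiogram 2.x; the /yoomoney path, the handler, and WEBHOOK_URL are hypothetical, while WEBHOOK_PATH, WEBAPP_HOST and WEBAPP_PORT are the same constants as in the snippet above:
from aiogram import Bot, Dispatcher
from aiogram.dispatcher.webhook import get_new_configured_app
from aiohttp import web

bot = Bot(token="TOKEN")
dp = Dispatcher(bot)

async def handle_yoomoney(request: web.Request) -> web.Response:
    # YooMoney posts payment notifications as form data; verify the
    # signature and the payment status here, then notify the user via bot.
    data = await request.post()
    return web.Response(text="ok")

async def on_app_startup(app: web.Application):
    await bot.set_webhook(WEBHOOK_URL)  # your public HTTPS URL + WEBHOOK_PATH

# Build the same aiohttp app that start_webhook would create internally,
# then attach the payment route to it.
app = get_new_configured_app(dispatcher=dp, path=WEBHOOK_PATH)
app.router.add_post("/yoomoney", handle_yoomoney)
app.on_startup.append(on_app_startup)

if __name__ == "__main__":
    web.run_app(app, host=WEBAPP_HOST, port=WEBAPP_PORT)
This keeps one process and one port: Telegram updates arrive at WEBHOOK_PATH, YooMoney notifications at /yoomoney.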

Related

Possible to already have a "ready cursor" in a serverless environment?

Take the following two timings on a trivial SQL statement:
timeit.timeit("""
import MySQLdb;
import settings;
conn = MySQLdb.connect(host=settings.DATABASES['default']['HOST'], port=3306, user=settings.DATABASES['default']['USER'], passwd=settings.DATABASES['default']['PASSWORD'], db=settings.DATABASES['default']['NAME'], charset='utf8');
cursor=conn.cursor();
cursor.execute('select 1');
cursor.fetchone()
""", number=100
)
# 2.5417470932006836
And, the same thing but assuming we already have a cursor that is ready to execute a statement:
timeit.timeit("""
cursor.execute('select 1');
cursor.fetchone()""",
setup="""
import MySQLdb;
import settings;
conn = MySQLdb.connect(host=settings.DATABASES['default']['HOST'], port=3306, user=settings.DATABASES['default']['USER'], passwd=settings.DATABASES['default']['PASSWORD'], db=settings.DATABASES['default']['NAME'], charset='utf8');
cursor=conn.cursor()
""", number=100
)
# 0.1153109073638916
And so we see that the second approach is about 20x faster on initialization time when we don't have to create a new connection/cursor each time.
But how would it be possible to do something like this in a serverless environment? For example, if I were using Google Cloud Functions or Cloud Run, would it be possible to:
Authenticate the user in order to set up a cursor to the database; and
Open a websocket where they can then send the query each time? (For an open websocket, do we need to check authentication on the user each time?)
Or, is there a possible approach to deal with the above overhead in a serverless environment?
As @John Hanley mentioned, on Cloud Run your code can run continuously either as a service or as a job. Both services and jobs run in the same environment and can use the same integrations with other services on Google Cloud.
Cloud Run services: used to run code that responds to web requests or events.
Cloud Run jobs: used to run code that performs work (a job) and quits when the work is done.
A Cloud Run instance that has at least one open WebSocket connection is considered active and is therefore billed for the duration of that connection.
WebSocket applications are supported on Cloud Run with no additional configuration required. However, WebSocket streams are still HTTP requests, subject to the request timeout configured for your Cloud Run service, so you need to make sure this setting works well for your use of WebSockets, for example by implementing reconnects in your clients.
I would also recommend checking the documentation on socket instances and the SQL connect functions.
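On the connection overhead itself, a common pattern in serverless runtimes (Cloud Functions and Cloud Run alike) is to create the connection in module scope, so warm instances reuse it across invocations instead of reconnecting every time. A minimal sketch under the same MySQLdb/settings assumptions as the timings above; the handler name is hypothetical:
import MySQLdb
import settings

_conn = None  # lives as long as the instance stays warm

def get_cursor():
    # Reconnect only on cold starts or dropped links, not on every request.
    global _conn
    if _conn is None:
        _conn = MySQLdb.connect(
            host=settings.DATABASES['default']['HOST'],
            port=3306,
            user=settings.DATABASES['default']['USER'],
            passwd=settings.DATABASES['default']['PASSWORD'],
            db=settings.DATABASES['default']['NAME'],
            charset='utf8',
        )
    _conn.ping(True)  # ask the client library to reconnect if the link died
    return _conn.cursor()

def handler(request):  # hypothetical Cloud Function entry point
    cur = get_cursor()
    cur.execute('select 1')
    return str(cur.fetchone())
This sidesteps the websocket question for the common case: only cold starts pay the connection cost.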

Shutting down a plotly-dash server

This is a follow-up to this question: How to stop flask application without using ctrl-c. The problem is that I didn't understand some of the terminology in the accepted answer, since I'm totally new to this.
import dash
import dash_core_components as dcc
import dash_html_components as html

app = dash.Dash()

app.layout = html.Div(children=[
    html.H1(children='Dash Tutorials'),
    dcc.Graph()
])

if __name__ == '__main__':
    app.run_server(debug=True)
How do I shut this down? My end goal is to run a plotly dashboard on a remote machine, but I'm testing it out on my local machine first.
I guess I'm supposed to "expose an endpoint" (have no idea what that means) via:
from flask import request

def shutdown_server():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()

@app.route('/shutdown', methods=['POST'])
def shutdown():
    shutdown_server()
    return 'Server shutting down...'
Where do I include the above code? Is it supposed to be included in the first block of code that I showed (i.e. the code that contains app.run_server command)? Is it supposed to be separate? And then what are the exact steps I need to take to shut down the server when I want?
Finally, are the steps to shut down the server the same whether I run the server on a local or remote machine?
Would really appreciate help!
The method in the linked answer, werkzeug.server.shutdown, only works with the development server. Creating a view function, with an assigned URL ("exposing an endpoint") to implement this shutdown function is a convenience thing, which won't work when deployed with a WSGI server like gunicorn.
Maybe that creates more questions than it answers:
I suggest familiarising yourself with Flask's wsgi-standalone deployment docs.
And then probably the gunicorn deployment guide. The monitoring section has a number of different examples of service monitors, which you can use with gunicorn allowing you to run the app in the background, start on reboot, etc.
Ultimately, starting and stopping the WSGI server is the responsibility of the service monitor and logic to do this probably shouldn't be coded into your app.
What works in both cases, app.run_server(debug=True) and app.run_server(debug=False), anywhere in the code, is:
os.kill(os.getpid(), signal.SIGTERM)
(don't forget to import os and signal)
SIGTERM should cause a clean exit of the application.
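For a Dash app specifically, a hedged sketch of wiring that into an endpoint on the underlying Flask server (Dash exposes it as app.server); the /shutdown path is hypothetical, and the response may never reach the client because the process exits:
import os
import signal

import dash
import dash_html_components as html

app = dash.Dash()
app.layout = html.Div(children=[html.H1(children='Dash Tutorials')])

@app.server.route('/shutdown', methods=['POST'])
def shutdown():
    # Works under debug=True and debug=False alike.
    os.kill(os.getpid(), signal.SIGTERM)
    return 'Server shutting down...'

if __name__ == '__main__':
    app.run_server(debug=True)
You can then trigger it with curl -X POST http://127.0.0.1:8050/shutdown (8050 is Dash's default port).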

Gunicorn--how to kill a worker if the client closes their connection?

I've got a flask app running under gunicorn which handles client requests via REST api with an extremely CPU-intensive backend; some requests take minutes to respond to.
But that creates its own problem. If I, say, run a little script to make a request and kill it (ctrl-C or whatever), the flask app keeps on running despite the fact that no one will hear it when it comes back from the depths of computation and gets its broken pipe.
Is there a way to terminate the API call (even just kill/restart the worker) as soon as the client connection is broken? That feels like a thing Gunicorn could handle, but I'm powerless to find any setting that would do the trick.
Thanks--this has been vexing me!
Killing a flask worker can be done with this code:
from flask import request

def shutdown_server():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError("Werkzeug server doesn't run flask")
    func()

@app.route('/shutdown', methods=['GET'])
def shutdown():
    shutdown_server()
    return 'Shutting down...'
For killing a Gunicorn server on Linux, you can use this command, which I tested:
pkill gunicorn
This command works flawlessly on all kinds of Linux distributions, which I assume you're running on your server.
Or, as a Python implementation:
import os

def shutdownGunicorn():
    os.system("pkill gunicorn")
I don't think killing the worker after the request is done would be smart, because then you couldn't know when the next request will arrive.
Flask doesn't use much CPU or RAM while it's idle!
Hope that gives you an answer!
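A hedged variation on the pkill idea: a worker can send SIGTERM to itself instead of to every gunicorn process, and the gunicorn master then respawns a fresh worker on its own; shutdownCurrentWorker is a hypothetical name:
import os
import signal

def shutdownCurrentWorker():
    # Terminate only this worker process; the gunicorn master notices
    # the exit and starts a replacement worker automatically.
    os.kill(os.getpid(), signal.SIGTERM)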

Periodic tasks in Django/Celery - How to notify the user on screen?

I have now successfully set up django-celery to check my existing tasks and remind the user by email when a task is due:
import datetime
from celery.task import periodic_task
from django.utils.timezone import utc
from myapp import models as mdls  # hypothetical import path for the Task model

@periodic_task(run_every=datetime.timedelta(minutes=1))
def check_for_tasks():
    tasks = mdls.Task.objects.all()
    now = datetime.datetime.utcnow().replace(tzinfo=utc, second=0, microsecond=0)
    for task in tasks:
        if task.reminder_date_time == now:
            sendmail(...)
So far so good, however what if I wanted to also display a popup to the user as a reminder?
Twitter bootstrap allows creating popups and displaying them from javascript:
$(this).modal('show');
The problem is, though: how can a celery worker daemon run this JavaScript in the user's browser? Maybe I am going about it completely the wrong way and this is not possible at all. So the question remains: can a cron job on celery ever be used to achieve a UI notification in the browser?
Well, you can't use the Django messages framework, because the task has no way to access the user's request, and you can't pass request objects to the workers either, because they're unpicklable.
But you could definitely use something like django-notifications. You could create notifications in your task and attach them to the user in question. Then, you could retrieve those messages from your view and handle them in your templates to your liking. The user would see the notification on their next request (or you could use AJAX polling for real-time-ish notifications or HTML5 websockets for real real-time [see django-websocket]).
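A minimal sketch of that idea, assuming the django-notifications-hq package and a hypothetical owner field on the Task model:
from notifications.signals import notify

@periodic_task(run_every=datetime.timedelta(minutes=1))
def check_for_tasks():
    now = datetime.datetime.utcnow().replace(tzinfo=utc, second=0, microsecond=0)
    for task in mdls.Task.objects.filter(reminder_date_time=now):
        # Attach a notification to the task's user (task.owner is hypothetical).
        notify.send(task, recipient=task.owner, verb='is due now')
A view can then pass request.user.notifications.unread() to the template, or an AJAX poll can fetch and render it.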
Yes it is possible but it is not easy. Ways to do/emulate server to client communication:
polling
The most trivial approach would be polling the server from javascript. Your celery task could create rows in your database that can be fetched by a url like /updates which checks for new updates, marks the rows as read and returns them.
long polling
Often referred to as comet. The client makes a request to the server, which is held open until the server decides to return something. See django-comet for example.
websocket
To enable true server to client communication you need an open connection from the client to the server. django-socketio and django-websocket are examples of reusable apps that make this possible.
My advice, judging by your question's context: either do some basic polling (a minimal sketch of the /updates view follows) or stick with the emails.
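Here is that sketch of the /updates view described under "polling" above; Update is a hypothetical model with a text field and a per-user read flag:
import json

from django.http import HttpResponse

def updates(request):
    # Rows are created by the celery task; fetch the unread ones,
    # serialize them, then mark them as delivered.
    rows = list(request.user.update_set.filter(read=False))
    payload = [{'id': row.pk, 'text': row.text} for row in rows]
    request.user.update_set.filter(pk__in=[row.pk for row in rows]).update(read=True)
    return HttpResponse(json.dumps(payload), content_type='application/json')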

Creating a web URL that listens to Redis pub/sub published messages

Edit
OK, I have long polling from JavaScript that talks to a Django view. The view looks as follows. It loses some of the messages that I publish from the Redis client on the channel. Also, I should not be connecting to Redis on every request (perhaps the Redis variables can be saved in the session?)
If someone can point out the changes I need to make this view work with long polling, it would be awesome! Thank you!
def listen(request):
    if request.session:
        logger.info('request session: %s' % (request.session))
    channel = request.GET.get('channel', None)
    if channel:
        logger.info('not in cache - first time - constructing redis object')
        r = redis.Redis(host='localhost', port=6379, db=0)
        p = r.pubsub()
        logger.info('subscribing to channel: %s' % (channel))
        p.psubscribe(channel)
        logger.info('subscribed to channel: %s' % (channel))
        message = p.listen().next()
        logger.info('got msg %s' % (message))
        return HttpResponse(json.dumps(message))
    return HttpResponse('')
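A hedged rework of the view along the lines asked for: the Redis connection moves to module scope so it is not rebuilt per request, and get_message() with a timeout replaces listen().next(), so subscribe confirmations are filtered out rather than returned as data. Note that messages published while no request is waiting can still be missed; fixing that needs a shared subscriber or a backing store.
import json
import time

import redis
from django.http import HttpResponse

# One connection for the whole process instead of one per request;
# decode_responses=True yields str instead of bytes in messages.
r = redis.Redis(host='localhost', port=6379, db=0, decode_responses=True)

def listen(request):
    channel = request.GET.get('channel', None)
    if not channel:
        return HttpResponse('')
    p = r.pubsub(ignore_subscribe_messages=True)
    p.psubscribe(channel)
    # Long poll: wait up to ~30 seconds for one published message.
    deadline = time.time() + 30
    message = None
    while message is None and time.time() < deadline:
        message = p.get_message(timeout=1.0)
    p.close()
    return HttpResponse(json.dumps(message) if message else '')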
----Original question---
I am trying to create a chat application (using django, python) and am trying to avoid the polling mechanism. I have been struggling with this now - so any pointers would be really appreciated!
Since WebSockets are not supported in most browsers, I think long polling is the right choice. Right now I am looking for something that scales better than regular polling and is easy to integrate with the Python/Django stack. Once I am done with this development, I plan to evaluate other Python frameworks (Tornado, Twisted, gevent, etc.).
I did some research and liked the Redis pub/sub mechanism. The chat message gets published to a channel to which both users have already subscribed. My questions are as follows:
From what I understand, Apache would not scale well, since long polling would soon run into process/thread limits. Hence I have decided to switch to nginx. Is this rationale correct? Also, are there any nginx issues I should be worried about? In particular, I am worried about the latest version not supporting HTTP 1.1 for proxy passing, as mentioned in the blog post at http://www.letseehere.com/reverse-proxy-web-sockets.
How do I create the client portion of the subscription to messages on the browser side? In my mind, it would be a URL that the JavaScript code "long polls". So at the JavaScript level, the client polls a URL which gets "blocked" in a "non-blocking way" on the server side. When a result (in this case a new chat message) appears, the server returns it. The JavaScript does what it needs to and then polls the same URL again. Is this thinking correct? And what happens in the intervals when the JavaScript loop is pausing: do we lose any messages from the server side?
In essence, I want to create the following:
From redis, I publish a message to a channel "foo" (can use redis-cli also - easy to incorporate it later in python/django)
I want the same message to appear in two browser windows that use the same js code to poll. Assume that the browser code knows the channel name for test purpose
I publish a second message that again appears in two browser windows.
I am new to real time apps, so apologies for any question that may not make sense.
Thank you!
Answering your question partly, and mentioning one option out of many: Gunicorn used with an async worker class is a solution for long-polling/non-blocking requests that is really easy to set up!
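For example, a sketch assuming a standard Django project whose WSGI module is myproject.wsgi:
pip install gunicorn gevent
gunicorn --worker-class gevent --workers 2 --timeout 90 myproject.wsgi
With the gevent worker class, each pending long poll occupies a lightweight greenlet rather than a whole worker process, so many clients can wait on /listen concurrently.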