Can a streamlit app be run within a flask app?

No code here, just a question. I have tried various ways to get a Streamlit app to run within a Flask app. The main reason? Using Flask for user authentication in front of the Streamlit app. I cannot get it to work. Is it perhaps not possible?

Streamlit uses Tornado to serve HTTP and WebSocket data to its frontend. That is, it’s already its own web server, and is written in an existing web framework; it wouldn’t be trivial to wrap it inside another web framework.
Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.
Flask is a synchronous web framework and is not ideal for WebSockets and similar long-lived connections.
Serving an interactive Streamlit app via flask.render_template isn't feasible, because Streamlit apps are not static; when you interact with your Streamlit app, it re-runs your Python code to generate new results dynamically.
Follow these discussions for more info:
Integration with flask app
Serve streamlit within flask
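One workaround from those threads is to give up on embedding and instead run Streamlit as a managed subprocess beside another app. The sketch below (lightly annotated; the app_dir, streamlit_app_*, and mlflow_app_* names come from its own config module) uses FastAPI to mount an MLflow WSGI app and to start and stop a Streamlit child process on startup and shutdown: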

import asyncio
import subprocess
import uuid

import uvicorn
from fastapi import FastAPI
from fastapi.logger import logger
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.wsgi import WSGIMiddleware
from mlflow.server import app as mlflow_app

from config import *  # provides app_dir, streamlit_app_* and mlflow_app_* settings

streamlit_app_process = None
streamlit_app_stdout = None
streamlit_app_stderr = None

async def registry_subprocess() -> None:
    """Start the Streamlit app as a child process on startup."""
    logger.debug("starting streamlit subprocess")
    global streamlit_app_process
    global streamlit_app_stdout
    global streamlit_app_stderr

    uid = str(uuid.uuid1())
    streamlit_app_stdout = open(f"/tmp/subprocess_stdout_{''.join(uid.split('-'))}", 'w+b')
    streamlit_app_stderr = open(f"/tmp/subprocess_stderr_{''.join(uid.split('-'))}", 'w+b')

    cmd = ['streamlit', 'run', f'{app_dir}/Home.py',
           f'--server.port={streamlit_app_port}',
           f'--server.address={streamlit_app_host}']
    logger.info(f"subprocess start cmd {cmd}")
    streamlit_app_process = subprocess.Popen(
        cmd, stdout=streamlit_app_stdout.fileno(), stderr=streamlit_app_stderr.fileno())
    logger.info(f"subprocess start success {streamlit_app_process.pid} uid:{uid}")

    await asyncio.sleep(1)
    # Flush and rewind so Streamlit's startup output can be re-logged here;
    # the file offset is shared with the child, so seek(0) is needed before reading.
    streamlit_app_stdout.flush()
    streamlit_app_stderr.flush()
    streamlit_app_stdout.seek(0)
    streamlit_app_stderr.seek(0)
    for line in streamlit_app_stdout.readlines():
        logger.info(line)
    for line in streamlit_app_stderr.readlines():
        logger.info(line)

async def close_subprocess() -> None:
    """Kill the Streamlit child process and close its log files on shutdown."""
    logger.debug("close subprocess")
    try:
        streamlit_app_process.kill()
        streamlit_app_stdout.flush()
        streamlit_app_stderr.flush()
        streamlit_app_stdout.close()
        streamlit_app_stderr.close()
    except Exception as error:
        logger.error(error)

application = FastAPI()
application.add_event_handler("startup", registry_subprocess)
application.add_event_handler("shutdown", close_subprocess)
application.add_middleware(
    CORSMiddleware,
    allow_origins='*',
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
# Mount the MLflow WSGI app under a prefix; Streamlit itself is served
# by the child process on its own host/port.
application.mount(f"/{mlflow_app_prefix.strip('/')}", WSGIMiddleware(mlflow_app))

if __name__ == "__main__":
    uvicorn.run(application, host=mlflow_app_host, port=int(mlflow_app_port))
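Note that in this sketch nothing proxies the Streamlit traffic itself: the child process listens on its own host and port, so to put authentication in front of it you would still need a reverse proxy (or an extra proxy route) between users and the Streamlit port.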

Related

Flask MQTT high CPU usage

I'm using Flask for a project on an embedded system and I'm having performance issues. I'm running gunicorn with one eventlet worker, started with:
gunicorn -b 0.0.0.0 --worker-class eventlet -w 1 'app:create_app()'
The problem I'm facing is that, when the MQTT messages start arriving more frequently, the application starts to use almost all the CPU I have available. My initial thought was that I was handling the messages poorly, but even after I took out my handler and just received the messages, the problem persisted.
I have another Python application that subscribes to the same information with the paho client without any issue, so I'm assuming the problem is something in my Flask application and not the data itself.
My code is:
import eventlet
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager, current_user
from flask_socketio import SocketIO
from flask_mqtt import Mqtt

eventlet.monkey_patch()

# USERS DB
db_alchemy = SQLAlchemy()
# socketIO
socketio = SocketIO(cors_allowed_origins="*", async_mode='eventlet')
# MQTT
mqtt_client = Mqtt()

'''
APPLICATION CREATION
'''
def create_app():
    app = Flask(__name__)
    if app.config["ENV"] == "production":
        app.config.from_object("config.ProductionConfig")
    else:
        app.config.from_object("config.DevelopmentConfig")
    # USERS DB
    db_alchemy.init_app(app)
    # LoginManager
    login_manager = LoginManager()
    login_manager.login_view = "auth.login"
    login_manager.init_app(app)
    # SOCKETIO
    socketio.init_app(app)
    # FLASK-MQTT
    app.config['MQTT_BROKER_URL'] = 'localhost'
    app.config['MQTT_BROKER_PORT'] = 1883
    app.config['MQTT_KEEPALIVE'] = 20
    app.config['MQTT_TLS_ENABLED'] = False
    mqtt_client.init_app(app)
    return app

# MQTT
@mqtt_client.on_connect()
def mqtt_on_connect():
    mqtt_client.subscribe('testTopic/#', 0)

@mqtt_client.on_disconnect()
def mqtt_on_disconnect():
    loggerMqtt.warning(' > Disconnected from broker')

@mqtt_client.on_subscribe()
def mqtt_on_subscribe(client, obj, mid, granted_qos):
    pass

@mqtt_client.on_message()
def mqtt_on_message(client, userdata, message):
    pass
    # mqtt_topicSplitter(client, userdata, message)
As you can see, my handler mqtt_topicSplitter is commented out, but I'm still having performance issues. I've tried adding a sleep call [eventlet.sleep(0.1)] in the on_message handler, which solved the CPU consumption problem but resulted in my application being constantly kicked from the broker.
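For reference, the change described above looked roughly like this (a sketch of the experiment, not the exact code):
@mqtt_client.on_message()
def mqtt_on_message(client, userdata, message):
    # Yielding to the eventlet hub cut the CPU usage in this experiment,
    # but the client then kept getting disconnected from the broker.
    eventlet.sleep(0.1)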
I also tried using other workers (gevent, asyncio, ...) without success. Using the Flask development server is not an option, since it is not recommended for production.
I'm sorry if I wasn't clear, but I'm not an expert, please feel free to ask me any questions if needed.
Thanks in advance.

Async Django 3.1 with aiohttp client

Now that the porting of Django to async has started, is it possible to use the aiohttp client with Django?
My case:
server receives a request;
server sends another request to another server;
server decodes the response, saves it into DB, and returns that response to the client;
Andrew Svetlov mentioned that aiohttp "was never intended to work with synchronous WSGI" two years ago. (https://github.com/aio-libs/aiohttp/issues/3357)
But how does the situation look now? Django seems to almost support async views. Can we use aiohttp with ASGI Django?
I know I can create an aiohttp server that handles requests and populates some queue, plus a queue handler that saves responses into the database, but then I'd be missing a lot from Django: the ORM, admin, etc.
You can implement this scenario in Django 3.1 like this:
from aiohttp import ClientSession
from django.http import HttpResponse

async def my_proxy(request):
    async with ClientSession() as session:
        async with session.get('https://google.com') as resp:
            body = await resp.text()
            print(resp.status)
            print(body)
            return HttpResponse(body)  # a Django view still has to return a response
But the biggest question for me is how to share one aiohttp session across the Django project, because spawning a new ClientSession per request is strongly discouraged.
Important note:
Of course, you should run your Django application in ASGI mode with a compatible application server (for example, uvicorn):
uvicorn my_project.asgi:application --reload
UPDATE: I found a workaround. You can create a module (a *.py file) with shared global objects and populate it with a ClientSession instance from the project settings at project startup:
shared.py:
from typing import Optional
from aiohttp import ClientSession
AIOHTTP_SESSION: Optional[ClientSession] = None
settings.py:
from aiohttp import ClientSession
from my_project import shared
...
shared.AIOHTTP_SESSION = ClientSession()
views.py:
from my_project import shared

async def my_proxy(request):
    async with shared.AIOHTTP_SESSION.get('https://google.com') as resp:
        print(resp.status, resp._session)
        await resp.text()
Important: the imports should be EXACTLY the same. When I change them to the form "from my_project.shared import AIOHTTP_SESSION", my test breaks :(
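This is standard Python import behaviour: "from my_project.shared import AIOHTTP_SESSION" copies the current value (still None) into the importing module's namespace, so a later "shared.AIOHTTP_SESSION = ClientSession()" assignment is never seen through the copied name:
from my_project import shared                    # attribute is looked up at call time: sees the update
from my_project.shared import AIOHTTP_SESSION    # value is bound once at import time: stays None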
tests.py:
from asyncio import gather
from aiohttp import ClientSession
from django.test import TestCase
from my_project import shared

class ViewTests(TestCase):
    async def test_async_view(self):
        shared.AIOHTTP_SESSION = ClientSession()
        await gather(*[self.async_client.get('/async_view_url') for _ in range(3)])
Run the tests with ./manage.py test

Making use of CherryPy as webserver for flask application

I have a simple Flask application that works really well. I've been developing it separately from the main desktop application, and I now want to plug the Flask application into the main application. I also want to use CherryPy as the web server, since the default web server that comes with Flask is not production ready. I am not sure how to get these two to work together. My Flask application code looks like this:
from flask import Flask, render_template, send_from_directory
from scripts_data import test_data
from schedule_data import scheduledata
import os

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/scripts')
def scripts():
    test_data = t_data()
    return render_template('scripts.html', data_scripts=test_data)

@app.route('/sch')
def schedules():
    data_schedule = s_data()
    return render_template('schedules.html', table_data=data_schedule)

if __name__ == '__main__':
    app.run(debug=True)
So obviously, as I want to integrate it into the main application, I can't use app.run. It's not clear how to swap out the Flask web server for the CherryPy web server.
I have seen the following:
from flask import Flask
import cherrypy

app = Flask(__name__)
app.debug = True

class setup_webserver(object):
    @app.route("/")
    def hello():
        return "Hello World!"

    def run_server():
        # Mount the WSGI callable object (app) on the root directory
        cherrypy.tree.graft(app, '/')
        # Set the configuration of the web server
        cherrypy.config.update({
            'engine.autoreload_on': True,
            'log.screen': True,
            'server.socket_port': 5000,
            'server.socket_host': '0.0.0.0'
        })
        # Start the CherryPy WSGI web server
        cherrypy.engine.start()
        cherrypy.engine.block()

class start_it_all(object):
    import setup_webserver
    setup_webserver.run_server()
But when I start the web server and go to the site (0.0.0.0:5000), I get a 404.
I don't get the 404 when it's just Flask on its own. All I want to do is swap out the Flask built-in web server for the CherryPy web server. I don't want to use CherryPy for anything else, as Flask will be the framework.
Any suggestions? I'm on Windows and using Python 2.7.
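For what it's worth, the grafting calls themselves are the standard CherryPy API; a minimal module-level sketch (assuming the same Flask app as above, without the class wrappers) would look like this:
import cherrypy
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == '__main__':
    # Mount the Flask WSGI app at the root, configure, and start CherryPy
    cherrypy.tree.graft(app, '/')
    cherrypy.config.update({
        'server.socket_port': 5000,
        'server.socket_host': '0.0.0.0',
    })
    cherrypy.engine.start()
    cherrypy.engine.block()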

How to understand tornado response request cycle in Django

I want to create a real-time Twitter streaming application using Tornado and Django. The problem is that I am not able to understand the role of Tornado here, and how I would use Django's views.py and models.py with the Tornado web server.
Below is the request-response cycle of Django; could anybody explain to me how the Tornado web server will play its role here?
A few questions:
1. What will be the role of the urls.py file in Django, since we will be routing all the URLs from Tornado itself?
2. How will I connect to models.py to fetch rows for my Tornado application?
I am looking into this GitHub project link.
Tornado fits roughly in the "web server" and "wsgi" parts of this diagram, and adds another section for Tornado RequestHandlers attached to the web server. When you create your tornado.web.Application, you will send some URLs to Tornado RequestHandlers and some to the Django WSGIContainer (which will in turn use the Django urls.py).
Using Django models from Tornado code is more challenging; my code from the last time I tried doing this is at https://gist.github.com/bdarnell/654157 (but note that this is quite old, and I don't know if it will still work).
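As a sketch of that routing split (the Django project's WSGI module path here is a placeholder; adjust it to your project):
import tornado.web
import tornado.websocket
import tornado.wsgi
from django_project_dir.wsgi import application as django_wsgi_app  # placeholder path

class ChatHandler(tornado.websocket.WebSocketHandler):  # hypothetical Tornado-only handler
    pass

# Unmatched URLs fall through to Django, where urls.py resolves them as usual.
wsgi_container = tornado.wsgi.WSGIContainer(django_wsgi_app)
tornado_app = tornado.web.Application([
    (r'/ws/chat', ChatHandler),
    (r'.*', tornado.web.FallbackHandler, dict(fallback=wsgi_container)),
])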
This is tornado_main.py, stored at the same level as manage.py ... I've tested it with Django 1.8:
# coding=utf-8
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_project_dir.settings")
import django
django.setup()

from django.core.urlresolvers import reverse_lazy
from django.contrib.auth.models import User
from tornado.options import options, define, parse_command_line
import logging
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.websocket
import tornado.wsgi

define('port', type=int, default=8004)
# tornado.options.options['log_file_prefix'].set(
#     '/var/www/myapp/logs/tornado_server.log')
tornado.options.parse_command_line()

class SomeHandler(tornado.websocket.WebSocketHandler):
    pass

def main():
    logger = logging.getLogger(__name__)
    tornado_app = tornado.web.Application(
        [
            (r'/some_url/(?P<user_id>[0-9]+)', SomeHandler),
        ],
        debug=True
    )
    logger.info("Tornado server starting...")
    server = tornado.httpserver.HTTPServer(tornado_app)
    server.listen(options.port)
    try:
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
        logger.info("Tornado server has stopped")

if __name__ == '__main__':
    main()

Importing Flask app when using app factory and flask script

This is the Flask app context:
app = Flask(__name__)
with app.app_context():
    # insert code here
Most use cases of the app context involve having 'app' initialized in the same script or importing app from the base module.
My application is structured as follows:
# application/__init__.py
def create_app(config):
    app = Flask(__name__)
    return app

# manage.py
from application import create_app
from flask_script import Manager, Server  # Server is needed for the "debug" command

manager = Manager(create_app)
manager.add_command("debug", Server(host='0.0.0.0', port=7777))
This might be a really trivial issue, but how should I call 'with app.app_context()' if my application is structured like this?
Flask-Script calls everything inside the test context, so you can use current_app and other idioms:
The Manager runs the command inside a Flask test context. This means that you can access request-local proxies where appropriate, such as current_app, which may be used by extensions.
http://flask-script.readthedocs.org/en/latest/#accessing-local-proxies
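For example, a custom Manager command can use current_app directly (a sketch; the command itself is made up):
from flask import current_app
from flask_script import Manager

from application import create_app

manager = Manager(create_app)

@manager.command
def show_name():
    # Manager sets up an app context around the command, so current_app
    # already points at the app returned by create_app.
    print(current_app.name)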
So you don't need to use with app.app_context() with Manager scripts. If you're trying to do something else, then you'd have to create the app first:
from application import create_app

app = create_app(config)  # create_app takes a config argument in this layout
with app.app_context():
    # stuff here