I'm a beginner in the Twisted world, so first I'm trying to get my working Django project configured under Twisted; currently it works fine on the Django development server and on Apache via mod_wsgi. I followed this link and this one too to configure the setup, and based on that I have the server.py file given below.
So, in order to integrate the Django app with Twisted, I used the following code:
import sys
import os

from twisted.application import internet, service
from twisted.web import server, resource, wsgi, static
from twisted.python import threadpool
from twisted.internet import reactor

from django.conf import settings

import twresource  # This file holds the implementation of class "Root".

class ThreadPoolService(service.Service):
    def __init__(self, pool):
        self.pool = pool

    def startService(self):
        service.Service.startService(self)
        self.pool.start()

    def stopService(self):
        service.Service.stopService(self)
        self.pool.stop()

class Root(resource.Resource):
    def __init__(self, wsgi_resource):
        resource.Resource.__init__(self)
        self.wsgi_resource = wsgi_resource

    def getChild(self, path, request):
        path0 = request.prepath.pop(0)
        request.postpath.insert(0, path0)
        return self.wsgi_resource

PORT = 8080

# Environment setup for your Django project files:
# insert it first so our project gets first priority.
sys.path.insert(0, "django_project")
sys.path.insert(0, ".")
os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project.settings'

from django.core.handlers.wsgi import WSGIHandler

def wsgi_resource():
    pool = threadpool.ThreadPool()
    pool.start()
    # Allow Ctrl-C to get you out cleanly:
    reactor.addSystemEventTrigger('after', 'shutdown', pool.stop)
    wsgi_resource = wsgi.WSGIResource(reactor, pool, WSGIHandler())
    return wsgi_resource

# Twisted Application Framework setup:
application = service.Application('twisted-django')

# WSGI container for Django, combine it with twisted.web.Resource:
# XXX this is the only 'ugly' part: see the 'getChild' method in twresource.Root
wsgi_root = wsgi_resource()
root = Root(wsgi_root)

#multi = service.MultiService()
#pool = threadpool.ThreadPool()
#tps = ThreadPoolService(pool)
#tps.setServiceParent(multi)
#resource = wsgi.WSGIResource(reactor, tps.pool, WSGIHandler())
#root = twresource.Root(resource)

# Admin site media files:
#staticrsrc = static.File(os.path.join(os.path.abspath("."), "/usr/haridas/eclipse_workplace/skgargpms/django/contrib/admin/media/"))
#root.putChild("admin/media", staticrsrc)

# Serve it up:
main_site = server.Site(root)
#internet.TCPServer(PORT, main_site).setServiceParent(multi)
internet.TCPServer(PORT, main_site).setServiceParent(application)
# EOF
The above code worked well from the command line with "twistd -ny server.py", but when we run it as a daemon with "twistd -y server.py" it hangs, even though the app is listening on port 8080; I can access it using telnet. I found some fixes for this hanging issue on Stack Overflow itself, which led me to the code sections given below (the ones commented out in the above server.py):
multi = service.MultiService()
pool = threadpool.ThreadPool()
tps = ThreadPoolService(pool)
tps.setServiceParent(multi)
resource = wsgi.WSGIResource(reactor, tps.pool, WSGIHandler())
root = twresource.Root(resource)
and:
internet.TCPServer(PORT, main_site).setServiceParent(multi)
instead of using:
wsgi_root = wsgi_resource()
root = Root(wsgi_root)
and:
internet.TCPServer(PORT, main_site).setServiceParent(application)
The modified method also didn't help me avoid the hanging issue. Is there anybody out there who has successfully run Django apps under Twisted in daemon mode? Did I make any mistakes while combining these pieces of code? I've only just started learning the Twisted architecture in detail. Please help me solve this problem.
I'm looking for a Twisted Application Configuration (.tac) file which integrates a Django app with Twisted and runs without any problems in daemon mode as well.
Thanks and Regards,
Haridas N.
I think you are almost there. Just add one more line at the very end:
multi.setServiceParent(application)
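For completeness, here is a sketch of how the tail of the .tac file looks with that line in place, using the MultiService variant that is commented out in the question (same names as the question's code; an untested assembly rather than a verified config):

multi = service.MultiService()

pool = threadpool.ThreadPool()
tps = ThreadPoolService(pool)
tps.setServiceParent(multi)

resource = wsgi.WSGIResource(reactor, tps.pool, WSGIHandler())
root = Root(resource)

main_site = server.Site(root)
internet.TCPServer(PORT, main_site).setServiceParent(multi)

# The missing piece: attach the MultiService to the application,
# otherwise its children (the thread pool service and the TCP server)
# are never started when twistd daemonizes.
multi.setServiceParent(application)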
Related
I'm using Flask on a project on an embedded system and I'm having performance issues. I'm running gunicorn with one eventlet worker:
gunicorn -b 0.0.0.0 --worker-class eventlet -w 1 'app:create_app()'
The problem I'm facing is that when the MQTT messages start to arrive at a higher rate, the application uses almost all the CPU I have available. My initial thought was that I wasn't handling the messages efficiently, but even when I took out my handler and just received the messages, the problem persisted.
I have another Python application that subscribes to the same information with the paho client, and this is not an issue there, so I'm assuming I'm missing something in my Flask application and it's not the data itself.
My code is:
import eventlet
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager, current_user
from flask_socketio import SocketIO
from flask_mqtt import Mqtt

eventlet.monkey_patch()

# Users DB
db_alchemy = SQLAlchemy()

# SocketIO
socketio = SocketIO(cors_allowed_origins="*", async_mode='eventlet')

# MQTT
mqtt_client = Mqtt()

'''
APPLICATION CREATION
'''
def create_app():
    app = Flask(__name__)
    if app.config["ENV"] == "production":
        app.config.from_object("config.ProductionConfig")
    else:
        app.config.from_object("config.DevelopmentConfig")

    # Users DB
    db_alchemy.init_app(app)

    # LoginManager
    login_manager = LoginManager()
    login_manager.login_view = "auth.login"
    login_manager.init_app(app)

    # SocketIO
    socketio.init_app(app)

    # Flask-MQTT
    app.config['MQTT_BROKER_URL'] = 'localhost'
    app.config['MQTT_BROKER_PORT'] = 1883
    app.config['MQTT_KEEPALIVE'] = 20
    app.config['MQTT_TLS_ENABLED'] = False
    mqtt_client.init_app(app)

    return app

# MQTT callbacks
@mqtt_client.on_connect()
def mqtt_on_connect():
    mqtt_client.subscribe('testTopic/#', 0)

@mqtt_client.on_disconnect()
def mqtt_on_disconnect():
    loggerMqtt.warning(' > Disconnected from broker')

@mqtt_client.on_subscribe()
def mqtt_on_subscribe(client, obj, mid, granted_qos):
    pass

@mqtt_client.on_message()
def mqtt_on_message(client, userdata, message):
    pass
    #mqtt_topicSplitter(client, userdata, message)
As you can see, my handler mqtt_topicSplitter is commented out, but I'm still having performance issues. I've tried adding a sleep call (eventlet.sleep(0.1)) in the on_message handler, which solved the CPU consumption problem but resulted in my application being constantly kicked from the broker.
I also tried using other workers (gevent, asyncio, ...) without success. Using the Flask development server is not an option, since it is not recommended for production.
I'm sorry if I wasn't clear, but I'm not an expert, please feel free to ask me any questions if needed.
Thanks in advance.
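One thing that stands out in the code above (an observation about eventlet in general, not a confirmed fix for this case): eventlet's documentation recommends calling monkey_patch() as early as possible, ideally before anything else is imported, so that libraries like paho-mqtt bind to the patched, cooperative socket and select modules instead of the blocking ones. A minimal sketch of the reordering:

# Patch first, then import everything else, so flask_mqtt/paho
# pick up eventlet's green sockets rather than blocking ones
# that can spin under an eventlet worker.
import eventlet
eventlet.monkey_patch()

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager, current_user
from flask_socketio import SocketIO
from flask_mqtt import Mqtt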
No code here, just a question. I have tried various means to get a Streamlit app to run within a Flask app. The main reason? Using Flask for user authentication in front of the Streamlit app. I cannot get it to work. Is it perhaps not possible?
Streamlit uses Tornado to serve HTTP and WebSocket data to its frontend. That is, it’s already its own web server, and is written in an existing web framework; it wouldn’t be trivial to wrap it inside another web framework.
Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.
Flask is a synchronous web framework and not ideal for WebSockets etc.
Serving an interactive Streamlit app via flask.render_template isn't feasible, because Streamlit apps are not static; when you interact with your Streamlit app, it re-runs your Python code to generate new results dynamically.
Follow these discussions for more info
Integration with flask app
Serve streamlit within flask
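A pattern that comes up in those discussions is to keep the two servers separate: Flask handles authentication, and once the user is logged in it serves a page that embeds the independently running Streamlit app. A minimal sketch of that idea (hypothetical routes; assumes Streamlit is already running on localhost:8501, and omits the LoginManager user-loader setup):

from flask import Flask, render_template_string
from flask_login import LoginManager, login_required

app = Flask(__name__)
app.secret_key = "change-me"
login_manager = LoginManager(app)  # user loader etc. omitted for brevity

@app.route("/dashboard")
@login_required  # only authenticated users reach the Streamlit UI
def dashboard():
    # Embed the separately served Streamlit app in an iframe.
    return render_template_string(
        '<iframe src="http://localhost:8501" '
        'style="width:100%;height:100vh;border:none;"></iframe>'
    )

Note that the iframe alone doesn't protect the Streamlit port; it has to be firewalled or reverse-proxied so that only the Flask host can reach it. The code below takes a different route altogether: it runs Streamlit as a managed subprocess of a FastAPI application that also mounts the mlflow WSGI app, so a single ASGI process supervises everything.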
import asyncio
import subprocess
import uuid

from mlflow.server import app as mlflow_app
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.wsgi import WSGIMiddleware
from fastapi.logger import logger
import uvicorn

from config import *  # app_dir, streamlit/mlflow hosts and ports, mlflow_app_prefix

streamlit_app_process = None
streamlit_app_stdout = None
streamlit_app_stderr = None

async def registry_subprocess() -> None:
    # Start the Streamlit app as a child process on FastAPI startup.
    logger.debug("registry distance_matrix")
    global streamlit_app_process
    global streamlit_app_stdout
    global streamlit_app_stderr

    id = str(uuid.uuid1())
    streamlit_app_stdout = open(f"/tmp/subprocess_stdout_{''.join(id.split('-'))}", 'w+b')
    streamlit_app_stderr = open(f"/tmp/subprocess_stderr_{''.join(id.split('-'))}", 'w+b')

    cmd = ['streamlit', 'run', f'{app_dir}/Home.py', f'--server.port={streamlit_app_port}', f'--server.address={streamlit_app_host}']
    logger.info(f"subprocess start cmd {cmd}")
    streamlit_app_process = subprocess.Popen(cmd, stdout=streamlit_app_stdout.fileno(), stderr=streamlit_app_stderr.fileno())
    logger.info(f"subprocess start success {streamlit_app_process.pid} uid:{id}")

    # Give Streamlit a moment to boot, then forward its output to our logger.
    await asyncio.sleep(1)
    streamlit_app_stdout.flush()
    streamlit_app_stderr.flush()
    [logger.info(i) for i in streamlit_app_stdout.readlines()]
    [logger.info(i) for i in streamlit_app_stderr.readlines()]

async def close_subprocess() -> None:
    # Kill the Streamlit process and close its log files on shutdown.
    logger.debug("close subprocess")
    try:
        streamlit_app_process.kill()
        streamlit_app_stdout.flush()
        streamlit_app_stderr.flush()
        streamlit_app_stdout.close()
        streamlit_app_stderr.close()
    except Exception as error:
        logger.error(error)

application = FastAPI()
application.add_event_handler("startup", registry_subprocess)
application.add_event_handler("shutdown", close_subprocess)

application.add_middleware(
    CORSMiddleware,
    allow_origins='*',
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Mount the (WSGI) mlflow server inside the ASGI app.
application.mount(f"/{mlflow_app_prefix.strip('/')}", WSGIMiddleware(mlflow_app))

if __name__ == "__main__":
    uvicorn.run(application, host=mlflow_app_host, port=int(mlflow_app_port))
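Note the division of labour here: mlflow's server is a plain WSGI (Flask) app, so it can be mounted in-process via WSGIMiddleware, while Streamlit has to stay in its own process; as explained above, it is a Tornado server that needs its own WebSocket connections, which a WSGI mount cannot carry.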
We have a Django app which we deployed using IIS. The app has been running smoothly and without any problems. However, we want to schedule a job that will run every night at 02:00.
We are using APScheduler, which works perfectly fine on the Django local server but never runs in production.
Here is the code I am using to run the jobs.
myapp/scheduler.py
import time
from datetime import datetime
from os.path import join

import pandas as pd
from apscheduler.schedulers.background import BackgroundScheduler
from django.conf import settings

def schedule():
    scheduler = BackgroundScheduler()
    scheduler.add_job(daily_schedules, 'interval', minutes=5)
    # scheduler.add_job(daily_schedules, trigger='cron', hour='2')
    scheduler.start()

def daily_schedules():
    # Note: time.clock() was removed in Python 3.8;
    # time.perf_counter() is the replacement on newer versions.
    time_now = time.clock()
    parse_api()  # my function, defined elsewhere in the project

    # Keeping logs
    path = join(settings.FILES_DIR, 'schedulled/logs.csv')
    logs = pd.read_csv(path, encoding='utf-8')
    time_end = time.clock() - time_now
    logs.loc[len(logs)] = [
        datetime.now().strftime('%Y-%m-%d %H:%M:%S'), time_end
    ]
    logs.to_csv(path, encoding='utf-8', index=False)
    print(logs)
myapp/apps.py
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'MyApp'

    def ready(self):
        from myapp import scheduler
        scheduler.schedule()
Is there any particular reason why the job is not being run? Do I need to do something else, or does this method not work with IIS? Since the server is used by many developers at the same time, I would like to run the jobs as part of the Django application and not outside on a separate server.
P.S.: I have read all the Stack Overflow questions on this matter, but none seem to answer my question.
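One general caveat worth guarding against here (a common Django pattern, not something from this thread): AppConfig.ready() can run more than once, for example once per worker process or twice under the dev server's autoreloader, and each run would start another BackgroundScheduler. A minimal sketch of a guard, using a hypothetical RUN_SCHEDULER environment flag set on exactly one process:

# myapp/apps.py (sketch)
import os
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'MyApp'

    def ready(self):
        # Start the scheduler only in the one process that has the
        # flag set, so IIS worker processes / reloader children
        # don't each start their own copy.
        if os.environ.get('RUN_SCHEDULER') == '1':
            from myapp import scheduler
            scheduler.schedule()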
I am using a Flask app with SQLAlchemy and a uWSGI server. It is a known issue that uWSGI duplicates connections across all processes when forking. The information about how to fix this is a bit scattered around the web, but the two main options seem to be:
Using lazy-apps = true in uWSGI's config (not recommended, because it consumes a lot of memory).
Using uWSGI's @postfork decorator to close the connections after forking, so that each new process starts fresh ones; but it's unclear to me when and how this should be used within the Flask app.
This is a sample of my app:
# file: my_app/app.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

def create_app():
    app = Flask(__name__)
    app.config.from_pyfile('../config/settings.py')
    db.init_app(app)
    db.create_all(app=app)
    return app
Sample of the run.py file:
# file: run.py
from my_app.app import create_app

app = create_app()

if __name__ == "__main__":
    app.run(debug=app.config["DEBUG"], port=5000)
So the question is: where and how should postfork be executed to properly set up uWSGI to use isolated connections in each process, without using lazy-apps = true?
The SQLAlchemy manual provides two examples of how to approach this: Using Connection Pools with Multiprocessing.
The first approach, involving Engine.dispose(), can be implemented with uwsgidecorators.postfork as suggested by Hett. A simple example that should work if only the default binding is involved:
db = SQLAlchemy()

def create_app():
    app = Flask(__name__)
    app.config["SQLALCHEMY_DATABASE_URI"] = "postgres:///test"
    db.init_app(app)

    def _dispose_db_pool():
        with app.app_context():
            db.engine.dispose()

    try:
        from uwsgidecorators import postfork
        postfork(_dispose_db_pool)
    except ImportError:
        # Implement fallback when running outside of uwsgi...
        raise

    return app
The second one involves sqlalchemy.event and discarding connections created in a different PID; quoting:
The next approach is to instrument the Pool itself with events so that connections are automatically invalidated in the subprocess. This is a little more magical but probably more foolproof.
If you want the latter magical solution, see the docs.
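For reference, a condensed version of that event-based recipe, adapted from the SQLAlchemy documentation (the helper name add_engine_pidguard comes from older revisions of those docs; treat this as a sketch):

import os

from sqlalchemy import event, exc

def add_engine_pidguard(engine):
    # Tag each connection with the PID that created it.
    @event.listens_for(engine, "connect")
    def connect(dbapi_connection, connection_record):
        connection_record.info['pid'] = os.getpid()

    # On checkout, refuse connections inherited from another process.
    @event.listens_for(engine, "checkout")
    def checkout(dbapi_connection, connection_record, connection_proxy):
        pid = os.getpid()
        if connection_record.info['pid'] != pid:
            connection_record.connection = connection_proxy.connection = None
            raise exc.DisconnectionError(
                "Connection record belongs to pid %s, "
                "attempting to check out in pid %s" %
                (connection_record.info['pid'], pid)
            )

With Flask-SQLAlchemy you would call add_engine_pidguard(db.engine) inside an application context, in the same place the first example calls db.engine.dispose().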
In general you need to use the @postfork decorator provided by uWSGI.
We are not using SQLAlchemy, but I think you can adapt our solution to your case as well.
Here's an excerpt from our uwsgi configuration:
[uwsgi]
strict = true
; process management
master = true
vacuum = true
pidfile = /tmp/uwsgi.pid
enable-threads = true
cpu-affinity = 1
processes = 4
threads = 2
mules = 2
...
And here's the application initialization code using @postfork to create process-safe DB connections.
from flask import Flask
from uwsgidecorators import postfork
from app.lib.db import DB  # DB object will be replaced by SQLAlchemy on your side

app = Flask(__name__)

# due to uwsgi's copy-on-write semantics workers need to connect to database
# to preserve unique descriptors after fork() is called in master process
@postfork
def db_reconnect():
    # Open the DB connection and log the event
    # (config is our own settings object, defined elsewhere)
    app.db = DB(config.get('db', 'host'),
                config.get('db', 'user'),
                config.get('db', 'pass'),
                config.get('db', 'name'))
In our case all initialization code is placed in the app folder's __init__.py, but I don't think you'll have any difficulty migrating it into your app.py file.
Let me know if you encounter any other questions on this.
I am trying to deploy a machine learning model on an AWS EC2 instance using Flask. The models are sklearn Random Forest models, fitted and pickled using joblib. When I host Flask on localhost and load them into memory, everything runs smoothly. However, when I deploy it on an Apache server using mod_wsgi, joblib works only sometimes (i.e. the models load sometimes); the rest of the time the server just hangs. There are no errors in the logs. Any ideas would be appreciated.
Here is the relevant code that I am using:
# In[49]:
from flask import Flask, jsonify, request, render_template
from datetime import datetime
from sklearn.externals import joblib
import pickle as pkl
import os

# In[50]:
app = Flask(__name__, template_folder="/home/ubuntu/flaskapp/")

# In[51]:
log = lambda msg: app.logger.info(msg, extra={'worker_id': "request.uuid"})

# Logger
import logging
handler = logging.FileHandler('/home/ubuntu/app.log')
handler.setLevel(logging.ERROR)
app.logger.addHandler(handler)

# In[52]:
@app.route('/')
def host_template():
    return render_template('Static_GUI.html')

# In[53]:
def load_models(path):
    model_arr = [0] * len(os.listdir(path))
    for filename in os.listdir(path):
        f = open(path + "/" + filename, 'rb')
        model_arr[int(filename[2:])] = joblib.load(f)
        print("Classifier ", filename[2:], " added.")
        f.close()
    return model_arr

# In[54]:
partition_limit = 30

# In[55]:
print("Dictionaries being loaded.")
dict_file_path = "/home/ubuntu/Dictionaries/VARR"
dictionaries = pkl.load(open(dict_file_path, "rb"))
print("Dictionaries Loaded.")

# In[56]:
print("Begin loading classifiers.")
model_path = "/home/ubuntu/RF_Models/"
classifier_arr = load_models(model_path)
print("Classifiers Loaded.")

if __name__ == '__main__':
    log("/home/ubuntu/print.log")
    print("Starting API")
    app.run(debug=True)
I was stuck on this for quite some time. Posting the answer in case someone else runs into this problem. Using print statements and looking at the logs, I narrowed the problem down to the joblib.load call. I found this awesome blog post: http://blog.rtwilson.com/how-to-fix-flask-wsgi-webapp-hanging-when-importing-a-module-such-as-numpy-or-matplotlib
Using a global application group fixed the problem: it forces the app to run in the main interpreter, just as the top comment on that blog post mentions.
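For anyone looking for the concrete directives, the fix from that blog post boils down to telling mod_wsgi to run the app in the main Python interpreter via a global application group. A sketch of the relevant Apache config (names and paths are placeholders, not the exact config from this deployment):

# /etc/apache2/sites-available/flaskapp.conf (illustrative)
WSGIDaemonProcess flaskapp user=ubuntu group=ubuntu threads=5
WSGIScriptAlias / /home/ubuntu/flaskapp/flaskapp.wsgi

<Directory /home/ubuntu/flaskapp>
    WSGIProcessGroup flaskapp
    # Run in the main interpreter: C extensions such as numpy
    # (pulled in by sklearn/joblib) can deadlock when imported
    # into a mod_wsgi sub-interpreter.
    WSGIApplicationGroup %{GLOBAL}
    Require all granted
</Directory>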