Restart Python script on server after crashing - amazon-web-services

I am currently trying to set up a server to run a TensorFlow application. It works fine, but if there are too many requests, the server terminates the Flask application that requests the answer from my TensorFlow model.
This means the server is unusable until I restart the Flask app manually with python3 flaskApp.py in the server terminal.
Is there a way to restart the Python script automatically once it fails? <== !! main question !!
It doesn't bother me if I don't get a return value once in a while, but I don't want to restart the Flask app manually once a day.
Here is the code for my Flask application; the method handler returns a probability from my TensorFlow model running in the background.
from flask import Flask, request, jsonify
from modelv4 import *
from waitress import serve
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

@app.route('/', methods=['POST'])
def processjson():
    data = request.get_json()
    satz = data['text']
    print(satz)
    ruckgabe = handler(satz)            # probability from the TensorFlow model
    ruckgabe = round(ruckgabe * 10000)  # scale to a percentage with two decimals
    ruckgabe = ruckgabe / 100
    ruckgabe = str(ruckgabe)
    ruckgabe = jsonify({"ruckgabe": ruckgabe})
    # ruckgabe.headers.add('Access-Control-Allow-Origin', '*')  # handled by CORS(app)
    return ruckgabe

if __name__ == '__main__':
    serve(app, host="0.0.0.0", port=8080)
The server is running on AWS EC2 as an Ubuntu instance, so you get a basic Linux terminal.
If you need any more information to answer my question, please let me know.

Since the application is running on an Ubuntu server, I recommend using systemd.
You can let systemd auto-restart the app in case it fails or is accidentally killed.
To do this, add the Restart option to the .service file you created specifically for your application.
See: How can I run my Flask application as a service with systemd?
A possible configuration of your .service file could be the following:
[Unit]
Description=My flask app
After=network-online.target
Wants=network-online.target systemd-networkd-wait-online.service
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
Restart=on-failure
RestartSec=5s
ExecStart=<command to start your application>

[Install]
WantedBy=multi-user.target
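For the flaskApp.py from the question, the ExecStart placeholder could be filled in like this — a minimal sketch assuming the script lives in /home/ubuntu and runs under the system python3 (both assumptions, adjust to your setup):

WorkingDirectory=/home/ubuntu
ExecStart=/usr/bin/python3 /home/ubuntu/flaskApp.py

After saving the unit as, say, /etc/systemd/system/flaskapp.service, reload systemd and enable the service so it starts both immediately and on boot:

sudo systemctl daemon-reload
sudo systemctl enable --now flaskapp.service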

Related

APScheduler (BackgroundScheduler) with Flask on AWS ECS

I have a simple Flask application that uses APScheduler to run a function in the background.
from flask import Flask, jsonify
from apscheduler.schedulers.background import BackgroundScheduler

def start(application):
    @application.route('/hello', methods=['GET'])
    def hello_there():
        return jsonify({'data': 'Hello There'})

def main():
    application = Flask(__name__)
    scheduler = BackgroundScheduler()
    scheduler.add_job(func=my_func, trigger='interval', seconds=5)  # my_func is defined elsewhere
    scheduler.start()
    start(application)
    return application

if __name__ == "__main__":
    pass
I am running this application using gunicorn:
gunicorn --workers 4 --bind 127.0.0.1:5000 "app:main()"
I receive the correct response when running this locally, but it does not work when the same application is deployed to AWS ECS 😢. Sending a request to the endpoint results in a timeout.

Flask MQTT high CPU usage

I'm using Flask in a project on an embedded system and I'm having performance issues. I'm running gunicorn with one eventlet worker:
gunicorn -b 0.0.0.0 --worker-class eventlet -w 1 'app:create_app()'
The problem I'm facing is that when the MQTT messages start to pour in with more cadence, the application starts to use almost all the CPU I have available. My initial thought was that I wasn't handling the messages ideally, but even after taking out my handler and just receiving the messages, the problem persists.
I have another Python application that subscribes to the same information with the paho client, and there this is not an issue, so I assume I'm missing something in my Flask application rather than in the data itself.
My code is:
import eventlet
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager, current_user
from flask_socketio import SocketIO
from flask_mqtt import Mqtt

eventlet.monkey_patch()

# USERS DB
db_alchemy = SQLAlchemy()
# socketIO
socketio = SocketIO(cors_allowed_origins="*", async_mode='eventlet')
# MQTT
mqtt_client = Mqtt()

'''
APPLICATION CREATION
'''
def create_app():
    app = Flask(__name__)
    if app.config["ENV"] == "production":
        app.config.from_object("config.ProductionConfig")
    else:
        app.config.from_object("config.DevelopmentConfig")
    # USERS DB
    db_alchemy.init_app(app)
    # LoginManager
    login_manager = LoginManager()
    login_manager.login_view = "auth.login"
    login_manager.init_app(app)
    # SOCKETIO
    socketio.init_app(app)
    # FLASK-MQTT
    app.config['MQTT_BROKER_URL'] = 'localhost'
    app.config['MQTT_BROKER_PORT'] = 1883
    app.config['MQTT_KEEPALIVE'] = 20
    app.config['MQTT_TLS_ENABLED'] = False
    mqtt_client.init_app(app)
    return app

# MQTT
@mqtt_client.on_connect()
def mqtt_on_connect():
    mqtt_client.subscribe('testTopic/#', 0)

@mqtt_client.on_disconnect()
def mqtt_on_disconnect():
    loggerMqtt.warning(' > Disconnected from broker')  # loggerMqtt is configured elsewhere

@mqtt_client.on_subscribe()
def mqtt_on_subscribe(client, obj, mid, granted_qos):
    pass

@mqtt_client.on_message()
def mqtt_on_message(client, userdata, message):
    pass
    # mqtt_topicSplitter(client, userdata, message)
As you can see, my handler mqtt_topicSplitter is commented out, but I'm still having performance issues. I've tried adding a sleep call [eventlet.sleep(0.1)] to the on_message handler, which solved the CPU consumption problem but resulted in my application being constantly kicked from the broker.
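For reference, that attempt looked roughly like this (a sketch of what I described above, not my exact code):

@mqtt_client.on_message()
def mqtt_on_message(client, userdata, message):
    # yielding to the eventlet hub brings CPU usage down...
    eventlet.sleep(0.1)
    # ...but stalls the MQTT network loop long enough that the
    # broker keeps dropping the connection under message load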
I also tried other workers (gevent, asyncio, ...) without success. Using the Flask development server is not an option, since it is not recommended for production.
I'm sorry if I wasn't clear; I'm not an expert, so please feel free to ask me any questions.
Thanks in advance.

Can a streamlit app be run within a flask app?

No code here, just a question. I have tried various means to get a Streamlit app to run within a Flask app. The main reason? Using Flask for user authentication into the Streamlit app. I cannot get it to work. Is it perhaps not possible?
Streamlit uses Tornado to serve HTTP and WebSocket data to its frontend. That is, it’s already its own web server, and is written in an existing web framework; it wouldn’t be trivial to wrap it inside another web framework.
Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.
Flask is a synchronous web framework and not ideal for WebSockets etc.
Serving an interactive Streamlit app via flask.render_template isn't feasible, because Streamlit apps are not static; when you interact with your Streamlit app, it re-runs your Python code to generate new results dynamically.
Follow these discussions for more info
Integration with flask app
Serve streamlit within flask
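The snippet below illustrates one such workaround: instead of embedding Streamlit, a FastAPI wrapper launches it as a subprocess on startup, kills it on shutdown, and mounts the MLflow WSGI app under a prefix alongside it: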
import asyncio
import subprocess
import uuid

from mlflow.server import app as mlflow_app
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.wsgi import WSGIMiddleware
from fastapi.logger import logger
import uvicorn

from config import *  # provides app_dir, streamlit_app_*, mlflow_app_* settings

streamlit_app_process = None
streamlit_app_stdout = None
streamlit_app_stderr = None

async def registry_subprocess() -> None:
    logger.debug("registry distance_matrix")
    global streamlit_app_process
    global streamlit_app_stdout
    global streamlit_app_stderr
    id = str(uuid.uuid1())
    streamlit_app_stdout = open(f"/tmp/subprocess_stdout_{''.join(id.split('-'))}", 'w+b')
    streamlit_app_stderr = open(f"/tmp/subprocess_stderr_{''.join(id.split('-'))}", 'w+b')
    cmd = ['streamlit', 'run', f'{app_dir}/Home.py', f'--server.port={streamlit_app_port}', f'--server.address={streamlit_app_host}']
    logger.info(f"subprocess start cmd {cmd}")
    streamlit_app_process = subprocess.Popen(cmd, stdout=streamlit_app_stdout.fileno(), stderr=streamlit_app_stderr.fileno())
    logger.info(f"subprocess start success {streamlit_app_process.pid} uid:{id}")
    await asyncio.sleep(1)
    streamlit_app_stdout.flush()
    streamlit_app_stderr.flush()
    [logger.info(i) for i in streamlit_app_stdout.readlines()]
    [logger.info(i) for i in streamlit_app_stderr.readlines()]

async def close_subprocess() -> None:
    logger.debug("close subprocess")
    try:
        streamlit_app_process.kill()
        streamlit_app_stdout.flush()
        streamlit_app_stderr.flush()
        streamlit_app_stdout.close()
        streamlit_app_stderr.close()
    except Exception as error:
        logger.error(error)

application = FastAPI()
application.add_event_handler("startup", registry_subprocess)
application.add_event_handler("shutdown", close_subprocess)
application.add_middleware(
    CORSMiddleware,
    allow_origins='*',
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
application.mount(f"/{mlflow_app_prefix.strip('/')}", WSGIMiddleware(mlflow_app))

if __name__ == "__main__":
    uvicorn.run(application, host=mlflow_app_host, port=int(mlflow_app_port))
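Note a design consequence of this approach: Streamlit still listens on its own port (streamlit_app_port); the FastAPI process only manages its lifecycle and serves MLflow under the mounted prefix. So if authentication is the goal, something would still have to gate access to the Streamlit port itself.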

How to add a job via the flask-apscheduler API using Postman

from flask import Flask
from flask_apscheduler import APScheduler

class Config(object):
    JOBS = [
        {
            'id': 'job5',
            'func': 'f_s_api.view:job1',
            'trigger': 'interval',
            'seconds': 50
        }
    ]
    SCHEDULER_API_ENABLED = True

def job1():
    print('job add')

if __name__ == '__main__':
    app = Flask(__name__)
    app.config.from_object(Config())
    scheduler = APScheduler()
    scheduler.init_app(app)
    scheduler.start()
    app.run(debug=True, port=8080)
output
Serving Flask app "view" (lazy loading)
Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
Debug mode: on
Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
Restarting with stat
Debugger is active!
Debugger PIN: 135-565-985
job add
job add
Run the Flask app, then open Postman and send a POST request to http://localhost:5000/scheduler/jobs (the flask-apscheduler API URL for adding a job). In the request body, choose raw, set the type to JSON, and send the request.
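A body of the following shape works for the add-job endpoint; the id and interval here are example values, and func must be an importable path like the one in the Config above:

{
    "id": "job6",
    "func": "f_s_api.view:job1",
    "trigger": "interval",
    "seconds": 30
}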

How to run MQTT in a Flask application simultaneously over WSGI

I created a Flask application that contains an MQTT client which simultaneously subscribes to data from an ESP32 and stores it in a database.
from flask import Flask, redirect, render_template, request, session, abort, make_response, jsonify, flash, url_for
import paho.mqtt.client as mqtt
import json
import time
import config
import db_access

# mqtt code
def on_message(client, userdata, message):
    topic = message.topic
    print("line 12 - topic checkpoint - ", topic)
    msgDecode = str(message.payload.decode("utf-8", "ignore"))
    msgJson = json.loads(msgDecode)  # decode json data
    print("line 15 - json checkpoint - ", type(msgJson))
    # deviceID = msgJson["DeviceID"]
    # currentCounter = msgJson["Counter"]
    # status = msgJson["Status"]
    db_access.updateStatus(msgJson["DeviceID"], msgJson["Status"])

app = Flask(__name__, template_folder='templates')

'''Web portal routes'''
@app.route('/device/switch', methods=['POST'])
def switch():
    # parameter parsing
    deviceID = request.args.get('deviceID')
    status = request.args.get('status')
    statusMap = {"on": 1, "off": 0}
    # MQTT publish
    mqtt_msg = json.dumps({"deviceID": int(deviceID), "status": statusMap[status]})
    client.publish(config.MQTT_STATUS_CHANGE_TOPIC, mqtt_msg)
    time_over_flag = 0
    loop_Counter = 0
    while status != db_access.getDeviceStatus(deviceID):
        time.sleep(2)
        loop_Counter += 1
        if loop_Counter == 2:
            time_over_flag = 1
            break
    if time_over_flag:
        return make_response(jsonify({"statusChange": False}))
    else:
        return make_response(jsonify({"statusChange": True}))

if __name__ == "__main__":
    db_access.createUserTable()
    db_access.insertUserData()
    db_access.createDeviceTable()
    db_access.insertDeviceData()
    print("creating new instance")
    client = mqtt.Client("server")  # create new instance
    client.on_message = on_message  # attach function to callback
    print("connecting to broker")
    client.connect(config.MQTT_BROKER_ADDRESS)
    client.loop_start()
    print("Subscribing to topic", "esp/#")
    client.subscribe("esp/#")
    app.run(debug=True, use_reloader=False)
This is the code in __init__.py.
db_access.py contains the database operations, and config.py the configuration.
Will this work in Apache?
Also, I have no previous experience with WSGI.
The problem with launching your code (as included) with a WSGI server is that the part in the last if block only runs when you execute the file directly with the python command.
To make this work, I'd try moving that block of code to the top of your file, right after the app is created:
app = Flask(__name__,template_folder='templates')
db_access.createUserTable()
db_access.insertUserData()
db_access.createDeviceTable()
db_access.insertDeviceData()
print("creating new instance")
client = mqtt.Client("server") #create new instance
client.on_message=on_message #attach function to callback
print("connecting to broker")
client.connect(config.MQTT_BROKER_ADDRESS)
client.loop_start()
print("Subscribing to topic","esp/#")
client.subscribe("esp/#")
I'd also rename that __init__.py file to something like server.py, as an init file isn't meant to be this heavy.
Once you've done that, run it with the development server again and test that it works as expected.
Then install a WSGI server like gunicorn into your virtual environment:
pip install gunicorn
And launch the app with gunicorn (this command should work, assuming you renamed your file to server.py):
gunicorn --bind '0.0.0.0:5000' server:app
Then test again.
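As a quick sanity check (the device ID and status are example values; the port matches the gunicorn bind above), the switch route can be exercised with:

curl -X POST "http://localhost:5000/device/switch?deviceID=1&status=on"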