APScheduler (BackgroundScheduler) with Flask on AWS ECS

I have a simple Flask application which uses APScheduler to run a function in the background.
from flask import Flask, jsonify
from apscheduler.schedulers.background import BackgroundScheduler

def start(application):
    @application.route('/hello', methods=['GET'])
    def hello_there():
        return jsonify({'data': 'Hello There'})

def main():
    application = Flask(__name__)
    scheduler = BackgroundScheduler()
    scheduler.add_job(func=my_func, trigger='interval', seconds=5)  # my_func is defined elsewhere
    scheduler.start()
    start(application)
    return application

if __name__ == "__main__":
    pass
I am running this application using gunicorn:
gunicorn --workers 4 --bind 127.0.0.1:5000 "app:main()"
I receive the correct response when running this locally, but it does not work when the same application is deployed to AWS ECS 😢. Sending a request to the endpoint results in a timeout.
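One thing worth checking for the ECS timeout (an assumption about the container setup, not stated in the question): gunicorn bound to 127.0.0.1 is only reachable from inside the container, so the load balancer and health checks cannot reach it. Binding to all interfaces usually fixes that:
gunicorn --workers 4 --bind 0.0.0.0:5000 "app:main()"
Separately, note that with --workers 4 each worker process calls main() and starts its own BackgroundScheduler, so the scheduled job runs four times per interval.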

Related

How to serve Flask app on waitress and socket.io using eventlet server simultaneously?

I'm using the waitress server to deploy the Flask app for production. I'm also using Flask-SocketIO along with the eventlet server, which requires its own run call for the app.
Currently I am only serving the app on waitress:
serve(app, host='0.0.0.0', port=8080)
How do I include the socketio.run call for running the Socket.IO server?
socketio.run(app)
My code:
This snippet sets up the server for Flask-SocketIO, and in the if __name__ block I serve the app on waitress when in prod mode.
app.py
from flask import Flask
from flask_socketio import SocketIO
from waitress import serve

app = Flask(__name__)

async_mode = None
if async_mode is None:
    try:
        import eventlet
        async_mode = 'eventlet'
    except ImportError:
        pass
if async_mode is None:
    async_mode = 'threading'
print('async_mode is ' + async_mode)
if async_mode == 'eventlet':
    eventlet.monkey_patch()

socketio = SocketIO(app, cors_allowed_origins='*', async_mode=async_mode)

if __name__ == '__main__':
    if env_mode == 'dev':    # env_mode is set elsewhere in my config
        app.run(host='0.0.0.0', port=8080)
    elif env_mode == 'prod':
        serve(app, host='0.0.0.0', port=8080)
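A minimal sketch of one way to reconcile the two, assuming eventlet is installed and env_mode comes from the asker's configuration: with async_mode='eventlet', socketio.run(app) starts an eventlet WSGI server that serves both the regular Flask routes and the Socket.IO traffic, so a separate waitress serve() call is not needed.
if __name__ == '__main__':
    if env_mode == 'dev':
        socketio.run(app, host='0.0.0.0', port=8080, debug=True)
    elif env_mode == 'prod':
        # eventlet serves both the HTTP routes and the Socket.IO connections here
        socketio.run(app, host='0.0.0.0', port=8080)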

Restart python Script on Server after crashing

I am currently trying to set up a server to run a TensorFlow application. It is working fine, but if there are too many requests the server terminates the Flask application that requests the answer from my TensorFlow model.
This means that the server is useless until I restart the Flask app manually with python3 flaskApp.py in the server terminal.
Is there a way to restart the Python script automatically once it fails? <== !! main question !!
It doesn't bother me if I don't get a return value once in a while, but I don't want to manually restart the Flask app once a day.
Here is the code for my Flask application; the method 'handler' returns a probability from my TensorFlow model running in the background.
from flask import Flask, redirect, request, jsonify
from modelv4 import *
from waitress import serve
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

@app.route('/', methods=['POST'])
def processjson():
    data = request.get_json()
    satz = data['text']
    print(satz)
    ruckgabe = handler(satz)
    ruckgabe = round(ruckgabe*10000)
    ruckgabe = ruckgabe / 100
    ruckgabe = str(ruckgabe)
    ruckgabe = jsonify({"ruckgabe": ruckgabe})
    #ruckgabe.headers.add('Access-Control-Allow-Origin', '*')
    return ruckgabe

if __name__ == '__main__':
    serve(app, host="0.0.0.0", port=8080)
The server is running on AWS EC2 as an Ubuntu instance, so you get a basic Linux terminal.
If you need any more information to answer my question, please let me know.
Since the application is running on an Ubuntu server, I recommend using systemd.
You can let systemd auto-restart the application in case it fails or is accidentally killed.
To do this, add the Restart option to the .service file you created specifically for your application.
How can I run my flask application as a service with systemd?
A possible configuration of your .service file could be the following:
[Unit]
Description=My flask app
After=network-online.target
Wants=network-online.target systemd-networkd-wait-online.service
StartLimitIntervalSec=500
StartLimitBurst=5
[Service]
Restart=on-failure
RestartSec=5s
ExecStart= <script to start your application>
[Install]
WantedBy=multi-user.target
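To apply the unit, the standard systemd workflow can be used (the file name myflaskapp.service below is just an illustrative placeholder):
sudo cp myflaskapp.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now myflaskapp.service
systemctl status myflaskapp.service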

How to add a job via the Flask-APScheduler API using Postman

from flask import Flask
from flask_apscheduler import APScheduler

class Config(object):
    JOBS = [
        {
            'id': 'job5',
            'func': 'f_s_api.view:job1',
            'trigger': 'interval',
            'seconds': 50
        }
    ]
    SCHEDULER_API_ENABLED = True

def job1():
    print('job add')

if __name__ == '__main__':
    app = Flask(__name__)
    app.config.from_object(Config())
    scheduler = APScheduler()
    scheduler.init_app(app)
    scheduler.start()
    app.run(debug=True, port=8080)
output
Serving Flask app "view" (lazy loading)
Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
Debug mode: on
Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
Restarting with stat
Debugger is active!
Debugger PIN: 135-565-985
job add
job add
Run the Flask app, then open Postman and send a POST request to http://localhost:8080/scheduler/jobs (the Flask-APScheduler API URL for adding a job; the port matches the one passed to app.run above). In the request Body, select raw, set the type to JSON, and send the request.
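A sketch of the JSON body for that POST request, reusing the same fields as the JOBS entry above (the id and interval are arbitrary examples):
{
    "id": "job6",
    "func": "f_s_api.view:job1",
    "trigger": "interval",
    "seconds": 30
}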

How to run MQTT in a Flask application simultaneously over WSGI

I created a Flask application that includes an MQTT client which simultaneously subscribes to data from an ESP32 and stores it in a database.
from flask import Flask, redirect, render_template, request, session, abort, make_response, jsonify, flash, url_for
import paho.mqtt.client as mqtt
import json
import time
import config
import db_access

#mqtt code
def on_message(client, userdata, message):
    topic = message.topic
    print("line 12 - topic checkpoint - ", topic)
    msgDecode = str(message.payload.decode("utf-8", "ignore"))
    msgJson = json.loads(msgDecode)  #decode json data
    print("line 15 - json checkpoint - ", type(msgJson))
    # deviceID = msgJson["DeviceID"]
    # currentCounter = msgJson["Counter"]
    # status = msgJson["Status"]
    db_access.updateStatus(msgJson["DeviceID"], msgJson["Status"])

app = Flask(__name__, template_folder='templates')

'''Web portal routes'''
@app.route('/device/switch', methods=['POST'])
def switch():
    #parameter parsing
    deviceID = request.args.get('deviceID')
    status = request.args.get('status')
    statusMap = {"on": 1, "off": 0}
    #MQTT publish
    mqtt_msg = json.dumps({"deviceID": int(deviceID), "status": statusMap[status]})
    client.publish(config.MQTT_STATUS_CHANGE_TOPIC, mqtt_msg)
    time_over_flag = 0
    loop_Counter = 0
    while status != db_access.getDeviceStatus(deviceID):
        time.sleep(2)
        loop_Counter += 1
        if loop_Counter == 2:
            time_over_flag = 1
            break
    if time_over_flag:
        return make_response(jsonify({"statusChange": False}))
    else:
        return make_response(jsonify({"statusChange": True}))

if __name__ == "__main__":
    db_access.createUserTable()
    db_access.insertUserData()
    db_access.createDeviceTable()
    db_access.insertDeviceData()
    print("creating new instance")
    client = mqtt.Client("server")  #create new instance
    client.on_message = on_message  #attach function to callback
    print("connecting to broker")
    client.connect(config.MQTT_BROKER_ADDRESS)
    client.loop_start()
    print("Subscribing to topic", "esp/#")
    client.subscribe("esp/#")
    app.run(debug=True, use_reloader=False)
This is the code in __init__.py.
db_access.py consists of database operations and config.py consists of configuration.
Will this work with Apache?
Also, I have no previous experience with WSGI.
The problem with launching your code (as included) with a WSGI server is that the part in that last if block only gets run when you execute the file directly with the python command.
To make this work, I'd try moving that block of code to the top of your file, around here:
app = Flask(__name__,template_folder='templates')
db_access.createUserTable()
db_access.insertUserData()
db_access.createDeviceTable()
db_access.insertDeviceData()
print("creating new instance")
client = mqtt.Client("server") #create new instance
client.on_message=on_message #attach function to callback
print("connecting to broker")
client.connect(config.MQTT_BROKER_ADDRESS)
client.loop_start()
print("Subscribing to topic","esp/#")
client.subscribe("esp/#")
I'd also rename that __init__.py file to something like server.py, as an __init__.py file isn't meant to be this heavy.
Once you've done that, run it with the development server again and test that it works as expected.
Then install a WSGI server like gunicorn into your virtual environment:
pip install gunicorn
And launch the app with gunicorn (this command should work, assuming you renamed your file to server.py):
gunicorn --bind '0.0.0.0:5000' server:app
Then test again.
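One caveat that follows from how gunicorn forks workers (not part of the original answer): with the MQTT setup at module level, each worker process imports the module and creates its own MQTT client and subscription, so every message is handled once per worker. Keeping a single worker avoids that duplicate handling, for example:
gunicorn --workers 1 --bind '0.0.0.0:5000' server:app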

Unable to configure Gunicorn to serve a flask app running another loop concurrently

I have a simple flask app, say like this:
# app.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'
I also have a slack bot reading messages
# bot.py
def serve(self):
    while True:
        message, channel = self.parse_slack_output(self.slack_client.rtm_read())
        if message and channel:
            self.handle_message(message, channel)
        time.sleep(self.READ_WEBSOCKET_DELAY)
I want both pieces of code to run concurrently, so in app.py I do:
# app.py
if __name__ == "__main__":
    import threading
    import bot
    flask_thread = threading.Thread(target=app.run)
    bot_thread = threading.Thread(target=bot.serve)
    bot_thread.start()
    flask_thread.start()
This code works as expected with $ python app.py, but when I bring in gunicorn the bot thread doesn't seem to work.
I have tried:
gunicorn app:app
gunicorn --workers=2 app:app
gunicorn --threads=2 app:app
I also tried the multiprocessing library and got the same results.
Any idea how this issue can be tackled? Thanks.
Edit: I now understand how lame this question is. I shouldn't be writing code in the if __name__ == "__main__": block; that is not what is run by gunicorn. It directly picks up the app and runs it. I still have to figure out how to make it handle the bot thread.
I have made this work with the following solution:
# app.py
from flask import Flask
import threading
import bot

def create_app():
    app = Flask(__name__)
    bot_process = threading.Thread(target=bot.serve)
    bot_process.start()
    return app

app = create_app()

@app.route('/')
def hello_world():
    return 'Hello, World!'
This makes sure that gunicorn --workers=1 app:app runs both the app and the bot in different threads. While this works, one drawback of this solution is that I am not able to scale the number of workers above 1, as that would not only scale the app thread but also the bot thread, which I don't want; the bot would then unnecessarily listen for messages in two threads.
If you have a better solution in mind, please share it. Thanks.
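One pattern worth sketching (not from the original thread, and it assumes the bot loop is safe to run inside gunicorn's master process): start the bot thread from a gunicorn server hook such as when_ready, which runs exactly once in the master no matter how many workers are configured.
# gunicorn.conf.py
import threading
import bot

def when_ready(server):
    # runs once in the gunicorn master, so the bot is started exactly once
    # regardless of the --workers count
    bot_thread = threading.Thread(target=bot.serve, daemon=True)
    bot_thread.start()
The app itself can then be scaled freely, e.g. gunicorn --workers=4 -c gunicorn.conf.py app:app.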