How can I log errors from uwsgi + flask app? - flask

I'm new to WSGI and Flask and want to check the logs. However, I cannot print the logs to a file. I used this tutorial to set up my project. I would like to log things like print statements in Python and errors that occur.
This is my uWSGI config, server.ini:
[uwsgi]
module = wsgi:app
master = false
processes = 1
socket = myproject.sock
socket = stats.sock
chmod-socket = 660
vacuum = true
die-on-term = true
plugins-dir = /usr/lib/uwsgi/plugins
plugins = logfile
logger = file:logs.log
logger-req = file:requests.log
stats = myproject.sock
logs.log and requests.log never get filled (master=false and processes=1 because I am using TensorFlow in the app).
Then I have wsgi.py:
from main import app

if __name__ == "__main__":
    app.run()
Then the Flask app, main.py:
from flask import Flask, send_from_directory

app = Flask(__name__, static_url_path='/www')

@app.route('/')
def send_index():
    print("I want to print this to logs")
    x = 1 / 0  # I would like this to throw a ZeroDivisionError and send it to logs
    return send_from_directory('www', 'index.html')

if __name__ == '__main__':
    app.run(host='0.0.0.0')

One way is to use logging within the app via Python's built-in logging package. A few quick hints:
logging.basicConfig(filename='record.log', level=logging.DEBUG, format='%(asctime)s %(levelname)s %(name)s %(threadName)s : %(message)s')
should be called in main.py after your app is declared.
You may also use:
current_app.logger.info("Your message here")
to log specific events.
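Put together, a minimal main.py might look like this (a sketch based on the hints above; the route and messages are stand-ins for your own):
import logging
from flask import Flask, current_app

app = Flask(__name__)

# send all records of level DEBUG and above to record.log
logging.basicConfig(filename='record.log', level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(name)s %(threadName)s : %(message)s')

@app.route('/')
def index():
    # use the app logger instead of print(); this ends up in record.log
    current_app.logger.info("I want to print this to logs")
    return "ok"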

Related

Correctly Setup Celery with Flask-Application Factory Pattern/Gunicorn/Nginx/Supervisor

I have a task of updating every single row of a MySQL table, but it's super slow. I rarely need to do it, and only when I change something fundamental, but I thought this would be a great chance to learn about multithreading. However, all the examples and tutorials online go over some things and not others, and I'm struggling to piece all the information together.
I know I need to make a Celery process, I just don't know if I'm doing it right. A lot of tutorials talk about dockerizing a Redis environment without explaining how to do it, so I thought I'd come here for some real human-to-human interaction to maybe help me feel less stupid about this. Here's my code so far:
/website/__init__.py
from flask import Flask, appcontext_popped, render_template
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager, UserMixin, login_user, login_required, logout_user, current_user
from flask_migrate import Migrate
from flask_wtf import CSRFProtect
import logging
import celery
#Path Math
import sys
import os
from . import config

db: SQLAlchemy = SQLAlchemy()
migrate = Migrate()
csrf = CSRFProtect()
celery: celery.Celery
DB_NAME = "main"

def create_app(name=None):  # default added so create_app() can be called without arguments
    #Flask Instance
    app = Flask(__name__)
    app.config.from_object(config.ProdTestConfig)
    # logging stuff
    #Database
    db.init_app(app)
    migrate.init_app(app, db)
    csrf.init_app(app)
    global celery
    celery = make_celery(app)
    with app.app_context():
        db.create_all()
    # Models and Blueprints here
    from .helper_functions import migration_handling as mgh
    #where you will find the thing I need to run async
    app.before_first_request(mgh.run_back_check)
    # log manager stuff
    #error page handling
    return app

def make_celery(app):
    # a distinct local name avoids shadowing the imported celery module
    celery_app = celery.Celery(
        app.import_name,
        backend=app.config['CELERY_RESULT_BACKEND'],
        broker=app.config['CELERY_BROKER_URL']
    )
    celery_app.conf.update(app.config)

    class ContextTask(celery_app.Task):
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)

    celery_app.Task = ContextTask
    return celery_app
I've read that some other ways seem to fit a bit better, like using:
celery = Celery(__name__, broker=Config.CELERY_BROKER_URL, result_backend=Config.RESULT_BACKEND)
Then in create_app() they run celery.conf.update(app.config). The issue with this is that I don't know how to set up a Redis server on my Linode machine hosting the site, or on my personal Windows machine. I have redis pip-installed. This is how the function I'm trying to run async looks:
@celery.task(name='app.tasks.campaign_pay_out_process')
def campaign_pay_out_process():
    '''
    Process Every Campaign's Pay
    '''
    campaign: Campaigns
    for campaign in Campaigns.query.filter_by():
        campaign.process_pay()
    db.session.commit()
    current_app.logger.info('Done Campaign Pay Out Processing')
I'm running gunicorn from supervisor because restarting is super easy, and ridding my life of super long Linux commands to start a process has been great. I know this is the command for celery: celery -A celery_worker.celery worker --pool=solo --loglevel=info, and I'd love to know how to include that in my workflow (a sketch of a matching supervisor block follows the config below). Here's my supervisor config:
[program:paymentwebapp]
directory=/home/sai/paymentWebApp
command=/home/sai/paymentWebApp/venv/bin/gunicorn --workers 1 --threads 3 wsgi:app
user=sai
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
stderr_logfile=/var/log/paymentwebapp/paymentwebapp.err.log
stdout_logfile=/var/log/paymentwebapp/paymentwebapp.out.log
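For reference, a matching supervisor block for the celery worker could look something like this (a sketch only, reusing the paths and user from the gunicorn block above; the program name and log file names are made up):
[program:paymentcelery]
directory=/home/sai/paymentWebApp
command=/home/sai/paymentWebApp/venv/bin/celery -A celery_worker.celery worker --pool=solo --loglevel=info
user=sai
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
stderr_logfile=/var/log/paymentwebapp/celery.err.log
stdout_logfile=/var/log/paymentwebapp/celery.out.log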
Here's my Flask config right now:
from os import environ, path
from dotenv import load_dotenv

DB_NAME = "main"

class Config:
    """Base config."""
    #SESSION_COOKIE_NAME = environ.get('SESSION_COOKIE_NAME')
    MAX_CONTENT_LENGTH = 16*1000*1000
    RECEIPT_FOLDER = '../uploads/receipts'
    IMPORT_FOLDER = 'uploads/imports'
    UPLOAD_FOLDER = 'uploads'
    EXPORT_FOLDER = '/uploads/exports'
    UPLOAD_EXTENSIONS = ['.jpg', '.png', '.pdf', '.csv', '.xls', '.xlsx']
    STATIC_FOLDER = 'static'
    TEMPLATES_FOLDER = 'templates'

class ProdConfig(Config):
    basedir = path.abspath(path.dirname(__file__))
    load_dotenv('/home/sai/.env')
    env_dict = dict(environ)
    FLASK_ENV = 'production'
    DEBUG = False
    TESTING = False
    SQLALCHEMY_DATABASE_URI = environ.get('PROD_DATABASE_URI')
    SECRET_KEY = environ.get('SECRET_KEY')
    SERVER_NAME = environ.get('SERVER_NAME')
    SESSION_COOKIE_SECURE = True
    WTF_CSRF_TIME_LIMIT = 600
    #Uploads

class DevConfig(Config):
    basedir = path.abspath(path.dirname(__file__))
    # raw string so the backslashes in the Windows path aren't treated as escapes
    load_dotenv(r'C:\saiscripts\intercept_branch\Payment Web App Project\.env')
    env_dict = dict(environ)
    FLASK_ENV = 'development'
    DEBUG = True
    SQLALCHEMY_DATABASE_URI = environ.get('DEV_DATABASE_URI')
    SECRET_KEY = environ.get('SECRET_KEY')

class ProdTestConfig(DevConfig):
    '''
    Developer config settings but production database server
    '''
    SQLALCHEMY_DATABASE_URI = environ.get('PROD_DATABASE_URI')

if __name__ == '__main__':
    print(environ.get('SQLALCHEMY_DATABASE_URI'))
This is where I copied some code from a tutorial because I'm supposed to make a celery worker:
#!/usr/bin/env python
import os
#from app import create_app, celery
from website import create_app
app = create_app()
app.app_context().push()
from website import celery

Why flask doesn't work with gunicorn and telegram bot api?

I am using telegram bot api (telebot), flask and gunicorn.
When I use the command python app.py everything works fine, but when I use python wsgi.py Flask starts on http://127.0.0.1:5000/ and the bot doesn't answer; and if I use gunicorn --bind 0.0.0.0:8443 wsgi:app, the webhook is set but the Telegram bot doesn't answer. I tried to add app.run from app.py to wsgi.py, but it doesn't work.
app.py
import logging
import time

import flask
import telebot

API_TOKEN = '111111111:token_telegram'

WEBHOOK_HOST = 'droplet ip'
WEBHOOK_PORT = 8443  # 443, 80, 88 or 8443 (port need to be 'open')
WEBHOOK_LISTEN = '0.0.0.0'  # In some VPS you may need to put here the IP addr

WEBHOOK_SSL_CERT = 'webhook_cert.pem'  # Path to the ssl certificate
WEBHOOK_SSL_PRIV = 'webhook_pkey.pem'  # Path to the ssl private key

WEBHOOK_URL_BASE = "https://%s:%s" % (WEBHOOK_HOST, WEBHOOK_PORT)
WEBHOOK_URL_PATH = "/%s/" % (API_TOKEN)

logger = telebot.logger
telebot.logger.setLevel(logging.DEBUG)

bot = telebot.TeleBot(API_TOKEN)
app = flask.Flask(__name__)

# Empty webserver index, return nothing, just http 200
@app.route('/', methods=['GET', 'HEAD'])
def index():
    return ''

# Process webhook calls
@app.route(WEBHOOK_URL_PATH, methods=['POST'])
def webhook():
    if flask.request.headers.get('content-type') == 'application/json':
        json_string = flask.request.get_data().decode('utf-8')
        update = telebot.types.Update.de_json(json_string)
        bot.process_new_updates([update])
        return ''
    else:
        flask.abort(403)

# Handle all other messages
@bot.message_handler(func=lambda message: True, content_types=['text'])
def echo_message(message):
    bot.reply_to(message, message.text)

# Remove webhook, it fails sometimes the set if there is a previous webhook
bot.remove_webhook()

time.sleep(1)

# Set webhook
bot.set_webhook(url=WEBHOOK_URL_BASE + WEBHOOK_URL_PATH,
                certificate=open(WEBHOOK_SSL_CERT, 'r'))

if __name__ == "__main__":
    # Start flask server
    app.run(host=WEBHOOK_LISTEN,
            port=WEBHOOK_PORT,
            ssl_context=(WEBHOOK_SSL_CERT, WEBHOOK_SSL_PRIV),
            debug=True)
wsgi.py
from app import app

if __name__ == "__main__":
    app.run()
It's bad practice to expose Python web servers directly to the world; it's not secure.
Good practice is to use a reverse proxy, e.g. nginx.
Chain
So the chain should be:
api.telegram.org -> your_domain -> your_nginx -> your_webserver -> your_app
SSL
Your SSL certificates should be handled at the nginx level. On success, nginx just passes the request on to your web server (Flask or something else). This is also a tip for how to serve any number of bots on one host/port :)
How to configure an nginx reverse proxy you can find on Stack Overflow or Google.
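For illustration, a minimal nginx server block along those lines might look like this (a sketch only; the domain, listen port, certificate paths, and upstream port are placeholders for your own values):
server {
    listen 8443 ssl;
    server_name your.domain;

    # SSL is terminated here, at the nginx level
    ssl_certificate     /etc/nginx/ssl/webhook_cert.pem;
    ssl_certificate_key /etc/nginx/ssl/webhook_pkey.pem;

    # pass webhook calls on to the web server (Flask, aiogram, ...)
    location /path/to/api {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}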
Telegram Webhooks
telebot makes using webhooks fairly complicated.
Try the aiogram example instead. It's pretty simple:
from aiogram import Bot, Dispatcher, executor
from aiogram.types import Message

WEBHOOK_HOST = 'https://your.domain'
WEBHOOK_PATH = '/path/to/api'

bot = Bot('BOT:TOKEN')
dp = Dispatcher(bot)

@dp.message_handler()
async def echo(message: Message):
    return message.answer(message.text)

async def on_startup(dp: Dispatcher):
    await bot.set_webhook(f"{WEBHOOK_HOST}{WEBHOOK_PATH}")

if __name__ == '__main__':
    executor.start_webhook(dispatcher=dp, on_startup=on_startup,
                           webhook_path=WEBHOOK_PATH, host='localhost', port=3000)
app.run(host='0.0.0.0', port=5000, debug=True)
Change port number and debug arguments based on your case.

how to add job by flask apscheduler api using postman

from flask import Flask
from flask_apscheduler import APScheduler

class Config(object):
    JOBS = [
        {
            'id': 'job5',
            'func': 'f_s_api.view:job1',
            'trigger': 'interval',
            'seconds': 50
        }
    ]
    SCHEDULER_API_ENABLED = True

def job1():
    print('job add')

if __name__ == '__main__':
    app = Flask(__name__)
    app.config.from_object(Config())
    scheduler = APScheduler()
    scheduler.init_app(app)
    scheduler.start()
    app.run(debug=True, port=8080)
Output:
 * Serving Flask app "view" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 135-565-985
job add
job add
Run the Flask app, then open Postman and send a POST request to http://localhost:8080/scheduler/jobs (the flask-apscheduler API URL for adding a job). In the request body, choose raw and select JSON as the type, then send the request.
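The JSON body could look something like this (a sketch; the id and interval are arbitrary, and func reuses the module path from the config above):
{
    "id": "job6",
    "func": "f_s_api.view:job1",
    "trigger": "interval",
    "seconds": 30
}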

how to run mqtt in flask application simultaneously over wsgi

I created a Flask application that consists of an MQTT client which simultaneously subscribes to data from an ESP32 and stores it in a database.
from flask import Flask, redirect, render_template, request, session, abort, make_response, jsonify, flash, url_for
import paho.mqtt.client as mqtt
import json
import time
import config
import db_access

#mqtt code
def on_message(client, userdata, message):
    topic = message.topic
    print("line 12 - topic checkpoint - ", topic)
    msgDecode = str(message.payload.decode("utf-8", "ignore"))
    msgJson = json.loads(msgDecode)  #decode json data
    print("line 15 - json checkpoint - ", type(msgJson))
    # deviceID = msgJson["DeviceID"]
    # currentCounter = msgJson["Counter"]
    # status = msgJson["Status"]
    db_access.updateStatus(msgJson["DeviceID"], msgJson["Status"])

app = Flask(__name__, template_folder='templates')

'''Web portal routes'''
@app.route('/device/switch', methods=['POST'])
def switch():
    #parameter parsing
    deviceID = request.args.get('deviceID')
    status = request.args.get('status')
    statusMap = {"on": 1, "off": 0}

    #MQTT publish
    mqtt_msg = json.dumps({"deviceID": int(deviceID), "status": statusMap[status]})
    client.publish(config.MQTT_STATUS_CHANGE_TOPIC, mqtt_msg)

    time_over_flag = 0
    loop_Counter = 0
    while status != db_access.getDeviceStatus(deviceID):
        time.sleep(2)
        loop_Counter += 1
        if loop_Counter == 2:
            time_over_flag = 1
            break
    if time_over_flag:
        return make_response(jsonify({"statusChange": False}))
    else:
        return make_response(jsonify({"statusChange": True}))

if __name__ == "__main__":
    db_access.createUserTable()
    db_access.insertUserData()
    db_access.createDeviceTable()
    db_access.insertDeviceData()
    print("creating new instance")
    client = mqtt.Client("server")  #create new instance
    client.on_message = on_message  #attach function to callback
    print("connecting to broker")
    client.connect(config.MQTT_BROKER_ADDRESS)
    client.loop_start()
    print("Subscribing to topic", "esp/#")
    client.subscribe("esp/#")
    app.run(debug=True, use_reloader=False)
This is the code in __init__.py.
db_access.py consists of database operations, and config.py consists of configuration.
Will this work in Apache?
Also, I have no previous experience with WSGI.
The problem with launching your code (as included) with a WSGI server is that the part in that last if block only gets run when you execute the file directly with the python command.
To make this work, I'd try moving that block of code to the top of your file, around here:
app = Flask(__name__,template_folder='templates')
db_access.createUserTable()
db_access.insertUserData()
db_access.createDeviceTable()
db_access.insertDeviceData()
print("creating new instance")
client = mqtt.Client("server") #create new instance
client.on_message=on_message #attach function to callback
print("connecting to broker")
client.connect(config.MQTT_BROKER_ADDRESS)
client.loop_start()
print("Subscribing to topic","esp/#")
client.subscribe("esp/#")
I'd also rename that __init__.py file to something like server.py, as the init file isn't meant to be heavy.
Once you've done that, run it with the development server again and test that it works as expected.
Then install a WSGI server like gunicorn into your virtual environment:
pip install gunicorn
And launch the app with gunicorn (this command should work, assuming you renamed your file to server.py):
gunicorn --bind '0.0.0.0:5000' server:app
Then test again.

Running a python subprocess in Flask

I have a Flask web app running a crawling process in this fashion:
on terminal tab 1:
$ cd /path/to/scraping
$ scrapyrt
http://scrapyrt.readthedocs.io/en/latest/index.html
on terminal tab 2:
$ python app.py
and in app.py:
params = {
    'spider_name': spider,
    'start_requests': True
}
response = requests.get('http://localhost:9080/crawl.json', params)
print('RESPONSE', response)
data = json.loads(response.text)
which works.
Now I'd like to move everything into app.py, and for that I've tried:
import subprocess
from time import sleep

try:
    subprocess.check_output("scrapyrt", shell=True, cwd='path/to/scraping')
except subprocess.CalledProcessError as e:
    raise RuntimeError("command '{}' return with error (code {}): {}".format(e.cmd, e.returncode, e.output))

sleep(3)

params = {
    'spider_name': spider,
    'start_requests': True
}
response = requests.get('http://localhost:9080/crawl.json', params)
print('RESPONSE', response)
data = json.loads(response.text)
This starts Twisted, like so:
2018-02-03 17:29:35-0200 [-] Log opened.
2018-02-03 17:29:35-0200 [-] Site starting on 9080
2018-02-03 17:29:35-0200 [-] Starting factory <twisted.web.server.Site instance at 0x104effa70>
but the crawling process hangs and does not go through.
What am I missing here?
Have you considered using a scheduler, such as APScheduler or similar?
You can run code on cron-style schedules or at fixed intervals, and it integrates well with Flask.
Take a look here:
http://apscheduler.readthedocs.io/en/stable/userguide.html
Here is an example:
from flask import Flask, render_template
from apscheduler.schedulers.background import BackgroundScheduler

app = Flask(__name__)
cron = BackgroundScheduler()
cron.start()

@app.route('/')
def index():
    return render_template("index.html")

@cron.scheduled_job('interval', minutes=3)
def my_scrapper():
    # Do the stuff you need
    return True
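Applied to the crawl above, the scheduled job could wrap the scrapyrt request from the question (a sketch; 'my_spider' is a placeholder for your spider name, and scrapyrt is assumed to be already running on port 9080):
import json
import requests

@cron.scheduled_job('interval', minutes=3)
def run_crawl():
    # the same request the question issues manually
    params = {'spider_name': 'my_spider', 'start_requests': True}
    response = requests.get('http://localhost:9080/crawl.json', params)
    print('RESPONSE', response)
    data = json.loads(response.text)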