How do I integrate gevent with the Django framework? What settings have to be modified in settings.py and wsgi.py for the integration?
I want to start a gevent HTTP server (port 8000) and a gevent WebSocket server (port 9000) from Django.
The easiest way is to use Gunicorn and tell it to use the gevent worker class. The Gunicorn docs are pretty good. For Django 1.4 or later, the recommended way to start Gunicorn is simply to point it at the WSGI interface, like so:
gunicorn --worker-class gevent wsgi:application
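For reference, the wsgi:application part assumes the standard Django wsgi.py sitting next to your settings; project.settings below is a placeholder for your actual settings module:

import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
application = get_wsgi_application()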
If you don't care about all the nice features of Gunicorn (like graceful restarts for zero-downtime upgrades, for example), you can use the gevent wsgi server directly. I do this myself to save some memory for non-critical websites that can be down for a little while during upgrades. This is my "run_gevent.py" file; it should be fairly easy to grok:
import gevent.monkey; gevent.monkey.patch_all()

import os, socket
from gevent.socket import socket as gsocket
from gevent.pywsgi import WSGIServer
from django.core.handlers.wsgi import WSGIHandler

script_dir = os.path.dirname(os.path.abspath(__file__))
pid_filename = os.path.join(script_dir, 'server.pid')
socket_filename = os.path.join(script_dir, 'server.sock')

# Write a pid file so the process can be signalled later.
pidfile = open(pid_filename, 'w')
pidfile.write(str(os.getpid()) + '\n')
pidfile.close()

# Listen on a unix socket, removing any stale socket file first.
server_socket = gsocket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    os.remove(socket_filename)
except OSError:
    pass
server_socket.bind(socket_filename)
server_socket.listen(256)

os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'
server = WSGIServer(listener=server_socket, application=WSGIHandler(), log=None)
server.serve_forever(stop_timeout=3)
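If you specifically want the two servers from the question (HTTP on port 8000, WebSockets on port 9000), a minimal sketch using the third-party gevent-websocket package could look like the following; the echo handler is only a placeholder for your own WebSocket logic:

import gevent.monkey; gevent.monkey.patch_all()

import os
from gevent.pywsgi import WSGIServer
from geventwebsocket.handler import WebSocketHandler
from django.core.handlers.wsgi import WSGIHandler

os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'

def ws_app(environ, start_response):
    # Placeholder WebSocket echo loop; replace with your own protocol.
    ws = environ['wsgi.websocket']
    while True:
        message = ws.receive()
        if message is None:  # client disconnected
            break
        ws.send(message)

http_server = WSGIServer(('0.0.0.0', 8000), WSGIHandler())
ws_server = WSGIServer(('0.0.0.0', 9000), ws_app, handler_class=WebSocketHandler)

http_server.start()        # non-blocking
ws_server.serve_forever()  # blocks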
I'm running a Flask app that runs several Celery tasks (with Redis as the backend) and sometimes caches API calls with Flask-Caching. It will run on Heroku, although at the moment I'm running it locally. I'm trying to figure out if there's a way to reuse my various config variables for Redis access, mainly in case Heroku changes the credentials, moves Redis to another server, etc. Currently I'm reusing the same Redis credentials in several ways.
From my .env file:
CACHE_REDIS_URL = "redis://127.0.0.1:6379/1"
REDBEAT_REDIS_URL = "redis://127.0.0.1:6379/1"
CELERY_BROKER_URL = "redis://127.0.0.1:6379/1"
RESULT_BACKEND = "redis://127.0.0.1:6379/1"
From my config.py file:
import os
from pathlib import Path
basedir = os.path.abspath(os.path.dirname(__file__))
class Config(object):
    # non-redis values are above and below these items
    CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", "redis://127.0.0.1:6379/0")
    RESULT_BACKEND = os.environ.get("RESULT_BACKEND", "redis://127.0.0.1:6379/0")
    CELERY_RESULT_BACKEND = RESULT_BACKEND  # because of the deprecated value
    CACHE_REDIS_URL = os.environ.get("CACHE_REDIS_URL", "redis://127.0.0.1:6379/0")
    REDBEAT_REDIS_URL = os.environ.get("REDBEAT_REDIS_URL", "redis://127.0.0.1:6379/0")
In extensions.py:
from celery import Celery
from src.cache import cache
celery = Celery()
def register_extensions(app, worker=False):
    cache.init_app(app)

    # load celery config
    celery.config_from_object(app.config)

    if not worker:
        # register extensions that the celery worker doesn't need
        pass
In my __init__.py:
import os
from flask import Flask, jsonify, request, current_app
from src.extensions import register_extensions
from config import Config
def create_worker_app(config_class=Config):
    """Minimal App without routes for celery worker."""
    app = Flask(__name__)
    app.config.from_object(config_class)
    register_extensions(app, worker=True)
    return app
From my worker.py file:
from celery import Celery
from celery.schedules import schedule
from redbeat import RedBeatSchedulerEntry as Entry
from . import create_worker_app
# load several tasks from other files here
def create_celery(app):
    celery = Celery(
        app.import_name,
        backend=app.config["RESULT_BACKEND"],
        broker=app.config["CELERY_BROKER_URL"],
        redbeat_redis_url=app.config["REDBEAT_REDIS_URL"],
    )
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

flask_app = create_worker_app()
celery = create_celery(flask_app)
# call the tasks, passing app=celery as a parameter
This all works fine locally (I've tried to remove code that isn't relevant to the Celery configuration). I haven't finished deploying to Heroku yet, because I remembered that when I install Heroku Data for Redis, it creates a REDIS_URL setting that I'd like to use.
I've been trying to change my config.py values to use REDIS_URL instead of the separate variables, but every time I try to run my Celery tasks, the connection fails unless I have distinct env values, as shown in my config.py above.
What I'd like to have in config.py would be this:
import os
from pathlib import Path
basedir = os.path.abspath(os.path.dirname(__file__))
class Config(object):
    REDIS_URL = os.environ.get("REDIS_URL", "redis://127.0.0.1:6379/0")
    CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", REDIS_URL)
    RESULT_BACKEND = os.environ.get("RESULT_BACKEND", REDIS_URL)
    CELERY_RESULT_BACKEND = RESULT_BACKEND
    CACHE_REDIS_URL = os.environ.get("CACHE_REDIS_URL", REDIS_URL)
    REDBEAT_REDIS_URL = os.environ.get("REDBEAT_REDIS_URL", REDIS_URL)
When I try this, remove all of the values from .env except REDIS_URL, and then run one of my Celery tasks, the task never runs. The Celery worker appears to start correctly, and the Flask-Caching requests work correctly (those run directly within the application rather than through the worker). The task never appears as received in the worker's debug logs, and eventually the server request times out.
Is there anything I can do to reuse REDIS_URL with Celery in this way? If not, is there something Heroku expects me to do to maintain the credentials/server path/etc. for where it serves Redis, given that I'm using the same Redis instance for several purposes like this?
By running my Celery worker with the -E flag, as in celery -A src.worker:celery worker -S redbeat.RedBeatScheduler --loglevel=INFO -E, I was able to figure out that the error was happening because Flask's instance of Celery, running under gunicorn, was not able to access the Celery config values that the worker was using.
What I've done to try to resolve this appears to have worked.
In extensions.py, instead of configuring Celery, I've done this, removing all other mentions of Celery:
from celery import Celery
celery = Celery('scraper') # a temporary name
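Task modules elsewhere in the package can then import this one shared instance. For illustration, a hypothetical task module (the ping task name is made up) would look like:

from src.extensions import celery

@celery.task
def ping():
    return 'pong'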
Then, on the same level, I created a celery.py:
from celery import Celery
from flask import Flask
from src import extensions
def configure_celery(app):
    TaskBase = extensions.celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    extensions.celery.conf.update(
        broker_url=app.config["CELERY_BROKER_URL"],
        result_backend=app.config["RESULT_BACKEND"],
        redbeat_redis_url=app.config["REDBEAT_REDIS_URL"],
    )
    extensions.celery.Task = ContextTask
    return extensions.celery
In worker.py, I'm doing:
from celery import Celery
from celery.schedules import schedule
from src.celery import configure_celery
from . import create_worker_app  # create_worker_app must still be imported here

flask_app = create_worker_app()
celery = configure_celery(flask_app)
I'm doing a similar thing in app.py:
from src.celery import configure_celery
app = create_app()
configure_celery(app)
As far as I can tell, this doesn't change how the worker behaves at all, but it allows me to access the tasks via blueprint endpoints in the browser; a sketch of such an endpoint follows.
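For illustration, a minimal sketch of such a blueprint endpoint (the bp blueprint and the app.tasks.scrape task name are hypothetical placeholders):

from flask import Blueprint, jsonify
from src.extensions import celery

bp = Blueprint('tasks', __name__)

@bp.route('/scrape')
def scrape():
    # send_task enqueues by registered task name, so this module
    # never needs to import the task function itself
    result = celery.send_task('app.tasks.scrape')
    return jsonify({'task_id': result.id})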
I found this technique in this article and its accompanying GitHub repo.
Hello, I'm trying to deploy my Flask app on Heroku. I'm using the flask_socketio module and the socket.io client in version 4.5 (I didn't know how to initialize it with io() in the 2.3.x version).
Here's my flask code:
from flask import Flask, render_template, url_for, redirect, session, request, jsonify
from flask_socketio import SocketIO
from flask_cors import CORS
app = Flask(__name__)
CORS(app)
socketio = SocketIO(app)
#secret key etc...
...
@socketio.on('message')
def handle_msg(msg):
    socketio.send('Syncing...')

if __name__ == '__main__':
    socketio.run(app)
Here's my javascript:
...
export const socket = io();
socket.connect('https://proman-code-cool.herokuapp.com/');
function init() {
    ...
    //live sync
    socket.on('message', function(msg) {
        console.log(msg);
        boardsManager.reloadBoards(userId);
    });
}
init();
What I want is to make real-time sync happen for other users after someone adds something. This works locally, but on Heroku I keep getting errors similar to:
When I changed my Procfile to web: gunicorn -k eventlet main:app, with eventlet 0.30.2 installed, I got errors like class uri 'eventlet' invalid or not found. I can't find a solution for that anywhere.
eventlet==0.30.2 in requirements.txt and python-3.8.13 in runtime.txt did the job!
Procfile: web: gunicorn --worker-class eventlet -w 1 main:app
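Putting those pieces together, the relevant deployment files are:

# requirements.txt (the relevant pin)
eventlet==0.30.2

# runtime.txt
python-3.8.13

# Procfile
web: gunicorn --worker-class eventlet -w 1 main:app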
I have a task of updating every single row of a MySQL table, but it's super slow. I rarely need to do it, and only when I change something fundamental, but I thought this would be a great chance to learn about multithreading. However, all the examples and tutorials online go over some things and not others, and I'm struggling to piece all the information together.
I know I need to make a Celery process, I just don't know if I'm doing it right. A lot of tutorials talk about dockerizing a Redis environment without explaining how to do it, so I thought I'd come here for some real human-to-human interaction to maybe help me feel less stupid about this. Here's my code so far:
/website/__init__.py
from flask import Flask, appcontext_popped, render_template
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager, UserMixin, login_user, login_required, logout_user, current_user
from flask_migrate import Migrate
from flask_wtf import CSRFProtect
import logging
import celery
#Path Math
import sys
import os
from . import config
db:SQLAlchemy = SQLAlchemy()
migrate = Migrate()
csrf = CSRFProtect()
celery: celery.Celery
DB_NAME = "main"
def create_app(name=None):  # default added: celery_worker.py calls create_app() with no argument
    #Flask Instance
    app = Flask(__name__)
    app.config.from_object(config.ProdTestConfig)
    # logging stuff

    #Database
    db.init_app(app)
    migrate.init_app(app, db)
    csrf.init_app(app)

    global celery
    celery = make_celery(app)

    with app.app_context():
        db.create_all()

    # Models and Blueprints here
    from .helper_functions import migration_handling as mgh
    #where you will find the thing I need to run async
    app.before_first_request(mgh.run_back_check)

    # log manager stuff
    #error page handling
    return app

def make_celery(app):
    # Use a local name other than `celery`: assigning to `celery` inside this
    # function would shadow the imported module and raise UnboundLocalError.
    celery_app = celery.Celery(
        app.import_name,
        backend=app.config['CELERY_RESULT_BACKEND'],
        broker=app.config['CELERY_BROKER_URL']
    )
    celery_app.conf.update(app.config)

    class ContextTask(celery_app.Task):
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)

    celery_app.Task = ContextTask
    return celery_app
I've read that some other approaches seem to fit a bit better, like using:
celery = Celery(__name__, broker=Config.CELERY_BROKER_URL, result_backend=Config.RESULT_BACKEND)
and then running celery.conf.update(app.config) inside create_app(). The issue with this is that I don't know how to set up a Redis server on the Linode machine hosting the site or on my personal Windows machine. I have redis pip-installed. This is how the function I'm trying to run async looks:
@celery.task(name='app.tasks.campaign_pay_out_process')
def campaign_pay_out_process():
    '''
    Process Every Campaigns Pay
    '''
    campaign: Campaigns
    for campaign in Campaigns.query.filter_by():
        campaign.process_pay()
    db.session.commit()
    current_app.logger.info('Done Campaign Pay Out Processing')
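On the "how do I set up a Redis server" point: the redis package you pip-installed is only the Python client; the server itself is a separate install. On a Debian/Ubuntu Linode (an assumption about your image), that is typically:

sudo apt install redis-server
sudo systemctl enable --now redis-server
redis-cli ping   # should answer PONG

On Windows there is no official native Redis build, which is why tutorials reach for Docker; docker run -p 6379:6379 redis gives you a local broker on the default port.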
I'm running gunicorn from supervisor because restarting is super easy, and ridding my life of super long Linux commands to start a process has been great. I know this is the command for Celery: celery -A celery_worker.celery worker --pool=solo --loglevel=info, and I'd love to know how to include that in my workflow (a matching supervisor block is sketched after my config below). Here's my supervisor config:
[program:paymentwebapp]
directory=/home/sai/paymentWebApp
command=/home/sai/paymentWebApp/venv/bin/gunicorn --workers 1 --threads 3 wsgi:app
user=sai
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
stderr_logfile=/var/log/paymentwebapp/paymentwebapp.err.log
stdout_logfile=/var/log/paymentwebapp/paymentwebapp.out.log
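To fold the Celery worker into the same supervisor workflow, a second program block mirroring the gunicorn entry should work; this is a sketch that assumes celery is installed in the same venv (adjust paths and the -A module to taste):

[program:paymentceleryworker]
directory=/home/sai/paymentWebApp
command=/home/sai/paymentWebApp/venv/bin/celery -A celery_worker.celery worker --pool=solo --loglevel=info
user=sai
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
stderr_logfile=/var/log/paymentwebapp/celeryworker.err.log
stdout_logfile=/var/log/paymentwebapp/celeryworker.out.log

Once that worker is running, code like run_back_check can enqueue the decorated task with campaign_pay_out_process.delay() instead of calling it inline.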
Here's my flask config right now:
from os import environ, path
from dotenv import load_dotenv
DB_NAME = "main"
class Config:
    """Base config."""
    #SESSION_COOKIE_NAME = environ.get('SESSION_COOKIE_NAME')
    MAX_CONTENT_LENGTH = 16*1000*1000
    RECEIPT_FOLDER = '../uploads/receipts'
    IMPORT_FOLDER = 'uploads/imports'
    UPLOAD_FOLDER = 'uploads'
    EXPORT_FOLDER = '/uploads/exports'
    UPLOAD_EXTENSIONS = ['.jpg', '.png', '.pdf', '.csv', '.xls', '.xlsx']
    STATIC_FOLDER = 'static'
    TEMPLATES_FOLDER = 'templates'

class ProdConfig(Config):
    basedir = path.abspath(path.dirname(__file__))
    load_dotenv('/home/sai/.env')
    env_dict = dict(environ)
    FLASK_ENV = 'production'
    DEBUG = False
    TESTING = False
    SQLALCHEMY_DATABASE_URI = environ.get('PROD_DATABASE_URI')
    SECRET_KEY = environ.get('SECRET_KEY')
    SERVER_NAME = environ.get('SERVER_NAME')
    SESSION_COOKIE_SECURE = True
    WTF_CSRF_TIME_LIMIT = 600
    #Uploads

class DevConfig(Config):
    basedir = path.abspath(path.dirname(__file__))
    # raw string so the backslashes in the Windows path survive
    load_dotenv(r'C:\saiscripts\intercept_branch\Payment Web App Project\.env')
    env_dict = dict(environ)
    FLASK_ENV = 'development'
    DEBUG = True
    SQLALCHEMY_DATABASE_URI = environ.get('DEV_DATABASE_URI')
    SECRET_KEY = environ.get('SECRET_KEY')

class ProdTestConfig(DevConfig):
    '''
    Developer config settings but production database server
    '''
    SQLALCHEMY_DATABASE_URI = environ.get('PROD_DATABASE_URI')

if __name__ == '__main__':
    print(environ.get('SQLALCHEMY_DATABASE_URI'))
This is my celery worker file (referenced as celery_worker in the command above), where I copied some code from a tutorial because I'm supposed to make a celery worker:
#!/usr/bin/env python
import os
#from app import create_app, celery
from website import create_app

app = create_app()
app.app_context().push()

# imported after create_app() so the module-level `celery` global has been set
from website import celery
I'm trying to add Celery to Django to schedule tasks. I use a Redis backend and connect via a unix socket. The setup was working until I tried adding password auth in redis.conf.
My settings.py:
CELERY_BROKER_URL = 'redis+socket:///home/username/domain/redis.sock?password=mypasswd'
CELERY_RESULT_BACKEND = 'redis+socket:///home/username/domain/redis.sock?password=mypasswd'
celery.py:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'public_python.settings')
# celery settings for the demo_project
app = Celery('public_python')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
Result:
[2021-07-19 21:22:14,003: ERROR/MainProcess] consumer: Cannot connect to redis+socket:///home/username/domain/redis.sock: Authentication required..
I have already tried adding:
CELERY_REDIS_PASSWORD = 'mypasswd'
(and any conceivable combination of similar words), with no luck.
The normal format using a network socket would look like this:
redis://:mypasswd@127.0.0.1:6379/0
For a file-based socket, you might want to try:
redis+socket://:mypasswd@/home/username/domain/redis.sock?virtual_host=0
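With that, the settings from the question would become:

CELERY_BROKER_URL = 'redis+socket://:mypasswd@/home/username/domain/redis.sock?virtual_host=0'
CELERY_RESULT_BACKEND = 'redis+socket://:mypasswd@/home/username/domain/redis.sock?virtual_host=0'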
I have a Flask app set up to use the gevent WSGIServer in my code. I am also running gunicorn on the command line to start the server.
Should I be using the WSGI server in the code when also running with gunicorn? It currently looks like this:
import os

from flask import Flask
from gevent.pywsgi import WSGIServer

application = Flask(__name__)

@application.route("/")
def hello():
    return "hello"

if __name__ == '__main__':
    port = int(os.environ.get('PORT', 5000))
    WSGIServer(('', port), application).serve_forever()
On the command line I am running gunicorn like:
gunicorn -w 4 myapp:application
Do I need the WSGIServer in my code, or should I just run it as application.run() on the default Flask server?
According to the Flask docs on Standalone WSGI Containers, gunicorn and gevent.pywsgi are both WSGI containers, and gunicorn only uses the entry point you name on the command line (here, the object called application).
So the code below if __name__ == '__main__': is never executed when running under gunicorn.
If you want to use gevent, you could do:
gunicorn -k gevent -w 4 myapp:application
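With that, the module gunicorn loads can shrink to just the app object; a __main__ block is only a convenience for local runs, and gunicorn never executes it. A sketch:

from flask import Flask

application = Flask(__name__)

@application.route("/")
def hello():
    return "hello"

if __name__ == '__main__':
    # only used when running `python myapp.py` directly, not under gunicorn
    application.run(port=5000)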