Setting Flask-SocketIO to run in Gunicorn Server - flask

I am trying to run the SocketIO server under Gunicorn on Ubuntu, using:
gevent
gevent-socketio
gunicorn
run.py
from views import app
from gevent import monkey
from socketio.server import SocketIOServer
import werkzeug.serving

monkey.patch_all()

PORT = 5000

@werkzeug.serving.run_with_reloader
def runServer():
    print 'Listening on http://127.0.0.1:%s and on port 10843 (flash policy server)' % PORT
    SocketIOServer(('', PORT), app, resource="socket.io").serve_forever()

if __name__ == '__main__':
    runServer()
Command Used -
gunicorn --worker-class socketio.sgunicorn.GeventSocketIOWorker run:app
The connections seem to be getting dropped and reconnected often.
Logs -
2013-12-24 22:40:59 [3331] [INFO] Starting gunicorn 0.13.4
2013-12-24 22:40:59 [3331] [INFO] Listening at: http://127.0.0.1:8000 (3331)
2013-12-24 22:40:59 [3331] [INFO] Using worker: socketio.sgunicorn.GeventSocketIOWorker
2013-12-24 22:40:59 [3334] [INFO] Booting worker with pid: 3334
2013-12-24 22:41:01 [3339] [INFO] Starting gunicorn 0.13.4
2013-12-24 22:41:01 [3339] [ERROR] Connection in use: ('127.0.0.1', 8000)
2013-12-24 22:41:01 [3339] [ERROR] Retrying in 1 second.
2013-12-24 22:41:02 [3339] [ERROR] Connection in use: ('127.0.0.1', 8000)
2013-12-24 22:41:02 [3339] [ERROR] Retrying in 1 second.
2013-12-24 22:41:03 [3339] [ERROR] Connection in use: ('127.0.0.1', 8000)
2013-12-24 22:41:03 [3339] [ERROR] Retrying in 1 second.
2013-12-24 22:41:04 [3339] [ERROR] Connection in use: ('127.0.0.1', 8000)
2013-12-24 22:41:04 [3339] [ERROR] Retrying in 1 second.

You can comment out the @werkzeug.serving.run_with_reloader decorator and it should work: the Werkzeug reloader re-executes the program, so a second Gunicorn master starts (pid 3339 in your log) and tries to bind the same port, which is what produces the repeated 'Connection in use' errors.
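The 'Connection in use' retries in the log are the classic symptom of two processes racing to bind one port. A minimal stdlib illustration of that failure mode (unrelated to gevent or Gunicorn themselves):

```python
import socket

# Simulate what happens when the reloader re-executes the program:
# two processes end up trying to bind the same address and port.
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))          # first process binds successfully
port = s1.getsockname()[1]

s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s2.bind(("127.0.0.1", port))   # second process fails: address in use
    in_use = False
except OSError:
    in_use = True
finally:
    s2.close()
    s1.close()

print(in_use)  # True: the second bind raises EADDRINUSE
```

Gunicorn handles this by retrying every second, which is exactly the loop visible in the log above.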

Related

Call method once when Flask app started despite many Gunicorn workers

I have a simple Flask app that is started with Gunicorn with 4 workers.
I want to clear and warm up the cache when the server restarts. But when I do this inside the create_app() method, it executes 4 times (once per worker).
import threading

from flask import Flask

def create_app(test_config=None):
    app = Flask(__name__)
    # ... different configuration here
    t = threading.Thread(target=reset_cache, args=(app,))
    t.start()
    return app
[2022-10-28 09:33:33 +0000] [7] [INFO] Booting worker with pid: 7
[2022-10-28 09:33:33 +0000] [8] [INFO] Booting worker with pid: 8
[2022-10-28 09:33:33 +0000] [9] [INFO] Booting worker with pid: 9
[2022-10-28 09:33:33 +0000] [10] [INFO] Booting worker with pid: 10
2022-10-28 09:33:36,908 INFO webapp reset_cache:38 Clearing cache
2022-10-28 09:33:36,908 INFO webapp reset_cache:38 Clearing cache
2022-10-28 09:33:36,908 INFO webapp reset_cache:38 Clearing cache
2022-10-28 09:33:36,909 INFO webapp reset_cache:38 Clearing cache
How can I make this run only once, without using any queues, rq-workers or Celery?
Signals, a mutex, some special check of the worker id (but it is always dynamic)?
I haven't found any solution so far.
I used Redis locks for that.
Here is an example using flask-caching, which I had in the project, but you can obtain the client from wherever you have a Redis client available:
import time

from webapp.models import cache  # cache = flask_caching.Cache()

def reset_cache(app):
    with app.app_context():
        client = app.extensions["cache"][cache]._write_client  # redis client
        lock = client.lock("warmup-cache-key")
        locked = lock.acquire(blocking=False, blocking_timeout=1)
        if locked:
            app.logger.info("Clearing cache")
            cache.clear()
            app.logger.info("Warming up cache")
            # function call here with `cache.set(...)`
            app.logger.info("Completed warmup cache")
            # time.sleep(5)  # add some delay if procedure is really fast
            lock.release()
It can be easily extended with threads, loops or whatever you need to set value to cache.
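If Redis is not available, the same run-once behaviour can be sketched with an atomic lock-file create, since all Gunicorn workers are forked on the same host and share the filesystem. This is an alternative technique, not part of the answer above; run_once and the lock path are hypothetical names:

```python
import os

def run_once(lock_path):
    """Return True for the first caller to create lock_path, False otherwise."""
    try:
        # O_CREAT | O_EXCL is atomic: exactly one process can create the file.
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

# Usage inside create_app(): only the first worker warms the cache.
# if run_once("/tmp/warmup-cache.lock"):
#     reset_cache(app)
```

Note the file has to be removed on each deploy or restart (e.g. in a start-up hook), otherwise no worker will run the warmup after the first boot; the Redis lock above avoids that by expiring.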

Django Login suddenly stopped working - timing out

My Django project worked perfectly fine for the last 90 days.
There has been no new code deployment during this time.
I am running supervisor -> gunicorn to serve the application, with nginx in front.
Unfortunately it just stopped serving the login page (the standard framework login).
I wrote a small view that checks if the DB connection is working and it comes up within seconds.
def updown(request):
    from django.shortcuts import HttpResponse
    from django.db import connections
    from django.db.utils import OperationalError
    status = True
    # Check database connection
    if status is True:
        db_conn = connections['default']
        try:
            c = db_conn.cursor()
        except OperationalError:
            status = False
            error = 'No connection to database'
        else:
            status = True
    if status is True:
        message = 'OK'
    elif status is False:
        message = 'NOK' + ' \n' + error
    return HttpResponse(message)
This delivers back an OK.
But the second I try to reach /admin or anything else requiring the login, it times out.
wget http://127.0.0.1:8000
--2022-07-20 22:54:58-- http://127.0.0.1:8000/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... 302 Found
Location: /business/dashboard/ [following]
--2022-07-20 22:54:58-- http://127.0.0.1:8000/business/dashboard/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... 302 Found
Location: /account/login/?next=/business/dashboard/ [following]
--2022-07-20 22:54:58-- http://127.0.0.1:8000/account/login/?next=/business/dashboard/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... No data received.
Retrying.
--2022-07-20 22:55:30-- (try: 2) http://127.0.0.1:8000/account/login/?next=/business/dashboard/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response...
Supervisor / Gunicorn Log is not helpful at all:
[2022-07-20 23:06:34 +0200] [980] [INFO] Starting gunicorn 20.1.0
[2022-07-20 23:06:34 +0200] [980] [INFO] Listening at: http://127.0.0.1:8000 (980)
[2022-07-20 23:06:34 +0200] [980] [INFO] Using worker: sync
[2022-07-20 23:06:34 +0200] [986] [INFO] Booting worker with pid: 986
[2022-07-20 23:08:01 +0200] [980] [CRITICAL] WORKER TIMEOUT (pid:986)
[2022-07-20 23:08:02 +0200] [980] [WARNING] Worker with pid 986 was terminated due to signal 9
[2022-07-20 23:08:02 +0200] [1249] [INFO] Booting worker with pid: 1249
[2022-07-20 23:12:26 +0200] [980] [CRITICAL] WORKER TIMEOUT (pid:1249)
[2022-07-20 23:12:27 +0200] [980] [WARNING] Worker with pid 1249 was terminated due to signal 9
[2022-07-20 23:12:27 +0200] [1515] [INFO] Booting worker with pid: 1515
Nginx is just giving:
502 Bad Gateway
I don't see anything in the logs, I don't see any error when running the Django dev server, and Sentry is not showing anything either. Totally lost.
I am running Django 4.0.x and all libraries are updated.
The check-up script was only checking the database connection. Due to a misconfiguration of the database replication, the db was connecting and reading fine, but writes hung.
The login page tries to write a session to the tables, which is what failed in this case.
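A health check that also exercises a write would have caught this. The idea, illustrated here with the stdlib sqlite3 module (in Django you would run the same statements through connections['default'].cursor(); the table name is an arbitrary choice):

```python
import sqlite3

def check_db_write(conn):
    """Return True if the database accepts writes, not just reads."""
    try:
        cur = conn.cursor()
        # A throwaway table exercises the write path end to end.
        cur.execute("CREATE TABLE IF NOT EXISTS _healthcheck (id INTEGER)")
        cur.execute("INSERT INTO _healthcheck (id) VALUES (1)")
        cur.execute("DELETE FROM _healthcheck")
        conn.commit()
        return True
    except sqlite3.OperationalError:
        return False

conn = sqlite3.connect(":memory:")
print(check_db_write(conn))  # True when writes succeed
```

In a replicated setup this catches the case above, where reads are served but writes block on the broken replica.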

Flask SocketIO connection failed on production

I just deployed my flask application on the server, after checking that socketio works well with the regular run server and also with gunicorn and eventlet locally.
Now that the app is deployed, it runs well when I open any page over plain HTTP, like the API and so on.
But when I try to connect to the websockets, the browser console shows the following error:
Firefox can’t establish a connection to the server at ws://server-ip/chat/socket.io/?EIO=4&transport=websocket&sid=QClYLXcK0D0sSVYNAAAM.
This is my frontend using the socketio CDN:
<script src="https://cdn.socket.io/4.3.2/socket.io.min.js" integrity="sha384-KAZ4DtjNhLChOB/hxXuKqhMLYvx3b5MlT55xPEiNmREKRzeEm+RVPlTnAn0ajQNs" crossorigin="anonymous"></script>
var socket = io.connect('http://server-ip/chat/send/', {"path" : "/chat/socket.io"});
I set "path" here to the correct socket.io URL. If I remove it and just use the plain URL, it gives:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://37.76.245.93/socket.io/?EIO=4&transport=polling&t=NrcpeSQ. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
So I added the path to point it at the correct URL, but then it cannot connect over ws, as shown above.
I am using this command on the server to run flask:
gunicorn --worker-class eventlet -w 1 --bind 0.0.0.0:8000 --timeout 500 --keep-alive 500 wsgi:app
and this is my wsgi file
from chat import app
from dotenv import load_dotenv, find_dotenv
from flask_socketio import SocketIO
from messages.socket import socket_handle
load_dotenv(find_dotenv())
app = app(settings="chat.settings.dev")
socket = SocketIO(app, cors_allowed_origins=app.config['ALLOWED_CORS'])
socket_handle(socket)
The 'socket_handle' function just registers the join and message handler functions with the socket decorators. I think something is preventing the server from working over ws, but I don't know what.
I know that this needs to run as ASGI, not WSGI, but as the socketio docs say, I think using eventlet should solve this. I also tried replacing my wsgi.py file with this:
from chat import app
from dotenv import load_dotenv, find_dotenv
from flask_socketio import SocketIO
from messages.socket import socket_handle
from asgiref.wsgi import WsgiToAsgi
load_dotenv(find_dotenv())
apps = app(settings="chat.settings.dev")
socket = SocketIO(apps, cors_allowed_origins=apps.config['ALLOWED_CORS'])
socket_handle(socket)
asgi_app = WsgiToAsgi(apps)
And when I run the Gunicorn command I get this:
gunicorn --worker-class eventlet -w 1 --bind 0.0.0.0:8000 --timeout 500 --keep-alive 500 wsgi:asgi_app
[2021-11-28 16:17:42 +0200] [39043] [INFO] Starting gunicorn 20.1.0
[2021-11-28 16:17:42 +0200] [39043] [INFO] Listening at: http://0.0.0.0:8000 (39043)
[2021-11-28 16:17:42 +0200] [39043] [INFO] Using worker: eventlet
[2021-11-28 16:17:42 +0200] [39054] [INFO] Booting worker with pid: 39054
[2021-11-28 16:17:47 +0200] [39054] [ERROR] Error handling request /socket.io/?EIO=4&transport=polling&t=NrcwBTe
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chat-Tb0n1QCf/lib/python3.9/site-packages/gunicorn/workers/base_async.py", line 55, in handle
self.handle_request(listener_name, req, client, addr)
File "/root/.local/share/virtualenvs/chat-Tb0n1QCf/lib/python3.9/site-packages/gunicorn/workers/base_async.py", line 108, in handle_request
respiter = self.wsgi(environ, resp.start_response)
TypeError: __call__() missing 1 required positional argument: 'send'
^C[2021-11-28 16:17:48 +0200] [39043] [INFO] Handling signal: int
[2021-11-28 16:17:48 +0200] [39054] [INFO] Worker exiting (pid: 39054)
[2021-11-28 16:17:48 +0200] [39043] [INFO] Shutting down: Master
I am using the latest flask & socketio versions.
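The TypeError in the traceback follows from the two calling conventions: Gunicorn is a WSGI server and invokes the app with two arguments, while the object WsgiToAsgi returns is an ASGI app expecting three, so the 'send' argument is missing. A sketch of the two signatures with toy callables (not the real chat app):

```python
def wsgi_app(environ, start_response):
    # WSGI: called with (environ, start_response); returns an iterable of bytes.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

async def asgi_app(scope, receive, send):
    # ASGI: called with (scope, receive, send). Gunicorn only passes two
    # arguments, hence "__call__() missing 1 required positional argument: 'send'".
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b"ok"})
```

So wsgi:asgi_app cannot be served by plain Gunicorn; the WSGI app with the eventlet worker is the supported route here, with the proxy in front forwarding the WebSocket upgrade headers.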

Supervisor/Gunicorn/Django: supervisor unable to RUN gunicorn (stuck on STARTING)

I am trying to deploy my Django project using Nginx/Gunicorn and supervisor.
When I run gunicorn directly it works:
(envCov) zebra#zebra:~/intensecov_app/intensecov$ gunicorn coverage.wsgi:application
[2020-05-27 09:41:59 +0000] [45637] [INFO] Starting gunicorn 20.0.4
[2020-05-27 09:41:59 +0000] [45637] [INFO] Listening at: http://127.0.0.1:8000 (45637)
[2020-05-27 09:41:59 +0000] [45637] [INFO] Using worker: sync
[2020-05-27 09:41:59 +0000] [45639] [INFO] Booting worker with pid: 45639
The issue came when I tried to use supervisor, after writing the config below.
I ran these 3 commands:
(envCov) zebra#zebra:~/intensecov_app/intensecov$ sudo supervisorctl reread
intensecov-gunicorn: available
(envCov) zebra#zebra:~/intensecov_app/intensecov$ sudo supervisorctl update
intensecov-gunicorn: added process group
(envCov) zebra#zebra:~/intensecov_app/intensecov$ sudo supervisorctl status
intensecov-gunicorn STARTING
As you can see, the gunicorn program is STARTING but never RUNNING.
I tried to restart it manually but got an error:
(envCov) zebra#zebra:~/intensecov_app/intensecov$ sudo supervisorctl restart intensecov-gunicorn
intensecov-gunicorn: stopped
intensecov-gunicorn: ERROR (spawn error)
/etc/supervisor/conf.d/intensecov-gunicorn.conf
[program:intensecov-gunicorn]
command = /home/zebra/envs/envCov/bin/gunicorn coverage.wsgi:application
user = zebra
directory = /home/zebra/intensecov_app
autostart = true
autorestart = true
I fixed my issue by changing the directory path:
[program:intensecov-gunicorn]
command = /home/zebra/envs/envCov/bin/gunicorn coverage.wsgi:application
user = zebra
directory = /home/zebra/intensecov_app/intensecov  ; path modified
autostart = true
autorestart = true
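When supervisor reports ERROR (spawn error), the underlying cause is usually visible once the program's output is captured. These are standard supervisor logging directives; the log paths are hypothetical:

```ini
[program:intensecov-gunicorn]
command = /home/zebra/envs/envCov/bin/gunicorn coverage.wsgi:application
user = zebra
directory = /home/zebra/intensecov_app/intensecov
autostart = true
autorestart = true
; capture gunicorn's stdout/stderr so spawn failures leave a trace
stdout_logfile = /var/log/supervisor/intensecov-gunicorn.out.log
stderr_logfile = /var/log/supervisor/intensecov-gunicorn.err.log
```

A wrong directory, as in this question, then shows up directly in the stderr log instead of only as a bare spawn error.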

connect() to unix:///tmp/uwsgi_dev.sock failed (111: Connection refused) while connecting to upstream

I'm running a django application with uwsgi + Nginx, started via the crontab command given below:
* * * * * /usr/local/bin/lockrun --lockfile /path/to/lock_file -- uwsgi --close-on-exec -s /path/to/socket_file --chdir /path/to/project/settings/folder/ --pp .. -w project_name.wsgi -C666 -p 3 -H /path/to/virtualenv/folder/ 1>> /path/to/log_file 2>> /path/to/error_log
but the nginx error log file shows the error:
*83 connect() to unix:///path/to/socket_file failed (111: Connection refused) while connecting to upstream, client: xxx.xxx.xx.xxx, server:
localhost, request: "GET /auth/status/ HTTP/1.1", upstream:
"uwsgi://unix:///path/to/socket_file:", host: "xx.xxx.xx.xxx",
referrer: "http://xx.xxx.xx.xxx/"
Check if the .sock file is created:
ls /path/to/socket_file
If it is created, check its permissions:
ls -l /path/to/socket_file
Check where the socket file is actually created; because of the chdir option it may have been created somewhere else:
lsof | grep socket_file
If those are fine, can you share your nginx config details?
It is simpler and better to use a file to supply the configuration parameters.
Sample uwsgi.ini file:
[uwsgi]
print = -------------------------------------------------
print = Hello UWSGI is ready to start...
print = -------------------------------------------------
# the base directory (full path)
chdir = /path/to/chdir/
#Application's wsgi file/module
module = wsgi:application
# process-related settings
master = true
# maximum number of worker processes
processes = 8
socket = /path/to/myuwsgi.sock
pidfile = /opt/uwsgi/mywsgi.pid
chmod-socket = 660
logto2 = /log/path/uwsgi.log
To apply these while uwsgi is starting up, run:
uwsgi --ini uwsgi.ini
Alternatively, to direct logs to a specific file:
uwsgi <your other parameters> --logto2 /log/path/uwsgi.log
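On the nginx side, the upstream in the error message corresponds to a uwsgi_pass pointing at the same socket. A minimal matching location block, with the placeholder paths from above:

```nginx
location / {
    include uwsgi_params;
    # Must match the "socket =" path in uwsgi.ini; nginx's user also
    # needs permission on the socket (hence chmod-socket = 660 above).
    uwsgi_pass unix:/path/to/myuwsgi.sock;
}
```

If the paths or permissions on the two sides disagree, nginx reports exactly the "connect() to unix:... failed (111: Connection refused)" error shown in the question.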