Gunicorn Socket file is missing - django

I am running a Django project and I want Gunicorn and Nginx to serve this project.
I am using Gunicorn 19.5.
The problem is basically that my socket file doesn't get created. I have tried everything (chmod 777, chown, ...) and nothing works.
Here is my bash script to start gunicorn:
#!/bin/bash
NAME="hello_project" # Name of the application
DJANGODIR=/home/ubuntu/git/hello_project # Django project directory
SOCKFILE=/home/ubuntu/git/hello_project/run/gunicorn.sock # we will communicate using this unix socket
LOG_FILE=/home/ubuntu/git/hello_project/logs/gunicorn.log
USER=ubuntu # the user to run as
GROUP=ubuntu # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
MAX_REQUESTS=1 # reload the application server for each request
DJANGO_SETTINGS_MODULE=hello_project.settings # which settings file should Django use
DJANGO_WSGI_MODULE=hello_project.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source ~/.virtualenvs/hello_project/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn’t exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec ~/.virtualenvs/hello_project/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--max-requests $MAX_REQUESTS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=debug \
--log-file=$LOG_FILE
I am going crazy here: there is no log file I can look at, and I even tried creating the socket file myself, with no success. I suspect it's a permission problem, but even when I run the script with sudo it doesn't work.
My directory "run" gets created though.
Here is the nginx error from the error.log file:
2016/05/19 16:56:32 [crit] 11053#0: *28 connect() to unix:/home/ubuntu/git/hello_project/run/gunicorn.sock failed (2: No such file or directory) while connecting to upstream...
Any clue?
Thanks a lot.

Try running the script without the full path exec ~/.virtualenvs/hello_project/bin/gunicorn.
Instead, try running it with plain gunicorn followed by the args.
This assumes you have permission to install gunicorn on the system outside a virtualenv.
I have a gist that starts the same process you're trying to run. (Django, Nginx, Gunicorn)
https://gist.github.com/marcusshepp/129c822e2065e20122d8
I see you're using supervisor, so you won't want to use the --daemon option that I use in my script.
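For example, a direct foreground run along these lines (flags and paths taken from the question's script; a sketch, not a drop-in fix) prints any startup error straight to the terminal instead of failing silently:
cd /home/ubuntu/git/hello_project
gunicorn hello_project.wsgi:application \
--name hello_project \
--workers 3 \
--bind=unix:/home/ubuntu/git/hello_project/run/gunicorn.sock \
--log-level=debug
If the socket file still doesn't appear, the error printed here usually says why.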

Related

H13 (Connection closed without response) errors on Heroku scale down

I am running a Django app in a Docker image with uWSGI, supervisor and nginx on Heroku.
I am often getting H13 (Connection closed without response) errors when the app is scaling down.
This problem generates the following log events:
2022-10-12T09:35:13.231318+00:00 heroku web.3 - - State changed from up to down
2022-10-12T09:35:13.774228+00:00 heroku web.3 - - Stopping all processes with SIGTERM
2022-10-12T09:35:14.028602+00:00 heroku router - - at=error code=H13 desc="Connection closed without response" method=GET path="/comments/api/assets-uuidasset/xxxx-xxxx-xxxx-xxxx-xxxxx/count/?_=1665564563"
I expect the problem lies either in the socket not closing on the SIGTERM signal, or in nginx closing ungracefully on SIGTERM (it should receive SIGQUIT for a graceful shutdown), or something similar.
The first case is described in this article regarding Puma and Ruby: https://www.schneems.com/2019/07/12/puma-4-hammering-out-h13sa-debugging-story/
The second case is described here: https://canonical.com/blog/avoiding-dropped-connections-in-nginx-containers-with-stopsignal-sigquit
After three weeks of work I was finally able to fix this issue.
Short answer:
Avoid using Heroku to run Docker images if you can.
Heroku sends SIGTERM to ALL processes in the dyno, which is something that is very hard to deal with. You will need to patch almost every process inside the Docker container to cope with SIGTERM and terminate nicely.
The standard way of terminating a Docker container is the docker stop command, which sends SIGTERM ONLY to the root process (the entrypoint), where it can be handled.
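For comparison, a plain Docker shutdown looks like this (the container name is hypothetical):
# SIGTERM goes to PID 1 only; SIGKILL follows after the grace period
docker stop --time 30 my_container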
Heroku has a very arbitrary process for terminating the instance that is incompatible with existing applications as well as existing Docker image deployments. And according to my communication with Heroku, they are unable to change this in the future.
Long answer:
There was not one single issue but 5 separate issues.
In order to terminate the instance successfully, the following conditions need to be fulfilled:
Nginx has to terminate first and start last (so the Heroku router stops sending requests; this is similar to Puma), and it has to be graceful, which is usually done with the SIGQUIT signal.
The other applications need to terminate gracefully in the correct order - in my case first Nginx, then Gunicorn, and PGBouncer last. The order of terminating the applications is important - e.g. PGBouncer must terminate after Gunicorn so as not to interrupt running SQL queries.
The docker-entrypoint.sh needs to catch the SIGTERM signal. This didn't show up when I was testing locally.
In order to achieve this I had to deal with every application separately:
Nginx:
I had to patch Nginx to switch the SIGTERM and SIGQUIT signals, so I run the following command in my Dockerfile:
# Compile nginx and patch it to switch SIGTERM and SIGQUIT signals
RUN curl -L http://nginx.org/download/nginx-1.22.0.tar.gz -o nginx.tar.gz \
&& tar -xvzf nginx.tar.gz \
&& cd nginx-1.22.0 \
&& sed -i "s/ QUIT$/TIUQ/g" src/core/ngx_config.h \
&& sed -i "s/ TERM$/QUIT/g" src/core/ngx_config.h \
&& sed -i "s/ TIUQ$/TERM/g" src/core/ngx_config.h \
&& ./configure --without-http_rewrite_module \
&& make \
&& make install \
&& cd .. \
&& rm nginx-1.22.0 -rf \
&& rm nginx.tar.gz
Issue I created
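A quick way to check that the patched binary behaves as intended (paths as in the Dockerfile above; a sketch, not part of the original setup):
/usr/local/nginx/sbin/nginx # start the patched build
pkill -TERM nginx # with the patch, TERM now triggers the graceful shutdown
# (in a stock build, QUIT is the graceful shutdown and TERM is the fast exit)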
uWSGI/Gunicorn:
I gave up on uWSGI and switched to Gunicorn (which terminates gracefully on SIGTERM), but I had to patch it anyway in the end, because it needs to terminate later than Nginx. I disabled the SIGTERM signal and mapped its function onto SIGUSR1.
My patched version is here: https://github.com/PetrDlouhy/gunicorn/commit/1414112358f445ce714c5d4f572d78172b993b79
I install it with (the git config line is needed for Dockerfile.test only, until the next version of Dulwich is released):
RUN poetry run pip install -e git+https://github.com/PetrDlouhy/gunicorn#no_sigterm#egg=gunicorn[gthread] \
&& cd `poetry env info -p`/src/gunicorn/ \
&& git config core.repositoryformatversion 0 \
&& cd /project
Issue I created
PGBouncer:
I also deployed PGBouncer, which I had to modify so it does not react to SIGTERM:
# Compile pgbouncer and patch it to switch SIGTERM and SIGQUIT signals
RUN curl -L https://github.com/pgbouncer/pgbouncer/releases/download/pgbouncer_1_17_0/pgbouncer-1.17.0.tar.gz -o pgbouncer.tar.gz \
&& tar -xvzf pgbouncer.tar.gz \
&& cd pgbouncer-1.17.0 \
&& sed -i "s/got SIGTERM, fast exit/PGBouncer got SIGTERM, do nothing/" src/main.c \
&& sed -i "s/ exit(1);$//g" src/main.c \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm pgbouncer-1.17.0 -rf \
&& rm pgbouncer.tar.gz
It still can be brought down gracefully with SIGINT.
Issue I created
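A similar sketch for checking the patched PGBouncer behaviour (process name assumed):
pkill -TERM pgbouncer # should only log "PGBouncer got SIGTERM, do nothing" and keep running
pkill -INT pgbouncer # SIGINT still performs the graceful shutdown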
docker-entrypoint.sh
I had to trap SIGTERM in my docker-entrypoint.sh with:
_term() {
echo "Caught SIGTERM signal. Do nothing here, because Heroku already sent signal everywhere."
}
trap _term SIGTERM
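For context, a minimal shape for the rest of that entrypoint (the supervisord config path is an assumption). The wait loop matters because wait returns early whenever a trapped signal arrives:
supervisord -c /etc/supervisord.conf & # hypothetical config path
child=$!
# wait returns as soon as SIGTERM is trapped, so loop until supervisord really exits
while kill -0 "$child" 2>/dev/null; do
wait "$child"
done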
supervisor
In order not to receive R12 errors, all processes need to terminate within Heroku's 30-second graceful period. I achieved this by setting priorities in supervisord.conf (supervisord starts programs in ascending priority order and stops them in descending order, so nginx stops first and PGBouncer last):
[supervisord]
nodaemon=true
[program:gunicorn]
command=poetry run newrelic-admin run-program gunicorn wsgi:application -c /etc/gunicorn/gunicorn.conf.py
priority=2
stopsignal=USR1
...
[program:nginx]
command=/usr/local/nginx/sbin/nginx -c /etc/nginx/nginx.conf
priority=3
...
[program:pgbouncer]
command=/usr/local/bin/pgbouncer /project/pgbouncer/pgbouncer.ini
priority=1
stopsignal=INT
...
Testing the solutions:
In order to test what was going on, I had to develop some testing techniques which might come in handy in different but similar cases.
I created a view which waits 10 seconds before answering and bound it to the /slow_view URL.
Then I started the server in a Docker instance, made a request to the slow view with curl -I "http://localhost:8080/slow_view", opened a second connection to the Docker instance, and executed the kill command with pkill -SIGTERM . or e.g. pkill -SIGTERM gunicorn.
I could also run the kill command on a testing Heroku dyno, connecting with heroku ps:exec --dyno web.1 --app my_app.
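The same experiment can be scripted; a rough sketch (the URL, the sleep, and the container name are assumptions):
#!/bin/bash
# Fire a slow request, then deliver SIGTERM mid-flight to look for H13-style drops
curl -sI "http://localhost:8080/slow_view" & # the 10-second view
sleep 2 # let the request reach gunicorn
docker kill --signal=SIGTERM slow_view_test # hypothetical container name
wait # curl should still receive a full response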

Problem in running (flask+) gunicorn using supervisor: no application module specified

Hi all. I've got a weird problem running gunicorn with supervisor to serve a Flask application. Before stating my problem, I just want to say that I'm aware of this SO post. Nevertheless, my problem is that I can run gunicorn directly in the foreground to serve the app; the problem only occurs when I use supervisor.
Here's the structure of my flask application:
myproject/
|---webapp/
| |---__init__.py # create_app function is here
| |--- ... # other files in webapp
|---wsgi.py
|---manage.py
|---venv # gunicorn is in venv/bin
The wsgi.py file looks like the following
from webapp import create_app
app = create_app()
if __name__ == "__main__":
    app.run()
Sitting in the myproject directory, I can launch the app by running
$ venv/bin/gunicorn -w 4 wsgi:app
Then I created a webapp.conf in /etc/supervisor/conf.d/ directory, as follows.
[program:webapp]
directory=/home/<my username>/projects/myproject
command=/home/<my username>/projects/myproject/venv/bin/gunicorn -w 4 wsgi:app
user=<my username>
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=<some path>
stderr_logfile=<some path>
When I reloaded my supervisor, the stdout log file said
usage: gunicorn [OPTIONS] [APP_MODULE]
gunicorn: error: No application module specified.
My question is, why do I get an error here if I don't get an error when running gunicorn directly? What's the difference?
By the way, I then tried using systemd to run gunicorn, following this post, and systemd launches the gunicorn service normally.
Any idea what I did wrong with supervisor? Thanks very much.

zabbix agent service failed, PID not readable

I'm trying to run zabbix-agent 3.0.4 on CentOS 7. systemd fails to start the Zabbix agent; from journalctl -xe:
PID file /run/zabbix/zabbix_agentd.pid not readable (yet?) after start.
node=localhost.localdomain type=SERVICE_START msg=audit(1475848200.601:17994): pid=1 uid=0 auid=4294967298 ses=...
zabbix-agent.service never wrote its PID file. Failing.
Failed to start Zabbix Agent.
There is no permission error, and I tried re-configuring the PID path to the /tmp folder in zabbix-agent.service and zabbix_agentd.conf; it doesn't work.
Very weird. Does anyone have an idea? Thank you in advance.
=====
Investigating a little: the PID file should be under the /run/zabbix folder. I created zabbix_agentd.pid manually, and it disappears after 1 second. Really weird.
I had the same issue and it was related to SELinux, so I allowed zabbix_agent_t via semanage:
yum install policycoreutils-python
semanage permissive -a zabbix_agent_t
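If you want to confirm SELinux is the culprit before changing anything, the audit log will show the denial (standard CentOS tools; a sketch):
getenforce # "Enforcing" means SELinux may be blocking the PID file
ausearch -m avc -ts recent | grep zabbix # look for denials against zabbix_agent_t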
Giving full permissions (7777) to that PID file will help resolve the issue.
I had this too and it was SELinux. It was disabled, but I still had to run the command.
That worked for me.
Prerequisites: CentOS 7, zabbix-server 3.4 and zabbix-agent 3.4 running on the same host.
Solution steps:
Install zabbix-server and zabbix-agent (no matter how - via yum or by building from source code).
First check whether separate zabbix users already exist in /etc/passwd. If they do, go to step 4.
Create separate groups and users for zabbix-server and zabbix-agent.
Example (you can specify usernames as you wish):
groupadd zabbix-agent
useradd -g zabbix-agent zabbix-agent
groupadd zabbix
useradd -g zabbix zabbix
Specify PID and LOG file location in Zabbix config files. Example:
For zabbix-server: in /etc/zabbix/zabbix_server.conf:
PidFile=/run/zabbix/zabbix_server.pid
LogFile=/var/log/zabbix/zabbix_server.log
For zabbix-agent: in /etc/zabbix/zabbix_agentd.conf:
PidFile=/run/zabbix-agent/zabbix-agent.pid
LogFile=/var/log/zabbix-agent/zabbix-agent.log
Create the appropriate directories (if they weren't created previously) as specified in the config files, and change the owners of these directories:
mkdir /var/log/zabbix-agent
mkdir /run/zabbix-agent
chown zabbix-agent:zabbix-agent /var/log/zabbix-agent
chown zabbix-agent:zabbix-agent /run/zabbix-agent
mkdir /var/log/zabbix
mkdir /run/zabbix
chown zabbix:zabbix /var/log/zabbix
chown zabbix:zabbix /run/zabbix
Check the systemd configs for the zabbix services and add User= and Group= entries in the [Service] section, specifying the accounts the services will run under. Example:
For zabbix-server: /etc/systemd/system/multi-user.target.wants/zabbix-server.service:
[Unit]
Description=Zabbix Server
After=syslog.target
After=network.target
[Service]
Environment="CONFFILE=/etc/zabbix/zabbix_server.conf"
EnvironmentFile=-/etc/sysconfig/zabbix-server
Type=forking
Restart=on-failure
PIDFile=/run/zabbix/zabbix_server.pid
KillMode=control-group
ExecStart=/usr/sbin/zabbix_server -c $CONFFILE
ExecStop=/bin/kill -SIGTERM $MAINPID
RestartSec=10s
TimeoutSec=0
User=zabbix
Group=zabbix
[Install]
WantedBy=multi-user.target
For zabbix-agent: /etc/systemd/system/multi-user.target.wants/zabbix-agent.service:
[Unit]
Description=Zabbix Agent
After=syslog.target
After=network.target
[Service]
Environment="CONFFILE=/etc/zabbix/zabbix_agentd.conf"
EnvironmentFile=-/etc/sysconfig/zabbix-agent
Type=forking
Restart=on-failure
PIDFile=/run/zabbix-agent/zabbix-agent.pid
KillMode=control-group
ExecStart=/usr/sbin/zabbix_agentd -c $CONFFILE
ExecStop=/bin/kill -SIGTERM $MAINPID
RestartSec=10s
User=zabbix-agent
Group=zabbix-agent
[Install]
WantedBy=multi-user.target
If there are no such configs, you can find them in:
/usr/lib/systemd/system/
OR
enable the zabbix-agent.service service, thereby creating a symlink in the /etc/systemd/system/multi-user.target.wants/ directory to /usr/lib/systemd/system/zabbix-agent.service
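After creating or editing unit files, make systemd pick up the changes before starting anything:
systemctl daemon-reload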
Run services:
systemctl start zabbix-server
systemctl start zabbix-agent
Check the users under which the services were started (first column):
ps -aux | grep zabbix
or via the top command.
Disable SELinux and Firewalld and you're good to go

Upstart/uWSGI/Flask doesn't log exceptions

I have the following config file for Upstart, and it starts the Flask server fine, but whenever there is an exception in the app the log file doesn't have the exception information.
start on runlevel [2345]
stop on runlevel [06]
respawn
script
cd /var/www/binary-fission/server
export BF_CONFIG=config/staging.py
exec uwsgi --http 0.0.0.0:5000 --wsgi-file server.py --callable app --master --threads 2 --processes 4 --logto /var/log/binary-fission/server.log
end script
However, if I run the same uwsgi command manually without Upstart, the exception is logged.
How do I make upstart+uwsgi log the exception from a Flask application?
It turned out that turning on the "PROPAGATE_EXCEPTIONS" option in the Flask configuration file (config/staging.py) fixed the issue. This is because that configuration file turns "DEBUG" off, which turns "PROPAGATE_EXCEPTIONS" off at the same time.
When I ran the uwsgi command manually, I didn't specify the configuration file, so my Flask app fell back to the default configuration with "DEBUG" on.
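In other words, something along these lines in the staging config keeps DEBUG off while still letting tracebacks reach uWSGI's --logto file (a sketch; the question doesn't show the file's actual contents):
# append to config/staging.py: keep DEBUG off but propagate exceptions to the uWSGI log
cat >> config/staging.py <<'EOF'
PROPAGATE_EXCEPTIONS = True
EOF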

uWSGI works as process but not as daemon

For my current flask deployment, I had to set up a uwsgi server.
This is how I have created the uwsgi daemon:
sudo vim /etc/init/uwsgi.conf
# file: /etc/init/uwsgi.conf
description "uWSGI server"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /myproject/myproject-env/bin/uwsgi --uid www-data --gid www-data --home /myproject/myproject-env/site/F11/Engineering/ --socket /tmp/uwsgi.sock --chmod-socket --module F11 --callable app --pythonpath /myproject/myproject-env/site/F11/Engineering/ -H /myproject/myproject-env
However after running this successfully: sudo start uwsgi
uwsgi start/running, process 1286
And when trying to access the application via the browser,
I get a 502 Bad Gateway
and an error entry in nginx error.log:
2013/06/13 23:47:28 [error] 743#0: *296 upstream prematurely closed connection while reading response header from upstream, client: xx.161.xx.228, server: myproject.com, request: "GET /show_records/2013/6 HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock:", host: "myproject.com"
But the sock file has the permissions it needs:
srw-rw-rw- 1 www-data www-data 0 Jun 13 23:46 /tmp/uwsgi.sock
If I run the exec command from above in the command line as a process, it works perfectly fine. Why is the daemon not working correctly please?
btw Nginx is running as
vim /etc/nginx/nginx.conf
user www-data;
and vim /etc/nginx/sites-available/default
location / {
uwsgi_pass unix:///tmp/uwsgi.sock;
include uwsgi_params;
}
and it is started as sudo service nginx start
I am running this on Ubuntu 12.04 LTS.
I hope I have provided all the necessary data, hope someone can guide me into the right direction. Thanks.
I finally solved this problem after working on it for nearly 2 days. I hope this solution helps other flask/uwsgi users who are experiencing a similar problem.
I had two major issues that caused this.
1) The best way to find problems with a daemon is obviously a log file and a cleaner structure.
sudo vim /etc/init/uwsgi.conf
Change the daemon script to the following:
# file: /etc/init/uwsgi.conf
description "uWSGI server"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /home/ubuntu/uwsgi-1.9.12/uwsgi -c /myproject/uwsgi.ini
vim /myproject/uwsgi.ini
[uwsgi]
socket = /tmp/uwsgi.sock
master = true
enable-threads = true
processes = 5
chdir= /myproject/F11/Engineering
module=F11:app
virtualenv = /myproject/myproject-env/
uid = www-data
gid = www-data
logto = /myproject/error.log
This is a much cleaner way of setting up the daemon. Also notice the last line, which shows how to set up the log file. Initially I had set the log file to /var/log/uwsgi/error.log. After a lot of sweat and tears I realized the daemon runs as www-data and hence cannot access /var/log/uwsgi/error.log, since error.log was owned by root:root. This made uwsgi fail silently.
I found it much more efficient to just point the log file to my own /myproject, where the daemon has guaranteed access as www-data. And don't forget to make the whole project accessible to www-data, or the daemon will fail with an Internal Server Error message:
sudo chown www-data:www-data -R /myproject/
Restart uwsgi daemon:
sudo service uwsgi restart
2) Now you have three log files to lookout for:
tail -f /var/log/upstart/uwsgi.log --> Shows problems with your daemon upon start
tail -f /var/log/nginx/error.log --> Shows permission problems when uwsgi access is refused, often because the /tmp/uwsgi.sock file is owned by root instead of www-data. In that case, simply delete the sock file: sudo rm /tmp/uwsgi.sock
tail -f /myproject/error.log --> Shows errors thrown by uwsgi in your application
This combination of log files helped me figure out that I also had a bad import of Flask-Babel in my Flask application. Bad in the sense that the way I used the library fell back to the system's locale to determine the datetime format.
File "/myproject/F11/Engineering/f11_app/templates/show_records.html", line 25, in block "body"
<td>{{ record.record_date|format_date }}</td>
File "./f11_app/filters.py", line 7, in format_date
day = babel_dates.format_date(value, "EE")
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 459, in format_date
return pattern.apply(date, locale)
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 702, in apply
return self % DateTimeFormat(datetime, locale)
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 699, in __mod__
return self.format % other
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 734, in __getitem__
return self.format_weekday(char, num)
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 821, in format_weekday
return get_day_names(width, context, self.locale)[weekday]
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 69, in get_day_names
return Locale.parse(locale).days[context][width]
AttributeError: 'NoneType' object has no attribute 'days'
This is the way I was using the Flask filter:
import babel.dates as babel_dates

@app.template_filter('format_date')
def format_date(value):
    day = babel_dates.format_date(value, "EE")
    return '{0} {1}'.format(day.upper(), affix(value.day))
The strangest part is that this code works perfectly fine in the dev environment (!). It even works fine when running uwsgi as a root process from the command line. But it fails when run by the www-data daemon. This must have something to do with how the locale is set, which Flask-Babel tries to fall back to.
When I changed the import like this, it all finally worked with the daemon:
from flask.ext.babel import format_date

@app.template_filter('format_date1')
def format_date1(value):
    day = format_date(value, "EE")
    return '{0} {1}'.format(day.upper(), affix(value.day))
Hence be careful when using Eclipse/Aptana Studio, which tries to pick the right namespace for the classes in your code. It can really turn ugly.
It has now been working perfectly as a uwsgi daemon on an Amazon EC2 instance (Ubuntu 12.04) for 2 days. I hope this experience helps fellow python developers.
I gave up; there was no uwsgi.log generated, and nginx just kept complaining about:
2014/03/06 01:06:28 [error] 23175#0: *22 upstream prematurely closed connection while reading response header from upstream, client: client.IP, server: my.server.IP, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:/var/web/the_gelatospot/uwsgi.sock:", host: "host.ip"
for every single request. This only happened when running uwsgi as a service; as a process it started fine under any user. So this would work from the command line (the page responds):
$exec /var/web/the_gelatospot/mez_server.sh
This didn't (/etc/init/site_service.conf):
description "mez sites virtualenv and uwsgi_django" start on runlevel
[2345] stop on runlevel [06] respawn respawn limit 10 5 exec
/var/web/the_gelatospot/mez_server.sh
The process would start, but on each request nginx would complain about the closed connection. Strangely, I have this same config working just fine for 2 other apps using the same nginx and uwsgi versions; both are Mezzanine CMS apps. I tried everything I could think of and everything that was suggested. In the end I switched to gunicorn, which works fine:
#!/bin/bash
NAME="the_gelatospot" # Name of the application
DJANGODIR=/var/web/the_gelatospot # Django project directory
SOCKFILE=/var/web/the_gelatospot/gunicorn.sock # we will communicate using this unix socket
USER=ec2-user
GROUP=ec2-user # the user to run as, the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=settings
#DJANGO_SETTINGS_MODULE=the_gelatospot.settings # which settings file should Django use
#DJANGO_WSGI_MODULE=the_gelatospot.wsgi # WSGI module name
DJANGO_WSGI_MODULE=wsgi
echo "Starting $NAME as `the_gelatospot`"
# Activate the virtual environment
cd $DJANGODIR
source ../mez_env/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
cd ..
# Start your Django GUnicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec gunicorn -k eventlet ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--log-level=debug \
--bind=unix:$SOCKFILE
And here is the one that wouldn't work as a service (nginx complaining about a prematurely closed connection, with no app log data coming through):
#!/bin/bash
DJANGODIR=/var/web/the_gelatospot/ # Django project directory
cd $DJANGODIR
source ../mez_env/bin/activate
uwsgi --ini uwsgi.ini
And the uwsgi.ini:
[uwsgi]
uid = 222
gid = 500
socket = /var/web/the_gelatospot/uwsgi.sock
virtualenv = /var/web/mez_env
chdir = /var/web/the_gelatospot/
wsgi-file = /var/web/the_gelatospot/wsgi.py
pythonpath = ..
env = DJANGO_SETTINGS_MODULE=the_gelatospot.settings
die-on-term = true
master = true
chmod-socket = 666
;experiment using uwsgitop
worker = 1
;gevent = 100
processes = 1
daemonize = /var/log/nginx/uwsgi.log
logto = /var/log/nginx/uwsgi.log
log-maxsize = 10000000
enable-threads = true
I went from gunicorn to uWSGI last year and had no issues with it until now; it also seemed a bit faster than gunicorn. Right now I'm thinking of sticking with gunicorn. It keeps getting better, puts up much nicer numbers with eventlet installed, and is easier to configure.
Hope this workaround helps. I would still like to know what the issue is with uWSGI and nginx, but I'm stumped.
Update:
So using gunicorn allowed me to run the server as a service, and while toying with Mezzanine I ran into this error: FileSystemEncodingChanged
To fix this I found the solution here:
https://groups.google.com/forum/#!msg/mezzanine-users/bdln_Y99zQw/9HrhNSKFyZsJ
And I had to modify it a bit since I don't use supervisord; I only use upstart and a shell script. I added this right before executing gunicorn in my mez_server.sh file:
export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_LANG=en_US.UTF-8
exec gunicorn -k eventlet ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--log-level=debug \
--bind=unix:$SOCKFILE
I also tried this fix using uWSGI like so (a lot shorter since it uses uwsgi.ini):
export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_LANG=en_US.UTF-8
exec uwsgi --ini uwsgi.ini
I am still sticking with gunicorn, since it kept working despite the problem and led me in the right direction to resolve it. I was very disappointed that uWSGI provided no output in the log file even with these params; I only saw the server start process and that's it:
daemonize = /var/log/nginx/uwsgi.log
logto = /var/log/nginx/uwsgi.log
While nginx kept throwing the disconnect error, uWSGI sat there as if nothing were happening.
To run it as a single line in daemon mode:
gunicorn app.wsgi:application -b 127.0.0.1:8000 --daemon
This binds your app to 127.0.0.1:8000, and --daemon forces it to run as a daemon.
But defining a gunicorn_config file and running with the -c flag is good practice.
For more:
https://gunicorn-docs.readthedocs.org/en/develop/configure.html
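For instance, the -c pattern might look like this (the file contents are illustrative, written from the shell to keep everything in one snippet; bind, workers and daemon are standard gunicorn settings):
cat > gunicorn_config.py <<'EOF'
bind = "127.0.0.1:8000"
workers = 3
daemon = True
EOF
gunicorn app.wsgi:application -c gunicorn_config.py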