Nginx: Permission denied to Gunicorn socket on CentOS 7 - django

I'm working on a Django project deployment, on a CentOS 7 server provided by EC2 (AWS). I have tried to fix this bug in many ways but I can't understand what I'm missing.
I'm using nginx and gunicorn to deploy my project. I have created my /etc/systemd/system/myproject.service file with the following content:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=centos
Group=nginx
WorkingDirectory=/home/centos/myproject_app
ExecStart=/home/centos/myproject_app/django_env/bin/gunicorn --workers 3 --bind unix:/home/centos/myproject_app/django.sock app.wsgi:application
[Install]
WantedBy=multi-user.target
When I run sudo systemctl restart myproject.service and sudo systemctl enable myproject.service, the django.sock file is correctly generated in /home/centos/myproject_app/.
I have created my nginx conf file in the folder /etc/nginx/sites-available/ with the following content:
server {
listen 80;
server_name my_ip;
charset utf-8;
client_max_body_size 10m;
client_body_buffer_size 128k;
# serve static files
location /static/ {
alias /home/centos/myproject_app/app/static/;
}
location / {
include proxy_params;
proxy_pass http://unix:/home/centos/myproject_app/django.sock;
}
}
After, I restart nginx with the following command:
sudo systemctl restart nginx
If I run the command sudo nginx -t, the response is:
nginx: configuration file /etc/nginx/nginx.conf test is successful
When I visit my_ip in a web browser, I'm getting a 502 bad gateway response.
If I check the nginx error log, I see the following message:
1 connect() to unix:/home/centos/myproject_app/django.sock failed (13: Permission denied) while connecting to upstream
I really have tried a lot of solutions changing the sock file permissions, but I can't understand how to fix it. How can I fix this permissions bug? Thank you so much.

If all the permissions under the myproject_app folder are correct, and the centos user or the nginx group has access to the files, I would say it looks like a Security-Enhanced Linux (SELinux) issue.
I had a similar problem, but with RHEL 7. I managed to solve it by executing the following command:
sudo semanage permissive -a httpd_t
It's related to the security policies of SELinux: you have to add httpd_t to the list of permissive domains.
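Before switching the whole domain to permissive, you can confirm SELinux is actually the culprit (a quick check, assuming the audit package that provides ausearch is installed):
# see whether SELinux is currently enforcing
getenforce
# look for recent denials mentioning the socket
sudo ausearch -m avc -ts recent | grep denied
# temporarily go permissive; if the 502 disappears, SELinux is the cause
sudo setenforce 0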
This post from the NGINX blog may be helpful: NGINX: SELinux Changes when Upgrading to RHEL 6.6 / CentOS 6.6
Motivated by a similar issue, I wrote a tutorial a while ago on How to Deploy a Django Application on RHEL 7. It should be very similar for CentOS 7.

Most probably it's one of two things:
1- The directory /home/centos/myproject_app/ is not accessible to the nginx user:
$ ls -la /home/centos/myproject_app/
If it is not accessible, try changing the socket path to somewhere nginx can reach, for example under /etc/nginx.
2- The application itself is failing. Try running gunicorn by hand:
$ /home/centos/myproject_app/django_env/bin/gunicorn --workers 3 --bind unix:/home/centos/myproject_app/django.sock app.wsgi:application
If that still doesn't work, activate the virtualenv, run python manage.py runserver 0.0.0.0:8000 and open http://ip:8000 in the browser; the problem may be in the app itself. If the gunicorn command works fine, then the problem is directory access for the nginx user.
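As a rough sketch for checking access along the whole path (namei ships with util-linux; the nginx worker user name may differ on your system):
# show owner/permissions for every component of the path
namei -om /home/centos/myproject_app/django.sock
# check whether the nginx user can actually reach the socket
sudo -u nginx stat /home/centos/myproject_app/django.sock
# home directories are often mode 700 on CentOS; nginx needs execute
# permission on each parent directory to traverse down to the socket
sudo chmod o+x /home/centos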

Exact same problem here.
Removing Group=www-data fixed the issue for me
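For reference, a minimal sketch of that change against the unit file from the question (the Group line is simply dropped), followed by a reload:
[Service]
User=centos
WorkingDirectory=/home/centos/myproject_app
ExecStart=/home/centos/myproject_app/django_env/bin/gunicorn --workers 3 --bind unix:/home/centos/myproject_app/django.sock app.wsgi:application

sudo systemctl daemon-reload
sudo systemctl restart myproject.service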

Related

Debugging django log on NGINX - DigitalOcean

I have a Django application (droplet) on DigitalOcean but I have an issue showing information in a table.
Everything works on my local server, but when I deploy to the server on DigitalOcean I don't know where I can see the server's activity, like print output. I can see the gunicorn and nginx logs, but none of those logs show the Django activity.
What should I do?
Create your log directory:
mkdir logs in your app directory.
To get your Django request and error logs, you have to indicate the log directory in your nginx server config file.
/etc/nginx/sites-available/your-app-config:
server {
...
access_log /directory/to/your-app/logs/access.log;
error_log /directory/to/your-app/logs/error.log;
...
}
Ensure you have a symbolic link to the sites-enabled folder:
sudo ln -s /etc/nginx/sites-available/your-app /etc/nginx/sites-enabled/your-app
Check that your configuration settings are okay
sudo nginx -t
Reload Nginx
sudo systemctl reload nginx
Check your log files to read the request and error logs.
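For example (using the placeholder paths from the config above), you can follow both logs live while hitting the site:
tail -f /directory/to/your-app/logs/access.log /directory/to/your-app/logs/error.log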

Nginx unable to proxy django application through gunicorn

I'm trying to deploy a Django app on Ubuntu Server 18.04 using Nginx and Gunicorn. Every tool seems to work properly, at least from the logs and status point of view.
The point is that if I log into my server using SSH and try to use curl, gunicorn is able to see the request and handle it. However, if I enter my IP directly in the browser, I simply get the typical Welcome to nginx home page and the request is completely invisible to gunicorn, so it seems nginx is unable to locate and pass the request to the gunicorn socket.
I'm using nginx 1.14.0, Django 2.2.1, Python 3.6.7, gunicorn 19.9.0 and PostgreSQL 10.8.
This is my nginx config
server {
listen 80;
server_name localhost;
location /favicon.ico { access_log off; log_not_found off; }
location /static/ {
alias /home/django/myproject/myproject;
}
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
}
And these are my gunicorn.socket
[Unit]
Description=gunicorn socket
[Socket]
ListenStream=/run/gunicorn.sock
[Install]
WantedBy=sockets.target
and gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=django
Group=www-data
WorkingDirectory=/home/django/myproject/myproject
ExecStart=/home/django/myproject/myproject/myproject/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
MyProject.wsgi:application
[Install]
WantedBy=multi-user.target
I've been following this guide (https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04), where almost everything has worked as expected, with the difference that my project is not a completely new one as in the tutorial but cloned from my git repo (however, it's tested and the code works properly).
I was expecting the Django admin to be accessible from my browser already with this config, but it's not. I try to access my IP from my browser and I get Welcome to nginx, but also a 404 if I visit /admin. In addition, the gunicorn logs show no requests.
On the other hand, if I log into my server through SSH and execute curl --unix-socket /run/gunicorn.sock localhost, I can see the request made by curl in the gunicorn logs.
Any help is welcome. I've been at this for hours and I'm not able to get even one request from outside the server.
PS: it's also not related to the ports on the server, since when I access the root of my IP I receive the nginx answer. It just seems like nginx has no config at all.
In your nginx config, you should use your proper server_name instead of localhost:
server_name mydomain.com;
If not, you will fall back to the default nginx server, which returns the "welcome to nginx" message. You can change which virtual server is default by changing the order of servers, removing the nginx default, or using the default_server parameter. You can also listen to multiple server names.
listen 80 default_server;
server_name mydomain.com localhost;
If the Host header field does not match a server name, NGINX Plus routes the request to the default server for the port on which the request arrived. The default server is the first one listed in the nginx.conf file, unless you include the default_server parameter to the listen directive to explicitly designate a server as the default.
https://docs.nginx.com/nginx/admin-guide/web-server/web-server/#setting-up-virtual-servers
Remember that you have to reload the nginx config after making changes. sudo nginx -s reload for example.
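If you then run into a duplicate default server error, it is usually because the stock Ubuntu site also declares default_server; a sketch of the cleanup (assuming you don't need that default site) is:
sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t && sudo systemctl reload nginx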
Finally, I've got it working properly. You were right about the nginx config, although my real problem was that I had not deleted/modified the default nginx config file in the sites-enabled folder. Thus, when I set listen 80 default_server I got the following error:
[emerg] 10619#0: a duplicate default server for 0.0.0.0:80 in
/etc/nginx/sites-enabled/mysite.com:4
Anyway, I had a problem with the static files which I still don't understand: I needed to set DEBUG = True to be able to see the static files of the admin module.
I'll keep investigating the proper way of serving static files in production for the admin panel.
Thank you so much for the help!
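Regarding the static files issue: the usual production setup (a rough sketch; STATIC_ROOT and the nginx path below are placeholders to adjust) is to collect the admin static files into one directory and let nginx serve it, so DEBUG can stay False:
# settings.py
STATIC_URL = '/static/'
STATIC_ROOT = '/home/django/myproject/static/'

# collect Django's and the admin's static files into STATIC_ROOT
python manage.py collectstatic

# nginx
location /static/ {
    alias /home/django/myproject/static/;
}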

Django settings secret key environment variable 502 nginx

I have a Django webapp running on an Ubuntu server using nginx and gunicorn. I'm trying to get my settings.py set up properly with regard to using environment variables to hide secret information such as the SECRET_KEY as well as API keys.
I've tried putting export SECRET_KEY='secret_key' in .bashrc as well as .profile, and using SECRET_KEY = os.environ['SECRET_KEY'] in my settings.py file, but upon restarting gunicorn this throws a 502 Bad Gateway error page from nginx (with its version at the bottom). I'm not sure what else to try, as I'm pretty new to setting up servers.
I believe this is the init file for my gunicorn service:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=myuser
Group=www-data
WorkingDirectory=/home/myuser/myproject/mysite
ExecStart=/home/myuser/myproject/mysite/myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/myuser/myproject/mysite/mysite.sock mysite.wsgi:application
[Install]
WantedBy=multi-user.target
I found this error in the nginx error log when trying to request the site, where it gives the 502 bad gateway:
*20 connect() to unix:/home/myuser/myproject/mysite/mysite.sock failed (2: No such file or directory)
I solved this issue by putting my environment variables within the gunicorn.service file, located in /etc/systemd/system/, as export works only in the current shell.
Environment variables were entered in the file in the following format:
[Service]
Environment="SECRET_KEY=secret-key-string"

Can't reach Django default app via Gunicorn on AWS EC2 instance

I've been struggling with this problem for two days without success. I've created an instance of the default Django (1.6.1) app called "testdj", installed it on an Amazon AWS EC2 t1.micro instance running Ubuntu Server 13.10, and I'm trying to reach the default Django "It worked!" page via Gunicorn (v. 18). When I start gunicorn from the command line:
gunicorn testdj.wsgi:application --bind [ec2-public-dns]:8001
I can see the page when I enter this URL:
http://[ec2-public-dns]:8001
However, if I use a "start-gunicorn" bash script I created after reading Karzynski's blogpost "Setting Up Django with Nginx, Gunicorn, virtualenv, supervisor, and PostgreSQL", I always get an error. When I enter this URL...
http://[ec2-public-dns]
... I get this error:
Error 502 - Bad Request
The server could not resolve your request for uri: http://[ec2-public-dns]
Here is the start-gunicorn script:
#!/bin/bash
NAME="testdj"
DJANGODIR=/usr/share/nginx/html/testdj
SOCKFILE=/usr/share/nginx/html/testdj/run/gunicorn.sock
USER=testdj
GROUP=testdj
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=testdj.settings
DJANGO_WSGI_MODULE=testdj.wsgi
WORKON_HOME=/home/testdj/venv
source `which virtualenvwrapper.sh`
workon $NAME
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--access-logfile /tmp/gunicorn-access.log \
--error-logfile /tmp/gunicorn-error.log \
--log-level=debug \
--bind=unix:$SOCKFILE
As you can see, I've created a special account on my server called "testdj" to run the app under. I'm running my Django app in a virtual environment. I haven't changed the Django wsgi.py file at all. As I eventually want to use nginx as my reverse proxy, I've installed nginx and put the Django app in nginx's default root directory /usr/share/nginx/html. User/group www-data owns /usr/share/nginx and everything below except that user/group "testdj" owns /usr/share/nginx/html/testdj and everything below it. /usr/share/nginx/html/testdj and all its subdirectories have perms 775 and I've added www-data to the testdj group.
I do have nginx installed but I don't have the nginx service running. I did try starting it up and enabling an nginx virtual server using the following configuration file but the error still occurred.
upstream testdj_app_server {
server unix:/usr/share/nginx/html/testdj/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name ec2-[my-public-dns-ip].us-west-2.compute.amazonaws.com;
client_max_body_size 4G;
access_log /var/log/nginx/testdj-access.log;
error_log /var/log/nginx/testdj-error.log;
location /static/ {
alias /usr/share/nginx/html/testdj/static/;
}
location /media/ {
alias /usr/share/nginx/html/testdj/media/;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
# This must match "upstream" directive above
proxy_pass http://testdj_app_server;
break;
}
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /usr/share/nginx/html/testdj/static/;
}
}
The problem seems to be with gunicorn, because if I replace the "--bind=unix:$SOCKFILE" in my start-gunicorn script with "--bind=[ec2-public-dns]:8000", I can see the Django default page. However, I don't want to bind to my public DNS name on port 8000; I want to run on port 80 and use nginx as my front-end reverse proxy.
I initially had AWS inbound security group rules that limited access to the site to HTTP on ports 80, 8000, and 8001 to my laptop but even if I delete these rules and leave the site wide open, I still get the 502 error message.
My gunicorn access log doesn't show any activity and the only thing I see in the gunicorn error log is gunicorn starting up. When I access the default Django page, there are no errors in the error log:
2014-02-03 18:41:01 [19023] [INFO] Starting gunicorn 18.0
2014-02-03 18:41:01 [19023] [DEBUG] Arbiter booted
2014-02-03 18:41:01 [19023] [INFO] Listening at: unix:/usr/share/nginx/html/testdj/run/gunicorn.sock (19023)
2014-02-03 18:41:01 [19023] [INFO] Using worker: sync
2014-02-03 18:41:01 [19068] [INFO] Booting worker with pid: 19068
2014-02-03 18:41:01 [19069] [INFO] Booting worker with pid: 19069
2014-02-03 18:41:01 [19070] [INFO] Booting worker with pid: 19070
Does anyone know what's happening here? It doesn't look like I'm even getting to gunicorn. I apologize for the long post but there seems to be a lot of "moving parts" to this problem. I'd be very grateful for any help as I've tried many different things but all to no avail. I've also looked at other questions here where other people had similar problems but I didn't see anything pertinent to this problem. Thanks!
I repeated my configuration process line-for-line on my Linode server and had no problems at all. I have to assume that this problem has something to do with the way AWS EC2 instances are configured, probably with respect to security.
I was running into the same problem today. Daniel Roseman explained it to me in the comments here:
port 8000 is not open to the outside by default; you'd either need to
fiddle with your load balancer/firewall settings to open it, or run
gunicorn on port 80 (which will mean killing nginx and starting
gunicorn as the superuser). Much easier to just get the nginx settings
right; there is a perfectly usable configuration on the gunicorn
deploy docs page.
It seems it's possible to run Gunicorn directly, but EC2 isn't set up to do so by default.
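As a quick sanity check of gunicorn itself (assuming the security group allows the port), you can bind it to all interfaces instead of the public DNS name and hit it directly:
gunicorn testdj.wsgi:application --bind 0.0.0.0:8001
For the real deployment, though, keep gunicorn on the unix socket and let nginx listen on port 80, as in the config above.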

Running Gunicorn behind chrooted nginx inside virtualenv

I can get this setup to work if I start gunicorn manually or if I add gunicorn to my Django installed apps. But when I try to start gunicorn with systemd, the gunicorn socket and service start fine but they don't serve anything to nginx; I get a 502 Bad Gateway.
Nginx is running under the "http" user/group, in a chroot jail. I used pythonbrew to set up the virtualenvs, so gunicorn is installed in my home directory under .pythonbrew. The virtualenv directory is owned by my user and the adm group.
I'm pretty sure there is a permission issue somewhere, because everything works if I start gunicorn but not if systemd starts it. I've tried changing the user and group directives inside the gunicorn.service file, but nothing worked; if root starts the server I get no errors and a 502, and if my user starts it I get no errors and a 504.
I have checked the Nginx logs and there are no errors, so I'm sure it's a gunicorn issue. Should I have the virtualenv in the app directory? Who should be the owner of the app directory? How can I narrow down the issue?
/usr/lib/systemd/system/gunicorn-app.service
#!/bin/sh
[Unit]
Description=gunicorn-app
[Service]
ExecStart=/home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin/gunicorn_django
User=http
Group=http
Restart=always
WorkingDirectory = /home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin
[Install]
WantedBy=multi-user.target
/usr/lib/systemd/system/gunicorn-app.socket
[Unit]
Description=gunicorn-app socket
[Socket]
ListenStream=/run/unicorn.sock
ListenStream=0.0.0.0:9000
ListenStream=[::]:8000
[Install]
WantedBy=sockets.target
I realize this is kind of a sprawling question, but I'm sure I can pinpoint the issue with a few pointers. Thanks.
Update
I'm starting to narrow this down. When I run gunicorn manually and then run ps aux | grep gunicorn, I see two processes started: master and worker. But when I start gunicorn with systemd, only one process is started. I tried adding Type=forking to my gunicorn-app.service file, but then I get an error when loading the service. I thought that maybe gunicorn wasn't running under the virtualenv, or the venv isn't getting activated?
Does anyone know what I'm doing wrong here? Maybe gunicorn isn't running in the venv?
I had a similar problem on OSX with launchd.
The issue was that I needed to allow the process to spawn subprocesses.
Try adding Type=forking:
[Unit]
Description=gunicorn-app
[Service]
Type=forking
I know this isn't the best way, but I was able to get it working by adding gunicorn to the list of django INSTALLED_APPS. Then I just created a new systemd service:
[Unit]
Description=hack way to start gunicorn and django
[Service]
User=http
Group=http
ExecStart=/srv/http/www/nlp.com/nlp/bin/python /srv/http/www/nlp.com/nlp/nlp/manage.py run_gunicorn
Restart=always
[Install]
WantedBy=multi-user.target
There must be a better way, but judging by the lack of responses not many people know what that better way is.
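One cleaner sketch (the paths are the ones from the question and the WSGI module name is a placeholder) is to point ExecStart at the gunicorn binary inside the virtualenv, which avoids both gunicorn_django and the INSTALLED_APPS workaround:
[Unit]
Description=gunicorn-app
After=network.target

[Service]
User=http
Group=http
WorkingDirectory=/home/noel/.pythonbrew/venvs/Python-3.3.0/nlp
ExecStart=/home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin/gunicorn --workers 3 --bind unix:/run/gunicorn-app.sock yourproject.wsgi:application
Restart=always

[Install]
WantedBy=multi-user.target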