Running Gunicorn behind chrooted nginx inside virtualenv - django

I can get this setup to work if I start Gunicorn manually or if I add gunicorn to my Django INSTALLED_APPS. But when I try to start Gunicorn with systemd, the gunicorn socket and service start fine, but they don't serve anything to Nginx; I get a 502 Bad Gateway.
Nginx is running under the "http" user/group, in a chroot jail. I used pythonbrew to set up the virtualenvs, so Gunicorn is installed in my home directory under .pythonbrew. The virtualenv directory is owned by my user and the adm group.
I'm pretty sure there is a permission issue somewhere, because everything works if I start Gunicorn myself but not if systemd starts it. I've tried changing the User and Group directives inside the gunicorn.service file, but nothing worked: if root starts the server I get no errors and a 502; if my user starts it I get no errors and a 504.
I have checked the Nginx logs and there are no errors, so I'm fairly sure it's a Gunicorn issue. Should the virtualenv live inside the app directory? Who should own the app directory? How can I narrow down the issue?
/usr/lib/systemd/system/gunicorn-app.service
#!/bin/sh
[Unit]
Description=gunicorn-app
[Service]
ExecStart=/home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin/gunicorn_django
User=http
Group=http
Restart=always
WorkingDirectory = /home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin
[Install]
WantedBy=multi-user.target
/usr/lib/systemd/system/gunicorn-app.socket
[Unit]
Description=gunicorn-app socket
[Socket]
ListenStream=/run/unicorn.sock
ListenStream=0.0.0.0:9000
ListenStream=[::]:8000
[Install]
WantedBy=sockets.target
I realize this is kind of a sprawling question, but I'm sure I can pinpoint the issue with a few pointers. Thanks.
Update
I'm starting to narrow this down. When I run Gunicorn manually and then run ps aux | grep gunicorn, I see two processes: a master and a worker. But when I start Gunicorn with systemd, only one process is started. I tried adding Type=forking to my gunicorn.service file, but then I get an error when loading the service.
Does anyone know what I'm doing wrong here? Maybe Gunicorn isn't running under the virtualenv, or the venv isn't getting activated?

I had a similar problem on OS X with launchd.
The issue was that I needed to allow the process to spawn subprocesses.
Try adding Type=forking:
[Unit]
Description=gunicorn-app
[Service]
Type=forking
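For Type=forking to actually work, Gunicorn itself has to daemonize, and systemd needs a pid file to track the forked master. A minimal sketch reusing the asker's paths; the --daemon and --pid flags and the pid file location are my additions, not part of the original setup:
[Unit]
Description=gunicorn-app
[Service]
Type=forking
# With Type=forking the started process must fork and exit;
# gunicorn does that when given --daemon, and PIDFile lets
# systemd keep track of the forked master process.
PIDFile=/run/gunicorn-app.pid
ExecStart=/home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin/gunicorn_django --daemon --pid /run/gunicorn-app.pid
User=http
Group=http
Restart=always
WorkingDirectory=/home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin
[Install]
WantedBy=multi-user.target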

I know this isn't the best way, but I was able to get it working by adding gunicorn to Django's INSTALLED_APPS. Then I just created a new systemd service:
[Unit]
Description=hack way to start gunicorn and django
[Service]
User=http
Group=http
ExecStart=/srv/http/www/nlp.com/nlp/bin/python /srv/http/www/nlp.com/nlp/nlp/manage.py run_gunicorn
Restart=always
[Install]
WantedBy=multi-user.target
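For reference, the settings change that approach relies on is just adding gunicorn to INSTALLED_APPS; it uses Gunicorn's old Django integration, which provides the run_gunicorn management command (later Gunicorn releases removed it). A sketch of the relevant part of settings.py, with placeholder apps:
# settings.py -- enables "manage.py run_gunicorn"
# (requires an older Gunicorn release that still ships the Django integration)
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    # ... the rest of your apps ...
    'gunicorn',
)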
There must be a better way, but judging by the lack of responses not many people know what that better way is.

Related

Does the server keep working while I'm not connected? If yes, how can I do that?

I deployed my Django app on an Ubuntu server. I want to provide an API for a mobile application, so I followed some guides and deployed it. To deploy Django, I use Gunicorn and Nginx.
The server works with this command:
gunicorn --bind 0.0.0.0:8000 myapp.wsgi
I can serve the API this way; everything is okay.
But when I close the terminal session connected to the server, the server stops. Can the server keep running while I'm not connected from my computer? Do I have to keep the terminal and my computer on, or is there another way?
I fixed it! You just need to add --daemon to the command. For example:
gunicorn --bind 0.0.0.0:8000 myapp.wsgi --daemon
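Once Gunicorn is daemonized it is detached from your terminal, so to check on it or stop it later you have to find the process again; for example (assuming nothing else named gunicorn is running on the box):
# confirm the daemonized master and workers are still up
ps aux | grep gunicorn
# stop the daemonized server (there is no terminal left to Ctrl-C in)
pkill gunicorn
For anything long-lived, a systemd unit like the ones elsewhere on this page is more robust than --daemon, since it restarts the server on failure and starts it at boot.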

How to make djangoQ run as a service using systemd

How to make DjangoQ run as a service using systemd?
There is a gap: installing and running DjangoQ is much like learning to run a Django project.
But now you want to push a project that uses DjangoQ to production.
The problem is that you are not familiar with servers and Linux; maybe you are even coming from Windows...
So can somebody tell us how to run DjangoQ on Ubuntu using systemd?
I am getting the error that the ExecStart path is not absolute.
I came from here:
Stack Question
But that answer is not complete; too much tacit knowledge is left unshared.
[Unit]
Description= Async Tasks Runner
After=network.target remote-fs.target
[Service]
ExecStart=home/shreejvgassociates/ot_britain_college/env/bin/python manage.py qcluster --pythonpath home/shreejvgassociates/ot_britain_college/OverTimeCollege --settin$
RootDirectory=home/shreejvgassociates/ot_britain_college/OverTimeCollege
User=shreejvgassociates
Group=shreejvgassociates
Restart=always
[Install]
WantedBy=multi-user.target
This works well; try it. Note the absolute paths in ExecStart, which fix the "ExecStart path is not absolute" error:
[Unit]
Description=Django-Q Cluster for site TestSite
After=multi-user.target
[Service]
Type=simple
ExecStart=/home/test/TestSite/venv/bin/python3 /home/test/TestSite/manage.py qcluster
[Install]
WantedBy=multi-user.target
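Assuming the unit is saved as something like /etc/systemd/system/qcluster.service (the filename is my choice, not part of the answer), it is enabled with the usual systemd commands:
sudo systemctl daemon-reload
sudo systemctl enable --now qcluster.service
sudo systemctl status qcluster.service    # check that the cluster actually started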

The HTML file changed on server is not reflected

I deployed a Django project using DigitalOcean.
After checking with the server's IP address, the site was displayed.
However, there is a part I want to modify, so I edited some of the HTML in Django's template folder.
Even after reloading Nginx and checking again, the change was not reflected.
This is my first deployment, so I don't know the cause.
I would like to ask what causes this.
Is the HTML being displayed not coming from the template folder?
I would like to know how to fix it.
Postscript: here is my gunicorn service file.
[Unit]
Description=gunicorn daemon (apasn)
Requires=apasn.socket
After=network.target
[Service]
User=administrator
Group=www-data
WorkingDirectory=/home/administrator/apasn
ExecStart=/home/administrator/apasn/venv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn/apasn.sock \
person_manager.wsgi:application
[Install]
WantedBy=multi-user.target
Django caches templates, so restarting Nginx won't do anything by itself. There are a few things you can do:
First, try restarting gunicorn:
sudo systemctl restart gunicorn
See if that fixes it. If not, turn debug mode on with DEBUG = True in settings.py and restart Gunicorn; you will definitely see the changes at that point. Then turn debug mode back off with DEBUG = False and restart Gunicorn again.
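The caching referred to here is Django's cached template loader, which recent Django versions switch on automatically when DEBUG is False. Written out explicitly it looks roughly like the sketch below (an illustrative settings.py, not the poster's actual configuration); it also explains why restarting Gunicorn, rather than Nginx, makes the edits show up:
# settings.py (illustrative): with DEBUG = False the cached loader compiles each
# template once per worker process, so edited HTML only appears after the
# Gunicorn workers are restarted.
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [BASE_DIR / 'templates'],
        'OPTIONS': {
            'loaders': [
                ('django.template.loaders.cached.Loader', [
                    'django.template.loaders.filesystem.Loader',
                    'django.template.loaders.app_directories.Loader',
                ]),
            ],
        },
    },
]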

gunicorn: Is there a better way to reload gunicorn in described situation?

I have a Django project with Gunicorn and Nginx.
I'm deploying this project with SaltStack.
In this project, I have a config.ini file that the Django views read.
For Nginx, I set up a cmd.run state that runs service nginx restart with onchanges: file: nginx_conf, so the service restarts whenever nginx.conf changes.
But for Gunicorn, I can detect the change to config.ini; I just don't know how to reload Gunicorn.
When Gunicorn starts I pass the --reload option, but does that option detect changes to config.ini, or only to the Django project's own files?
If not, what command should I use (e.g. gunicorn reload)?
Thank you.
PS: I saw kill -HUP pid, but I don't think Salt would know Gunicorn's pid.
The --reload option looks for changes to the source code, not config files. And --reload shouldn't be used in production anyway.
I would either:
1) Tell Gunicorn to write a pid file with --pid /path/to/pid/file and then have Salt signal that pid, followed by a restart, or
2) Have Salt run pkill gunicorn followed by a restart.
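Option 1 in shell terms would look something like this; the paths are placeholders, not taken from the question:
# start gunicorn with a pid file that the master process maintains
gunicorn --pid /run/gunicorn/app.pid --bind unix:/run/gunicorn/app.sock project.wsgi:application
# later, ask the master to reload its configuration and gracefully replace its workers
kill -HUP "$(cat /run/gunicorn/app.pid)"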
Don't run shell commands to manage services; use service states:
/path/to/nginx.conf:
  file.managed:
    # ...

/path/to/config.ini:
  file.managed:
    # ...

nginx:
  service.running:
    - enable: true
    - watch:
      - file: /path/to/nginx.conf

django-app:
  service.running:
    - enable: true
    - reload: true
    - watch:
      - file: /path/to/config.ini
You may need to create a service definition for gunicorn yourself. Here's a very basic systemd example:
[Unit]
Description=My django app
After=network.target
[Service]
Type=notify
User=www-data
Group=www-data
WorkingDirectory=/path/to/source
ExecStart=/path/to/venv/bin/gunicorn project.wsgi:application
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=mixed
[Install]
WantedBy=multi-user.target
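Assuming the unit file is installed as /etc/systemd/system/django-app.service, matching the django-app service name used in the state above (the filename itself is my assumption), the reload wiring can be tested by hand:
sudo systemctl daemon-reload
sudo systemctl enable --now django-app.service
# this is what Salt's "reload: true" triggers when config.ini changes;
# it runs ExecReload, i.e. kill -HUP on the gunicorn master
sudo systemctl reload django-app.service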

Nginx: Permission denied to Gunicorn socket on CentOS 7

I'm working on a Django project deployment, on a CentOS 7 server provided by EC2 (AWS). I have tried to fix this bug in many ways, but I can't figure out what I am missing.
I'm using nginx and gunicorn to deploy my project. I have created my /etc/systemd/system/myproject.service file with the following content:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=centos
Group=nginx
WorkingDirectory=/home/centos/myproject_app
ExecStart=/home/centos/myproject_app/django_env/bin/gunicorn --workers 3 --bind unix:/home/centos/myproject_app/django.sock app.wsgi:application
[Install]
WantedBy=multi-user.target
When I run sudo systemctl restart myproject.service and sudo systemctl enable myproject.service, the django.sock file is correctly generated in /home/centos/myproject_app/.
I have created my nginx conf file in the folder /etc/nginx/sites-available/ with the following content:
server {
    listen 80;
    server_name my_ip;
    charset utf-8;
    client_max_body_size 10m;
    client_body_buffer_size 128k;

    # serve static files
    location /static/ {
        alias /home/centos/myproject_app/app/static/;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/centos/myproject_app/django.sock;
    }
}
After that, I restart nginx with the following command:
sudo systemctl restart nginx
If I run the command sudo nginx -t, the response is:
nginx: configuration file /etc/nginx/nginx.conf test is successful
When I visit my_ip in a web browser, I get a 502 Bad Gateway response.
If I check the nginx error log, I see the following message:
1 connect() to unix:/home/centos/myproject_app/django.sock failed (13: Permission denied) while connecting to upstream
I have really tried a lot of solutions, changing the sock file permissions, but I can't work out how to fix it. How can I fix this permissions bug? Thank you so much.
If all the permissions under the myproject_app folder are correct, and the centos user or nginx group has access to the files, I would say it looks like a Security-Enhanced Linux (SELinux) issue.
I had a similar problem, but with RHEL 7. I managed to solve it by executing the following command:
sudo semanage permissive -a httpd_t
It's related to SELinux's security policies; you have to add httpd_t to the list of permissive domains.
This post from the NGINX blog may be helpful: NGINX: SELinux Changes when Upgrading to RHEL 6.6 / CentOS 6.6
Motivated by a similar issue, I wrote a tutorial a while ago on How to Deploy a Django Application on RHEL 7. It should be very similar for CentOS 7.
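Before making the whole httpd_t domain permissive, it may be worth confirming that SELinux is actually the culprit; these are standard CentOS/RHEL commands rather than anything from the original answer:
getenforce                                  # Enforcing / Permissive / Disabled
sudo grep denied /var/log/audit/audit.log   # look for AVC denials mentioning the socket
sudo ausearch -m avc -ts recent             # same, if the audit tools are installed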
Most probably it is one of two things:
1- The directory /home/centos/myproject_app/ is not accessible to nginx:
$ ls -la /home/centos/myproject_app/
If it is not accessible, try moving things to a path like /etc/nginx.
2- If that's not it, try running the command directly:
$ /home/centos/myproject_app/django_env/bin/gunicorn --workers 3 --bind unix:/home/centos/myproject_app/django.sock app.wsgi:application
If it still doesn't work, activate the environment and run python manage.py runserver 0.0.0.0:8000, then open http://ip:8000 in a browser; the problem may be there. But if the gunicorn command works fine, then the problem is directory access for the nginx user.
Exact same problem here.
Removing Group=www-data fixed the issue for me