Django settings SECRET_KEY environment variable causes 502 with nginx and gunicorn

I have a Django webapp running on an Ubuntu server with nginx and gunicorn. I'm trying to set up settings.py properly so that secret information such as the SECRET_KEY and API keys comes from environment variables.
I've tried putting export SECRET_KEY='secret_key' in .bashrc as well as .profile, and using SECRET_KEY = os.environ['SECRET_KEY'] in my settings.py file, but after restarting gunicorn this gives a 502 Bad Gateway error page from nginx. I'm not sure what else to try, as I'm pretty new to setting up servers.
I believe this is the init file for my gunicorn service:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=myuser
Group=www-data
WorkingDirectory=/home/myuser/myproject/mysite
ExecStart=/home/myuser/myproject/mysite/myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/myuser/myproject/mysite/mysite.sock mysite.wsgi:application
[Install]
WantedBy=multi-user.target
I found this error in the nginx error log when trying to request the site, where it gives the 502 bad gateway:
*20 connect() to unix:/home/myuser/myproject/mysite/mysite.sock failed (2: No such file or directory)

I solved this issue by putting my environment variables in the gunicorn.service file, located in /etc/systemd/system/, since export only affects the current shell session, not services started by systemd.
The environment variables go in the [Service] section in the following format:
[Service]
Environment="SECRET_KEY=secret-key-string"


Django model save not working in daemon mode but working with runserver

I clone a GitHub repo to the server once a user adds it; see this model:
class Repo(models.Model):
    url = models.CharField(help_text='github repo cloneable', max_length=600)

    def save(self, *args, **kwargs):
        # os.system('git clone https://github.com/somegithubrepo.git')
        os.system('git clone {}'.format(self.url))
        super(Repo, self).save(*args, **kwargs)
Everything works as I want on both the local server and a remote server (a DigitalOcean droplet): when I add a public GitHub repo, the clone always succeeds.
It works when I run the server like this: python3 manage.py runserver 0.0.0.0:800
But when I run it in daemon mode with gunicorn and nginx, it doesn't work.
Everything else works, even saving the data to the database; it just doesn't clone in daemon mode. What's wrong with it?
this is my gunicorn.service
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/var/www/myproject
Environment="PATH=/root/.local/share/virtualenvs/myproject-9citYRnS/bin"
ExecStart=/usr/local/bin/pipenv run gunicorn --access-logfile - --workers 3 --bind unix:/var/www/myproject/config.sock -m 007 myproject.wsgi:application
[Install]
WantedBy=multi-user.target
Note again: everything works, even gunicorn, and the data is saved to the database. Only the GitHub repo is not cloned, and no error is raised.
What is wrong here? Can anyone please help me fix this issue?
In the systemd unit file, you overwrite the root user's PATH variable with the Environment directive, replacing it with only the virtualenv's path. The root user therefore loses the usual entries in PATH, e.g. /usr/bin, where the git command usually lives, so the git clone silently fails. You need to use the absolute path of git, e.g. /usr/bin/git, or add git's bin directory back to PATH, for example:
Environment="PATH=/usr/bin:/root/.local/share/virtualenvs/myproject-9citYRnS/bin"

The HTML file changed on the server is not reflected

I deployed a Django project using DigitalOcean.
After visiting the server's IP address, the site was displayed.
However, there is a part I want to modify, so I edited some HTML files in Django's templates folder.
Even after reloading nginx and checking again, the changes were not reflected.
Since this is my first deployment, I don't know the cause.
Does this mean that the displayed HTML is not being served from the templates folder?
I would like to know how to fix it.
Postscript: here is my gunicorn unit file.
[Unit]
Description=gunicorn daemon (apasn)
Requires=apasn.socket
After=network.target
[Service]
User=administrator
Group=www-data
WorkingDirectory=/home/administrator/apasn
ExecStart=/home/administrator/apasn/venv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn/apasn.sock \
person_manager.wsgi:application
[Install]
WantedBy=multi-user.target
Django caches templates, and nginx only proxies to gunicorn, so restarting nginx won't do anything by itself. There are a few things you can do:
First, try restarting gunicorn:
sudo systemctl restart gunicorn
See if that fixes it. If not, switch debug mode on with DEBUG = True in settings.py and then restart gunicorn. You will definitely see the changes at that point. Then, turn debug mode back off with DEBUG = False and then restart gunicorn again.

gunicorn: Is there a better way to reload gunicorn in described situation?

I have a django project with gunicorn and nginx.
I'm deploying this project with saltstack
In this project, I have a config.ini file that django views read.
For nginx, I set up a cmd.run service nginx restart state with - onchanges: - file: nginx_conf, so the service restarts whenever nginx.conf changes.
For gunicorn, I can detect the change to config.ini, but I don't know how to reload gunicorn.
I start gunicorn with the --reload option, but does that option detect changes to config.ini, or only to the Django project's files?
If not, what command should I use (e.g. gunicorn reload)?
Thank you.
P.S. I saw kill -HUP pid, but I don't think salt would know gunicorn's pid.
The --reload option looks for changes to the source code, not to config files. And --reload shouldn't be used in production anyway.
I would either:
1) Tell gunicorn to write a pid file with --pid /path/to/pid/file and then get salt to kill the pid followed by a restart.
2) Get salt to run a pkill gunicorn followed by a restart.
Don't run shell commands to manage services, use service states.
/path/to/nginx.conf:
  file.managed:
    # ...

/path/to/config.ini:
  file.managed:
    # ...

nginx:
  service.running:
    - enable: True
    - watch:
      - file: /path/to/nginx.conf

django-app:
  service.running:
    - enable: True
    - reload: True
    - watch:
      - file: /path/to/config.ini
You may need to create a service definition for gunicorn yourself. Here's a very basic systemd example:
[Unit]
Description=My django app
After=network.target
[Service]
Type=notify
User=www-data
Group=www-data
WorkingDirectory=/path/to/source
ExecStart=/path/to/venv/bin/gunicorn project.wsgi:application
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=mixed
[Install]
WantedBy=multi-user.target

Nginx: Permission denied to Gunicorn socket on CentOS 7

I'm working on a Django project deployment, on a CentOS 7 server provided by EC2 (AWS). I have tried to fix this bug in many ways, but I can't understand what I am missing.
I'm using nginx and gunicorn to deploy my project. I have created my /etc/systemd/system/myproject.service file with the following content:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=centos
Group=nginx
WorkingDirectory=/home/centos/myproject_app
ExecStart=/home/centos/myproject_app/django_env/bin/gunicorn --workers 3 --bind unix:/home/centos/myproject_app/django.sock app.wsgi:application
[Install]
WantedBy=multi-user.target
When I run sudo systemctl restart myproject.service and sudo systemctl enable myproject.service, the django.sock file is correctly generated in /home/centos/myproject_app/.
I have created my nginx conf file in the folder /etc/nginx/sites-available/ with the following content:
server {
    listen 80;
    server_name my_ip;
    charset utf-8;
    client_max_body_size 10m;
    client_body_buffer_size 128k;

    # serve static files
    location /static/ {
        alias /home/centos/myproject_app/app/static/;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/centos/myproject_app/django.sock;
    }
}
After, I restart nginx with the following command:
sudo systemctl restart nginx
If I run the command sudo nginx -t, the response is:
nginx: configuration file /etc/nginx/nginx.conf test is successful
When I visit my_ip in a web browser, I'm getting a 502 bad gateway response.
If I check the nginx error log, I see the following message:
1 connect() to unix:/home/centos/myproject_app/django.sock failed (13: Permission denied) while connecting to upstream
I have really tried a lot of solutions, changing the sock file's permissions, but I can't work out how to fix it. How can I fix this permissions bug? Thank you so much.
If all the permissions under the myproject_app folder are correct, and the centos user or nginx group has access to the files, I would say it looks like a Security-Enhanced Linux (SELinux) issue.
I had a similar problem, but with RHEL 7. I managed to solve it by executing the following command:
sudo semanage permissive -a httpd_t
It's related to the security policies of SELinux, you have to add the httpd_t to the list of permissive domains.
This post from the NGINX blog may be helpful: NGINX: SELinux Changes when Upgrading to RHEL 6.6 / CentOS 6.6
Motivated by a similar issue, I wrote a tutorial a while ago on How to Deploy a Django Application on RHEL 7. It should be very similar for CentOS 7.
Most probably it is one of these two:
1- The directory /home/centos/myproject_app/ is not accessible to nginx:
$ ls -la /home/centos/myproject_app/
If it is not accessible, try moving the socket path somewhere nginx can reach, e.g. under /etc/nginx.
2- If not, try running gunicorn by hand:
$ /home/centos/myproject_app/django_env/bin/gunicorn --workers 3 --bind unix:/home/centos/myproject_app/django.sock app.wsgi:application
If that still doesn't work, activate the environment and run python manage.py runserver 0.0.0.0:8000, then open http://ip:8000 in a browser; the problem may be there. But if the gunicorn command works fine, the problem is directory access for the nginx user.
Exact same problem here.
Removing Group=www-data fixed the issue for me

Running Gunicorn behind chrooted nginx inside virtualenv

I can get this setup to work if I start gunicorn manually, or if I add gunicorn to my Django installed apps. But when I try to start gunicorn with systemd, the gunicorn socket and service start fine but they don't serve anything to nginx; I get a 502 Bad Gateway.
Nginx is running under the "http" user/group in a chroot jail. I used pythonbrew to set up the virtualenvs, so gunicorn is installed in my home directory under .pythonbrew. The virtualenv directory is owned by my user and the adm group.
I'm pretty sure there is a permission issue somewhere, because everything works if I start gunicorn myself but not if systemd starts it. I've tried changing the User and Group directives inside the gunicorn.service file, but nothing worked: if root starts the server I get no errors and a 502; if my user starts it, I get no errors and a 504.
I have checked the nginx logs and there are no errors, so I'm sure it's a gunicorn issue. Should the virtualenv be in the app directory? Who should own the app directory? How can I narrow down the issue?
/usr/lib/systemd/system/gunicorn-app.service
#!/bin/sh
[Unit]
Description=gunicorn-app
[Service]
ExecStart=/home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin/gunicorn_django
User=http
Group=http
Restart=always
WorkingDirectory = /home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin
[Install]
WantedBy=multi-user.target
/usr/lib/systemd/system/gunicorn-app.socket
[Unit]
Description=gunicorn-app socket
[Socket]
ListenStream=/run/unicorn.sock
ListenStream=0.0.0.0:9000
ListenStream=[::]:8000
[Install]
WantedBy=sockets.target
I realize this is kind of a sprawling question, but I'm sure I can pinpoint the issue with a few pointers. Thanks.
Update
I'm starting to narrow this down. When I run gunicorn manually and then run ps aux | grep gunicorn, I see two processes: a master and a worker. But when I start gunicorn with systemd, only one process is started. I tried adding Type=forking to my gunicorn.service file, but then I get an error when loading the service.
Does anyone know what I'm doing wrong here? Maybe gunicorn isn't running under the virtualenv, or the venv isn't being activated?
I had a similar problem on OSX with launchd.
The issue was that I needed to allow the process to spawn subprocesses.
Try adding Type=forking:
[Unit]
Description=gunicorn-app
[Service]
Type=forking
I know this isn't the best way, but I was able to get it working by adding gunicorn to the list of django INSTALLED_APPS. Then I just created a new systemd service:
[Unit]
Description=hack way to start gunicorn and django
[Service]
User=http
Group=http
ExecStart=/srv/http/www/nlp.com/nlp/bin/python /srv/http/www/nlp.com/nlp/nlp/manage.py run_gunicorn
Restart=always
[Install]
WantedBy=multi-user.target
There must be a better way, but judging by the lack of responses not many people know what that better way is.
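For what it's worth, a non-forking unit that calls the virtualenv's gunicorn binary directly sidesteps both problems mentioned in the thread: no activation step is needed, and gunicorn stays in the foreground so neither Type=forking nor the INSTALLED_APPS hack is required. A sketch only, reusing the asker's paths; the nlp.wsgi:application module name is a guess and gunicorn_django from the question was an older entry point that modern gunicorn replaced with project.wsgi:application:

```
[Unit]
Description=gunicorn-app
After=network.target

[Service]
User=http
Group=http
WorkingDirectory=/home/noel/.pythonbrew/venvs/Python-3.3.0/nlp
# Calling the venv's gunicorn binary by absolute path means no shell
# activation is needed; gunicorn runs in the foreground under systemd.
ExecStart=/home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin/gunicorn --workers 3 --bind unix:/run/gunicorn-app.sock nlp.wsgi:application
Restart=always

[Install]
WantedBy=multi-user.target
```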