The HTML file changed on the server is not reflected - Django

I deployed a Django project on DigitalOcean.
After confirming with the server's IP address, the site was displayed.
However, there is a part I want to modify, so I edited some of the HTML in Django's templates folder.
Even after reloading nginx and checking again, the change was not reflected.
Since this is my first deployment, I don't know the cause, so I would like to ask about it.
Does this mean that the HTML being displayed does not come from the templates folder?
I would like to know how to fix it.
Postscript: here is my gunicorn service file.
[Unit]
Description=gunicorn daemon (apasn)
Requires=apasn.socket
After=network.target
[Service]
User=administrator
Group=www-data
WorkingDirectory=/home/administrator/apasn
ExecStart=/home/administrator/apasn/venv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn/apasn.sock \
person_manager.wsgi:application
[Install]
WantedBy=multi-user.target

Django caches compiled templates inside the running application process (when debug mode is off), so reloading nginx won't do anything; nginx only proxies to gunicorn. There are a few things that you can do:
First, try restarting gunicorn:
sudo systemctl restart gunicorn
See if that fixes it. If not, switch debug mode on with DEBUG = True in settings.py and then restart gunicorn. You will definitely see the changes at that point. Then turn debug mode back off with DEBUG = False and restart gunicorn again.
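For background, the cache in question is Django's cached template loader, which historically was switched on automatically when DEBUG = False (newer Django versions enable it in development as well). A minimal settings.py sketch, assuming the modern pathlib-style BASE_DIR, that lists the loaders explicitly without the cache so template edits show up on the next request:
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [BASE_DIR / 'templates'],
        'OPTIONS': {
            # No django.template.loaders.cached.Loader here, so templates
            # are re-read from disk on every request (fine for debugging,
            # wasteful in production):
            'loaders': [
                'django.template.loaders.filesystem.Loader',
                'django.template.loaders.app_directories.Loader',
            ],
        },
    },
]
In production you would normally leave the cached loader alone and simply restart gunicorn after deploying template changes, as above.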

Related

Django model save not working in daemon mode but working with runserver

I clone a GitHub repo to the server once a user adds their GitHub repo URL; see this model:
import os

from django.db import models

class Repo(models.Model):
    url = models.CharField(help_text='github repo cloneable', max_length=600)

    def save(self, *args, **kwargs):
        # e.g. os.system('git clone https://github.com/somegithubrepo.git')
        os.system('git clone {}'.format(self.url))
        super(Repo, self).save(*args, **kwargs)
Everything works as intended on both the local server and a remote server (a DigitalOcean droplet): when I add a public GitHub repo, the clone always succeeds.
It works when I run the server like this: python3 manage.py runserver 0.0.0.0:800
But when I run it in daemon mode with gunicorn and nginx, it doesn't work.
Everything else works, even saving the data to the database; it just doesn't clone in daemon mode. What's wrong with it?
This is my gunicorn.service:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/var/www/myproject
Environment="PATH=/root/.local/share/virtualenvs/myproject-9citYRnS/bin"
ExecStart=/usr/local/bin/pipenv run gunicorn --access-logfile - --workers 3 --bind unix:/var/www/myproject/config.sock -m 007 myproject.wsgi:application
[Install]
WantedBy=multi-user.target
Note again, everything works, even gunicorn, and the data is saved to the database; it just doesn't clone the GitHub repo, and it doesn't raise any error.
What's wrong with this? Can anyone please help me fix the issue?
In the systemd unit file, you overwrite the root user's PATH variable in Environment with the path of the virtualenv. Therefore the root user no longer has the usual entries in PATH, e.g. /usr/bin, where the git command usually lives. You need to use the absolute path of git, e.g. /usr/bin/git, or add git's bin directory back to the PATH, for example:
Environment="PATH=/usr/bin:/root/.local/share/virtualenvs/myproject-9citYRnS/bin"
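Separately, os.system discards the command's exit status unless you check it, which is why the failure never showed up anywhere. A sketch of the suggested fix (assuming git lives at /usr/bin/git) that also makes failures visible:
import subprocess

def save(self, *args, **kwargs):
    # The absolute path sidesteps the broken PATH, and check=True raises
    # CalledProcessError instead of failing silently like os.system does.
    subprocess.run(['/usr/bin/git', 'clone', self.url], check=True)
    super().save(*args, **kwargs)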

How to make djangoQ run as a service using systemd

How do you make Django Q run as a service using systemd?
There is a gap in the documentation: installing and running Django Q locally is as easy as running any Django project, but now you want to push a project that uses Django Q to production.
The problem is that you don't know much about servers and Linux; maybe you are even coming from Windows...
So can somebody tell us how to run Django Q on Ubuntu using systemd?
I am getting the error that the ExecStart path is not absolute.
I came from here:
Stack Question
But that answer is not complete; too much tacit knowledge is left unshared.
[Unit]
Description= Async Tasks Runner
After=network.target remote-fs.target
[Service]
ExecStart=home/shreejvgassociates/ot_britain_college/env/bin/python manage.py qcluster --pythonpath home/shreejvgassociates/ot_britain_college/OverTimeCollege --settin$
RootDirectory=home/shreejvgassociates/ot_britain_college/OverTimeCollege
User=shreejvgassociates
Group=shreejvgassociates
Restart=always
[Install]
WantedBy=multi-user.target
This works well; try it. (Note that the ExecStart and RootDirectory paths in your unit start with home/... rather than /home/..., which is exactly what the "path is not absolute" error is complaining about.)
[Unit]
Description=Django-Q Cluster for site TestSite
After=multi-user.target
[Service]
Type=simple
ExecStart=/home/test/TestSite/venv/bin/python3 /home/test/TestSite/manage.py qcluster
[Install]
WantedBy=multi-user.target
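Assuming you save this as /etc/systemd/system/qcluster.service (the filename is yours to choose), the usual steps to activate it are:
sudo systemctl daemon-reload
sudo systemctl enable --now qcluster
sudo systemctl status qcluster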

Where does Django stdout output go when running with nginx and Gunicorn?

In my my_application/settings.py file, for example, I have a couple of print statements, thus:
print( 'running settings.py: ALLOWED_HOSTS: ' )
print( '\n'.join( ALLOWED_HOSTS ) )
... where does this output actually go on a remote server running nginx and Gunicorn?
NB I am aware this may be an egregious security breach to print ALLOWED_HOSTS anywhere, for all I know. This is merely an example: I am at the learning/experimentation stage.
Edit after AKX's answer
I found no way of getting stdout to be directed to journalctl, even with the -R switch.
... after many experiments and frustrations and baffling 502 errors, I finally found a way: my systemd config file now looks like this:
[Unit]
Description=Gunicorn server for mysite.xyz
[Service]
Restart=on-failure
User=mike
WorkingDirectory=/home/mike/sites/mysite.xyz
EnvironmentFile=/home/mike/sites/mysite.xyz/.env
ExecStart=/home/mike/sites/mysite.xyz/virtualenv/bin/gunicorn \
--bind unix:/tmp/mysite.xyz.socket \
-R \
--capture-output \
--error-logfile /home/mike/gunicorn-error.log \
superlists.wsgi:application
[Install]
WantedBy=multi-user.target
... indeed, I find that it is not sufficient to include the -R switch (short for --enable-stdio-inheritance): you also seem to have to include the --capture-output switch and the --error-logfile switch (NB I am not clear whether there is a default destination for error output if you don't set that switch).
With the above config file, stdout from settings.py goes to the file ~/gunicorn-error.log. Hurrah.
Wherever gunicorn's stdout is directed to.
If you run things under systemd, for instance, they'll end up in the system journal, which you can read with journalctl.
When you run gunicorn, there is a special parameter named --access-logfile, which specifies the access log file to write to. If you prefer stdout, set '-' as the parameter's value.
For example:
gunicorn --access-logfile - demoapp.wsgi
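If you would rather not depend on where stdout ends up at all, a common alternative is to log through gunicorn's own error logger; a sketch, assuming the standard setup where the gunicorn.error logger is configured in the worker process:
import logging

logger = logging.getLogger('gunicorn.error')
logger.info('running settings.py: ALLOWED_HOSTS: %s', ', '.join(ALLOWED_HOSTS))
This lands in whatever --error-logfile points at (or stderr by default), with no extra flags needed.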

gunicorn: Is there a better way to reload gunicorn in the described situation?

I have a Django project with gunicorn and nginx.
I'm deploying this project with SaltStack.
In this project, I have a config.ini file that Django views read.
For nginx, I set up a state so that if nginx.conf changes, a cmd.run state running service nginx restart (with - onchanges - file: nginx_conf) restarts the service.
For gunicorn, I can detect the change to config.ini, but I don't know how to reload gunicorn.
When gunicorn starts I pass the --reload option, but does this option detect changes to config.ini, or only to the Django project's files?
If not, what command should I use (e.g. gunicorn reload)?
Thank you.
P.S. I saw kill -HUP pid, but I think Salt wouldn't know gunicorn's pid.
The --reload option looks for changes to the source code, not config files, and --reload shouldn't be used in production anyway.
I would either:
1) Tell gunicorn to write a pid file with --pid /path/to/pid/file and then get Salt to signal that pid, followed by a restart if needed (see the sketch after this list).
2) Get Salt to run a pkill gunicorn followed by a restart.
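A sketch of option 1 (paths and the WSGI module are illustrative); gunicorn's master treats SIGHUP as "reload the configuration and gracefully restart the workers":
gunicorn --pid /run/gunicorn.pid --workers 3 project.wsgi:application
# later, e.g. from a Salt cmd.run triggered by onchanges on config.ini:
kill -HUP "$(cat /run/gunicorn.pid)"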
Don't run shell commands to manage services, use service states.
/path/to/nginx.conf:
file.managed:
# ...
/path/to/config.ini:
file.managed:
# ...
nginx:
service.running:
- enabled: true
- watch:
- file: /path/to/nginx.conf
django-app:
service.running:
- enabled: true
- reload: true
- watch:
- file: /path/to/config.ini
You may need to create a service definition for gunicorn yourself. Here's a very basic systemd example:
[Unit]
Description=My django app
After=network.target
[Service]
Type=notify
User=www-data
Group=www-data
WorkingDirectory=/path/to/source
ExecStart=/path/to/venv/bin/gunicorn project.wsgi:application
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=mixed
[Install]
WantedBy=multi-user.target
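With ExecReload in place, Salt's reload: true maps onto a plain systemctl reload, which fires that kill -HUP for you (assuming the unit is installed as django-app.service to match the state name). You can test the wiring by hand:
sudo systemctl daemon-reload
sudo systemctl reload django-app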

Running Gunicorn behind chrooted nginx inside virtualenv

I can get this setup to work if I start gunicorn manually or if I add gunicorn to my Django installed apps, but when I try to start gunicorn with systemd, the gunicorn socket and service start fine yet they don't serve anything to nginx; I get a 502 Bad Gateway.
nginx is running under the "http" user/group in a chroot jail. I used pythonbrew to set up the virtualenvs, so gunicorn is installed in my home directory under .pythonbrew. The virtualenv directory is owned by my user and the adm group.
I'm pretty sure there is a permission issue somewhere, because everything works if I start gunicorn myself but not if systemd starts it. I've tried changing the User and Group directives inside the gunicorn.service file, but nothing worked: if root starts the server, I get no errors and a 502; if my user starts it, I get no errors and a 504.
I have checked the nginx logs and there are no errors, so I'm sure it's a gunicorn issue. Should I have the virtualenv in the app directory? Who should own the app directory? How can I narrow down the issue?
/usr/lib/systemd/system/gunicorn-app.service
[Unit]
Description=gunicorn-app
[Service]
ExecStart=/home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin/gunicorn_django
User=http
Group=http
Restart=always
WorkingDirectory = /home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin
[Install]
WantedBy=multi-user.target
/usr/lib/systemd/system/gunicorn-app.socket
[Unit]
Description=gunicorn-app socket
[Socket]
ListenStream=/run/unicorn.sock
ListenStream=0.0.0.0:9000
ListenStream=[::]:8000
[Install]
WantedBy=sockets.target
I realize this is kind of a sprawling question, but I'm sure I can pinpoint the issue with a few pointers. Thanks.
Update
I'm starting to narrow this down. When I run gunicorn manually and then run ps aux | grep gunicorn, I see two processes: a master and a worker. But when I start gunicorn with systemd, only one process is started. I tried adding Type=forking to my gunicorn.service file, but then I get an error when loading the service.
Does anyone know what I'm doing wrong here? Maybe gunicorn isn't running under the virtualenv, or the venv isn't getting activated?
I had a similar problem on OS X with launchd: the issue was that I needed to allow the process to spawn subprocesses.
Try adding Type=forking (note that gunicorn stays in the foreground by default, so a forking service type would also need gunicorn's --daemon flag):
[Unit]
Description=gunicorn-app
[Service]
Type=forking
I know this isn't the best way, but I was able to get it working by adding gunicorn to the Django INSTALLED_APPS list. Then I just created a new systemd service:
[Unit]
Description=hack way to start gunicorn and django
[Service]
User=http
Group=http
ExecStart=/srv/http/www/nlp.com/nlp/bin/python /srv/http/www/nlp.com/nlp/nlp/manage.py run_gunicorn
Restart=always
[Install]
WantedBy=multi-user.target
There must be a better way, but judging by the lack of responses, not many people know what that better way is.
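For what it's worth, a minimal sketch of the socket-activated pairing described in gunicorn's deployment docs (names and paths here are illustrative, not taken from the setup above):
# /etc/systemd/system/gunicorn-app.socket
[Unit]
Description=gunicorn-app socket
[Socket]
ListenStream=/run/gunicorn-app.sock
[Install]
WantedBy=sockets.target

# /etc/systemd/system/gunicorn-app.service
[Unit]
Description=gunicorn-app
Requires=gunicorn-app.socket
After=network.target
[Service]
User=http
Group=http
WorkingDirectory=/path/to/project
ExecStart=/path/to/venv/bin/gunicorn project.wsgi:application
[Install]
WantedBy=multi-user.target
nginx then proxies to the unix socket, and systemd hands the listening socket to gunicorn on the first connection.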