Website with Django 3 and django_plotly_dash does not update - django

I have a website that contains a dashboard based on plotly-dash. I want to change it, and I can see the changes when running ./manage.py runserver, but not when serving it with nginx. I ran
./manage.py collectstatic
I also deleted all .pyc files and __pycache__ directories, and restarted supervisor and nginx with:
sudo supervisorctl stop all && sudo supervisorctl reread && sudo supervisorctl update && sudo supervisorctl start all && sudo service nginx restart
which completes without errors. What did I miss?

I had to kill some processes myself using sudo killall python. It seems supervisor sometimes fails to kill all of its child processes.
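A less blunt alternative to killall python, in case other Python services run on the same box, is to target only the relevant workers. This is a minimal sketch assuming Gunicorn is the process serving the app; pgrep/pkill come with the standard procps package:
# list any leftover workers and their full command lines
pgrep -af gunicorn
# kill only those workers instead of every python process
sudo pkill -f gunicorn
# then let supervisor bring everything back up
sudo supervisorctl start all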

Related

Why is newly pulled code from git not applied when Gunicorn is running on an Ubuntu server?

I have a Django project on an Ubuntu server that works correctly with Nginx and Gunicorn.
I change some Django code, push it to the git server, and then pull it on the Ubuntu server, but the new changes do not seem to be applied on the server. How can I fix this?
I restarted and reloaded Gunicorn and Nginx, but nothing happened.
I found the answer: Gunicorn can be restarted with:
sudo systemctl daemon-reload
sudo systemctl restart gunicorn
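If Gunicorn was set up with the socket-activated pair of systemd units that many deployment guides use (a gunicorn.socket alongside gunicorn.service; the unit names here are an assumption about your setup), restarting both avoids old workers lingering behind the socket:
sudo systemctl restart gunicorn.socket gunicorn.service
# confirm the service came back up with the new code
sudo systemctl status gunicorn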

How to Deploy Multiple Django Apps on a Single Server using Nginx & Gunicorn?

I am trying to deploy two Django apps on a single AWS EC2 instance with the same IP.
But it always fails when I add the second app's .sock file and test supervisor.
I found that somebody asked a similar question before, but it was not answered properly, and my use case is a little different. (Run multiple django project with nginx and gunicorn)
I have followed these steps:
Cloned my project from Git
pip install -r requirements.txt
pip3 install gunicorn
sudo apt-get install nginx -y
sudo apt-get install supervisor -y
cd /etc/supervisor/conf.d
sudo touch testapp2.conf
sudo nano testapp2.conf
Updated the config file as below:
[program:gunicorn]
directory=/home/ubuntu/projects/testapp2/testerapp
command=/home/ubuntu/projects/testapp2/venv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/projects/testapp2/testerapp/app.sock testerapp.wsgi:application
autostart=true
autorestart=true
stderr_logfile=/home/ubuntu/projects/testapp2/log/gunicorn.err.log
stdout_logfile=/home/ubuntu/projects/testapp2/log/gunicorn.out.log
[group:guni]
programs=gunicorn
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status
The above steps work, and the site is available in the browser when there is only one configuration. But when I add a second configuration, the browser shows 502 Bad Gateway. Please help me solve this issue.
You can add one more config file in supervisor's conf.d and use a different port or socket for each Django app.
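For example, a second file such as testapp3.conf could look like the sketch below; the paths and the program name are placeholders for your second app. The important part is that the program name and socket path must differ from the first app's, since supervisor requires unique program names and two Gunicorns cannot share one socket:
[program:gunicorn2]
directory=/home/ubuntu/projects/testapp3/testerapp
command=/home/ubuntu/projects/testapp3/venv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/projects/testapp3/testerapp/app.sock testerapp.wsgi:application
autostart=true
autorestart=true
stderr_logfile=/home/ubuntu/projects/testapp3/log/gunicorn.err.log
stdout_logfile=/home/ubuntu/projects/testapp3/log/gunicorn.out.log
Each nginx server block then needs its proxy_pass pointed at the matching socket.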

Restart Django on Linux

I've deployed a Django app on Linux using nginx, supervisor and gunicorn.
I have made updates (via Git and manually in settings.py). The files changed, but the application has not (i.e. it still behaves the same as before the changes were made).
I've tried lots of commands, but none of them work, e.g.
# reload/restart nginx
sudo service nginx restart
sudo systemctl reload nginx
sudo systemctl restart nginx
# reload supervisor
sudo supervisorctl reload
# restart gunicorn (supervisor)
sudo supervisorctl restart gunicorn
# reboot ubuntu using systemctl
sudo systemctl reboot
# reboot ubuntu
sudo reboot now
# force reboot ubuntu
sudo reboot -f
Any help is greatly appreciated.
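One quick check before resorting to reboots: sudo supervisorctl restart gunicorn only works if the program is actually named gunicorn in the supervisor config; if yours has a different name, the restart never reaches your app. Assuming a standard supervisor setup, list the managed programs first:
sudo supervisorctl status
# restart whichever program name actually appears in the output
sudo supervisorctl restart <program-name>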

Django changed files not synchronizing on Ubuntu with Nginx and Gunicorn

My Django project is deployed on an Ubuntu server with Nginx and Gunicorn and works fine. But when I make changes to template files, views, forms or models and upload them using FileZilla, the changes are not synchronized and the project does not show them.
Can anyone help with this, please?
I think this will help if you are using a Gunicorn socket file:
sudo systemctl daemon-reload
sudo systemctl restart gunicorn
sudo nginx -t && sudo systemctl restart nginx
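To confirm Gunicorn is actually serving the new code, you can bypass nginx and talk to the socket directly (the socket path below is an assumption; use the one from your own Gunicorn unit or config):
curl --unix-socket /run/gunicorn.sock http://localhost/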

Application served by uWSGI with Supervisord from Docker

I am trying to serve a Django application with uWSGI from Docker. I am using supervisord to start the process for me at the end of the Dockerfile. When I run the image, it says that the uWSGI process starts and succeeds, but I'm unable to view the application at the URL I thought would display it. Perhaps I do not have things set up/configured correctly?
I am not having supervisord start nginx right now because I am currently serving static files via Amazon S3, and want to first focus on getting the wsgi up and running.
I can run the application successfully with uWSGI locally using uwsgi --ini uwsgi.ini:local, but I am having trouble moving it into Docker.
Here is my Dockerfile
FROM ubuntu:14.04
# Get most recent apt-get
RUN apt-get -y update
# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip
RUN pip install uwsgi
RUN apt-get install -y python-software-properties
# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin
# Install node.js
RUN apt-get install -y nodejs npm
# Install postgresql dependencies
RUN apt-get update && \
    apt-get install -y postgresql libpq-dev && \
    rm -rf /var/lib/apt/lists
ADD . /home/docker/code
# Setup config files
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN rm /etc/nginx/sites-enabled/default
RUN ln -s /home/docker/code/nginx-app.conf /etc/nginx/sites-enabled/
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/
RUN pip install -r /home/docker/code/app/requirements.txt
EXPOSE 8080
CMD ["supervisord", "-c", "/home/docker/code/supervisor-app.conf", "-n"]
And here is my uwsgi.ini
[uwsgi]
# this config will be loaded if nothing specific is specified
# load base config from below
ini = :base
# %d is the dir this configuration file is in
socket = %dmy_app.sock
master = true
processes = 4
[dev]
ini = :base
# socket (uwsgi) is not the same as http, nor http-socket
socket = :8001
[local]
ini = :base
http = :8000
# set the virtual env to use
home=/Users/my_user/.virtualenvs/my_env
[base]
# chdir to the folder of this config file, plus app/website
chdir = %dmy_app/
# load the module from wsgi.py, it is a python path from
# the directory above.
module=my_app.wsgi:application
# allow anyone to connect to the socket. This is very permissive
chmod-socket=666
http = :8080
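For reference, the named sections in this ini are selected with uWSGI's file:section syntax, which is what the local run mentioned above uses:
uwsgi --ini uwsgi.ini:local   # plain HTTP on :8000 for local development
uwsgi --ini uwsgi.ini:dev     # uwsgi-protocol socket on :8001
uwsgi --ini uwsgi.ini         # default section: unix socket plus http :8080 from [base]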
And here is my supervisor-app.conf file
[program:app-uwsgi]
command = /usr/local/bin/uwsgi --ini /home/docker/code/uwsgi.ini
From a Mac using boot2docker, I am trying to access the application at $(boot2docker ip):8080
Ultimately I want to upload this container to AWS Elastic Beanstalk, with not only a uWSGI process running, but a celery worker running as well.
When I run my container, I can see from the logs that both supervisor and uwsgi successfully start. I was able to get things running on my local machine both using uwsgi by itself and uwsgi through supervisor, but for some reason when I containerize the thing I can't find it anywhere.
Here is what is logged when I boot up the docker container
2014-12-25 15:08:03,950 CRIT Supervisor running as root (no user in config file)
2014-12-25 15:08:03,953 INFO supervisord started with pid 1
2014-12-25 15:08:04,957 INFO spawned: 'uwsgi' with pid 9
2014-12-25 15:08:05,970 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
How are you starting the docker container?
I don't see any CMD or ENTRYPOINT script, so I'm unclear as to how anything is getting started.
In general, I would advise avoiding things like supervisord unless absolutely necessary; just start uWSGI in the foreground from a CMD line. Try adding the following as the last line in the Dockerfile:
CMD ["/usr/local/bin/uwsgi", "--ini", "/home/docker/code/uwsgi.ini"]
and then just run with docker run -p 8000:8000 image_name. You should get some reply from uWSGI. If that works, I recommend moving the other services (postgres, node) into separate containers. There are official images for Node, Python and Postgres which should save you some time.
Remember, Docker containers only run as long as their main process (which must be in the foreground).
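As a concrete sketch of that workflow from the boot2docker host (my_app is a placeholder image tag; 8080 matches the http = :8080 in [base] and the EXPOSE in the Dockerfile above):
docker build -t my_app .
docker run -p 8080:8080 my_app
# from the Mac, check for a response from uWSGI
curl http://$(boot2docker ip):8080/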