Completing uWSGI/Nginx setup for Django site - django

So a few months ago I set up a Django blog on an Ubuntu server with DigitalOcean using this tutorial.
Note: please excuse my links below, but SO will not let me post more than 2 links due to my account's lack of points.
digitalocean[ dot] com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-16-04
The only problem was that this was done as a brand new blog, and I wanted to put my own one from my local computer there instead. I tried to do this by uploading files via SSH to copy over the old ones, but I ended up making a mess and had to scrap it.
I have just started again on a new server and have done the basic setup, including cloning my own Django blog onto the server from GitHub and installing PostgreSQL, and now I'm following this:
uwsgi-docs.readthedocs[ dot ]io/en/latest/tutorials/Django_and_nginx.html
So far I have completed the following successfully:
installed uwsgi
run the test.py 'hello world' file successfully with:
uwsgi --http :8000 --wsgi-file test.py
test-run the Django site on the server with:
python manage.py runserver my_ip_here:8000
(the above appears to be working, as I can see the bare basics of my site, but not CSS etc.)
done a test run of the site with:
uwsgi --http :8000 --module mysite.wsgi
run collectstatic, which seems to have been successful
installed nginx and I can see the 'Welcome to nginx!' when visiting the ip
created the mysite_nginx.conf file in my main directory
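Since the pastebin link below may rot, here for reference is roughly what the tutorial's mysite_nginx.conf looks like (paths, port, and server_name are placeholders and may differ from my actual file):

```
# mysite_nginx.conf (sketch based on the readthedocs tutorial)
upstream django {
    server unix:///path/to/your/mysite/mysite.sock;  # file socket
    # server 127.0.0.1:8001;                         # or a TCP port socket
}

server {
    listen      8000;
    server_name example.com;
    charset     utf-8;
    client_max_body_size 75M;

    location /media  {
        alias /path/to/your/mysite/media;
    }

    location /static {
        alias /path/to/your/mysite/static;
    }

    location / {
        uwsgi_pass  django;
        include     /path/to/your/mysite/uwsgi_params;
    }
}
```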
What I'm having problems with:
this part from the readthedocs tutorial doesn't work for me when I visit the paths of any of my images:
To check that media files are being served correctly, add an image called media.png to the /path/to/your/project/project/media directory, then visit example.com:8000/media/media.png - if this works, you’ll know at least that nginx is serving files correctly.
running this does NOT return the 'hello world' test that I should see:
uwsgi --socket :8001 --wsgi-file test.py
after making the appropriate changes in mysite_nginx.conf, this doesn't return anything at :8000 either (my .sock file is present, though):
uwsgi --socket mysite.sock --wsgi-file test.py
Some other things to add:
my error.log at /var/log/nginx/error.log is empty, with no messages in it; not sure if this is normal
this is my mysite_nginx.conf file - http://pastebin[ dot ]com/CGcc8unv
when I run this command as specified by the readthedocs tutorial
uwsgi --socket :8001 --wsgi-file test.py
and then go to mysite:8001, I get these errors in the shell:
invalid request block size: 21573 (max 4096)...skip
invalid request block size: 21573 (max 4096)...skip
I set up the symlink as the readthedocs tutorial specified and have double-checked it.
I do not have an nginx .ini file yet, as the point I'm at in the readthedocs tutorial hasn't specified to create that yet.
as I said earlier, I can still bring up my site by running some uwsgi commands, and I can see the 'Welcome to nginx!' message at my site/IP.

Related

python manage.py runserver shows an old webapp page I developed

I am learning Django, so I've created many Django webapps under one directory. For example:
\webapps
    \polls
        \polls
        \api
        \manage.py
        ...
    \ponynote
        \ponynote
        \frontend
        \manage.py
        ...
I didn't use a virtualenv for developing the Django apps. I don't know whether that is the cause of the problem below.
App 1
python manage.py runserver works all fine. (default port 8000)
App 2
python manage.py runserver still shows the App 1 page.
Methods I tried:
change the port: python manage.py runserver 8001 shows the App 2 page.
try to find the process ID (PID) and kill it. No sign of anything on port 8000.
However, this isn't the best solution, since I can't change the port every time I develop a new Django app. Does anyone have a clue why this happens? Kudos.
Problem solved:
clear the web browser cache. In my case, it's Chrome.
One effective solution would be to create a bash script for your use. Create 2 separate bash scripts, one per project (in the same directory where that project's manage.py can be found).
For App 1:
#!/bin/bash
# script - App 1
python manage.py runserver 0.0.0.0:8000
For App 2:
#!/bin/bash
# script - App 2
python manage.py runserver 0.0.0.0:8080
And for running:
./yourbashscriptfile
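As a minimal sketch of the idea (the file name here is just an example), you can create the wrapper script and make it executable in one go:

```shell
# create a wrapper script next to App 1's manage.py (name is arbitrary)
cat > runserver-app1.sh <<'EOF'
#!/bin/bash
# always start App 1 on its own fixed port
python manage.py runserver 0.0.0.0:8000
EOF

# make it executable; then launch it with ./runserver-app1.sh
chmod +x runserver-app1.sh
```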

How to force application's stdout logs through uwsgi?

I have a Django application running behind uWSGI inside a Docker container. uWSGI is started via the ENTRYPOINT and CMD parameters in the Dockerfile. I successfully connected it to a separate Nginx container and checked the expected results in the browser.
So far, so good.
Now I would like to see the application logs in the Django container. But I am not able to find the right combination of Django's LOGGING setting and uwsgi switches. I just see the standard uwsgi logs, which are useless for me.
Is it possible at all? It seems to me that I must make some wrapper bash script, like:
uwsgi --socket 0.0.0.0:80 --die-on-term --module myapp.wsgi:application --chdir /src --daemonize /dev/null
tail -f /common.log
... set LOGGING inside Django to write into /common.log and tail it to the output.
Is there some more elegant solution?
Update 2016-02-24:
Yes, it is possible. I made a mistake somewhere in my first tests. I published a working example at https://github.com/msgre/uwsgi_logging.
use
log-master=true
in your uwsgi-conf.ini
or
--log-master
if you pass it as a parameter
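Putting that together with the command line from the question, a uwsgi-conf.ini sketch might look like this (every option other than log-master is just carried over from the command above):

```
[uwsgi]
socket = 0.0.0.0:80
module = myapp.wsgi:application
chdir = /src
die-on-term = true
; route the workers' stdout/stderr through the master's log
log-master = true
```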

Django: How to customize a runserver_socketio command for manage.py

I'm writing a realtime chatroom similar to this package with Django. It runs a separate WebSocket server with the command:
python manage.py runserver_socketio
But I can't figure out how to make the runserver_socketio command load my handler. The only related code I can find in the package is here in django-socketio/django_socketio/management/commands/runserver_socketio.py:
server = SocketIOServer(bind, handler, resource="socket.io")
....
handler = WSGIHandler()
But why on earth is this handler related to my code?
I got it. The manage.py runserver_socketio command starts an almost identical server to the one manage.py runserver does. The only difference is that this new server can handle the WebSocket protocol.
To see this, suppose runserver runs on 127.0.0.1:8000 and runserver_socketio on 127.0.0.1:9000. Just visit 127.0.0.1:9000 and you will get the same webpage as on 127.0.0.1:8000.
The secret lies in django-socketio/django_socketio/example_project/urls.py, which references django-socketio/django_socketio/urls.py. In this second urls.py, we can see that it loads events.py in our project.
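For reference, the include chain described above looks roughly like this (old-style Django URLconf, as in the package's example project; treat the exact import paths and patterns as an assumption, not a quote from the package):

```
# example_project/urls.py (sketch)
from django.conf.urls.defaults import patterns, include, url

urlpatterns = patterns("",
    # pulls in django_socketio's socket.io view, which in turn
    # imports the events.py module from your own project
    url("", include("django_socketio.urls")),
)
```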

How to use Django logging with gunicorn

I have a Django 1.6 site running with gunicorn, managed by supervisor. During tests and runserver I have logging on the console, but with gunicorn the statements don't show up anywhere (not even ERROR level logs). They should be in /var/log/supervisor/foo-stderr---supervisor-51QcIl.log but they're not. I have celery running on a different machine using supervisor and its debug statements show up fine in its supervisor error file.
Edit:
Running gunicorn in the foreground shows that none of my error messages are being logged to stderr like they are when running manage.py. This is definitely a gunicorn problem and not a supervisor problem.
I got a response on GitHub:
https://github.com/benoitc/gunicorn/issues/708
Since you have passed disable_existing_loggers, the Gunicorn loggers are disabled when Django loads your logging configuration. If you are setting this because you want to disable some default Django logging configuration, make sure you add back the gunicorn loggers, gunicorn.error and gunicorn.access, with whatever handlers you want.
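A minimal sketch of a LOGGING dict that keeps the two gunicorn loggers alive, per the GitHub answer above (handler names and levels here are just examples):

```python
import logging.config

LOGGING = {
    "version": 1,
    # don't wipe out loggers (like gunicorn's) created before Django loads this
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        # explicitly re-declare the gunicorn loggers with our own handler
        "gunicorn.error": {"handlers": ["console"], "level": "INFO", "propagate": False},
        "gunicorn.access": {"handlers": ["console"], "level": "INFO", "propagate": False},
    },
}

logging.config.dictConfig(LOGGING)
```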
In /etc/supervisor/conf.d/your-app.conf you should set the log paths:
stdout_logfile=/var/log/your-app.log
stderr_logfile=/var/log/your-app.log
First, in your supervisor config for the gunicorn script, be sure to define
stdout_logfile=/path/to/logfile.log
redirect_stderr=true
That will make stdout and stderr go to the same file.
Now, in your gunicorn script, be sure to call the process with the following argument:
gunicorn YourWSGIModule:app --log-level=critical
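Combining the two pieces above, the supervisor program block might look like this (the program name and paths are placeholders; the gunicorn path should point into your virtualenv if you use one):

```
[program:your-app]
command=/path/to/virtualenv/bin/gunicorn YourWSGIModule:app --log-level=critical
stdout_logfile=/path/to/logfile.log
redirect_stderr=true
```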

Running Django-Celery in Production

I've built a Django web application and some Django-Piston services. Using a web interface a user submits some data which is POSTed to a web service and that web service in turn uses Django-celery to start a background task.
Everything works fine in the development environment using manage.py. Now I'm trying to move this to production on a proper Apache server. The web application and web services work fine in production, but I'm having serious issues starting celeryd as a daemon. Based on these instructions: http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#running-the-worker-as-a-daemon I've created a celeryconfig.py file and put it in the /usr/bin directory (this is where celeryd is located on my Arch Linux server).
CELERYD_CHDIR="/srv/http/ControllerFramework/"
DJANGO_SETTINGS_MODULE="settings"
CELERYD="/srv/http/ControllerFramework/manage.py celeryd"
However when I try to start celeryd from the command line I get the following error:
celery.exceptions.ImproperlyConfigured: Missing connection string! Do you have CELERY_RESULT_DBURI set to a real value?
Not sure where to go from here. Below is my settings.py section as it pertains to this problem:
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "controllerFramework"
BROKER_PASSWORD = "******"
BROKER_VHOST = "localhost"
So I ended up having a chat with the project lead on django-celery. A couple of things: first off, celery must be run using 'manage.py celeryd'. Secondly, in the settings.py file you have to 'import djcelery'.
This import issue may be fixed in the next version but for now you have to do this.
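A settings.py sketch combining the two fixes (the broker values are the ones from the question; the setup_loader() call is django-celery's documented initializer, which the answer itself doesn't mention, so treat it as an addition):

```
# settings.py (sketch; requires django-celery to be installed)
import djcelery
djcelery.setup_loader()  # lets celery discover tasks in your Django apps

BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "controllerFramework"
BROKER_PASSWORD = "******"
BROKER_VHOST = "localhost"
```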