How to force an application's stdout logs through uwsgi?

I have a Django application running behind uwsgi inside a Docker container. uwsgi is started via the ENTRYPOINT and CMD parameters in the Dockerfile. I successfully connected it to a separate Nginx container and checked the expected results in the browser.
So far, so good.
Now I would like to see the application logs in the Django container. But I am not able to find the right combination of Django's LOGGING setting and uwsgi switches. I just see uwsgi's standard logs, which are useless for me.
Is it possible at all? It seems to me that I must write some wrapper Bash script, like:
uwsgi --socket 0.0.0.0:80 --die-on-term --module myapp.wsgi:application --chdir /src --daemonize /dev/null
tail -f /common.log
... set LOGGING inside Django to write into /common.log and tail it to output.
Is there some more elegant solution?
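For reference, the LOGGING half of that workaround could look roughly like the following sketch; the /common.log path follows the wrapper script above, and the handler name is illustrative:

```python
# settings.py (sketch): route all Django logging to the file that the
# wrapper script tails, so the logs end up on the container's stdout
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "common_file": {
            "class": "logging.FileHandler",
            "filename": "/common.log",
        },
    },
    # every logger propagates here by default
    "root": {"handlers": ["common_file"], "level": "INFO"},
}
```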
Updated 2016-02-24:
Yes, it is possible. I made a mistake somewhere in my first tests. I published a working example at https://github.com/msgre/uwsgi_logging.

Use
log-master=true
in your uwsgi-conf.ini, or
--log-master
if you pass it as a command-line parameter.
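Put together, a minimal uwsgi-conf.ini using this option might look like the sketch below; the socket, module, and chdir values mirror the uwsgi command line from the question:

```ini
[uwsgi]
socket = 0.0.0.0:80
module = myapp.wsgi:application
chdir = /src
die-on-term = true
; let the master process capture and log the workers' stdout/stderr
log-master = true
```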

Related

How to deploy a django/gunicorn application as a systemd service?

Context
For the last few years, I have only been using Docker to containerize and distribute Django applications. For the first time, I am being asked to set things up so the app can be deployed the old way. I am looking for a way to get close to the level of encapsulation and automation I am used to while avoiding any pitfalls.
Current heuristic
So, the application (my_app):
is built as a .whl
comes with gunicorn
is published on a private Nexus and
can be installed on any of the client's servers with a simple pip install
(it relies on Apache as a reverse proxy / static file server and PostgreSQL, but this is irrelevant for this discussion)
To make sure that the app remains available, I thought about running it as a service managed by systemd. Based on this article, here is what /etc/systemd/system/my_app.service could look like:
[Unit]
Description=My app service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=centos
ExecStart=gunicorn my_app.wsgi:application --bind 8000
[Install]
WantedBy=multi-user.target
and, from what I understand, I could provide this service with environment variables to be read by os.environ.get(), placed somewhere like /etc/systemd/system/my_app.service.d/.env:
SECRET_KEY=something
SQL_ENGINE=something
SQL_DATABASE=something
SQL_USER=something
SQL_PASSWORD=something
SQL_HOST=something
SQL_PORT=something
DATABASE=something
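On the Django side, settings.py would read these variables back with os.environ.get(), roughly like the sketch below (the fallback defaults are illustrative, not from the question). Note that systemd only loads such a file if the unit references it, e.g. with an EnvironmentFile= directive in the [Service] section:

```python
import os

# Fall back to harmless development defaults when a variable is unset
SECRET_KEY = os.environ.get("SECRET_KEY", "dev-only-placeholder")

DATABASES = {
    "default": {
        "ENGINE": os.environ.get("SQL_ENGINE", "django.db.backends.sqlite3"),
        "NAME": os.environ.get("SQL_DATABASE", "db.sqlite3"),
        "USER": os.environ.get("SQL_USER", ""),
        "PASSWORD": os.environ.get("SQL_PASSWORD", ""),
        "HOST": os.environ.get("SQL_HOST", ""),
        "PORT": os.environ.get("SQL_PORT", ""),
    }
}
```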
Open questions
First of all, are there any obvious pitfalls?
upgrade. Will restarting the service be enough to upgrade the application after a pip install --upgrade my_app?
library conflicts. Is it possible to make sure that gunicorn my_app.wsgi:application will use the correct versions of the libraries used by the project? Or do I need to use a virtualenv? If so, should source /path/to/the/venv/bin/activate be used in ExecStart, or is there a cleaner way to make ExecStart run in a separate virtual environment?
where should python manage.py migrate and python manage.py collectstatic live? Should I run those from /etc/systemd/system/my_app.service?
Maybe you can try this; I'm not sure if it works either. This is for the third question.
You have to create a virtualenv and install gunicorn and django in it.
Create a Python file like this, "gunicorn_config.py".
Fill it like this, or adjust as needed:
#gunicorn_config.py
command = 'env/bin/gunicorn'
pythonpath = 'env/myproject'
bind = '127.0.0.1:8000'
In the systemd ExecStart, put env/bin/gunicorn -c env/gunicorn_config.py myproject.wsgi.
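Assuming such a virtualenv lives at an absolute path, the unit file can point at the venv's gunicorn binary directly, so no "activate" step is needed. A sketch (the /opt/venvs path and the EnvironmentFile location are illustrative):

```ini
[Unit]
Description=My app service
After=network.target

[Service]
Type=simple
Restart=always
RestartSec=1
User=centos
# Load SECRET_KEY, SQL_*, etc. from a file of KEY=value lines
EnvironmentFile=/etc/systemd/system/my_app.service.d/.env
# Calling the venv's gunicorn binary selects the venv's libraries
ExecStart=/opt/venvs/my_app/bin/gunicorn my_app.wsgi:application --bind 0.0.0.0:8000

[Install]
WantedBy=multi-user.target
```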

How does a django app start its virtualenv?

I'm trying to understand how the virtual environment gets invoked. The website I have been tasked to manage has a .venv directory. When I ssh into the site to work on it, I understand I need to activate it with source .venv/bin/activate. My question is: how does the web application invoke the virtual environment? How do I know it is using the .venv, not the global Python?
More detail: it's a Drupal website with Django kind of glommed onto it. Apache is the main server. I believe Django is served by gunicorn. The designer left town.
Okay, I've found how, in my case, the virtualenv was being invoked for Django.
BASE_DIR/run/gunicorn script has:
#GUNICORN='/usr/bin/gunicorn'
GUNICORN=".venv/bin/gunicorn"
GUNICORN_CONF="$BASE_DIR/run/gconf.py"
.....
$GUNICORN --config $GUNICORN_CONF --daemon --pid $PIDFILE $MODULE
So this takes us into the .venv where the gunicorn script starts with:
#!/media/disk/www/aavso_apps/.venv/bin/python
Voila
Just use the absolute path when calling Python in a virtualenv.
For example, if your virtualenv is located in /var/webapps/yoursite/env,
you must call it as /var/webapps/yoursite/env/bin/python.
If you run just Django behind a reverse proxy, Django will use whatever Python environment was active for the user that started the server (i.e. whatever which python resolved to). If you're using a management tool like Gunicorn, you can specify which environment to use in its config, although Gunicorn itself requires you to activate the virtual environment before invoking it.
EDIT:
Since you're using Gunicorn, take a look at this, https://www.digitalocean.com/community/tutorials/how-to-deploy-python-wsgi-apps-using-gunicorn-http-server-behind-nginx
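As a quick way to verify which interpreter (and hence which environment) a running Django process actually uses, the standard sys module is enough; a minimal sketch:

```python
import sys

# The interpreter running this process; inside a virtualenv this path
# points into the venv (e.g. .../.venv/bin/python), not /usr/bin
print(sys.executable)

# sys.prefix differs from sys.base_prefix only inside a virtual environment
print("in virtualenv:", sys.prefix != sys.base_prefix)
```

Dropping those two lines into a view or a management command shows immediately whether the .venv or the global Python is serving the site.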

Completing uWSGI/Nginx setup for Django site

So a few months ago I set up a Django blog on an Ubuntu server with DigitalOcean using this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-16-04
The only problem was that this was done as a brand-new blog, and I wanted to put my own one from my local computer there. I tried to do this by just uploading files via ssh over the old ones, ended up making a mess, and had to scrap it.
I just started again on a new server and have done the basic setup, including cloning my own Django blog onto the server from GitHub and installing PostgreSQL, and now I'm following this:
https://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html
So far I have completed the following successfully:
installed uwsgi
run the test.py 'hello world' file successfully with:
uwsgi --http :8000 --wsgi-file test.py
test-ran the django site on the server with:
python manage.py runserver my_ip_here:8000
(the above appears to be working, as I can see the bare basics of my site, but no CSS etc.)
done a test run of the site with:
uwsgi --http :8000 --module mysite.wsgi
ran collectstatic, which seems to have been successful
installed nginx and I can see the 'Welcome to nginx!' when visiting the ip
created the mysite_nginx.conf file in my main directory
What I'm having problems with:
this part from the readthedocs tutorial doesn't work for me, when I visit the paths of any of my images:
To check that media files are being served correctly, add an image called media.png to the /path/to/your/project/project/media directory, then visit example.com:8000/media/media.png - if this works, you’ll know at least that nginx is serving files correctly.
running this does NOT return the 'hello world' test that I should see
uwsgi --socket :8001 --wsgi-file test.py
after making the appropriate changes in mysite_nginx.conf, this doesn't return anything at :8000 either (my .sock file is present, though)
uwsgi --socket mysite.sock --wsgi-file test.py
Some other things to add:
my error log at /var/log/nginx/error.log is empty, with no messages in it; I'm not sure if this is normal
this is my mysite_nginx.conf file: http://pastebin.com/CGcc8unv
when I run this command as specified by the readthedocs tutorial
uwsgi --socket :8001 --wsgi-file test.py
and then go to mysite:8001 I get these errors in the shell
invalid request block size: 21573 (max 4096)...skip
invalid request block size: 21573 (max 4096)...skip
I set up the symlink as the readthedocs tutorial specified and have double-checked it.
I do not have an nginx .ini file yet, as where I'm at in the readthedocs tutorial it hasn't specified to create that yet.
as I said earlier, I can still reach my site by running some uwsgi commands, and I can see the 'Welcome to nginx' message at my site/IP.

How to use Django logging with gunicorn

I have a Django 1.6 site running with gunicorn, managed by supervisor. During tests and runserver I have logging on the console, but with gunicorn the statements don't show up anywhere (not even ERROR level logs). They should be in /var/log/supervisor/foo-stderr---supervisor-51QcIl.log but they're not. I have celery running on a different machine using supervisor and its debug statements show up fine in its supervisor error file.
Edit:
Running gunicorn in the foreground shows that none of my error messages are being logged to stderr like they are when running manage.py. This is definitely a gunicorn problem and not a supervisor problem.
I got a response on GitHub:
https://github.com/benoitc/gunicorn/issues/708
Since you have passed disable_existing_loggers, the Gunicorn loggers are disabled when Django loads your logging configuration. If you are setting this because you want to disable some default Django logging configuration, make sure you add back the Gunicorn loggers, gunicorn.error and gunicorn.access, with whatever handlers you want.
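A sketch of what adding those loggers back might look like in settings.py; the console handler choice here is illustrative:

```python
# settings.py (sketch): keep Django's logging config from silencing gunicorn
LOGGING = {
    "version": 1,
    # False, so existing loggers (including gunicorn's) are not disabled
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        # explicitly re-attach gunicorn's own loggers to a handler
        "gunicorn.error": {"handlers": ["console"], "level": "INFO", "propagate": False},
        "gunicorn.access": {"handlers": ["console"], "level": "INFO", "propagate": False},
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}
```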
In /etc/supervisor/conf.d/your-app.conf you should set the log paths:
stdout_logfile=/var/log/your-app.log
stderr_logfile=/var/log/your-app.log
First, in your supervisor config for the gunicorn script, be sure to define
stdout_logfile=/path/to/logfile.log
redirect_stderr=true
That will make stdout and stderr go to the same file.
Now, in your gunicorn invocation, be sure to call the process with the following argument:
gunicorn YourWSGIModule:app --log-level=critical
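Combining the two pieces above, a supervisor program section might look like this sketch (the program name and file paths are illustrative):

```ini
[program:my_django_app]
command=gunicorn YourWSGIModule:app --log-level=critical
; capture the process's stdout, and fold stderr into the same file
stdout_logfile=/path/to/logfile.log
redirect_stderr=true
autostart=true
autorestart=true
```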

django-celery works in development, fails in wsgi production: How to debug?

I'm using the django-celery task queue, and it works fine in development, but not at all in wsgi production. Even more frustrating, it used to work in production, but I somehow broke it.
"sudo rabbitmqctl status" tells me that the rabbitmq server is working. Everything also seems peachy in django: objects are created, and routed to the task manager without problems. But then their status just stays as "queued" indefinitely. The way I've written my code, they should switch to "error" or "ready," as soon as anything gets returned from the celery task. So I assume there's something wrong with the queue.
Two related questions:
Any ideas what the problem might be?
How do I debug celery? Outside of the manage.py celeryd command, I'm not sure how to peer into its inner workings. Are there log files or something I can use?
Thanks!
PS - I've seen this question, but he seems to want to run celery from manage.py, not wsgi.
After much searching, the most complete answer I found for this question is here. These directions flesh out the skimpy official directions for daemonizing celeryd. I'll copy the gist here, but you should follow the link, because Michael has explained some parts in more detail.
The main idea is that you need scripts in three places:
/etc/init.d/celeryd
/etc/default/celeryd
myApp/settings.py
Settings.py appears to be the same as in development mode. So if that's already set up, there are four steps to shifting to production:
Download the daemon script since it's not included in the installation:
https://github.com/celery/celery/tree/3.0/extra/generic-init.d/
Put it in /etc/init.d/celeryd
Make a file in /etc/default/celeryd, and put the variables here into it:
http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#example-django-configuration
Start the script
This solved my problem.
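For reference, a sketch of the kind of variables /etc/default/celeryd holds; the values and paths below are illustrative, so see the linked example configuration for the authoritative list:

```shell
# /etc/default/celeryd (sketch)
# Where the Django project lives
CELERYD_CHDIR="/opt/myApp/"
# Names of the worker nodes to start
CELERYD_NODES="worker1"
# Extra worker options
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Log and pid locations (%n expands to the node name)
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Unprivileged user/group to run the workers as
CELERYD_USER="celery"
CELERYD_GROUP="celery"
```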
I think the reason you are not getting any response from celery is that the celeryd server might not be running. You can find out by running ps -ef | grep celeryd. To figure out what the error is when trying to run celeryd, you might want to do the following.
In your settings.py file, you can give the path to the celery log file: CELERYD_LOG_FILE = <Path to the log file>.
And while running the celeryd server, you can specify the log level: manage.py celeryd -l DEBUG.