Gunicorn is creating workers every second - Django

I am running Django with Gunicorn behind Nginx. In one of my installations, when I run the gunicorn process, I keep getting debug output; it looks like workers are being created every second (I assume this because Django loads very slowly and I keep seeing the message "[20205] [DEBUG] 3 workers"). You can check the detailed output at this gist.
In a similar setup, I am running three more installations without any such issue, and the respective sites load almost instantly.
Any idea why this is happening? Thanks.

Polling the workers every second at --log-level debug was introduced in gunicorn==19.2.
Change the log level to info.
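If you start gunicorn from a config file, the change could look like this (a minimal sketch; the file name and worker count are placeholders for your setup):

# gunicorn.conf.py
loglevel = "info"   # "debug" is what produces the once-per-second worker poll output
workers = 3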

Related

Django: Run Constantly Running Background Task On IIS-Hosted Application

I have a Django web application hosted on IIS. A subprocess should ALWAYS be running alongside the web application. When I run the application locally using
python manage.py runserver
the background task runs perfectly while the application is running. However, when hosted on IIS, the background task does not appear to run. How do I make the task run even when hosted on IIS?
In the manage.py file of Django I have the following code:
import subprocess, sys
from django.core.management import execute_from_command_line

def run_background():
    # note: the keyword argument is "creationflags" (plural), not "creationflag"
    return subprocess.Popen(["python", "background.py"], creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)

run_background()
execute_from_command_line(sys.argv)
I do not know how to resolve this issue.
Would something like Celery work to run a task indefinitely? How would I do this? Please give step-by-step instructions.
You could set the application to auto-start by following the steps below:
Select the site -> Advanced Settings -> Preload Enabled = "true".
Select the application pool -> Advanced Settings -> Start Mode = "AlwaysRunning". Under the Process Model section, set Idle Time-out (minutes) to 0, and under the Recycling section, set Regular Time Interval (minutes) to 0.
Run the iisreset command from the command prompt.
Also, check that the FastCGI settings are configured.
Regards,
Jalpa.

Heroku H12 errors with Django

I have a Django 1.8 application running on 6 Heroku dynos. Every once in a while, a request that is normally very fast (200 ms) comes in and hits the Heroku router's 30-second timeout. Then that dyno times out on other follow-on requests, sometimes taking 25 minutes to stabilize and go back to serving requests without timing out.
I'm using gunicorn with default web_concurrency (2 workers per dyno). I read Heroku's article on timeout behavior here, and am considering adding something like --timeout 20 to my gunicorn startup in the Procfile. That seems to be what they are recommending. Do you think that will solve the problem? From looking at the gunicorn documentation on the timeout setting it says:
Workers silent for more than this many seconds are killed and restarted.
But my workers aren't silent, are they? They are trying to process requests but are hung and unable to for some reason. Also, when I run gunicorn -h I can see that the default timeout is 30 seconds. If that's the case, shouldn't the long-running task on my app be killed well before 5, 10, or 25 minutes pass? Do I understand that correctly?
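For reference, the --timeout change being considered could also live in a gunicorn config file instead of the Procfile (a minimal sketch under the setup described above; the file name and values are assumptions, not a confirmed fix for the H12s):

# gunicorn.conf.py
timeout = 20    # workers silent for more than 20 seconds are killed and restarted
workers = 2     # matches the default web_concurrency of 2 mentioned above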

How to use Django logging with gunicorn

I have a Django 1.6 site running with gunicorn, managed by supervisor. During tests and runserver I have logging on the console, but with gunicorn the statements don't show up anywhere (not even ERROR level logs). They should be in /var/log/supervisor/foo-stderr---supervisor-51QcIl.log but they're not. I have celery running on a different machine using supervisor and its debug statements show up fine in its supervisor error file.
Edit:
Running gunicorn in the foreground shows that none of my error messages are being logged to stderr like they are when running manage.py. This is definitely a gunicorn problem and not a supervisor problem.
I got a response on GitHub:
https://github.com/benoitc/gunicorn/issues/708
Since you have passed disable_existing_loggers, the Gunicorn loggers are disabled when Django loads your logging configuration. If you are setting this because you want to disable some of Django's default logging configuration, make sure you add back the gunicorn loggers, gunicorn.error and gunicorn.access, with whatever handlers you want.
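A minimal sketch of what adding those loggers back could look like in settings.py (handler names and levels are placeholders; keep whatever handlers you actually use):

LOGGING = {
    "version": 1,
    "disable_existing_loggers": True,  # this is what silences gunicorn's own loggers
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        "django": {"handlers": ["console"], "level": "INFO"},
        # add gunicorn's loggers back so their output is not swallowed
        "gunicorn.error": {"handlers": ["console"], "level": "INFO", "propagate": False},
        "gunicorn.access": {"handlers": ["console"], "level": "INFO", "propagate": False},
    },
}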
In /etc/supervisor/conf.d/your-app.conf you should set the log paths:
stdout_logfile=/var/log/your-app.log
stderr_logfile=/var/log/your-app.log
First, in your supervisor config for the gunicorn script, be sure to define
stdout_logfile=/path/to/logfile.log
redirect_stderr=true
That will make stdout and stderr go to the same file.
Now, in your gunicorn script, be sure to call the process with the following argument:
gunicorn YourWSGIModule:app --log-level=critical

Django with celery: scheduled task (ETA) executed multiple times in parallel

I'm developing a web application with Django which uses Celery to process asynchronous tasks, especially for transactional emails.
One of my email tasks is scheduled with the ETA option, but it's executed multiple times in parallel, resulting in a chain of duplicate emails, which is very annoying. I can't figure out exactly why.
I double-checked my Django code and I'm sure the task is published only once.
I'm using Redis as the broker and result backend.
My Celery daemon is hosted on Heroku and launched via this command:
python manage.py celeryd -E -B --loglevel=INFO
Thanks for your help.
EDIT: I found a working solution here, thanks to someone on the #celery IRC channel: http://loose-bits.com/2010/10/distributed-task-locking-in-celery.html
Have you checked the Ensuring a task is only executed one at a time docs?
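The recipe in those docs amounts to taking a short-lived lock before doing the work, so a duplicate delivery of the same task becomes a no-op. A minimal sketch along those lines, assuming Django's cache is backed by something shared (for example the Redis instance already in use); the names and timeout are placeholders:

from contextlib import contextmanager
from django.core.cache import cache

@contextmanager
def single_instance(lock_id, timeout=600):
    # cache.add is atomic: it only succeeds if the key does not already exist
    acquired = cache.add(lock_id, "locked", timeout)
    try:
        yield acquired
    finally:
        if acquired:
            cache.delete(lock_id)

# inside the email task:
# with single_instance("welcome-mail-%s" % recipient_id) as acquired:
#     if acquired:
#         send_the_mail(recipient_id)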

django-celery works in development, fails in wsgi production: How to debug?

I'm using the django celery task queue, and it works fine in development, but not at all in wsgi production. Even more frustrating, it used to work in production, but I somehow broke it.
"sudo rabbitmqctl status" tells me that the rabbitmq server is working. Everything also seems peachy in django: objects are created, and routed to the task manager without problems. But then their status just stays as "queued" indefinitely. The way I've written my code, they should switch to "error" or "ready," as soon as anything gets returned from the celery task. So I assume there's something wrong with the queue.
Two related questions:
Any ideas what the problem might be?
How do I debug celery? Outside of the manage.py celeryd command, I'm not sure how to peer into its inner workings. Are there log files or something I can use?
Thanks!
PS - I've seen this question, but he seems to want to run celery from manage.py, not wsgi.
After much searching, the most complete answer I found for this question is here. These directions flesh out the skimpy official directions for daemonizing celeryd. I'll copy the gist here, but you should follow the link, because Michael has explained some parts in more detail.
The main idea is that you need scripts in three places:
/etc/init.d/celeryd
/etc/default/celeryd
myApp/settings.py
Settings.py appears to be the same as in development mode. So if that's already set up, there are four steps to shifting to production:
Download the daemon script since it's not included in the installation:
https://github.com/celery/celery/tree/3.0/extra/generic-init.d/
Put it in /etc/init.d/celeryd
Make a file in /etc/default/celeryd, and put the variables here into it:
http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#example-django-configuration
Start the script
This solved my problem.
I think the reason you are not getting any response from celery is that the celeryd server might not be running. You can check with ps -ef | grep celeryd. To figure out what the error is when trying to run celeryd, you might want to do the following.
In your settings.py file, set the path to the celery log file: CELERYD_LOG_FILE = <Path to the log file>
and, while running the celeryd server, specify the log level: manage.py celeryd -l DEBUG.
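Put together, a minimal sketch (the log path is a placeholder, and CELERYD_LOG_FILE is the old django-celery-style setting mentioned above):

# settings.py
CELERYD_LOG_FILE = "/var/log/celery/celeryd.log"

# then start the worker with verbose output:
#   python manage.py celeryd -l DEBUG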