django-celery seems to read its celery configuration from the current Django project's settings.py file, and, correct me if I am wrong, there is no way to override this behavior. The problem is that my celery configuration lives in a different file outside the Django project, and I need to somehow tell django-celery to use that file instead. How do I do this?
Generally, django-celery is used for one project (but many applications). Of course, you can make a symlink to the settings.py from one project to another and run django-celery as:
./manage.py celeryd --settings=path.to.symlink
but in my opinion a better decision would be to run celery as a standalone daemon with common settings, and use CELERY_IMPORTS to pull in tasks from any Django project.
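A minimal sketch of such a standalone config (the broker URL and module paths are placeholders):

# celeryconfig.py -- common settings shared across projects
BROKER_URL = 'amqp://guest:guest@localhost:5672//'

# Pull in task modules from any Django project available on the PYTHONPATH
CELERY_IMPORTS = (
    'project_a.app_one.tasks',
    'project_b.app_two.tasks',
)

When run standalone, celery reads a celeryconfig module by default, or you can point the worker at it explicitly with --config=celeryconfig.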
Reading the daemonization Celery documentation, if you're running Celery on Linux with systemd, you set it up with two files:
/etc/systemd/system/celery.service
/etc/conf.d/celery
I'm using Celery in a Django site with django-celery-beat, and the documentation is a little confusing on this point:
Example Django configuration
Django users now uses [sic] the exact same template as above, but make sure that the module that defines your Celery app instance also sets a default value for DJANGO_SETTINGS_MODULE as shown in the example Django project in First steps with Django.
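For reference, the module the docs are referring to is the celery.py from First Steps with Django; a minimal version looks roughly like this (proj is a placeholder project name):

# proj/celery.py
import os

from celery import Celery

# Set a default Django settings module for the 'celery' command-line program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()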
The docs don't just come out and say: put your daemonization settings in settings.py and it will all work out, etc. In another SO post, this user seems to have run into the same confusion, where the Django instructions imply you should use the init.d method.
Bonus points if you can answer whether it's possible to have Celery and RabbitMQ both configured alongside the Django instance (if that makes sense).
I'm guessing not for Celery, if only because the daemon variables are prefixed CELERYD_, while First Steps with Django says: "...all Celery configuration options must be specified in uppercase instead of lowercase, and start with CELERY_"
When starting celerybeat I get the following error:
Restarting celery periodic task scheduler
Stopping celerybeat... NOT RUNNING
Starting celerybeat...
Error: One or more models did not validate:
collections.collection: 'language' has a relation with model <class 'languages.models.Language'>, which has either not been installed or is abstract.
collections.translation: 'language' has a relation with model <class 'languages.models.Language'>, which has either not been installed or is abstract.
But the languages app has definitely been added to my Django settings; uwsgi and celery start up fine, and everything except celerybeat works as it should.
It's as if celerybeat is working off an old settings file, but that shouldn't be possible, should it? I have also recently moved my settings file.
Found the problem. I had earlier moved my settings files, but had not updated this in the celery config files. So the solution was to find the files:
celeryd
celerybeat
in /etc/default/
and change the path to point to where the settings files have been moved:
sudo nano celeryd
and edit the paths accordingly.
There are 3 ways to run a django application with gunicorn:
Standard gunicorn + wsgi (ref django doc)
gunicorn project.wsgi:application
Using gunicorn django integration (ref gunicorn doc and django doc):
python manage.py run_gunicorn
Using gunicorn_django command (ref gunicorn doc)
gunicorn_django [OPTIONS] [SETTINGS_PATH]
Django's documentation suggests using 1., which is not even listed as an option in Gunicorn's documentation.
Is there any best practice for running a Django app with gunicorn, and what are the foreseeable advantages/disadvantages of these different solutions?
Taking a glance at gunicorn's code, it looks like they all do pretty much the same thing: 2. seems to create a WSGI app using Django's internals, and 3. uses 2.
If that's the case, I don't understand the reason for not simply using 1. all the time, especially since a wsgi.py file is auto-created for you since Django 1.4; if that's true, maybe a simple documentation improvement should be suggested...
Also, best practices for gunicorn settings with Django would be welcome. Using 1., does it make sense to set some defaults in the wsgi file and avoid additional settings?
References:
Should I use django-gunicorn integration or wsgi? only concerns choices 1. and 3.; there's no hint about settings, and the answer gives no rationale.
Deploying Django with gunicorn and nginx gives some broader information, but is not strictly related and doesn't answer this question.
Django Gunicorn wsgi is about a "fourth" option: launching gunicorn -c configfile, where the config file points gunicorn at the Django settings.
Django WSGI and Gunicorn is just a bit confusing :) mixing up 1. and 3. Of course wsgi.py is used only with 1.
After checking things out, I'd say that the best way is using gunicorn + WSGI:
$ gunicorn project.wsgi:application
It's now confirmed in the gunicorn docs: if you run Django 1.4 or newer, it's highly recommended to simply run your application with the WSGI interface using the gunicorn command, as linked above.
It also avoids adding gunicorn as an installed app, which means gunicorn is not a requirement for testing your app; that can be useful from time to time.
About Settings
The Django settings file to be used can be passed through an environment variable, or customized in the wsgi.py file. I sometimes create several wsgi.py files if I have multiple settings (e.g. multiple websites) that have to run from the same project; see the Django docs for more info.
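A sketch of what such an extra entry point might look like (the file name and settings module are hypothetical):

# project/wsgi_site2.py -- second entry point for the same project
import os

from django.core.wsgi import get_wsgi_application

# Point this entry point at an alternate settings module
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings.site2")

application = get_wsgi_application()

You would then run gunicorn project.wsgi_site2:application to serve the second site.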
A one-liner solution from Carl's comment that does not require any new file:
DJANGO_SETTINGS_MODULE=project.settings.prod gunicorn project.wsgi:application
sounds like a nicer way (though I'll probably end up wrapping it in a small shell script to make it easy to "remember").
Gunicorn settings can be passed with -c settings_file, but I'm exploring other ways and will try to update this answer if I find any. Using environment variables seems a workaround, but only for limited cases.
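For what it's worth, the file passed with -c is itself plain Python; a minimal sketch (the values are just examples):

# gunicorn_conf.py -- passed as: gunicorn -c gunicorn_conf.py project.wsgi:application
bind = "127.0.0.1:8000"
workers = 3

# Environment variables for the app can be set here too
raw_env = ["DJANGO_SETTINGS_MODULE=project.settings.prod"]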
In particular it would be nice to get/share some settings between Django and gunicorn; the gunicorn documentation says:
Currently, only Paster applications have access to framework specific settings. If you have ideas for providing settings to WSGI applications or pulling information from Django's settings.py feel free to open an issue to let us know.
(Update: I haven't found any smarter way, but after all, env variables are enough for my most common cases.)
I guess using run_gunicorn is the way to go; it's also the simplest way to use it.
It's basically the same as using gunicorn project.wsgi:application, but it needs gunicorn to be added to INSTALLED_APPS so that Django recognizes the run_gunicorn command; therefore it's probably not the default way...
Using gunicorn_django is more or less deprecated, as the documentation also states here...
Found a solution that works both with manage.py (on the local machine) and with gunicorn.
Create a file that holds all the environment logic:
# mysite/settings/set_env.py
import os

_environment = os.getenv('environment', None)

if _environment == "production":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings.production")
elif _environment == "development":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings.development")
else:
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings.local")
and import this file in your manage.py and wsgi.py files; there's no need to change anything else in the gunicorn invocation.
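For instance, wsgi.py might then look roughly like this (a sketch following the module paths above):

# mysite/wsgi.py
# Importing set_env sets DJANGO_SETTINGS_MODULE before Django loads its settings
import mysite.settings.set_env  # noqa: F401

from django.core.wsgi import get_wsgi_application

application = get_wsgi_application()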
I'm new to Git. I need to set up Git to deploy a Django website to the production server. My question here is about the best way of doing this.
So far I only have a master branch. My problem is that the development environment is not equal to the production environment. How can I have the two environments (development and production) in Git? Should I use two new branches (Development and Production)? Please give me a clue on this.
Another question... when I finish pushing the code to the production server, I need to restart Gunicorn (which serves the Django website). How can I do this?
And the most important question... should I use Git for this, or do I have better options?
Best Regards,
The first thing you must settle is your project structure. Usually the difference between the development and production environments comes down to settings.py and urls.py, so why not separate those first? :) For example, you can have one main settings.py where you define all the default settings that are shared, and at the end of the file simply import from settings_prod.py and settings_dev.py, for example:
try:
    from settings_prod import *
except ImportError:
    pass

try:
    from settings_dev import *
except ImportError:
    pass
Then you can simply override any settings you want and have custom settings per environment (for example, installed apps). The same logic can be used for the urls.py file.
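A sketch of what such an override file could contain (the values are just examples):

# settings_dev.py -- development overrides; imported last, so these values win
DEBUG = True

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'dev.sqlite3',
    }
}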
Then you can simply avoid adding the *_dev files to the repo, and on the server side just check out the code from the repo and restart the HTTP server. I can't name the right tool to automate this offhand; sometimes a simple Python script can be the solution, e.g. one that watches whether a file's modification time has changed and, if so, runs the restart command for the HTTP server, as sketched below.
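A rough sketch of that idea (the watched path and restart command are placeholders):

# watch_and_restart.py
import os
import subprocess
import time

WATCHED_FILE = '/srv/mysite/mysite/wsgi.py'               # file to watch
RESTART_CMD = ['sudo', 'service', 'gunicorn', 'restart']  # restart command

last_mtime = os.path.getmtime(WATCHED_FILE)
while True:
    time.sleep(5)
    mtime = os.path.getmtime(WATCHED_FILE)
    if mtime != last_mtime:
        last_mtime = mtime
        subprocess.call(RESTART_CMD)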
Hope that helped.
Ignas
You can follow this branching model: http://nvie.com/posts/a-successful-git-branching-model/
And git is OK, but use Fabric for deployment.
I've recently been playing around with Django and Celery. One annoying thing during development is that I have to restart the celery daemon each time I modify a task. When developing, I usually like to use manage.py runserver, which automatically reloads the Django framework on modifications to my apps.
Is there a way to add a hook to the reloading process that runserver does so that it automatically restarts the celery daemon I have running?
Alternatively, does celery have a similar monitor-and-reload-on-change mode that I should be using for development?
Django-supervisor works very well for this purpose. You can have it start the Django server, Celery, and anything else you need, and have different configurations for development and production servers. It also knows to reload the celery daemon when your code changes.
https://github.com/rfk/django-supervisor
I believe you can set CELERY_ALWAYS_EAGER to true.
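With that setting, tasks are executed locally and synchronously instead of being sent to a worker, so there is no daemon to restart during development. In settings.py:

# Run tasks eagerly (in-process, synchronously) while developing
CELERY_ALWAYS_EAGER = True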
Yes. Django provides an autoreload hook, which can be used to restart other scripts.
Here is a simple management command which prints a message on reload:
import subprocess  # not used below, but handy if reload() should restart other processes

from django.core.management.base import BaseCommand
from django.utils import autoreload


def reload():
    print('Code changed. Auto reloading...')


class Command(BaseCommand):
    def handle(self, *args, **options):
        autoreload.main(reload)
Now you can save this as reload.py inside an app's management/commands/ directory and run it with python manage.py reload. A management command to reload celery workers is available here.
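As a sketch, reload() could also restart the worker itself via subprocess (the service name here is an assumption):

import subprocess

def reload():
    print('Code changed. Restarting celery...')
    # Hypothetical: restart a celery worker managed as an OS service
    subprocess.call(['sudo', 'service', 'celeryd', 'restart'])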
Celery doesn't have any feature to reload code or to restart automatically when the code changes, so you have to restart it manually.
There isn't a way to add a hook, and I don't think it's worthwhile to edit Django's source code just to perform a restart.
Personally, while I'm developing I prefer to watch celery's color-decorated shell output rather than tailing the logs; it's more readable.
Celery 2.5 has an experimental runtime option, --autoreload, that could be used for this purpose too. There's more detail in the release notes. That being said, I think django-supervisor (via @Lee Semel) looks like the better way of doing things. I thought I would post this alternative here in case other readers do not want to configure another app for asynchronous processing.