I'm trying to set up django-celery to store task results in the database.
I set:
CELERY_RESULT_BACKEND = 'djcelery.backends.database.DatabaseBackend'
Then I synced and migrated the database (no errors).
Celery is working and tasks get processed (I can get the results), but the admin shows no tasks. The database contains two tables, celery_taskmeta and djcelery_taskmeta; the first one holds the results, and the second one is what the admin displays. Does anyone have insight into how to configure this properly?
Check the docs: when you use djcelery, set CELERY_RESULT_BACKEND = "database", or don't even bother writing this line, because djcelery sets it by default.
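For reference, a minimal djcelery setup in settings.py might look like this (a sketch; whether you keep the explicit backend line is up to you, since "database" is already djcelery's default):

# settings.py (sketch)
import djcelery
djcelery.setup_loader()

INSTALLED_APPS = (
    # ...
    'djcelery',
)

# optional: djcelery already defaults to the database backend
CELERY_RESULT_BACKEND = 'database'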
The result is stored in the celery_taskmeta table; you have to register djcelery.models.TaskMeta in the admin yourself:
# in some admin.py contained by an app listed after `djcelery` in `INSTALLED_APPS`,
# or directly in djcelery/admin.py
from django.contrib import admin
from djcelery.models import TaskMeta

class TaskMetaAdmin(admin.ModelAdmin):
    readonly_fields = ('result',)

admin.site.register(TaskMeta, TaskMetaAdmin)
A related question with the right answer is here.
You should actually run
python manage.py celery worker -E
and
python manage.py celerycam
After that, task results will be displayed in the admin (Djcelery › Tasks).
Moving the config update, e.g.
app.conf.update(CELERY_RESULT_BACKEND='djcelery.backends.database.DatabaseBackend')
to the end of the celery.py file did the trick for me.
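For context, the celery.py in question typically looks something like this (a sketch; `proj` is a placeholder project name). The point is that the update has to come after config_from_object, otherwise it gets overwritten:

# proj/celery.py (sketch)
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

# placed last, after config_from_object, so it takes effect
app.conf.update(CELERY_RESULT_BACKEND='djcelery.backends.database.DatabaseBackend')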
I've added a field to my Django model. Since I already had some instances of this model, they do not include this field, resulting in a Django error page. I'm not able to manually delete those instances from the admin panel.
From searching, python manage.py flush seems to be the way to reset the whole db, but I would like to delete only this particular model's data.
Is there a way to do this from the CLI?
Solution:
For the original question, I accept @IainShelvington's answer:
python manage.py shell
# then
from app_name.models import model_name
model_name.objects.all().delete()
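If you only want to remove the instances that predate the new field rather than everything, a filtered delete from the same shell also works (new_field is a hypothetical name for the column you added, assuming it is nullable):

from app_name.models import model_name

# delete only rows where the newly added field was never populated
model_name.objects.filter(new_field__isnull=True).delete()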
Additionally, thanks to @AbdulAzizBarkat for providing:
python manage.py migrate <app_name> zero
which unapplies all of the app's migrations (dropping its tables); running migrate again afterwards recreates them empty.
Reading the Celery daemonization documentation, if you're running Celery on Linux with systemd, you set it up with two files:
/etc/systemd/system/celery.service
/etc/conf.d/celery
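The latter is just an environment file; the template in the Celery docs looks roughly like this (values here are illustrative):

# /etc/conf.d/celery (values illustrative)
CELERYD_NODES="worker1"
CELERY_BIN="/usr/local/bin/celery"
CELERY_APP="proj"
CELERYD_OPTS="--time-limit=300 --concurrency=8"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"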
I'm using Celery in a Django site with django-celery-beat, and the documentation is a little confusing on this point:
Example Django configuration
Django users now uses [sic] the exact same template as above, but make sure that the module that defines your Celery app instance also sets a default value for DJANGO_SETTINGS_MODULE as shown in the example Django project in First steps with Django.
The docs don't just come out and say: put your daemonization settings in settings.py and it will all work out. From another SO post, this user seems to have run into the same confusion, where the Django instructions imply you use the init.d method.
Bonus points if you can answer whether it's possible to run Celery and RabbitMQ both configured and managed alongside the Django instance (if that makes sense).
I'm thinking not for Celery, if only because the daemon variables use a CELERYD_ prefix, and First Steps with Django says: "...all Celery configuration options must be specified in uppercase instead of lowercase, and start with CELERY_"
In my Django project I have to check whether a database/table exists before starting the application, and I don't know where it's best to put this check.
I tried adding a try/except block to the login function in views.py, but I'm looking for a more elegant and effective solution.
Thanks in advance.
To check before the app starts, you can use the AppConfig.ready() function. This gets called before the application starts, but after your project has started. If you want to check before the project starts, then you will have to hook into the method you use to start your project, e.g. within wsgi.py or even manage.py.
See the AppConfig.ready() docs. Note that the docs specifically say:
avoid interacting with the database in your ready() implementation.
But your use case may justify doing this.
The ready() function is called when you run commands from manage.py, e.g.
manage.py shell / manage.py migrate / etc.
It won't get called on each visit to your site, of course. If you want to run a DB check in response to a visitor action, then that code should go into your view.
You put the ready() function in your apps.py:
from django.apps import AppConfig
from django.db import connection

class MyAppConfig(AppConfig):
    name = 'MyApp'

    def ready(self):
        print("I am the ready function and the database test code goes here")
        # put your test code here, e.g. you could read all the table names from SQLite
        with connection.cursor() as cursor:
            cursor.execute("SELECT name FROM sqlite_master;")
            rows = cursor.fetchall()
            print(rows)
You can write your own custom Django admin commands and run them with python manage.py your_command right before calling python manage.py runserver. There are detailed examples in the official documentation.
One of the advantages of using commands is that they are fairly easy to test. Django provides django.core.management.call_command, which lets you functionally test your command.
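A minimal sketch of such a command (the check_db name, the file location, and the table name are all hypothetical):

# myapp/management/commands/check_db.py (hypothetical path)
from django.core.management.base import BaseCommand, CommandError
from django.db import connection

class Command(BaseCommand):
    help = "Verify that expected tables exist before starting the server"

    def handle(self, *args, **options):
        # list all tables visible on the configured database backend
        tables = connection.introspection.table_names()
        if 'myapp_mymodel' not in tables:  # hypothetical table name
            raise CommandError("table 'myapp_mymodel' is missing")
        self.stdout.write("database check passed")

You could then run python manage.py check_db && python manage.py runserver.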
Every time I wipe the database of my Django app during testing (or when cloning or deploying), I have to go into /admin and set up permissions and groups. Where would I put the code that would populate the DB with them, and what would it look like?
For this you can use fixtures.
For example:
python manage.py dumpdata auth > fixtures/auth.json
This will store all models of the 'auth' app (users, groups, relations) into auth.json.
After deployment you can use the following command to load it:
python manage.py loaddata fixtures/auth.json
This will restore your previous 'auth' state.
It might also be worth switching to South, a well-known third-party Django app for migrating databases instead of recreating them.
You can provide fixtures with the initial required data, and it will be inserted automatically when you run syncdb. See the docs.
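For example, a fixture like this saved as myapp/fixtures/initial_data.json is loaded automatically on syncdb (the group name here is illustrative):

[
  {
    "model": "auth.group",
    "pk": 1,
    "fields": {
      "name": "editors",
      "permissions": []
    }
  }
]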
I'm trying to sync data between two django installations (production and testing). I'm doing this using ./manage.py dumpdata --natural on production, then ./manage.py loaddata into a freshly syncdb'ed database on testing.
Everything was working fine until I added a new custom permission. The production syncdb loaded this new permission in a different order (with a different primary key) than a fresh syncdb on an empty database does, so it ends up with a different ID. Despite using natural keys, when I attempt to load the data I get this error as soon as the first out-of-order permission object is loaded:
IntegrityError: duplicate key value violates unique constraint "auth_permission_content_type_id_codename_key"
The easiest way I can think of to fix this is to remove all data from every table in the testing installation -- that is, to use syncdb just to create tables, and not to also load initial data. But syncdb doesn't let you skip the initial data/signals step. Short of enumerating every model or table name explicitly, how can I remove all initial data after calling syncdb? Or is there a way to create just the empty tables without using syncdb?
./manage.py flush isn't what I'm after -- it reloads initial data and triggers syncdb signals.
According to the help for the flush command (I'm using Django 1.3.1), the SQL that is executed is the same SQL obtained from ./manage.py sqlflush, after which the initial data fixtures are reinstalled.
$ python manage.py help flush
Usage: manage.py flush [options]
Executes ``sqlflush`` on the current database.
To get the same data wiping capabilities minus the fixture loading you can obtain the SQL by calling ./manage.py sqlflush and then execute that SQL using Django's built-in support for executing arbitrary SQL:
from django.core.management import call_command, setup_environ
from your_django_project import settings
setup_environ(settings)

from django.db import connection
from StringIO import StringIO

def main():
    # 'call' manage.py sqlflush and capture the SQL it prints
    command_output = StringIO()
    call_command("sqlflush", stdout=command_output)
    command_output.seek(0)
    flush_sql = command_output.read()
    # execute the sql
    # from: https://docs.djangoproject.com/en/dev/topics/db/sql/#executing-custom-sql-directly
    cursor = connection.cursor()
    cursor.execute(flush_sql)
    print "db has been reset"

if __name__ == '__main__':
    main()
This has the added benefit that you can modify the SQL from ./manage.py sqlflush before execution to avoid wiping tables that you might want to leave intact.
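For instance, before executing you could drop the statements that touch tables you want to preserve (a rough sketch that assumes sqlflush emits one statement per line; django_content_type is just an example of a table to keep):

# keep django_content_type intact, for example
for statement in flush_sql.splitlines():
    if statement.strip() and 'django_content_type' not in statement:
        cursor.execute(statement)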
Also, according to the current Django docs, Django 1.5 adds a new ./manage.py flush --no-initial-data option that resets the data without loading the initial data fixtures.
For Django <= 1.4, you can use the reset management command to drop and recreate an app's tables, or sqlreset to just print the SQL it would run:
./manage.py sqlreset myapp1 myapp2