I have a setup inside a virtual environment:
Django-nonrel-1.6
mongodb-engine
djangotoolbox
Everything works fine; the only problem is when running tests. Every time Django tries to flush the database after running a test function, it throws this error:
Traceback (most recent call last):
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/test/testcases.py", line 187, in __call__
self._post_teardown()
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/test/testcases.py", line 796, in _post_teardown
self._fixture_teardown()
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/test/testcases.py", line 889, in _fixture_teardown
return super(TestCase, self)._fixture_teardown()
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/test/testcases.py", line 817, in _fixture_teardown
inhibit_post_syncdb=self.available_apps is not None)
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 159, in call_command
return klass.execute(*args, **defaults)
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/core/management/base.py", line 285, in execute
output = self.handle(*args, **options)
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/core/management/base.py", line 415, in handle
return self.handle_noargs(**options)
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/core/management/commands/flush.py", line 79, in handle_noargs
six.reraise(CommandError, CommandError(new_msg), sys.exc_info()[2])
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/core/management/commands/flush.py", line 67, in handle_noargs
savepoint=connection.features.can_rollback_ddl):
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/db/transaction.py", line 251, in __enter__
"The outermost 'atomic' block cannot use "
CommandError: Database test_dev_db couldn't be flushed. Possible reasons:
* The database isn't running or isn't configured correctly.
* At least one of the expected database tables doesn't exist.
* The SQL was invalid.
Hint: Look at the output of 'django-admin.py sqlflush'. That's the SQL this command wasn't able to run.
The full error: The outermost 'atomic' block cannot use savepoint = False when autocommit is off.
So each test case passes or fails as it's supposed to, but I also get this annoying error.
I did run django-admin.py sqlflush --settings=dev_settings --pythonpath=., which flushed my development database just fine, with no errors.
In a couple of test functions I checked a few models pulled from the database, and objects seem to be flushed and recreated just fine, which is why the actual test cases aren't affected.
I went through the whole traceback and I roughly understand why it happens, but I cannot figure out how to deal with it. Any help is appreciated.
Edit
Just tried running the tests with Django-nonrel-1.5; there were no problems. It seems like a bug in the 1.6 version.
Use SimpleTestCase or a custom TestCase
from django.core.management import call_command
from django.test import TestCase

class CustomTestCase(TestCase):
    def _fixture_teardown(self):
        # Bypass the stock flush command, whose outermost atomic block
        # raises the error above, and call a custom flush instead.
        for db_name in self._databases_names(include_mirrors=False):
            call_command('custom_flush', verbosity=0, interactive=False,
                         database=db_name, skip_validation=True,
                         reset_sequences=False,
                         allow_cascade=self.available_apps is not None,
                         inhibit_post_syncdb=self.available_apps is not None)
Since the problem is the transaction.atomic block inside the built-in flush command, you may have to write your own flush command; a minimal sketch follows.
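For reference, here is one possible shape for such a custom_flush command; it is an assumption-laden sketch, not the stock Django command. It reuses Django 1.6's sql_flush() but skips the atomic block; whether a non-relational backend like mongodb-engine yields usable statements from sql_flush() is something you'd need to verify.
# myapp/management/commands/custom_flush.py -- hypothetical sketch.
# Runs the same statements the built-in flush would, but without the
# outer transaction.atomic block that raises the error above.
from django.core.management.base import NoArgsCommand
from django.core.management.sql import sql_flush
from django.db import DEFAULT_DB_ALIAS, connections

class Command(NoArgsCommand):
    help = "Flush the database without an outer atomic block."

    def handle_noargs(self, **options):
        connection = connections[options.get('database', DEFAULT_DB_ALIAS)]
        # The statement list the stock flush command would generate.
        statements = sql_flush(self.style, connection, only_django=True)
        cursor = connection.cursor()
        for sql in statements:
            cursor.execute(sql)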
Related
I'm trying to run a test script, following this Django doc (for the version being used here). It quickly fails with a long stack trace; I've selected what looks like the culprit.
File "/home/user11/app-master/en/lib/python3.8/site-packages/django/db/migrations/operations/special.py", line 190, in database_forwards
self.code(from_state.apps, schema_editor)
File "/home/user11/app-master/app/colegiados/migrations/0002_auto_20200128_1646.py", line 185, in migrate
add_sistema_entidade_e_orgao_composicao(apps, sistema)
File "/home/user11/app-master/app/colegiados/migrations/0002_auto_20200128_1646.py", line 16, in add_sistema_entidade_e_orgao_composicao
user = get_user(apps)
File "/home/user11/app-master/app/colegiados/migrations/0002_auto_20200128_1646.py", line 7, in get_user
return User.objects.filter(
File "/home/user11/app-master/en/lib/python3.8/site-packages/django/db/models/query.py", line 318, in __getitem__
return qs._result_cache[0]
IndexError: list index out of range
As a workaround, I modified Django's query.py:
if qs._result_cache:
    return qs._result_cache[0]
else:
    return ""
Which worked, until the next error:
File "/home/user11/app-master/en/lib/python3.8/site-packages/django/db/migrations/operations/special.py", line 190, in database_forwards
self.code(from_state.apps, schema_editor)
File "/home/user11/app-master/app/core/migrations/0016_auto_20201120_0934.py", line 106, in migrate
sistema = Sistema.objects.get(nom_alias=sistema)
File "/home/user11/app-master/en/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/user11/app-master/en/lib/python3.8/site-packages/django/db/models/query.py", line 439, in get
raise self.model.DoesNotExist(
__fake__.DoesNotExist: Sistema matching query does not exist.
Now I'm stuck. The test database gets created with all the tables up to the point where these migrations fail, with the vast majority lacking any data. Among the empty ones is the table that this last error seems to refer to.
Note that I'm not the developer; I had no hand in creating the DB being used nor any of the migrations. I strongly suspect the database (Postgres 12) has to be created/restored from a "minimal" backup before the migrations can work properly. Could that be the reason for these failures? If so, what are my options for running a Django test that doesn't alter the deployed database? Any options for running the test as a query block and then doing a rollback, since it's using Postgres?
After talking it over with the rest of the team, it was decided to reset all migrations to ensure this type of problem no longer happens. Maybe not the "expected" solution, but a decent one.
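As an aside, data migrations can also be written defensively so they no-op on an empty (e.g. test) database. A hedged sketch; the app label, model name and alias value are guesses taken from the tracebacks above, not the project's actual code:
# Defensive version of a data-migration step: skip quietly when the
# seed row is missing instead of raising Sistema.DoesNotExist.
def migrate(apps, schema_editor):
    Sistema = apps.get_model('core', 'Sistema')
    sistema = Sistema.objects.filter(nom_alias='some-alias').first()
    if sistema is None:
        return  # fresh/test database with no seed data
    # ... proceed with the data changes using `sistema` ...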
I've been using Celery successfully with a Django site on Heroku, but it's just started generating the error below, which stops it running. It looks like it's having trouble with Postgres, but I'm stumped as to how to fix it, given that it's Celery rather than my code that's having the problem (I assume...).
I'm using CloudAMQP as a broker, and my Django settings include:
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
Here's the traceback from the Heroku logs:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.5/site-packages/kombu/utils/__init__.py", line 323, in __get__
return obj.__dict__[self.__name__]
KeyError: 'scheduler'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.OperationalError: SSL SYSCALL error: Bad file descriptor
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.5/site-packages/billiard/process.py", line 292, in _bootstrap
self.run()
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 553, in run
self.service.start(embedded_process=True)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 470, in start
humanize_seconds(self.scheduler.max_interval))
File "/app/.heroku/python/lib/python3.5/site-packages/kombu/utils/__init__.py", line 325, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 512, in scheduler
return self.get_scheduler()
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 507, in get_scheduler
lazy=lazy)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/utils/imports.py", line 53, in instantiate
return symbol_by_name(name)(*args, **kwargs)
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 151, in __init__
Scheduler.__init__(self, *args, **kwargs)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 185, in __init__
self.setup_schedule()
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 158, in setup_schedule
self.install_default_entries(self.schedule)
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 251, in schedule
self._schedule = self.all_as_schedule()
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 164, in all_as_schedule
for model in self.Model.objects.enabled():
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/query.py", line 258, in __iter__
self._fetch_all()
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/query.py", line 1074, in _fetch_all
self._result_cache = list(self.iterator())
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/query.py", line 52, in __iter__
results = compiler.execute_sql()
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql
cursor.execute(sql, params)
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/app/.heroku/python/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
django.db.utils.OperationalError: SSL SYSCALL error: Bad file descriptor
I've resolved the issue now... there was a line of my Django code which had caused an Internal Server Error in the past. I think that, early on in Django starting up, it was trying to access a model before the migrations that created that model had run.
I'd resolved that, but noticed these "SSL SYSCALL error"s started around the same time. So I removed that line of code, and Celery has started up again.
It could be a coincidence, and I don't understand why this fixed things.
Ideally I'd still like to understand what the error above actually means so I'd have a better chance of fixing such a thing in the future.
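For anyone hitting something similar, the anti-pattern described above, querying a model at import time, looks roughly like this; the model and function names are invented for illustration:
from myapp.models import Category  # hypothetical model

# Problematic: this query runs as soon as the module is imported, which
# can be before migrations have created the table, and a connection
# opened this early can end up shared into forked worker processes.
DEFAULT_CATEGORY = Category.objects.first()

# Safer: defer the query until it is actually needed.
def get_default_category():
    return Category.objects.first()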
I had the same error and I solved it.
In my case the problem was a custom AppConfig in one of my Django apps.
My folder structure:
my_django_app/
    __init__.py
    apps.py
    ...
I have a file my_django_app/__init__.py like this:
default_app_config="my_django_app.apps.MyAppConfig"
and the file my_django_app/apps.py like this:
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'my_django_app'
    verbose_name = "djangoAppConfig"

    def ready(self):
        ...  # custom logic
After removing the line default_app_config = "my_django_app.apps.MyAppConfig", Celery works again.
If you have something like this, remove your custom AppConfig and use a Celery periodic task instead (see the sketch below). django_celery_beat does something strange with AppConfig, so a custom AppConfig can trigger this problem.
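As a hedged illustration of that advice, the database-touching logic can move out of ready() and into a task that beat schedules; the module and task name here are assumptions:
# tasks.py -- hypothetical home for the logic that used to live in
# MyAppConfig.ready(); schedule it with django_celery_beat instead.
from celery import shared_task

@shared_task
def startup_maintenance():
    # the custom logic formerly in ready()
    ...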
My Python requirements:
Django==1.11.29
celery==4.4.7
django_celery_beat==1.6.0
I know this was asked years ago, but this was about the only question out there which related to my problem.
In my case, it was down to CONN_MAX_AGE being set to 600. I reduced it to 0, so that connections are closed at the end of each request instead of being reused after the database may have dropped them.
From the docs:
Persistent connections avoid the overhead of re-establishing a connection to the database in each request. They’re controlled by the CONN_MAX_AGE parameter which defines the maximum lifetime of a connection. It can be set independently for each database.
The default value is 0, preserving the historical behavior of closing the database connection at the end of each request. To enable persistent connections, set CONN_MAX_AGE to a positive integer of seconds. For unlimited persistent connections, set it to None.
Django opens a connection to the database when it first makes a database query. It keeps this connection open and reuses it in subsequent requests. Django closes the connection once it exceeds the maximum age defined by CONN_MAX_AGE or when it isn’t usable any longer.
In detail, Django automatically opens a connection to the database whenever it needs one and doesn’t have one already — either because this is the first connection, or because the previous connection was closed.
At the beginning of each request, Django closes the connection if it has reached its maximum age. If your database terminates idle connections after some time, you should set CONN_MAX_AGE to a lower value, so that Django doesn’t attempt to use a connection that has been terminated by the database server. (This problem may only affect very low traffic sites.)
At the end of each request, Django closes the connection if it has reached its maximum age or if it is in an unrecoverable error state. If any database errors have occurred while processing the requests, Django checks whether the connection still works, and closes it if it doesn’t. Thus, database errors affect at most one request; if the connection becomes unusable, the next request gets a fresh connection.
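For reference, the setting lives in the DATABASES dict; everything here except CONN_MAX_AGE is a placeholder:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',  # placeholder
        # 0 closes the connection at the end of each request (the
        # default); a positive number of seconds enables persistent
        # connections; None keeps connections open indefinitely.
        'CONN_MAX_AGE': 0,
    }
}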
I am trying to run Selenium tests on a Django project (1.5.4), which uses South. I think South is conflicting with my tests when I try to inject initial data with fixtures, but I'm not sure why; I appreciate any help.
According to the Django documentation, fixtures are supposed to be loaded after the first syncdb and then all migrations are applied.
Question 1) Does this take into account South migrations? Do I need to run those separately somehow?
The error I get when I run my tests makes it seem like my South migrations are still present in the test database after the first test...but I thought each test has its own database (and migrations / fixtures) created? The first test passes / fails, but each subsequent test raises this IntegrityError:
IntegrityError: Problem installing fixture '<PROJECT_PATH>/fixtures/toy_course.json': Could not load contenttypes.ContentType(pk=8): (1062, "Duplicate entry 'south-migrationhistory' for key 'app_label'")
This South documentation and SO question seem to indicate that I need to override some type of forwards method in order to get fixtures working, but I'm not entirely sure how to apply that to a testing situation instead of production (or if that is the solution I need).
Question 2) Am I supposed to override forwards in my test setup? Where would I do it?
My relevant test code:
from django.conf import settings
from selenium import webdriver
from functional_tests.test import SeleniumTestCase

class Resources(SeleniumTestCase):
    fixtures = ['toy_course.json']

    def setUp(self):
        self.browser = webdriver.Chrome(settings.SELENIUM_WEBDRIVER)
        self.browser.implicitly_wait(3)

    def tearDown(self):
        self.browser.quit()

    def test_main_page_renders_correctly(self):
        """
        User sees a properly formatted main page
        """
        self.open('/RDB/')
        h3_headers = self.browser.find_elements_by_tag_name('h3')
        self.assertIn(
            'Complete List of Resources',
            [header.text for header in h3_headers])
        self.assertTrue(self.check_exists_by_id('main_table'))
        self.assertTrue(self.check_exists_by_id('searchDiv'))
        self.assertTrue(self.check_exists_by_class_name('tablesorter'))
Thanks!
UPDATE
So per Alex's suggestion below and this South doc, I added this line to my settings.py:
SOUTH_TESTS_MIGRATE = False
But I am now getting 8 of 8 Errors (before I was getting 1 pass/fail on the first test, and then 7 Errors). The full error for a single test is below:
======================================================================
ERROR: test_table_sorts_on_click (functional_tests.tests.main_resources.Resources)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/test/testcases.py", line 259, in __call__
self._pre_setup()
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/test/testcases.py", line 479, in _pre_setup
self._fixture_setup()
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/test/testcases.py", line 518, in _fixture_setup
**{'verbosity': 0, 'database': db_name, 'skip_validation': True})
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/core/management/__init__.py", line 161, in call_command
return klass.execute(*args, **defaults)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/core/management/base.py", line 255, in execute
output = self.handle(*args, **options)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/core/management/commands/loaddata.py", line 193, in handle
obj.save(using=using)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/core/serializers/base.py", line 165, in save
models.Model.save_base(self.object, using=using, raw=True)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/db/models/base.py", line 626, in save_base
rows = manager.using(using).filter(pk=pk_val)._update(values)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/db/models/query.py", line 605, in _update
return query.get_compiler(self.db).execute_sql(None)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 1014, in execute_sql
cursor = super(SQLUpdateCompiler, self).execute_sql(result_type)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 840, in execute_sql
cursor.execute(sql, params)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 122, in execute
six.reraise(utils.IntegrityError, utils.IntegrityError(*tuple(e.args)), sys.exc_info()[2])
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 120, in execute
return self.cursor.execute(query, args)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/MySQLdb/cursors.py", line 201, in execute
self.errorhandler(self, exc, value)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
IntegrityError: Problem installing fixture '/<PATH TO PROJECT>/RDB/fixtures/toy_course.json': Could not load contenttypes.ContentType(pk=8): (1062, "Duplicate entry 'south-migrationhistory' for key 'app_label'")
The command I ran:
$ python manage.py test functional_tests
I'm not quite sure if I made the problem better, worse, or the same, but I seem to be more in line with the documentation...
Thanks!
UPDATE #2 -- with solution
A couple of other pages helped me figure it out (in addition to Alex's pointer to the South doc). First, this person had a similar issue and solved it with the SOUTH_TESTS_MIGRATE = False setting, so half my solution was to include that.
The second half of my solution was to fix my fixture file. I was dumping everything into my fixture with:
$ python manage.py dumpdata > RDB/fixtures/toy-course.json
This is, apparently, a bad way to make fixtures when using South, because it also dumps the South migration tables into the fixture. The post above shows the blogger using app-specific fixtures (which are also discussed in this SO post), and that was the key to getting my fixtures to work. The Django docs on fixtures do show the optional parameters for dumping just one app, but I didn't know that ignoring them would cause South to conflict. So the second half of my solution was to make my fixture app-specific:
$ python manage.py dumpdata RDB > RDB/fixtures/toy-course.json
And my tests now run fine (slow, but probably a different issue)!
Your test database is created using South migrations by default. Set SOUTH_TESTS_MIGRATE = False in your settings.py; quoting the docs:
If this is False, South’s test runner integration will make the test
database be created using syncdb, rather than via migrations (the
default).
I have a function that reads data from a website, processes it, and then loads it into MongoDB. When I run it without threading it works fine, but as soon as I set up Celery tasks that just call this one function I frequently get the following error: "OperationFailure: database error: unauthorized db:dbname lock type:-1"
It's somewhat odd, because if I run the non-Celery version in multiple terminals I do not get this error at all.
I suspect it has something to do with there not being an open connection to Mongo, although in my code I'm opening one up right before every Mongo call.
The exact exception is below:
Task twitter[a974bfcc-d6ca-4baf-b36f-cae9143ce2d9] raised exception: OperationFailure(u'database error: unauthorized db:data lock type:-1 client:68.193.49.9',)
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/celery/execute/trace.py", line 36, in trace
return cls(states.SUCCESS, retval=fun(*args, **kwargs))
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/celery/app/task/__init__.py", line 232, in __call__
return self.run(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/celery/app/__init__.py", line 172, in run
return fun(*args, **kwargs)
File "/djangoblog/network/tasks.py", line 40, in twitter
n_twitter.GetTweetsTwitter(user)
File "/djangoblog/network/twitter.py", line 255, in GetTweetsTwitter
id = SaveTweet(user, network, tweet)
File "/djangoblog/network/twitter.py", line 150, in SaveTweet
if mmo.Moment.objects(user=user.id,source_id=id,network=network.id).count() == 0:
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/mongoengine/queryset.py", line 933, in count
return self._cursor.count(with_limit_and_skip=True)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/mongoengine/queryset.py", line 563, in _cursor
self._cursor_obj = self._collection.find(self._query,
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/mongoengine/queryset.py", line 493, in _collection
if self._collection_obj.name not in db.collection_names():
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pymongo/database.py", line 361, in collection_names
names = [r["name"] for r in results]
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pymongo/cursor.py", line 703, in next
if len(self.__data) or self._refresh():
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pymongo/cursor.py", line 666, in _refresh
self.__uuid_subtype))
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pymongo/cursor.py", line 628, in __send_message self.__tz_aware)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pymongo/helpers.py", line 101, in _unpack_response error_object["$err"])
OperationFailure: database error: unauthorized db:data lock type:-1 client:68.193.49.9
Sorry for the formatting, but if you look at the line that starts with mmo.Moment, there's a connection being opened right before that call.
Doing a bit of research, it looks as if it has something to do with the way threading is handled in PyMongo (http://api.mongodb.org/python/1.5.1/faq.html#how-does-connection-pooling-work-in-pymongo). I may need to start closing the connections, but I'd expect MongoEngine to be doing this...
This is likely due to the fact that you are not calling db.authenticate() when you start the new connection and are using auth on MongoDB.
Regarding the closing of threads, I would recommend making sure you are using connection pooling and letting the driver manage the pools (calling close() or similar manually can lead to a lot of pain).
For more info see the note in the pymongo documentation about using authenticate() in a multi-threaded environment.
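A minimal sketch of that per-connection authentication, using the pymongo 1.x-era API the traceback shows; the host, database name and credentials are placeholders:
# Authenticate on each new connection before issuing queries.
from pymongo import Connection  # pymongo 1.x/2.x API, per the traceback

connection = Connection('localhost', 27017)
db = connection['data']  # db name taken from the error message
if not db.authenticate('username', 'password'):  # placeholders
    raise RuntimeError('MongoDB authentication failed')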
I'm trying to set up Celery and Django, but the celery_taskmeta table is not being created.
I've followed numerous (recent) tutorials, added djcelery and djkombu to my INSTALLED_APPS, added the BROKER_TRANSPORT = "djkombu.transport.DatabaseTransport" line to my settings, etc.
I can run the daemon just fine, and it will execute tasks, but it spits out this traceback at the end:
2011-08-05 16:21:16,231: ERROR/MainProcess] Task slate.modules.filebrowser.tasks.gen_thumb_task[0afc564b-cc54-4f4c-83f5-6db56fb23b76] raised exception: DatabaseError('no such table: celery_taskmeta',)
Traceback (most recent call last):
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/worker/job.py", line 107, in execute_safe
return self.execute(*args, **kwargs)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/worker/job.py", line 125, in execute
return super(WorkerTaskTrace, self).execute()
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/execute/trace.py", line 79, in execute
retval = self._trace()
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/execute/trace.py", line 93, in _trace
r = handler(trace.retval, trace.exc_type, trace.tb, trace.strtb)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/worker/job.py", line 140, in handle_success
self.task.backend.mark_as_done(self.task_id, retval)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/backends/base.py", line 54, in mark_as_done
return self.store_result(task_id, result, status=states.SUCCESS)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/backends/base.py", line 194, in store_result
return self._store_result(task_id, result, status, traceback, **kwargs)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/djcelery/backends/database.py", line 20, in _store_result
traceback=traceback)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/djcelery/managers.py", line 36, in _inner
return fun(*args, **kwargs)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/djcelery/managers.py", line 154, in store_result
"traceback": traceback})
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/djcelery/managers.py", line 78, in update_or_create
return self.get_query_set().update_or_create(**kwargs)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/djcelery/managers.py", line 62, in update_or_create
obj, created = self.get_or_create(**kwargs)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/django/db/models/query.py", line 378, in get_or_create
return self.get(**lookup), False
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/django/db/models/query.py", line 344, in get
num = len(clone)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/django/db/models/query.py", line 82, in __len__
self._result_cache = list(self.iterator())
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/django/db/models/query.py", line 273, in iterator
for row in compiler.results_iter():
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 680, in results_iter
for rows in self.execute_sql(MULTI):
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 735, in execute_sql
cursor.execute(sql, params)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site- packages/django/db/backends/util.py", line 34, in execute
return self.cursor.execute(sql, params)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site- packages/django/db/backends/sqlite3/base.py", line 234, in execute
return Database.Cursor.execute(self, query, params)
DatabaseError: no such table: celery_taskmeta
So how the hell do I get this table created during syncdb?
The problem here is actually that South manages the djcelery tables. You need to migrate djcelery to its new schema. If you upgraded djcelery from an earlier version and already have a set of tables installed, you need to do a fake migration first:
python manage.py migrate djcelery 0001 --fake
python manage.py migrate djcelery
I had the same problems before but this fixed it.
I was also getting the following error:
DatabaseError: no such table: djkombu_queue
After looking into it a bit further, I believe the correct way to solve this problem (pulled from here) is to add the following to INSTALLED_APPS:
INSTALLED_APPS = ('djcelery.transport', )
Adding kombu.transport.django felt incorrect.
Ran into the exact same issue, fresh install. Downgrading celery and django-celery to 2.2.7 and rerunning syncdb solved it (for the interim, anyway).
I was getting a similar error:
DatabaseError: no such table: djkombu_queue
In my case, I needed to add a Django app from a related technology to the INSTALLED_APPS setting: kombu.transport.django.
After that, I reran syncdb and everything was working. In your case, maybe add something in the celery egg to the path.
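Concretely, the change described in these answers looks something like this in settings.py (the surrounding apps are placeholders), followed by rerunning syncdb:
INSTALLED_APPS = (
    # ... your existing apps ...
    'djcelery',
    'kombu.transport.django',  # provides the djkombu_* tables
)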
I got this error while running manage.py dumpdata. I tried two different 2.2.x versions of the celery and django-celery packages with a MySQL database. In my case, upgrading to 2.2.7 didn't fix the issue. What did work was advice found in this GitHub issue #34.
When using dumpdata on Django 1.3+, add the --exclude djcelery option. (Of course, if you are dumping only a subset of apps and models you won't get the missing table error anyway. And if you aren't using dumpdata in the first place, this answer doesn't apply.)
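For example, following the command style used elsewhere in this thread (the output file name is just a placeholder):
$ python manage.py dumpdata --exclude djcelery > everything-but-djcelery.json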
The problem is probably SQLite3. You cannot use it concurrently with Django, and it throws a misleading error. Switch to PostgreSQL or MySQL, especially for celeryd development.
Or bite the bullet and set up RabbitMQ...