Django test fails to run migrations - django

I'm trying to run a test script, following this django doc (which matches the version being used here). It quickly fails with a long stack trace; I've selected what seems to be the culprit.
File "/home/user11/app-master/en/lib/python3.8/site-packages/django/db/migrations/operations/special.py", line 190, in database_forwards
self.code(from_state.apps, schema_editor)
File "/home/user11/app-master/app/colegiados/migrations/0002_auto_20200128_1646.py", line 185, in migrate
add_sistema_entidade_e_orgao_composicao(apps, sistema)
File "/home/user11/app-master/app/colegiados/migrations/0002_auto_20200128_1646.py", line 16, in add_sistema_entidade_e_orgao_composicao
user = get_user(apps)
File "/home/user11/app-master/app/colegiados/migrations/0002_auto_20200128_1646.py", line 7, in get_user
return User.objects.filter(
File "/home/user11/app-master/en/lib/python3.8/site-packages/django/db/models/query.py", line 318, in __getitem__
return qs._result_cache[0]
IndexError: list index out of range
As a workaround, I modified django's query.py:
if qs._result_cache:
    return qs._result_cache[0]
else:
    return ""
Which worked, until the next error:
File "/home/user11/app-master/en/lib/python3.8/site-packages/django/db/migrations/operations/special.py", line 190, in database_forwards
self.code(from_state.apps, schema_editor)
File "/home/user11/app-master/app/core/migrations/0016_auto_20201120_0934.py", line 106, in migrate
sistema = Sistema.objects.get(nom_alias=sistema)
File "/home/user11/app-master/en/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/user11/app-master/en/lib/python3.8/site-packages/django/db/models/query.py", line 439, in get
raise self.model.DoesNotExist(
__fake__.DoesNotExist: Sistema matching query does not exist.
Now I'm stuck. The test database gets created with all the tables up to the point where these migrations fail, but the vast majority of them contain no data. Among the empty ones is the table that this last error refers to.
Note that I'm not the developer: I had no hand in creating the DB being used, nor any of the migrations. I strongly suspect the database (Postgres 12) has to be created/restored from a "minimal" backup before the migrations can run properly. Could that be the reason for these failures? If so, what are my options for running a Django test that doesn't alter the deployed database? Is there any option for running the test as a query block and then rolling it back, since it's using Postgres?
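(Note for anyone with the same worry: Django's test runner doesn't touch the configured database at all — it creates a separate test database, test_<NAME>, and on PostgreSQL each TestCase already runs inside a transaction that is rolled back when the test finishes. On recent Django versions the test database can also be reused between runs:)
$ python manage.py test --keepdb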

After some discussion with the rest of the team, it was decided to reset all migrations to ensure this type of problem no longer happens. Maybe not the "expected" solution, but a decent one.
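For completeness, the underlying cause is a RunPython data migration that assumes rows already exist, which a freshly created test database never has. A defensive rewrite of the failing helper might look like this (a sketch only: the real filter arguments are elided in the traceback, so the auth.User model and the is_active lookup are placeholders):

def get_user(apps):
    # Always use the historical model inside a data migration
    User = apps.get_model('auth', 'User')  # placeholder app/model
    # .first() returns None on an empty table instead of raising IndexError
    return User.objects.filter(is_active=True).first()  # placeholder lookup

def add_sistema_entidade_e_orgao_composicao(apps, sistema):
    user = get_user(apps)
    if user is None:
        return  # a fresh test database has no users; skip gracefully
    # ... original body ...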

Related

Failed to query Elasticsearch: TransportError(400, 'parsing_exception')

I have been trying to get Elasticsearch to work in a Django application. It has been a problem because of the mess of compatibility considerations this apparently involves. I followed the recommendations, but I still get an error when I actually perform a search.
Here is what I have:
Django==2.1.7
Django-Haystack==2.5.1
elasticsearch (Python client library)==1.7.0
Elasticsearch (Linux server app)==5.0.1
There is also DjangoCMS==3.7 and aldryn-search==1.0.1, but I am not sure how relevant those are.
Here is the error I get when I submit a search query via the basic text form.
GET /videos/modelresult/_search?_source=true [status:400 request:0.001s]
Failed to query Elasticsearch using '(video)': TransportError(400, 'parsing_exception')
Traceback (most recent call last):
File "/home/user-name/miniconda3/envs/project-web/lib/python3.7/site-packages/haystack/backends/elasticsearch_backend.py", line 524, in search
_source=True)
File "/home/user-name/miniconda3/envs/project-web/lib/python3.7/site-packages/elasticsearch/client/utils.py", line 69, in _wrapped
return func(*args, params=params, **kwargs)
File "/home/user-name/miniconda3/envs/project-web/lib/python3.7/site-packages/elasticsearch/client/__init__.py", line 527, in search
doc_type, '_search'), params=params, body=body)
File "/home/user-name/miniconda3/envs/project-web/lib/python3.7/site-packages/elasticsearch/transport.py", line 307, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/home/user-name/miniconda3/envs/project-web/lib/python3.7/site-packages/elasticsearch/connection/http_urllib3.py", line 93, in perform_request
self._raise_error(response.status, raw_data)
File "/home/user-name/miniconda3/envs/project-web/lib/python3.7/site-packages/elasticsearch/connection/base.py", line 105, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.RequestError: TransportError(400, 'parsing_exception')
Could someone tell me if this is an issue with compatibility or is there something else going on? How can I fix it?
The combination that appears to have worked for my setup is as follows. I believe the key was to drastically downgrade the Elasticsearch server.
Elasticsearch=1.7.6 (with Java 8)
Django==2.1.7
Django-Haystack==2.8.1
elasticsearch==1.7.0
The two items below may or may not be relevant. I have not changed them.
DjangoCMS==3.7.0
aldryn-search==1.0.1
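For comparison, the Haystack side of this pairing is configured in settings.py; a typical block for haystack 2.x with the Elasticsearch 1.x backend looks roughly like this (URL and INDEX_NAME here are placeholders — the version mismatch above was between client and server, not in this block):

HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',  # must point at the running ES server
        'INDEX_NAME': 'haystack',         # placeholder index name
    },
}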

Django-nonrel + MongoDB error while running tests

I have a setup inside a virtual environment:
Django-nonrel-1.6
mongodb-engine
djangotoolbox
Everything works fine; the only problem occurs when running tests. Every time Django tries to flush the database after running a test function, it throws an error:
Traceback (most recent call last):
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/test/testcases.py", line 187, in __call__
self._post_teardown()
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/test/testcases.py", line 796, in _post_teardown
self._fixture_teardown()
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/test/testcases.py", line 889, in _fixture_teardown
return super(TestCase, self)._fixture_teardown()
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/test/testcases.py", line 817, in _fixture_teardown
inhibit_post_syncdb=self.available_apps is not None)
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 159, in call_command
return klass.execute(*args, **defaults)
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/core/management/base.py", line 285, in execute
output = self.handle(*args, **options)
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/core/management/base.py", line 415, in handle
return self.handle_noargs(**options)
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/core/management/commands/flush.py", line 79, in handle_noargs
six.reraise(CommandError, CommandError(new_msg), sys.exc_info()[2])
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/core/management/commands/flush.py", line 67, in handle_noargs
savepoint=connection.features.can_rollback_ddl):
File "/home/lehins/.virtualenvs/studentpal/local/lib/python2.7/site-packages/django/db/transaction.py", line 251, in __enter__
"The outermost 'atomic' block cannot use "
CommandError: Database test_dev_db couldn't be flushed. Possible reasons:
* The database isn't running or isn't configured correctly.
* At least one of the expected database tables doesn't exist.
* The SQL was invalid.
Hint: Look at the output of 'django-admin.py sqlflush'. That's the SQL this command wasn't able to run.
The full error: The outermost 'atomic' block cannot use savepoint = False when autocommit is off.
So each test case passes or fails as it's supposed to, but I also get this annoying error.
I did run django-admin.py sqlflush --settings=dev_settings --pythonpath=., which flushed my development database just fine, with no errors.
In a couple of test functions I checked a few models pulled from the database, and objects seem to be flushed and recreated just fine, which is why it is not affecting the actual test cases.
I went through the whole traceback and I kind of understand why it happens, but I cannot figure out how to deal with it. Any help is appreciated.
Edit
Just tried running tests with Django-nonrel-1.5, and there were no problems. It seems like a bug in the 1.6 version.
Use SimpleTestCase or a custom TestCase:
from django.core.management import call_command
from django.test import TestCase

class CustomTestCase(TestCase):
    def _fixture_teardown(self):
        for db_name in self._databases_names(include_mirrors=False):
            call_command('custom_flush', verbosity=0, interactive=False,
                         database=db_name, skip_validation=True,
                         reset_sequences=False,
                         allow_cascade=self.available_apps is not None,
                         inhibit_post_syncdb=self.available_apps is not None)
Since the problem is the transaction.atomic block in the flush command, you may have to write your own flush command.
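A minimal sketch of what that custom_flush command could look like, assuming Django 1.6-era APIs and that deleting every model's objects (rather than issuing SQL) is acceptable for a MongoDB-backed test database; the module path and implementation are hypothetical:

# yourapp/management/commands/custom_flush.py
from django.core.management.base import NoArgsCommand
from django.db import DEFAULT_DB_ALIAS
from django.db.models import get_models

class Command(NoArgsCommand):
    def handle_noargs(self, **options):
        database = options.get('database', DEFAULT_DB_ALIAS)
        # Clear each table directly instead of running sqlflush inside
        # transaction.atomic, which non-transactional backends reject.
        for model in get_models(include_auto_created=True):
            model.objects.using(database).delete()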

Fixtures in Django testing with South / Selenium

I am trying to run Selenium tests on a Django project (1.5.4), which uses South. I think South is conflicting with my tests when I try to inject initial data with fixtures, but I'm not sure why; I appreciate any help.
According to the Django documentation, fixtures are supposed to be loaded after the first syncdb and then all migrations are applied.
Question 1) Does this take into account South migrations? Do I need to run those separately somehow?
The error I get when I run my tests makes it seem like my South migrations are still present in the test database after the first test... but I thought each test gets its own database (and migrations / fixtures) created? The first test passes or fails normally, but each subsequent test raises this IntegrityError:
IntegrityError: Problem installing fixture '<PROJECT_PATH>/fixtures/toy_course.json': Could not load contenttypes.ContentType(pk=8): (1062, "Duplicate entry 'south-migrationhistory' for key 'app_label'")
This South documentation and SO question seem to indicate that I need to override some type of forwards method in order to get fixtures working, but I'm not entirely sure how to apply that to a testing situation instead of production (or if that is the solution I need).
Question 2) Am I supposed to override forwards in my test setup? Where would I do it?
My relevant test code:
from django.conf import settings
from selenium import webdriver
from functional_tests.test import SeleniumTestCase

class Resources(SeleniumTestCase):
    fixtures = ['toy_course.json']

    def setUp(self):
        self.browser = webdriver.Chrome(settings.SELENIUM_WEBDRIVER)
        self.browser.implicitly_wait(3)

    def tearDown(self):
        self.browser.quit()

    def test_main_page_renders_correctly(self):
        """
        User sees a properly formatted main page
        """
        self.open('/RDB/')
        h3_headers = self.browser.find_elements_by_tag_name('h3')
        self.assertIn(
            'Complete List of Resources',
            [header.text for header in h3_headers])
        self.assertTrue(self.check_exists_by_id('main_table'))
        self.assertTrue(self.check_exists_by_id('searchDiv'))
        self.assertTrue(self.check_exists_by_class_name('tablesorter'))
Thanks!
UPDATE
So per Alex's suggestion below and this South doc, I added this line to my settings.py:
SOUTH_TESTS_MIGRATE = False
But I am now getting 8 of 8 Errors (before I was getting 1 pass/fail on the first test, and then 7 Errors). The full error for a single test is below:
======================================================================
ERROR: test_table_sorts_on_click (functional_tests.tests.main_resources.Resources)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/test/testcases.py", line 259, in __call__
self._pre_setup()
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/test/testcases.py", line 479, in _pre_setup
self._fixture_setup()
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/test/testcases.py", line 518, in _fixture_setup
**{'verbosity': 0, 'database': db_name, 'skip_validation': True})
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/core/management/__init__.py", line 161, in call_command
return klass.execute(*args, **defaults)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/core/management/base.py", line 255, in execute
output = self.handle(*args, **options)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/core/management/commands/loaddata.py", line 193, in handle
obj.save(using=using)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/core/serializers/base.py", line 165, in save
models.Model.save_base(self.object, using=using, raw=True)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/db/models/base.py", line 626, in save_base
rows = manager.using(using).filter(pk=pk_val)._update(values)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/db/models/query.py", line 605, in _update
return query.get_compiler(self.db).execute_sql(None)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 1014, in execute_sql
cursor = super(SQLUpdateCompiler, self).execute_sql(result_type)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 840, in execute_sql
cursor.execute(sql, params)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 122, in execute
six.reraise(utils.IntegrityError, utils.IntegrityError(*tuple(e.args)), sys.exc_info()[2])
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 120, in execute
return self.cursor.execute(query, args)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/MySQLdb/cursors.py", line 201, in execute
self.errorhandler(self, exc, value)
File "/<PATH TO VIRTUAL ENV>/virtual_environments/relate/lib/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
IntegrityError: Problem installing fixture '/<PATH TO PROJECT>/RDB/fixtures/toy_course.json': Could not load contenttypes.ContentType(pk=8): (1062, "Duplicate entry 'south-migrationhistory' for key 'app_label'")
The command I ran:
$ python manage.py test functional_tests
I'm not quite sure if I made the problem better, worse, or the same, but I seem to be more in line with the documentation...
Thanks!
UPDATE #2 -- with solution
A couple of other pages helped me figure it out (in addition to Alex's pointer to the South doc). First, this person had a similar issue and solved it with the SOUTH_TESTS_MIGRATE = False setting, so half my solution was to include that.
The second half of my solution was to fix my fixture file. I was dumping everything into my fixture with:
$ python manage.py dumpdata > RDB/fixtures/toy-course.json
This is, apparently, a bad way to create fixtures with South, because it also dumps the South migration tables into the fixture. The post above shows the blogger using app-specific fixtures (which are also discussed in this SO post), and that was the key to getting my fixtures to work. The Django docs on fixtures do show the optional parameters to dump just one app, but I didn't know that ignoring them would cause South to conflict. So the second half of my solution was to make my fixture app-specific:
$ python manage.py dumpdata RDB > RDB/fixtures/toy-course.json
And my tests now run fine (slow, but probably a different issue)!
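(An alternative, if you want everything except South's bookkeeping tables in one dump, should be dumpdata's --exclude flag:)
$ python manage.py dumpdata --exclude south > RDB/fixtures/toy-course.json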
Your test database is created using South migrations by default. Set SOUTH_TESTS_MIGRATE = False in your settings.py. To quote the docs:
If this is False, South's test runner integration will make the test database be created using syncdb, rather than via migrations (the default).

Why are celery_taskmeta and other tables not being created when running a syncdb in django?

I'm trying to setup celery and django, but the celery_taskmeta table is not being created.
I've followed numerous (recent) tutorials: added djcelery and djkombu to my INSTALLED_APPS, added the BROKER_TRANSPORT = "djkombu.transport.DatabaseTransport" line to my settings, etc.
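(For reference, a sketch of that configuration as tutorials of that era typically presented it — the setup_loader() call comes from django-celery itself; any app names beyond those mentioned above are placeholders:)

# settings.py
import djcelery
djcelery.setup_loader()

INSTALLED_APPS = (
    # ... your other apps ...
    'djcelery',
    'djkombu',
)

BROKER_TRANSPORT = "djkombu.transport.DatabaseTransport"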
I can run the daemon just fine, and it will execute tasks, but it spits out this traceback at the end:
[2011-08-05 16:21:16,231: ERROR/MainProcess] Task slate.modules.filebrowser.tasks.gen_thumb_task[0afc564b-cc54-4f4c-83f5-6db56fb23b76] raised exception: DatabaseError('no such table: celery_taskmeta',)
Traceback (most recent call last):
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/worker/job.py", line 107, in execute_safe
return self.execute(*args, **kwargs)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/worker/job.py", line 125, in execute
return super(WorkerTaskTrace, self).execute()
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/execute/trace.py", line 79, in execute
retval = self._trace()
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/execute/trace.py", line 93, in _trace
r = handler(trace.retval, trace.exc_type, trace.tb, trace.strtb)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/worker/job.py", line 140, in handle_success
self.task.backend.mark_as_done(self.task_id, retval)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/backends/base.py", line 54, in mark_as_done
return self.store_result(task_id, result, status=states.SUCCESS)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/celery/backends/base.py", line 194, in store_result
return self._store_result(task_id, result, status, traceback, **kwargs)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/djcelery/backends/database.py", line 20, in _store_result
traceback=traceback)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/djcelery/managers.py", line 36, in _inner
return fun(*args, **kwargs)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/djcelery/managers.py", line 154, in store_result
"traceback": traceback})
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/djcelery/managers.py", line 78, in update_or_create
return self.get_query_set().update_or_create(**kwargs)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/djcelery/managers.py", line 62, in update_or_create
obj, created = self.get_or_create(**kwargs)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/django/db/models/query.py", line 378, in get_or_create
return self.get(**lookup), False
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/django/db/models/query.py", line 344, in get
num = len(clone)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/django/db/models/query.py", line 82, in __len__
self._result_cache = list(self.iterator())
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/django/db/models/query.py", line 273, in iterator
for row in compiler.results_iter():
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 680, in results_iter
for rows in self.execute_sql(MULTI):
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 735, in execute_sql
cursor.execute(sql, params)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site- packages/django/db/backends/util.py", line 34, in execute
return self.cursor.execute(sql, params)
File "/Users/erichutchinson/python-env/slate/lib/python2.7/site- packages/django/db/backends/sqlite3/base.py", line 234, in execute
return Database.Cursor.execute(self, query, params)
DatabaseError: no such table: celery_taskmeta
So how do I get this table created during syncdb?
The problem here is actually that South manages the djcelery tables. You need to migrate djcelery to its new schema. If you upgraded djcelery from an earlier version and you already have a set of tables installed, you need to do a fake migration first:
python manage.py migrate djcelery 0001 --fake
python manage.py migrate djcelery
I had the same problem before, but this fixed it.
I was also getting the following error:
DatabaseError: no such table: djkombu_queue
After looking into it a bit further, I believe the correct way to solve this problem (pulled from here) is to add the following to INSTALLED_APPS:
INSTALLED_APPS += ('djcelery.transport',)
Adding kombu.transport.django felt incorrect.
Ran into the exact same issue, fresh install. Downgrading celery and django-celery to 2.2.7 and rerunning syncdb solved it (for the interim, anyway).
I was getting a similar error:
DatabaseError: no such table: djkombu_queue
In my case, I needed to add a Django app from a related package to the INSTALLED_APPS setting: kombu.transport.django
After that, I reran syncdb and everything worked. In your case, maybe add something from the celery egg to the path.
I got this error while running manage.py dumpdata. I tried two different 2.2.x versions of the celery and django-celery packages with a MySQL database. In my case upgrading to 2.2.7 didn't fix the issue. What did work was advice found on this Github Issue #34.
When using dumpdata on Django 1.3+, add the --exclude djcelery option. (Of course, if you are dumping only a subset of apps and models you won't get the missing table error anyway. And if you aren't using dumpdata in the first place, this answer doesn't apply.)
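That is, something along these lines (with dump.json as an example output file):
$ python manage.py dumpdata --exclude djcelery > dump.json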
The problem is probably SQLite3. You cannot use it concurrently in Django, and it throws a misleading error. Switch to PostgreSQL or MySQL, especially for celeryd development.
Or, bite the bullet, and setup RabbitMQ ...

Django shift to PostgreSQL fails to import fixtures, stating data too long

I am switching from using SQLite3 to PostgreSQL, and hoped that I could populate the database using the fixtures I had been using to populate SQLite3. However, I am getting these errors:
$ python manage.py loaddata fixtures/core.json fixtures/auth.json
Installing json fixture 'fixtures/core' from absolute path.
Problem installing fixture 'fixtures/core.json': Traceback (most recent call last):
File "/home/mvid/webapps/nihl/nihlapp/django/core/management/commands/loaddata.py", line 153, in handle
obj.save()
File "/home/mvid/webapps/nihl/nihlapp/django/core/serializers/base.py", line 163, in save
models.Model.save_base(self.object, raw=True)
File "/home/mvid/webapps/nihl/nihlapp/django/db/models/base.py", line 495, in save_base
result = manager._insert(values, return_id=update_pk)
File "/home/mvid/webapps/nihl/nihlapp/django/db/models/manager.py", line 177, in _insert
return insert_query(self.model, values, **kwargs)
File "/home/mvid/webapps/nihl/nihlapp/django/db/models/query.py", line 1087, in insert_query
return query.execute_sql(return_id)
File "/home/mvid/webapps/nihl/nihlapp/django/db/models/sql/subqueries.py", line 320, in execute_sql
cursor = super(InsertQuery, self).execute_sql(None)
File "/home/mvid/webapps/nihl/nihlapp/django/db/models/sql/query.py", line 2369, in execute_sql
cursor.execute(sql, params)
File "/home/mvid/webapps/nihl/nihlapp/django/db/backends/util.py", line 19, in execute
return self.cursor.execute(sql, params)
DataError: value too long for type character varying(30)
I never used to get any data length errors, and I have not changed the models between database switches. PostgreSQL is running utf8. Is there a way to see exactly which json values it fails on so that I can update the respective models? Any idea as to why the values worked in SQLite but fail in PostgreSQL?
SQLite does not enforce the declared length of a varchar(n); see the SQLite FAQ:
http://www.sqlite.org/faq.html#q9
Check the Postgres log file, it will log the full failed SQL statement including table name and offending string value.
To solve the actual problem, change the column declaration to text. That will let you import your data and clean it up; then you can reapply the length constraint.
Check the length of the data from SQLite inside fixtures/core.json that you are trying to insert into Postgres. It looks like some value exceeds 30 characters. Check your Django models for a field limited to 30 characters, then check your fixtures for data that exceeds that maximum length.
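A quick way to locate the offending values is to scan the fixture itself; a sketch (run with Python 3 — the file name and the 30-character limit come from the question, the rest is the generic Django fixture structure):

import json

LIMIT = 30
with open('fixtures/core.json') as f:
    for obj in json.load(f):
        # Django fixtures are a list of {"model", "pk", "fields"} objects
        for field, value in obj.get('fields', {}).items():
            if isinstance(value, str) and len(value) > LIMIT:
                print(obj['model'], obj.get('pk'), field, repr(value))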