Running tests with unmanaged tables in django

My django app works with tables that are not managed, with the following defined in my model like so:
class Meta:
    managed = False
    db_table = 'mytable'
When I run a simple test that imports Person, I get the following:
(person)bob#sh ~/person/dapi $ > python manage.py test
Creating test database for alias 'default'...
DatabaseError: (1060, "Duplicate column name 'db_Om_no'")
The tests.py is pretty simple like so:
import person.management.commands.dorecall
from person.models import Person
from django.test import TestCase
import pdb

class EmailSendTests(TestCase):
    def test_send_email(self):
        person = Person.objects.all()[0]
        Command.send_email()
I did read in django docs where it says "For tests involving models with managed=False, it’s up to you to ensure the correct tables are created as part of the test setup.". So I understand that my problem is that I did not create the appropriate tables. So am I supposed to create a copy of the tables in the test_person db that the test framework created?
Every time I run the tests, the test_person db gets destroyed (I think) and set up again, so how am I supposed to create a copy of the tables in test_person? Am I thinking about this right?
Update:
I saw this question on SO and added the ManagedModelTestRunner() in utils.py. Though ManagedModelTestRunner() does get run (confirmed by inserting pdb.set_trace()), I still get the Duplicate column name error. I do not get errors when I do python manage.py syncdb (though this may not mean much, since the tables are already created - I will try removing the table and rerunning syncdb to see if I can get any clues).

I had the same issue, where I had an unmanaged legacy database that also had a custom database name set in the model's Meta.
Running tests with a managed model test runner, as you linked to, solved half my problem, but I still had the problem of Django not knowing about the custom_db name:
django.db.utils.ProgrammingError: relation "custom_db" does not exist
The issue was that ./manage.py makemigrations still creates definitions of all models, managed or not, and includes your custom db names in the definition, which seems to blow up tests. By installing:
pip install django-test-without-migrations==0.2
and running tests like this:
./manage.py test --nomigrations
I was able to write tests against my unmanaged model without getting any errors.
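For reference, the managed-model test runner mentioned above is typically along the lines of the sketch below. This is an assumption-level sketch, not the asker's exact utils.py: it assumes a Django version with DiscoverRunner and the app registry; older releases would subclass the legacy test runner instead.
# Sketch of a "managed model" test runner: flips managed=False models to
# managed=True for the duration of the test run, so the test database
# creation step builds their tables.
from django.apps import apps
from django.test.runner import DiscoverRunner

class ManagedModelTestRunner(DiscoverRunner):
    def setup_test_environment(self, *args, **kwargs):
        self.unmanaged_models = [
            m for m in apps.get_models() if not m._meta.managed
        ]
        for m in self.unmanaged_models:
            m._meta.managed = True
        super(ManagedModelTestRunner, self).setup_test_environment(*args, **kwargs)

    def teardown_test_environment(self, *args, **kwargs):
        super(ManagedModelTestRunner, self).teardown_test_environment(*args, **kwargs)
        for m in self.unmanaged_models:
            m._meta.managed = False
It is then enabled with TEST_RUNNER = 'utils.ManagedModelTestRunner' in settings (the dotted path is an assumption based on the utils.py mentioned above).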

Related

Django test to use Postgres extension without migration

I have an existing project to which I want to start adding tests. There have been quite a few data migrations in its history, and I don't want to spend the effort to make them run in the test setup. So I have disabled migrations:
DATABASES['default']['TEST'] = {
    'MIGRATE': False,
}
However, this Postgres DB makes use of some extensions:
class Meta:
    verbose_name = 'PriceBook Cache'
    verbose_name_plural = 'PriceBook Caches'
    indexes = [
        GinIndex(
            name="pricebookcache_gin_trgm_idx",
            fields=['itemid', 'itemdescription', 'manufacturerpartnumber', 'vendor_id'],
            opclasses=['gin_trgm_ops', 'gin_trgm_ops', 'gin_trgm_ops', 'gin_trgm_ops']
        ),
    ]
This results in an error when I run the tests:
psycopg2.errors.UndefinedObject: operator class "gin_trgm_ops" does not exist for access method "gin"
I have had a look at https://docs.djangoproject.com/en/4.1/ref/databases/#migration-operation-for-adding-extensions, which specifically says this is done via a migration, which I have disabled.
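For reference, the migration operation those docs describe is along these lines (a sketch; it isn't directly usable here since TEST['MIGRATE'] = False skips migrations entirely):
# Sketch of the docs' migration-based approach to installing pg_trgm.
from django.contrib.postgres.operations import TrigramExtension
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = []

    operations = [
        TrigramExtension(),  # runs CREATE EXTENSION IF NOT EXISTS pg_trgm
    ]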
An alternative is using a template db, but I don't really want to, as this will be run automatically in GitLab using a docker container and I don't want to maintain another fixture outside the project repo.
So is there a way to initialise the database without running the migrations, or is it possible to make it run a completely different set of migrations just for the tests?

Flask-Migrate creates the same duplicate migration when used with postgres schemas

I have a very simple and silly problem, but I don't know what I'm missing. Basically, the way I've currently written my manage app, it seems flask migrate always creates an absolute migration and not just a change-set to migrate from the previous schema to the current one.
For example, if I delete my migrations, spin up a brand new DB, and then do manage db migrate followed by manage db upgrade, all works. If I then make a change to a db.Model table and do manage db migrate, I don't get an error.
However, the new migration script points to the previous one but isn't just the diff needed to get the database from the previous schema state to the new one; it's a full (absolute) migration starting from an empty schema. That is, it would try to create the tables from scratch again (with the change) rather than just apply the change to the already created schema. Even though the migration is linked to the previous one, it doesn't take into account what the previous migration already applied. This means the migrations cannot be chained, because the second migration will attempt to create the tables again, and so manage db upgrade fails when called the second time.
My manage app looks like this:
from flask_migrate import Migrate, MigrateCommand
from src.common.db import db
from src.common.flaskery import global_flask_app, global_flask_manager

app = global_flask_app(__name__)
migrate = Migrate(app, db)
manager = global_flask_manager(__name__)
manager.add_command('db', MigrateCommand)

from src.db.models import *

def main():
    manager.run()

if __name__ == '__main__':
    main()
Similar: Flask Migrate using different postgres schemas (__table_args__ = {'schema': 'test_schema'})
So in your migrations/env.py, you need to add include_schemas=True to the config as below:
context.configure(connection=connection,
                  target_metadata=target_metadata,
                  process_revision_directives=process_revision_directives,
                  include_schemas=True,
                  **current_app.extensions['migrate'].configure_args)

Django+Postgres: "current transaction is aborted, commands ignored until end of transaction block"

I've started working on a Django/Postgres site. Sometimes I work in manage.py shell, and accidentally do some DB action that results in an error. Then I am unable to do any database action at all, because for any database action I try to do, I get the error:
current transaction is aborted, commands ignored until end of transaction block
My current workaround is to restart the shell, but I should find a way to fix this without abandoning my shell session.
(I've read this and this, but they don't give actionable instructions on what to do from the shell.)
You can try this:
from django.db import connection
connection._rollback()
A more detailed discussion of this issue can be found here.
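On reasonably recent Django versions, the public rollback() method on the connection should achieve the same thing outside of atomic() blocks; a sketch, not verified against every release:
from django.db import connection

# public counterpart of the private _rollback() call above
connection.rollback()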
This happens to me sometimes; often it's a missing
manage.py migrate
or
manage.py syncdb
as mentioned also here
It can also happen the other way around, if you have a schemamigration pending from your models.py. With South you need to update the schema with:
manage.py schemamigration mymodel --auto
Check this
The quick answer is usually to turn on database-level autocommit by adding:
'OPTIONS': {'autocommit': True,}
to the database settings.
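In context, that sits in the database settings roughly like this (a sketch for older Django versions that support the autocommit option; the engine and database name below are placeholders):
# settings.py -- sketch; 'mydb' and the engine are placeholders
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'OPTIONS': {
            'autocommit': True,
        },
    }
}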
I had this error after restoring a backup to a totally empty DB. It went away after running:
./manage syncdb
Maybe there were some internal models missing from the dump...
WARNING: the patch below can possibly cause transactions being left in an open state on the db (at least with postgres). Not 100% sure about that (and how to fix), but I highly suggest not doing the patch below on production databases.
As the accepted answer does not solve my problems - as soon as I get any DB error, I cannot do any new DB actions, even with a manual rollback - I came up with my own solution.
When I'm running the Django-shell, I patch Django to close the DB connection as soon as any errors occur. That way I don't ever have to think about rolling back transactions or handling the connection.
This is the code I'm loading at the beginning of my Django-shell-session:
import logging

from django import db
from django.db.backends.util import CursorDebugWrapper

logger = logging.getLogger(__name__)

old_execute = CursorDebugWrapper.execute
old_execute_many = CursorDebugWrapper.executemany

def execute_wrapper(*args, **kwargs):
    try:
        old_execute(*args, **kwargs)
    except Exception as ex:
        logger.error("Database error:\n%s" % ex)
        db.close_connection()

def execute_many_wrapper(*args, **kwargs):
    try:
        old_execute_many(*args, **kwargs)
    except Exception as ex:
        logger.error("Database error:\n%s" % ex)
        db.close_connection()

CursorDebugWrapper.execute = execute_wrapper
CursorDebugWrapper.executemany = execute_many_wrapper
For me it was a test database without migrations. I was using --keepdb for testing. Running it once without it fixed the error.
There are a lot of useful answers on this topic, but it can still be a challenge to figure out the root of the issue. Because of this, I will try to give a little more context on how I was able to figure out the solution for mine.
For Django specifically, you want to turn on logging for db queries; just before the error is raised, you can find the failing query in the console. Run that query directly against the db, and you will see what is wrong.
In my case, one column was missing in db, so after migration everything worked correctly.
I hope this will be helpful.
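For example, query logging can be switched on with something along these lines in settings.py (a minimal sketch; note that the django.db.backends logger only emits queries while DEBUG is True):
# settings.py -- minimal sketch: echo every SQL query to the console
# (only active while DEBUG = True)
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}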
If you happen to get such an error when running migrate (South), it can be that you have lots of changes in the database schema and want to handle them all at once. Postgres is a bit picky about that. What always works is to break one big migration into smaller steps. Most likely, you're using a version control system.
Your current version
Commit n1
Commit n2
Commit n3
Commit n4 # db changes
Commit n5
Commit n6
Commit n7 # db changes
Commit n8
Commit n9 # db changes
Commit n10
So, having the situation described above, do as follows:
Checkout repository to "n4", then syncdb and migrate.
Checkout repository to "n7", then syncdb and migrate.
Checkout repository to "n10", then syncdb and migrate.
And you're done. :)
It should run flawlessly.
If you are using a django version before 1.6 then you should use Christophe's excellent xact module.
xact is a recipe for handling transactions sensibly in Django applications on PostgreSQL.
Note: As of Django 1.6, the functionality of xact will be merged into the Django core as the atomic decorator. Code that uses xact should be able to be migrated to atomic with just a search-and-replace. atomic works on databases other than PostgreSQL, is thread-safe, and has other nice features; switch to it when you can!
I add the following to my settings file, because I like the autocommit feature when I'm "playing around" but don't want it active when my site is running otherwise.
So to get autocommit just in the shell, I do this little hack:
import sys
if 'shell' in sys.argv or sys.argv[0].endswith('pydevconsole.py'):
    DATABASES['default']['OPTIONS']['autocommit'] = True
NOTE: That second part is just because I work in PyCharm, which doesn't directly run manage.py.
I got this error in Django 1.7. When I read in the documentation that
This problem cannot occur in Django’s default mode and atomic()
handles it automatically.
I got a bit suspicious. The errors happened when I tried running migrations. It turned out that some of my models had my_field = MyField(default=some_function). Having this function as a default for a field worked fine with sqlite and mysql (I had some import errors, but I managed to make it work), but it seems not to work for postgresql, and it broke the migrations to the point that I didn't even get a helpful error message, but instead the one from the question's title.

don't load 'initial_data.json' fixture when testing

I'm testing a django app not written by myself, which uses two fixtures: initial_data.json and testing.json. Both fixture files contain conflicting data (throwing an integrity error).
For testing, I've specified TestCase.fixtures = ['testing.json'], but initial_data.json is loaded too.
How can I avoid loading initial_data.json (not renaming it) in the testcase?
Quoting from Django Website:
If you create a fixture named initial_data.[xml/yaml/json], that fixture will be loaded every time you run syncdb. This is extremely convenient, but be careful: remember that the data will be refreshed every time you run syncdb. So don't use initial_data for data you'll want to edit.
So I guess there's no way to say "okay, don't load initial data just this once". Perhaps you could write a short bash script that would rename the file. Otherwise you'd have to dig into the Django code.
More info here: http://docs.djangoproject.com/en/dev/howto/initial-data/#automatically-loading-initial-data-fixtures
You might want to think about whether initial_data.json is something your app actually needs. It's not hard to "manually" load your production data with ./manage.py loaddata production.json after running a syncdb (how often do you run syncdb in production, anyway?), and it would make loading your testing fixture much easier.
If you want to have tables with no initial data, this code will help you:
Edit tests.py:
from django.core import management
from django.test import TestCase

class FooTest(TestCase):
    @classmethod
    def setUpClass(cls):
        management.call_command('flush', interactive=False, load_initial_data=False)
This will remove your data and run syncdb again without loading the initial data.

How do I run a unit test against the production database?

How do I run a unit test against the production database instead of the test database?
I have a bug that's seems to occur on my production server but not on my development computer.
I don't care if the database gets trashed.
Is it feasible to make a copy of the database, or of the part of the database that causes the problem? If you keep a backup server, you might be able to copy the data from there instead (make sure you have another backup, in case you mess up the backup database).
Basically, you don't want to mess with live data and you don't want to be left with no backup in case you mess something up (and you will!).
Use manage.py dumpdata > mydata.json to get a copy of the data from your database.
Go to your local machine, copy mydata.json to a subdirectory of your app called fixtures e.g. myapp/fixtures/mydata.json and do:
manage.py syncdb # Set up an empty database
manage.py loaddata mydata.json
Your local database will be populated with data and you can test away.
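If it helps, the dumped data can then be referenced from a test case like this (a minimal sketch; the class and test names are placeholders):
from django.test import TestCase

class ProductionDataTests(TestCase):
    # Django looks for 'mydata.json' in each app's fixtures/ directory
    fixtures = ['mydata.json']

    def test_bug_reproduction(self):
        # placeholder: exercise the code path that fails in production
        pass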
Make a copy of the database... it's really good practice!
Then execute the test and, instead of calling commit, call rollback at the end.
The first thing to try should be manually executing the test code on the shell, on the production server.
python manage.py shell
If that doesn't work, you may need to dump the production data, copy it locally and use it as a fixture for the testcase you are using.
If there is a way to ask django to use the standard database without creating a new one, then I think that rather than creating a fixture, you can do a sqldump, which will generally be a much smaller file.
Short answer: you don't.
Long answer: you don't, you make a copy of the production database and run it there
If you really don't care about trashing the db, then Marco's answer of rolling back the transaction is my preferred choice as well. You could also try NdbUnit but I personally don't think the extra baggage it brings is worth the gains.
How do you test the test db now? By test db do you mean SQLite?
HTH,
Berryl
I have both a full-on-slow-django-test-db suite and a crazy-fast-runs-against-production test suite built from a common test module. I use the production suite for sanity checking my changes during development and as a commit validation step on my development machine. The django suite module looks like this:
import django.test
import my_test_module
...

class MyTests(django.test.TestCase):
    def test_XXX(self):
        my_test_module.XXX(self)
The production test suite module uses bare unittest and looks like this:
import unittest
import my_test_module

class MyTests(unittest.TestCase):
    def test_XXX(self):
        my_test_module.XXX(self)

suite = unittest.TestLoader().loadTestsFromTestCase(MyTests)
unittest.TextTestRunner(verbosity=2).run(suite)
The test module looks like this:
def XXX(testcase):
    testcase.assertEquals('foo', 'bar')
I run the bare unittest version like this, so my tests in either case have the django ORM available to them:
% python manage.py shell < run_unit_tests
where run_unit_tests consists of:
import path.to.production_module
The production module needs a slightly different setUp() and tearDown() from the django version, and you can put any required table cleaning in there. I also use the django test client in the common test module by mimicking the test client class:
class FakeDict(dict):
    """
    Class that wraps dict and provides a getlist member
    used by the django view request unpacking code; used when
    passing in a FakeRequest (see below). Only needed for those
    api entrypoints that have list parameters.
    """
    def getlist(self, name):
        return [x for x in self.get(name)]

class FakeRequest(object):
    """
    An object mimicking the django request object passed in to views
    so we can test the api entrypoints from the developer unit test
    framework.
    """
    user = get_test_user()
    GET = {}
    POST = {}
Here's an example of a test module function that tests via the client:
def XXX(testcase):
    if getattr(testcase, 'client', None) is None:
        req_dict = FakeDict()
    else:
        req_dict = {}
    req_dict['param'] = 'value'
    if getattr(testcase, 'client', None) is None:
        fake_req = FakeRequest()
        fake_req.POST = req_dict
        resp = view_function_to_test(fake_req)
    else:
        resp = testcase.client.post('/path/to/function_to_test/', req_dict)
    ...
I've found this structure works really well, and the super-speedy production version of the suite is a major time-saver.
If your database supports template databases, use the production database as a template database. Ensure that your Django database user has sufficient permissions.
If you are using PostgreSQL, you can easily do this by specifying the name of your production database as POSTGIS_TEMPLATE (and using the PostGIS backend).
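On more recent Django versions, a similar idea can be expressed with the PostgreSQL-only TEST['TEMPLATE'] database setting; a sketch, with placeholder database names:
# settings.py -- sketch: create the test database from an existing database
# used as its template (PostgreSQL only; 'production_db' is a placeholder).
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'production_db',
        'TEST': {
            'TEMPLATE': 'production_db',
        },
    }
}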