Django: Loaddata command after syncdb fails

I'm trying to use fixtures as a DB-agnostic way to get the data into my database, but this is much harder than it should be. I'm wondering what I'm doing wrong...
Specifically, when I do a syncdb followed by a migrate followed by a loaddata, I run into trouble: syncdb already creates data that loaddata then tries to insert again from the dump. This leads to duplicate entries and hence a crashing script.
This seems to be the same problem as described here: https://code.djangoproject.com/ticket/15926
But it's weird to me that this seems to be an ignored issue. Are fixtures not meant to actually put real (live) data in?
If so: is there any Django-format that is meant for this? Or is everyone just dumping data as SQL? And, if so, how would one migrate development data in SQLite to a production database?

syncdb will also load data from fixtures if you have the fixtures named correctly and in the correct location. See this link for more info.
https://docs.djangoproject.com/en/1.3/howto/initial-data/#automatically-loading-initial-data-fixtures
If you do not want the data to load on every syncdb then you will need to change the name of the fixture.
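For example (the app name "myapp" and the fixture file names are just placeholders):

myapp/fixtures/initial_data.json   # loaded automatically by every syncdb
myapp/fixtures/live_data.json      # only loaded when you explicitly run:
./manage.py loaddata live_data.json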
Fixtures are an OK way to load your data; I have used them on a number of projects. On some projects, when I have a ton of data, I write a special load script that takes the data from my data source and loads up my new Django models. The custom script is a little more work, but it gives you more flexibility.
I tend to stay away from using SQL to load if I can, since SQL is usually DB specific. If you have to worry about loading on different databases, stay away from it if you can.
"In general, using a fixture is a cleaner method since it’s database-agnostic, but initial SQL is also quite a bit more flexible."

OP here; this is what I came up with so far:
# some_app/management/commands/delete_all_objects.py
from django.core.management.base import BaseCommand, CommandError
from django.db.models import get_models

class Command(BaseCommand):
    help = 'Deletes all objects'

    def handle(self, *args, **options):
        for model in get_models():
            model.objects.all().delete()
And then just run delete_all_objects after syncdb & migrate and before loaddata. I'm not sure I like it, and I'm very surprised it's necessary, but it works.
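For reference, the full sequence then becomes (the fixture name is a placeholder):

./manage.py syncdb
./manage.py migrate
./manage.py delete_all_objects
./manage.py loaddata live_data.json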

Related

Can't use assertTemplateUsed with unittest.TestCase

Please help, I'm fairly new to Django and not sure what's the best way to proceed with my unit tests.
So, I have a large Django app with dozens of view methods, and the PostgreSQL schemas get pretty complex. I've read that if I use "from django.test import TestCase" then the test database is flushed after running each unit test. I wanted to prevent flushing the db in between unit tests within the same class, so I started using "from unittest import TestCase". That did the trick and the db is preserved in between unit tests, but now the statement
self.assertTemplateUsed(response, 'samplepage.html') gives me the error AttributeError: 'TestViews' object has no attribute 'assertTemplateUsed'.
What can I do? Is there an alternative to 'assertTemplateUsed' that can be used with unittest.TestCase? Many thanks in advance!
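One possibility worth sketching: with plain unittest.TestCase you can approximate assertTemplateUsed by inspecting response.templates, which the Django test client records for every request. A minimal sketch, assuming the view is reachable at a hypothetical URL /sample/:

import unittest
from django.test import Client

class TestViews(unittest.TestCase):
    def test_samplepage_template(self):
        response = Client().get('/sample/')  # hypothetical URL
        # The test client records every template rendered for this response.
        used = [t.name for t in response.templates]
        self.assertIn('samplepage.html', used)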

Run migrations without loading views/urls

I have following code in one of my views:
@ratelimit(method='POST', rate=get_comment_rate())
def post_comment_ajax(request):
    ...
However, upon the initial ./manage.py migrate, get_comment_rate() requires a table in the database, so I'm unable to run the migrations that create the tables. I ended up with the following error:
django.db.utils.ProgrammingError: relation .. does not exist
Is it possible to run migrations without loading views or is there a better way?
Running migrations triggers the system checks to run, which causes the views to load. There isn't an option to disable this.
It looks like the ratelimit library allows you to pass a callable.
@ratelimit(method='POST', rate=get_comment_rate)
def post_comment_ajax(request):
    ...
This would call get_comment_rate when the view runs, rather than when the module loads. This could be an advantage (the value won't be stale) or a disadvantage (running the SQL query every time the view runs could affect performance).
In general, you want to avoid database queries when modules load. As well as causing issues with migrations, it can cause issues when running tests -- queries can go to the live db before the test database has been created.
If you are OK with this risk, one option would be to catch the exception inside the function that supplies the rate:

from django.db.utils import ProgrammingError

def get_comment_rate():
    try:
        ...
    except ProgrammingError:
        return '1/m'  # or some other default

How to unittest a django database migration?

We've changed our database, using django migrations (django v1.7+).
The data that exists in the database is no longer valid.
Basically I want to test a migration by, inside a unittest, constructing the pre-migration database, adding some data, applying the migration, then confirming everything went smoothly.
How does one:
hold back the new migration when loading the unittest
I found some stuff about overriding settings.MIGRATION_MODULES but couldn't work out how to use it. When I inspect executor.loader.applied_migrations it still lists everything. The only way I could prevent the new migration was to actually remove the file; not a solution I can use.
create a record in the unittest database (using the old model)
If we can prevent the migration then this should be pretty straightforward: myModel.objects.create(...)
apply the migration
I think I can probably work this out now that I've found the test_executor: set a plan pointing to the migration file and execute it? Um, right? Got any code for that :-D
confirm the old data in the database now matches the new model
Again, I expect this should be pretty easy: just fetch the instance created before the migration and confirm it has changed in all the right ways.
So the challenge is really just working out how to prevent the unittest from applying the latest migration script and then applying it when we're ready?
Perhaps I have the wrong approach? Should I create fixtures, and just confirm that they're all good at the end? Do fixtures get loaded before the migrations are applied, or after they're all done?
By using the MigrationExecutor and picking out specific migrations with .migrate I've been able to, maybe?, roll it back to a specific state, then roll forward one-by-one. But that is raising doubts; I'm currently chasing down SQLite oddities caused by its limited ALTER TABLE support. Jury still out.
I wasn't able to prevent the unittest from starting with the current database schema, but I did find it is quite easy to revert to earlier points in the migration history:
Where "0014_nulls_permitted" is a file in the migrations directory...
from django.db import connection
from django.db.migrations.executor import MigrationExecutor

executor = MigrationExecutor(connection)
executor.migrate([("workflow_engine", "0014_nulls_permitted")])
executor.loader.build_graph()
NB: running the executor.loader.build_graph between invocations of executor.migrate seems to be a very important part of completing the migration and making things behave as one might expect
The migrations which are currently applicable to the database can be checked with something like:
print [x[1] for x in sorted(executor.loader.applied_migrations)]
[u'0001_initial', u'0002_fix_foreignkeys', ... u'0014_nulls_permitted']
I created a model instance via the ORM then ensured the database was in the old state by running some SQL directly:
job = Job.objects.create(....)
from django.db import connection
cursor = connection.cursor()
cursor.execute('UPDATE workflow_engine_job SET next_job_state=NULL')
Great. Now I know I have a database in the old state, and can test the forwards migration. So where 0016_nulls_banished is a migration file:
executor.migrate([("workflow_engine", "0016_nulls_banished")])
executor.loader.build_graph()
Migration 0015 goes through the database converting all the NULL fields to a default value. Migration 0016 alters the schema. You can scatter some print statements around to confirm things are happening as you think they should be.
And now the test can confirm that the migration has worked. In this case by ensuring there are no nulls left in the database.
jobs = Job.objects.all()
self.assertTrue(all([j.next_job_state is not None for j in jobs]))
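Pulling those steps together, a consolidated sketch of the whole test might look like this (app, model and migration names as used above; TransactionTestCase is assumed because the test alters the schema):

from django.db import connection
from django.db.migrations.executor import MigrationExecutor
from django.test import TransactionTestCase
from workflow_engine.models import Job

class NullsBanishedMigrationTest(TransactionTestCase):

    def test_forwards_migration(self):
        executor = MigrationExecutor(connection)

        # Roll the schema back to the old state.
        executor.migrate([("workflow_engine", "0014_nulls_permitted")])
        executor.loader.build_graph()

        # Create a record and force it into the pre-migration shape.
        Job.objects.create(...)  # field values elided, as above
        with connection.cursor() as cursor:
            cursor.execute('UPDATE workflow_engine_job SET next_job_state=NULL')

        # Apply the migrations under test.
        executor.migrate([("workflow_engine", "0016_nulls_banished")])
        executor.loader.build_graph()

        # 0015 should have replaced every NULL with a default value.
        self.assertTrue(all(j.next_job_state is not None for j in Job.objects.all()))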
We have used the following code in settings_test.py to ignore the migration for the tests:
MIGRATION_MODULES = dict(
    (app.split('.')[-1], '.'.join([app, 'nonexistent_django_migrations_module']))
    for app in INSTALLED_APPS
)
The idea here is that none of the apps has a nonexistent_django_migrations_module folder, so Django will simply find no migrations.
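With migrations out of the picture, Django creates the test database directly from the current models. Assuming the test settings live at myproject/settings_test.py (the path is just an example), the tests would then be run with:

./manage.py test --settings=myproject.settings_test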

Django shell: Command to load test fixture data?

Is there an easy way to load fixture data that I usually use in automated test runs in the interactive Django shell?
It might be awkward to have a mixture of model data that comes from the database and other data that comes from a fixture. In my case, I have some read-only tables and want to experiment with some data that I can discard afterwards.
I can probably load the fixture files like described here, but that's a bit cumbersome for repeated use...
ilardm's answer points in the right direction, specifically what you want is:
from django.core.management import call_command
call_command('loaddata', 'fixture_name.json')
Edit: But the correct way to include fixtures in test cases is like this:
class TestThis(TestCase):
    fixtures = ['myfixture.json']

    def setUp(self):
        # Ready to test
        ...
I expect ./manage.py loaddata fixture_name.json is what you want.
Perhaps this link: http://testedwebdev.blogspot.ru/2012/05/django-shell-testing.html might help.

Django 1.3 and South migrations

I have an existing project which extensively uses South migrations to load data into its tables.
Since upgrading to Django 1.3 our unit tests no longer run because they cannot find the data they rely on.
Is this behaviour due to one of the backwards incompatible changes in 1.3?
Is there an easy way for me to convert all these migrations into fixtures?
Yes, this behavior is due to this change.
There seems to be a workaround in South trunk (see https://bitbucket.org/andrewgodwin/south/changeset/21a635231327 ) so you can try South development version (it is quite stable in my experience).
You may try to change the DB name in settings (in order to get a clean environment), run ./manage.py syncdb and ./manage.py migrate, and then do ./manage.py dumpdata
I hit this issue today. Eventually I ended up refactoring my migrations so that they use helper functions to actually insert the data, and then calling the same functions from the setUp() of my tests.
Some hints:
Make your helper functions take the model class as an argument, so you can call them with orm['yourapp.YourModel'] from the migration and with models.YourModel from the test (see the sketch after these hints). That also shows the main limitation: South's frozen orm can handle a model whose schema has changed since the migration was written, but the test code only has access to the current model, so it can't. I was lucky in that this particular model hasn't changed.
If you want to keep the helper methods inside the migrations, you'll find that you can't directly import yourapp.migrations.0001_some_migration because identifiers can't start with numbers. Use something like migration_0001 = importlib.import_module('yourapp.migrations.0001_some_migration') instead of an import statement.
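A minimal sketch of that helper-function pattern, with hypothetical app, model and function names:

# yourapp/data_helpers.py -- plain functions, importable from both places
def create_default_categories(category_model):
    # Works with either South's frozen model (orm['yourapp.Category'])
    # or the real model class (yourapp.models.Category).
    for name in ('red', 'green', 'blue'):
        category_model.objects.get_or_create(name=name)

# In the South data migration's forwards(self, orm):
#     create_default_categories(orm['yourapp.Category'])

# In the test's setUp():
#     from yourapp.models import Category
#     create_default_categories(Category)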