I did a dumpdata of my project, then added the resulting file to the fixtures of my new test:
from django.test import TestCase

class TestGoal(TestCase):
    fixtures = ['test_data.json']

    def test_goal(self):
        """
        Tests that 1 + 1 always equals 2.
        """
        self.failUnlessEqual(1 + 1, 2)
When running the test I get:
Problem installing fixture 'XXX/fixtures/test_data.json':
DoesNotExist: XXX matching query does not exist.
Manually running loaddata works fine, but not when the db is empty. If I do a dropdb, createdb, a simple syncdb, then try loaddata, it fails with the same error.
Any clue?
Python version 2.6.5, Django 1.1.1
Perhaps you have some foreign key troubles. If you have a model that contains a foreign key referring to another model but the other model doesn't exist, you'll get this error.
This can happen for a couple of reasons: if you are pointing to a model in another app that you didn't include in the test_data.json dump, you'll have trouble.
Also, if foreign keys change, this can break serialization -- this is especially problematic with automatically created fields like permissions or generic relations. Django 1.2 supports natural keys, which are a way to serialize using the "natural" representation of a model as a foreign key rather than an ID which might change.
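As an illustration of natural keys, here is a minimal sketch following the Django serialization docs (the Person model is hypothetical, not from the question):

from django.db import models

class PersonManager(models.Manager):
    def get_by_natural_key(self, first_name, last_name):
        return self.get(first_name=first_name, last_name=last_name)

class Person(models.Model):
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)

    objects = PersonManager()

    def natural_key(self):
        return (self.first_name, self.last_name)

With this in place, python manage.py dumpdata --natural serializes references to Person as ["first_name", "last_name"] pairs instead of numeric IDs, so fixtures survive primary keys changing between databases.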
Related
I'm writing a test in Django 1.11 and trying to load some fixture data into my local sqlite database (retrieved from the real database via manage.py dumpdata) like so:
class MyTest(TestCase):
    fixtures = [os.path.join(settings.BASE_DIR, 'myapp', 'tests', 'fixtures.json')]
Part of the fixture data is User models: we have a custom User with a different USERNAME_FIELD, so the username field on each of our records is blank.
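Roughly, the custom user looks something like this (a simplified sketch, not the actual model; the real fields differ):

from django.contrib.auth.models import AbstractBaseUser
from django.db import models

class CustomUser(AbstractBaseUser):
    email = models.EmailField(unique=True)
    # Legacy column: still unique at the db level but left blank in the data,
    # which is what makes the fixture rows collide.
    username = models.CharField(max_length=150, unique=True, blank=True)

    USERNAME_FIELD = 'email'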
However, when I try and run my test using my fixture, I get the following error:
IntegrityError: Problem installing fixture '.../fixtures.json': Could not load myapp.CustomUser(pk=41): column username is not unique
pk 41 is the second CustomUser record in my fixtures.json. How can I tell Django's testing framework to ignore the constraint it thinks it needs to enforce, or otherwise get my fixture data loaded for testing?
I use Django and South for my database. Now I want to add a new Model and a field in an existing model, referencing the new model. For example:
class NewModel(models.Model):
    # a new model
    # ...

class ExistingModel(models.Model):
    # ... existing fields
    new_field = models.ForeignKey(NewModel)  # adding this now
Now South obviously complains that I added a non-null field and asks me to enter a one-off value. But what I really want is to create a new NewModel instance for every existing ExistingModel instance, thus fulfilling the database requirements. Is that possible somehow?
The easiest way to do this is to write a schema migration that makes the column change, and then write a datamigration to correctly fill in the value. Depending on the database you're using you'll have to do this in slightly different ways.
Sqlite
For Sqlite, you can add a sentinel value for the relation and use a datamigration to fill it in without any issue:
0001_schema_migration_add_foreign_key_to_new_model_from_existing_model.py
from south.db import db
from south.v2 import SchemaMigration
from django.db import models

class Migration(SchemaMigration):

    def forwards(self, orm):
        db.add_column('existing_model_table', 'new_model',
            self.gf('django.db.models.fields.related.ForeignKey')(default=0, to=orm['appname.new_model']),
            keep_default=False)
0002_data_migration_for_new_model.py:
from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        for m in orm['appname.existing_model'].objects.all():
            m.new_model = None  # replace with your own criteria for picking the NewModel instance
            m.save()
This will work just fine, with no issues.
Postgres and MySQL
With MySQL, you have to give it a valid default. If 0 isn't actually a valid foreign key, you'll get errors telling you so.
You could default to 1, but there are instances where that isn't a valid foreign key either (it happened to me because we have different environments, and some environments publish to other databases, so the IDs rarely match up; we use UUIDs for cross-database identification, as God intended).
The second issue is that South and MySQL don't play well together, partly because MySQL doesn't have the concept of DDL transactions.
In order to get around some issues you will inevitably face (including the error I mentioned above and from South asking you to mark orm items in a SchemaMigration as no-dry-run), you need to change the above 0001 script to do the following:
0001_schema_migration_add_foreign_key_to_new_model_from_existing_model.py
from south.db import db
from south.v2 import SchemaMigration
from django.db import models

class Migration(SchemaMigration):

    def forwards(self, orm):
        default_id = 0
        if not db.dry_run:
            # Use an existing NewModel row as the default, if there is one.
            new_models = orm['appname.new_model'].objects.all()
            if new_models.exists():
                default_id = new_models[0].id
        db.add_column('existing_model_table', 'new_model',
            self.gf('django.db.models.fields.related.ForeignKey')(default=default_id, to=orm['appname.new_model']),
            keep_default=False)
And then you can run the 0002_data_migration_for_new_model.py file as normal.
I advise using the same example above for Postgres and for MySQL. I don't remember any issues offhand with Postgres with the first example, but I'm certain the second example works for both (tested).
You want a data migration to supplement your schema migration in this scenario.
South has a nice step by step tutorial on how to achieve this in the docs, here.
It's not uncommon in South to have the desired outcome spread over two or three schema/data migrations, as it's not always possible to do it in one big hit (sometimes it depends on whether the underlying db will tolerate adding a non-null column with no default). So in this case you might add a schema migration that has a default, then a data migration with your object manipulation, then a final schema migration.
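A rough sketch of that three-step pattern (South; the app, model and field names here are made up, not from the question):

# 0001: schema migration adding new_field as nullable (or with a temporary default)
# 0002: data migration filling it in, e.g.:
from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        NewModel = orm['appname.NewModel']
        for obj in orm['appname.ExistingModel'].objects.filter(new_field__isnull=True):
            # One new NewModel instance per existing row, as the question asked for.
            obj.new_field = NewModel.objects.create()
            obj.save()

    def backwards(self, orm):
        pass
# 0003: schema migration tightening the column (drop the default / make it non-null)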
I have changed the django.contrib.auth User model so that the email field is unique. After that I added a data migration to reflect the change:
python manage.py datamigration appname unique_password
Followed by python manage.py schemamigration appname --auto (which reported no changes). Finally, python manage.py migrate appname:
Migrating forwards to 0003_unique_password
appname:0037_unique_password.py
Now when I add a user whose email is not unique, it gives me an error and does not let me create the user through the Django admin. But when I do:
python manage.py shell
from django.contrib.auth.models import User
user = User('cust1', '123', 'xxx@gmail.com')
This creates a user object even though 'xxx@gmail.com' already exists.
How can I add an unique constraint to django.contrib.auth.user.email for an existing project (with data)?
Checks for uniqueness occur during model validation as per the Model validation documentation. Calling save() on the model doesn't automatically cause model validation. The admin application will be calling model validation.
Nothing prevents you from creating a User object in Python such as user = User('cust1', '123', 'xxx@gmail.com') and then doing a user.save(). If there are uniqueness checks at the database level, you'll likely get an exception at that point. If you want to check before doing the save, call full_clean() on the model instance.
If you want every User object to be automatically validated before save, use the pre_save Django signal to do Model validation before the save proceeds.
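A minimal sketch of that signal approach (not from the original answer; it assumes the stock User model):

from django.contrib.auth.models import User
from django.db.models.signals import pre_save
from django.dispatch import receiver

@receiver(pre_save, sender=User)
def validate_user(sender, instance, **kwargs):
    # Raises ValidationError (e.g. for a duplicate email once the unique
    # constraint exists) before the row is written.
    instance.full_clean()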
I am using Django 1.5b1 and south migrations and life has generally been great. I have some schema updates which create my database, with a User table among others. I then load a fixture for ff.User (my custom user model):
def forwards(self, orm):
    from django.core.management import call_command
    fixture_path = "/absolute/path/to/my/fixture/load_initial_users.json"
    call_command("loaddata", fixture_path)
All has been working great until I have added another field to my ff.User model, much further down the migration line. My fixture load now breaks:
DatabaseError: Problem installing fixture 'C:\<redacted>create_users.json':
Could not load ff.User(pk=1): (1054, "Unknown column 'timezone_id' in 'field list'")
Timezone is the field (ForeignKey) which I added to my user model.
The current ff.User model differs from what is in the database at that point in the migration history, so the Django ORM gives up with a DB error. Unfortunately, I cannot specify my model in my fixture as orm['ff.User'], which seems to be the South way of doing things.
How should I load fixtures properly using south so that they do not break once the models for which these fixtures are for gets modified?
I found a Django snippet that does the job!
https://djangosnippets.org/snippets/2897/
It loads the data according to the models frozen in the migration rather than the actual model definition in your app's code! Works perfectly for me.
I proposed a solution that might interest you too:
https://stackoverflow.com/a/21631815/797941
Basically, this is how I load my fixture:
from south.v2 import DataMigration
import json

class Migration(DataMigration):

    def forwards(self, orm):
        json_data = open("path/to/your/fixture.json")
        items = json.load(json_data)
        for item in items:
            # Be careful, this lazy line won't resolve foreign keys
            obj = orm[item["model"]](**item["fields"])
            obj.save()
        json_data.close()
This was a frustrating part of using fixtures for me as well. My solution was to make a few helper tools. One which creates fixtures by sampling data from a database and includes South migration history in the fixtures.
There's also a tool to add South migration history to existing fixtures.
The third tool checks out the commit when this fixture was modified, loads the fixture, then checks out the most recent commit and does a south migration and dumps the migrated db back to the fixture. This is done in a separate database so your default db doesn't get stomped on.
The first two can be considered beta code, and the third please treat as usable alpha, but they're already being quite helpful to me.
Would love to get some feedback from others:
git@github.com:JivanAmara/django_fixture_tools.git
Currently, it only supports projects using git as the RCS.
The most elegant solution I've found is here, whereby Django's models.get_model function is switched out to instead supply the model from the supplied orm. It's then set back after the fixture is applied.
from django.db import models
from django.core.management import call_command

def load_fixture(file_name, orm):
    original_get_model = models.get_model

    def get_model_southern_style(*args):
        try:
            return orm['.'.join(args)]
        except:
            # Fall back to the real model for anything not frozen in this migration
            return original_get_model(*args)

    models.get_model = get_model_southern_style
    call_command('loaddata', file_name)
    models.get_model = original_get_model
You call it with load_fixture('my_fixture.json', orm) from within your forwards definition.
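For example, a minimal usage sketch (the fixture name is a placeholder, and load_fixture is the helper defined above or imported from a shared module):

from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        load_fixture('my_fixture.json', orm)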
Generally South handles migrations using forwards() and backwards() functions. In your case you should either:
alter the fixtures to contain proper data, or
import the fixture before the migration that breaks it (or within the same migration, but before altering the schema).
In the second case, before the migration that adds (or removes) the column, add a migration that explicitly loads the fixtures, similar to this (docs):
def forwards(self, orm):
    from django.core.management import call_command
    call_command("loaddata", "create_users.json")
I believe this is the easiest way to accomplish what you needed. Also make sure you don't make simple mistakes like trying to import data with the new structure before applying the older migrations.
Reading the following two posts has helped me come up with a solution:
http://andrewingram.net/2012/dec/common-pitfalls-django-south/#be-careful-with-fixtures
http://news.ycombinator.com/item?id=4872596
Specifically, I rewrote my data migrations to use output from 'dumpscript'
I needed to modify the resulting script a bit to work with south. Instead of doing
from ff.models import User
I do
User = orm['ff.User']
This works exactly like I wanted it to. Additionally, it has the benefit of not hard-coding IDs, like fixtures require.
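A rough sketch of what the adapted dumpscript output can look like inside a South data migration (the field values here are invented, not from my project):

from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        User = orm['ff.User']  # instead of: from ff.models import User
        user = User(email='admin@example.com', is_staff=True)
        user.save()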
In Django models we have an option named managed, which can be set to True or False.
According to the documentation, the only difference this option makes is whether the table will be managed by Django or not. Does it make any difference whether the table is managed by Django or by us?
Are there any pros and cons of using one option rather than the other?
I mean, why would we opt for managed=False? Does it give some extra control or power which affects my code?
The main reason for using managed=False is if your model is backed by something like a database view, instead of a table - so you don't want Django to issue CREATE TABLE commands when you run syncdb.
Right from Django docs:
managed=False is useful if the model represents an existing table or a database view that has been created by some other means. This is the only difference when managed=False. All other aspects of model handling are exactly the same as normal
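As an illustration, a minimal sketch of a model mapped onto an existing database view (the model and view names here are made up):

from django.db import models

class SalesSummary(models.Model):
    product = models.CharField(max_length=100)
    total = models.DecimalField(max_digits=12, decimal_places=2)

    class Meta:
        managed = False              # Django will never CREATE/ALTER/DROP this table
        db_table = 'sales_summary'   # existing view or table created outside Django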
Whenever we create a Django model, managed=True is implied by default. When we run python manage.py makemigrations, a migration file is created in the app's migrations folder, and applying it with python manage.py migrate creates (or updates) the table in the db.
With managed=False, we prevent Django from creating or altering the table (schema) for that model and the fields specified in the migration file.
Why use it?
Case 1: Sometimes we use two databases for a project, for example db1 (default) and db2, and we don't want a particular model's table created in db1, or we want to customise the db view.
Case 2: In the Django ORM a db table is tied to a model; managed=False helps tie a database view to a Django ORM model.
We can add our own raw SQL for the db view in a migration file. The raw SQL in the migration looks like this, in 0001_initial.py:
from __future__ import unicode_literals

from django.db import migrations, models

class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.RunSQL(
            """
            CREATE OR REPLACE VIEW app_test AS
            SELECT row_number() OVER () AS id,
                   ci.user_id,
                   ci.company_id
            -- FROM clause elided in the original example
            """
        ),
    ]
The code above is just an overview of what such a migration file looks like.