Django South migration error with unique field in PostgreSQL database

Edit: I understand the reason why this happened. It was because of the existence of an `initial_data.json` file. Apparently, South wants to load those fixtures after every migration, but it fails because of the unique constraint on a field.
I changed my model from this:
class Setting(models.Model):
    anahtar = models.CharField(max_length=20, unique=True)
    deger = models.CharField(max_length=40)

    def __unicode__(self):
        return self.anahtar
To this,
class Setting(models.Model):
    anahtar = models.CharField(max_length=20, unique=True)
    deger = models.CharField(max_length=100)

    def __unicode__(self):
        return self.anahtar
The schema migration command completed successfully, but trying to migrate gives me this error:
IntegrityError: duplicate key value violates unique constraint
"blog_setting_anahtar_key" DETAIL: Key (anahtar)=(blog_baslik) already
exists.
I want to keep that field unique, but still migrate the field. By the way, data loss on that table is acceptable, so long as the other tables in the DB stay intact.

It's actually the default behavior of syncdb to run initial_data.json each time. From the Django docs:
If you create a fixture named initial_data.[xml/yaml/json], that fixture will be loaded every time you run syncdb. This is extremely convenient, but be careful: remember that the data will be refreshed every time you run syncdb. So don't use initial_data for data you'll want to edit.
See the Django documentation on providing initial data with fixtures.
Personally, I think the use case for initial data that gets reloaded on every syncdb is very narrow, so I never use initial_data.json.
The better method, since you're using South, is to manually call loaddata on a specific fixture necessary for your migration. In the case of initial data, that would go in your 0001_initial.py migration.
def forwards(self, orm):
    from django.core.management import call_command
    call_command("loaddata", "my_fixture.json")
See: http://south.aeracode.org/docs/fixtures.html
Also, remember that the path to your fixture is relative to the project root. So, if your fixture is at "myproject/myapp/fixtures/my_fixture.json" call_command would actually look like:
call_command('loaddata', 'myapp/fixtures/my_fixture.json')
And, of course, your fixture can't be named 'initial_data.json', otherwise, the default behavior will take over.
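Since the question notes that data loss on that table is acceptable, one variation (my sketch, not from the original answer; the 'blog.Setting' model name is inferred from the constraint name in the error) is to clear the conflicting rows inside the migration before loading the fixture:

def forwards(self, orm):
    from django.core.management import call_command
    # Data loss on this table is acceptable per the question, so wipe it
    # first to avoid the duplicate-key error, then reload the fixture.
    orm['blog.Setting'].objects.all().delete()
    call_command("loaddata", "myapp/fixtures/my_fixture.json")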

Related

How can I initialise group names in Django every time the program runs?

I have this code, and I want it to create the groups every time the program runs, so that if the database is deleted the program will still be self-sufficient and no one has to create the groups again. Do you know an easy way to do this?
system_administrator = Group.objects.get_or_create(name='system_administrator')
manager = Group.objects.get_or_create(name='manager')
travel_advisor = Group.objects.get_or_create(name='travel_advisor')
If you lose your DB, you'd have to rerun migrations on a fresh database before the program could run again, so I think a data migration might be a good solution for this. A data migration is a migration that runs Python code to alter the data in the DB, rather than the schema as a normal migration does.
You could do something like this:
In a new migration file (you can run python manage.py makemigrations --empty yourappname to create an empty migration file for an app):
from django.db import migrations


def generate_groups(apps, schema_editor):
    # Group is django.contrib.auth's model, so fetch it from the 'auth'
    # app registry rather than your own app.
    Group = apps.get_model('auth', 'Group')
    Group.objects.get_or_create(name="system_administrator")
    Group.objects.get_or_create(name="manager")
    Group.objects.get_or_create(name="travel_advisor")


class Migration(migrations.Migration):

    dependencies = [
        ('yourappname', 'previous migration'),
    ]

    operations = [
        migrations.RunPython(generate_groups),
    ]
Worth reading the docs on this https://docs.djangoproject.com/en/3.0/topics/migrations/#data-migrations
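If you also want the migration to be reversible, RunPython accepts a second callable that undoes the first. A minimal sketch (my addition, assuming the auth Group model as above):

def remove_groups(apps, schema_editor):
    # Reverse step: delete the groups so the migration can be unapplied.
    Group = apps.get_model('auth', 'Group')
    Group.objects.filter(
        name__in=['system_administrator', 'manager', 'travel_advisor']
    ).delete()

and then register both directions:

operations = [
    migrations.RunPython(generate_groups, remove_groups),
]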
You can do it in the ready method of one of your apps.
class YourApp(AppConfig):
    def ready(self):
        # Important: do the import inside the method, not at module level.
        from django.contrib.auth.models import Group
        Group.objects.get_or_create(name='system_administrator')
        Group.objects.get_or_create(name='manager')
        Group.objects.get_or_create(name='travel_advisor')
The problem with the data-migration approach is that it is useful for populating the database the first time, but if the groups are deleted after the data migration has run, you will need to populate them again.
Also remember that get_or_create returns a tuple:
group, created = Group.objects.get_or_create(name='manager')
# group is an instance of Group
# created is a boolean
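A middle ground between the two answers (my suggestion, not from this thread) is the post_migrate signal: the handler runs after every migrate, so the groups are recreated even on a fresh database, without issuing queries at import/startup time the way a bare ready() does:

from django.apps import AppConfig
from django.db.models.signals import post_migrate


def create_groups(sender, **kwargs):
    # Import here so the app registry is fully loaded first.
    from django.contrib.auth.models import Group
    for name in ('system_administrator', 'manager', 'travel_advisor'):
        Group.objects.get_or_create(name=name)


class YourApp(AppConfig):
    name = 'yourappname'  # hypothetical app label

    def ready(self):
        # Runs after every `migrate`, when the tables certainly exist.
        post_migrate.connect(create_groups, sender=self)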

Calling loaddata in Django 1.7 migrations is throwing "Unknown column '[field]' in 'field list'"

I'm running into an issue in Django 1.7 when attempting to write multiple migrations in a row. Here's the basic setup of the migrations:
Initial schema migration to create the models for an app
Data migration that calls loaddata on a specific fixture that contains one-time default data
A new optional field was added to one of the models, so it's a schemamigration to add the field
If I generate the first migration, run it, generate the second, run it, then add the new field, generate the third migration, and run it, everything is fine. However, if my database were at migration #1 and I then pulled down from a source repository, migration #2 would fail, because loaddata uses the models from models.py rather than the models as of the time of that migration. It then produces the following error:
"Unknown column '[field]' in 'field list'"
In this case, [field] is the new field that I added for migration #3. The error makes sense, because my database doesn't have the new field yet but loaddata expects it to be there (even though the fixture doesn't reference the new field). But is there any way to make loaddata use the model state at the time of the migration rather than the current state in models.py? Or are there other ways to get around this issue?
Thanks.
I ended up writing a hack to get around this for now, but I feel like there has to be a better way. Instead of calling loaddata in the migration, I now call this function:
import os

from django.core import serializers


def load_fixture_in_data_migration(apps, schema_editor, fixture_filename, migration_file):
    """
    Load fixture data in data migrations without breaking everything
    when the models change later on.
    """
    fixture_dir = os.path.abspath(os.path.join(os.path.dirname(migration_file), '../fixtures'))
    fixture_file = os.path.join(fixture_dir, fixture_filename)
    with open(fixture_file, 'rb') as fixture:
        # ignorenonexistent drops fixture fields that no longer exist
        # on the current models.
        objects = serializers.deserialize('json', fixture, ignorenonexistent=True)
        for obj in objects:
            # Rebuild each object against the historical model for this
            # migration, copying only the fields that model knows about.
            ObjApp = apps.get_model(obj.object._meta.app_label, obj.object._meta.object_name)
            new_obj = ObjApp(pk=obj.object.pk)
            for field in ObjApp._meta.fields:
                setattr(new_obj, field.name, getattr(obj.object, field.name))
            new_obj.save()
And I call it like this from the data migration:
load_fixture_in_data_migration(apps, schema_editor, 'initial_add_ons.json', __file__)
Does anyone know a better way to do this? It feels really like a hack since I have to access object meta data to accomplish this.
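One commonly suggested alternative (an assumption on my part, not from this thread, and it leans on a private Django API that can change between versions) is to temporarily point the serializer's model lookup at the historical app registry, so that a plain loaddata resolves models as of the migration:

from django.core import serializers
from django.core.management import call_command


def load_fixture(apps, schema_editor):
    # Swap the deserializer's model resolver for one that consults the
    # versioned app registry, and restore it afterwards.
    original_get_model = serializers.python._get_model

    def historical_get_model(model_identifier):
        return apps.get_model(model_identifier)

    serializers.python._get_model = historical_get_model
    try:
        call_command('loaddata', 'initial_add_ons.json')
    finally:
        serializers.python._get_model = original_get_model

Because serializers.python._get_model is internal, the deserialize-based hack above is the more self-contained of the two.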

Data migrations in Django

I am working on a data migration for a Django app to populate the main table in the DB with data that will form the mainstay of the app - this is persistent/permanent data that may be added to but never deleted.
My reference is the Django 1.7 documentation, in particular an example on the page
https://docs.djangoproject.com/en/1.7/ref/migration-operations/#django.db.migrations.operations.RunPython
with a custom method called forwards_func:
def forwards_func(apps, schema_editor):
    # We get the model from the versioned app registry;
    # if we directly import it, it'll be the wrong version
    Country = apps.get_model("myapp", "Country")
    db_alias = schema_editor.connection.alias
    Country.objects.using(db_alias).bulk_create([
        Country(name="USA", code="us"),
        Country(name="France", code="fr"),
    ])
I am assuming the argument to bulk_create is a list of Country model objects, not namedtuple objects, although the format looks exactly the same. Is this the case? And could someone please explain what db_alias is?
Also, if I wish to change or remove existing entries in a table using a data migration, what are the methods corresponding to bulk_create for doing this?
Thanks in advance for any help.
Country here is just the same model you would get from "from app.models import Country". The only difference is that the direct import always gives you the latest model, while apps.get_model in a migration gives you the model as it existed at the time of that migration (Django reconstructs that historical version by replaying the migration history, starting from the initial migration).
About bulk_create: its argument is indeed a list of unsaved Country objects, which it uses to do one big INSERT into your DB. More information about bulk_create can be found here: https://docs.djangoproject.com/en/1.7/ref/models/querysets/#bulk-create.
About db_alias: it is the name of a database you set within your settings. Most of the time it is "default", so you can leave the code as it is if you use just one database. The function will probably be called more than once if you have more databases set within your settings. More info about using(): https://docs.djangoproject.com/en/1.7/ref/models/querysets/#using.
A bulk delete is actually quite simple: you just filter your Countries and call delete on the queryset. So something like:
Country.objects.filter(continent="Europe").delete()
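For changing existing rows (my addition; the answer above only covers deletes), the queryset update() method plays the role corresponding to bulk_create. Inside a forwards function like the one in the question:

Country = apps.get_model("myapp", "Country")
db_alias = schema_editor.connection.alias
# Bulk update: rewrite all matching rows in a single UPDATE statement.
Country.objects.using(db_alias).filter(code="us").update(name="United States")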
About the persistent/permanent data question, I don't really have a full solution for that one. One thing you can do, I think, is override the .delete() method on the model and its manager.
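As a sketch of that last idea (an assumption, not a complete safeguard, since raw SQL and queryset.delete() bypass it unless the manager is covered too):

class Country(models.Model):
    name = models.CharField(max_length=100)
    code = models.CharField(max_length=2)

    def delete(self, *args, **kwargs):
        # Refuse single-object deletes of permanent reference data.
        raise NotImplementedError("Country rows are permanent")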

Database Error on Dotcloud, Postgres Django

I'm deploying my django app with Dotcloud.
I use Postgres as DB.
I added a new model to my app, and I wanted to flush and syncdb the DB. Everything works fine when I do it, and my new model, named 'Competition', appears in my admin.
The problem is that another model, Match, has a ForeignKey to the model Competition. When I go to 'Matchs' in my admin, I get this error:
DatabaseError at /admin/myproject/match/
column myproject_match.competition_id does not exist
LINE 1: ...team_challenger_id", "sportdub_match"."sport_id", "sportdub_...
Any idea why the syncdb didn't make it work fine?
Thank you for your help.
EDIT: My two models
class Competition(models.Model):
    name = models.CharField(max_length=256)
    comp_pic = models.ImageField(upload_to="comp_pics/")

    def __unicode__(self):
        return self.name


class Match(models.Model):
    team_host = models.ForeignKey(Team, related_name='host_matches')
    team_challenger = models.ForeignKey(Team, related_name='challenger_matches')
    sport = models.ForeignKey(Sport)
    competition = models.ForeignKey(Competition)
manage.py syncdb will only create missing tables. If a table already exists but with an outdated definition, it will not be updated to match your models. This is very probably the problem you are experiencing.
There are at least three ways to solve the problem.
The easy way: use manage.py reset to effectively drop+recreate all the tables. Easy, but of course you will lose the data.
The lazy way: install django_extensions and use manage.py sqldiff. It will show you the difference between the current structure of the database, and the expected structure. sqldiff will nicely show you SQL statements which can update the existing database to conform with your models. You can then execute those statements in a SQL shell.
The clean way: use a migrations framework like south to handle database schema updates. Of course, if you are early in the development, this is overkill (you do not want to write database migrations each time you add/change/remove a field while you're doing local development!) but if your project will have a longer life span, I definitely recommend checking out south.

Django & South: Custom field methods are not executed when doing obj.save() in a data migration

In my Django model I have two fields: name (a regular CharField) and slug, a custom field that generates the slug based on a field name passed in the field definition; in this case I use name for that.
First, the model only had the name field, with its corresponding migrations and all. Then I needed to add the slug field, so following South conventions I added it with unique=False, created the schema migration, then created a data migration, set unique=True, and created another schema migration for this last change.
Since the value of the slug is generated on model save, in the forwards method of the data migration what I did was iterate over the queryset returned by orm['myapp.MyModel'].objects.all() and call the save() method on each instance.
But the value of the slug field is never generated. In an IPython session I did the same thing, but referenced the model via "from myapp.models import MyModel", and it worked. Using some debug statements, printing the type of the model returned by South's orm dict shows the real class; it doesn't appear to be a fake model constructed on the fly by South.
The slug field creates its value in its pre_save method. How can I force that to be called during a data migration? I need to ensure the uniqueness of the values so that when the index is applied in a later schema migration, the column doesn't contain duplicates.
Thanks!
BTW: The slug field class does define the south_field_triple so South plays nice with it.
EDIT: Please see my answer below. But more than an answer, it feels like a hack. Is there a better/right way to do this?
Generally you should explicitly replicate the code that generates the field's contents as closely as possible in the migration (a rare example of purposeful code duplication). The code in your approach, even if it worked, would call pre_save as defined at the time of executing the migration, which may have changed or may even fail against the model state at the time the migration was created (it may depend on other fields not being present at an earlier time, etc.).
So the correct approach in your example would be to use slugify() directly, as is done in SlugField's pre_save method:
from django.template.defaultfilters import slugify
from south.v2 import DataMigration


class Migration(DataMigration):

    def forwards(self, orm):
        "Write your forwards methods here."
        for myobj in orm['myapp.MyModel'].objects.all():
            myobj.slug = slugify(myobj.otherfield)
            myobj.save()
I solved this temporarily by obtaining the model field instance and calling its pre_save directly:
class Migration(DataMigration):

    def forwards(self, orm):
        "Write your forwards methods here."
        # Note: remember to use orm['appname.ModelName'] rather than
        # "from appname.models import ModelName".
        for myobj in orm['myapp.MyModel'].objects.all():
            slug_field = myobj._meta.get_field_by_name('slug')[0]
            myobj.slug = slug_field.pre_save(myobj, add=False)
            myobj.save()
However, it feels cumbersome to have to take this into account for every custom field...
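If this pattern recurs across several custom fields, a small helper (my sketch) at least keeps the _meta poking in one place:

def run_field_pre_save(instance, field_name, add=False):
    # Look up the field on the (historical) model instance and run its
    # custom pre_save logic, returning the value it would have stored.
    field = instance._meta.get_field_by_name(field_name)[0]
    return field.pre_save(instance, add=add)

used in the loop as: myobj.slug = run_field_pre_save(myobj, 'slug').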