I have a migration that loads a fixture for populating the database with a basic site structure (from Loading initial data with Django 1.7 and data migrations). After that migration has run, my test adds a (custom) NewsPage. This yields an "IntegrityError at /admin/pages/add/website/newspage/5/
duplicate key value violates unique constraint "wagtailcore_page_pkey"
DETAIL: Key (id)=(3) already exists." The same happens when I add the page through the admin interface.
It's a bit suspicious that the Page with pk=3 is the first one that is created in my fixture. The other two pk's were already created by Wagtail's migrations.
I've read up about fixtures and migrations, and it seems Postgres won't reset the primary key sequences. I'm assuming this is also my problem here.
I found a possible solution in Django: loaddata in migrations errors, but that failed (with "psycopg2.ProgrammingError: syntax error at or near "LINE 1: BEGIN;""). Trying to execute the gist of it, I ran the sqlsequencereset management command (./manage.py sqlsequencereset wagtailcore myapp), but I still get the error, although now for id=4.
Is my assumption correct that Postgres not resetting the primary key sequences is my problem here?
Does anyone know how to reliably fix that from/after a migration loaded fixtures?
Would it maybe be easier / more reliable to create content in Python code?
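For what it's worth, one way to bring the sequences back in sync from inside a migration is to capture the SQL that sqlsequencereset prints and execute it right after the fixture is loaded. This is only a sketch of that idea; the app labels are the ones from above, the no_color flag is there to keep ANSI colour codes out of the captured SQL, and details may differ between Django versions:

from io import StringIO
from django.core.management import call_command
from django.db import connection

def reset_sequences(apps, schema_editor):
    """Re-sync the Postgres id sequences after a fixture inserted explicit pks."""
    output = StringIO()
    call_command('sqlsequencereset', 'wagtailcore', 'myapp',
                 stdout=output, no_color=True)
    with connection.cursor() as cursor:
        cursor.execute(output.getvalue())

# In the migration, run this right after the operation that loads the fixture:
#     migrations.RunPython(reset_sequences)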
Edit (same day):
If I don't follow the example in Loading initial data with Django 1.7 and data migrations, but just run the management command, it works:
from io import StringIO
from django.core.management import call_command

def load_fixture(fixture_file):
    """Load a fixture."""
    commands = StringIO()
    call_command('loaddata', fixture_file, stdout=commands)
I don't know what the drawbacks of this more simple approach are.
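For completeness, this is roughly how such a helper ends up wired into the migration; the fixture name, app label and dependency here are placeholders, not part of the original code:

from django.db import migrations

def load_initial_data(apps, schema_editor):
    # Loads against the *current* models, not the historical state.
    load_fixture('initial_pages.json')

class Migration(migrations.Migration):

    dependencies = [
        ('website', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(load_initial_data),
    ]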
Edit 2 (also same day):
Ok, I do know: the fixtures will be based on the current model state, not the state the migration is for, so it will likely break if your model changes.
I converted the whole thing to Python code. That works and will likely keep working. Today I learned: don't load fixtures in migrations. (Pity, it would have been a nice shortcut.)
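For the record, "Python code" here means a regular data migration that builds the objects through apps.get_model, so it runs against the historical model state and keeps working as the models change. A minimal, generic sketch of such a function (the app label, model and field are invented for illustration; real Wagtail pages are more involved because their tree fields also need filling in), wired into the migration via migrations.RunPython exactly as above:

def create_initial_content(apps, schema_editor):
    # apps.get_model returns the model as it existed at this point in the
    # migration history, not the latest version from models.py.
    Category = apps.get_model('website', 'Category')
    Category.objects.create(name='News')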
Related
I have a quite large db of 750MB which I need to migrate from sqlite to postgres. As the db is quite large, I am facing some issues that some of the previous questions on the same topic did not have.
A few days ago, I migrated a 30MB sqlite db without any issues using the loaddata and dumpdata commands. But for this large db, one of my apps throws a "Database image is malformed" error when running dumpdata. Another of my apps dumps successfully but does not load. I have seen with the -v 3 verbose flag that the objects are not even processed. To be precise, while running the loaddata command, the data from the json file is first processed to check duplicate primary keys and other model constraints, and then that data is used to create model objects. But for this app, the data is not processed in the first place.
Apart from these two commands there are some other methods that do the migration. But the schema is completely changed in those ways, which I don't want. Moreover, I have faced the issue that a DurationField becomes a string after migration and I couldn't typecast those fields. Bigint becomes int, varchar becomes text, etc. I cannot afford to have these kinds of issues.
I found the causes of these problems.
"Database image is malformed" occurred because I was copying the entire db file directly. As angardi suggested, I compressed the db file with the gzip package and then copied it, which solved this issue.
There were two reasons why loaddata was not loading a certain app. That app contained a model with a foreign key to itself, and one entry of that model had the foreign key pointing to itself. This resulted in an infinite loop and the app wasn't loading. Even after fixing that entry I was facing the same issue. That happened because I had dumped the data of one specific app only: one of that app's models had a ManyToMany field pointing to a model in another app. While loaddata throws an error when a ForeignKey reference is not found, I think it kept searching for the ManyToMany references without throwing an error, which turned into an infinite-loop-like situation. All I had to do was dump the entire database, or the related app as well, into the same file.
Read this for the image malformed part.
When migrating from one database to another, you should first dump all the data, then run the Django migrations on the new database, and then load the data. That should retain all column types.
When running loaddata, the tables that have no dependencies should be loaded first and then the ones that depend on those already loaded.
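As an illustration of that round trip, here is a sketch using call_command; the file name and the excluded apps are common choices rather than requirements, and option names can differ slightly between Django versions:

from django.core.management import call_command

# On the old (sqlite) database: dump everything, using natural keys so that
# rows pointing at content types don't carry hard-coded primary keys.
with open('full_dump.json', 'w') as out:
    call_command(
        'dumpdata',
        use_natural_foreign_keys=True,
        use_natural_primary_keys=True,
        exclude=['contenttypes', 'auth.permission'],
        indent=2,
        stdout=out,
    )

# On the new (postgres) database, after running its migrations:
call_command('loaddata', 'full_dump.json')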
I'm fairly new to Django; I am using version 1.10. I reached a point where I was unable to migrate because of an error, so I backed up to an earlier migration using ./manage.py migrate myapp 0003_auto_20160426_2022 and deleted the later migration files. I then repaired my models.py and ran makemigrations, which worked fine. But when I attempted to migrate, I received the following error (only showing the last few lines):
File "/Users/wahhab/Sites/rts/env/lib/python3.5/site-packages/MySQLdb/connections.py", line 280, in query
    _mysql.connection.query(self, query)
django.db.utils.OperationalError: (1022, "Can't write; duplicate key in table '#sql-72_4a6'")
I don't know how to move forward from this point so that I can continue working on my project. I have data in other apps but only a little test data in this new app so far, so I have considered deleting all migrations and the MySQL tables for this app and starting over, but I don't want to create a worse mess than I have and don't know what is causing this error. Any advice is welcome. Thanks!
Okay so a hackish solution has already been suggested by #raratiru.
Now the explanation for WHY you were facing the above problem: when you deleted your migrations, Django reset all its counters, which includes the counter for the key, but because the deletion of the migrations was done manually, the data still persists in your DB.
So there already exist objects in your DB with key = 1, 2, 3 ... and so on. But Django doesn't know this, and since you deleted only the migrations and not the data, Django runs into a clash: it starts assigning automatic key values from 1 again, and those values already exist in the DB. But as the key needs to be unique, it gives you an error.
Hence if you delete both the migrations and the data, you don't get the error.
One of my apps has an initial_data.json file in /fixtures, which works great when using vanilla Django to specify some initial data. However, when migrating using South, as it steps through 'Loading initial data for ' I get an IntegrityError, because duplicate data already exists. This makes sense, because my migration didn't empty the table, so the initial data from previous calls to syncdb is already there.
How can I either 1) tell South to not load initial data while migrating, or 2) modify initial_data.json or other Django files so that duplicate data errors are handled gracefully, instead of crashing the migrate process in South?
migrate using the --no-initial-data option
I rewrote a lot of my models, and since I am just running a test server, I do ./manage.py reset myapp to reset the db tables and everything has been working fine.
But when I tried to do it this time, I got an error.
The full error: constraint "owner_id_refs_id_9036cedd" of relation "myapp_tagger" does not exist
So I figured I would just nuke the whole site and start fresh. So I did ./manage.py flush and then a syncdb; this did not raise an error and deleted all my data, but it did not update the database either, since when I try to access any of my_app's objects I get a column not found error. I thought that flush was supposed to drop all tables. The syncdb said that no fixtures were added.
I assume the error is related to the fact that I changed the tagger model to have a ForeignKey named owner tied to another object.
I have tried adding related_name to the ForeignKey arguments and nothing seems to be working.
I thought that flush was supposed to drop all tables.
No. According to the documentation, manage.py flush doesn't drop the tables. Instead it does the following:
Returns the database to the state it was in immediately after syncdb was executed. This means that all data will be removed from the database, any post-synchronization handlers will be re-executed, and the initial_data fixture will be re-installed.
As stated in chapter 10 of The Django Book in the "Making Changes to a Database Schema" section,
syncdb merely creates tables that don't yet exist in your database — it does not sync changes in models or perform deletions of models. If you add or change a model's field, or if you delete a model, you’ll need to make the change in your database manually.
Therefore, to solve your problem you will need to either:
Delete the database and reissue manage.py syncdb. This is the process that I use when I'm still developing the database schema. I use an initial_data fixture to install some test data, which also needs to be updated when the database schema changes.
Manually issue the SQL commands to modify your database schema.
Use South.
I'm now writing unit tests for already existing code. I faced the following problem:
After running syncdb to create the test database, Django automatically fills several tables like django_content_type or auth_permissions.
Then imagine I need to run a complex test, like checking user registration, that will need a lot of data tables and connections between them.
If I try to use my whole existing database for making fixtures (which would be rather convenient for me), I receive an error like the one here. This happens because Django has already filled tables like django_content_type.
The next possible way is to use Django's dumpdata --exclude option for the tables already filled by syncdb. But this doesn't work well either, because if I take User and User Group objects from my db while the User Permissions table was automatically created by syncdb, I can receive errors, because the primary keys connecting them now point to the wrong rows. This is better described here in the 'fixture hell' part, but the solution shown there doesn't look good.
The next possible scheme I see is this:
I run my tests; Django creates the test database, runs syncdb and creates all those tables.
In my test setup I drop this database and create a new blank database.
I load a data dump from the existing database, also in the test setup.
That's how the problem was solved:
After syncdb has created the test database, in the setUp part of the tests I use os.system to access the shell from my code. Then I just load the dump of the database that I want to use for the tests.
So it works like this: syncdb fills contenttype and some other tables with data. Then in the setUp part of the tests, loading the sql dump clears all the previously created data and I get a nice database.
Maybe not the best solution, but it works =)
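In code, that setUp looks roughly like the following; the dump file, database name and the use of psql are assumptions about this particular setup:

import os

from django.test import TestCase

class RegistrationTests(TestCase):

    def setUp(self):
        # Replay a SQL dump into the test database that syncdb created;
        # as described above, this replaces the rows syncdb inserted.
        os.system('psql test_mydb -f fixtures/test_dump.sql')

    def test_registration(self):
        ...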
My approach would be to first use South to make DB migrations easy (which doesn't help at all, but is nice), and then use a module of model creation methods.
When you run
$ manage.py test my_proj
Django with South installed will create the test DB and run all your migrations to give you a completely up-to-date test db.
To write tests, first create a Python module called test_model_factory.py. In it, create functions that create your objects:
from django.contrib.auth.models import User

def mk_user():
    return User.objects.create(...)
Then in your tests you can import your test_model_factory module, and create objects for each test.
def test_something(self):
    test_user = test_model_factory.mk_user()
    self.assertTrue(test_user ...)