What is the difference between --fake-initial and --fake in Django migrations? What are the dangers of using fake migrations? Does anybody know? Thank you very much to all.
I am using Django 1.10.
Well, the documentation is very clear about this:
--fake-initial
Allows Django to skip an app’s initial migration if all database
tables with the names of all models created by all CreateModel
operations in that migration already exist. This option is intended
for use when first running migrations against a database that
preexisted the use of migrations. This option does not, however, check
for matching database schema beyond matching table names
You were asking about the risks, well here it is
only safe to use if you are confident that your existing schema
matches what is recorded in your initial migration.
--fake
Tells Django to mark the migrations as having been applied or
unapplied, but without actually running the SQL to change your
database schema.
This is intended for advanced users to manipulate the current
migration state directly if they’re manually applying changes;
Once again risks are clearly highlighted
be warned that using --fake runs the risk of putting the migration
state table into a state where manual recovery will be needed to make
migrations run correctly.
This answer is valid not just for django versions 1.8+ but for other versions as well.
Edit, Nov 2018: I sometimes see answers here and elsewhere that suggest you should drop your database. That's almost never the right thing to do. If you drop your database, you lose all your data.
#e4c5 already gave an answer about this question, but I would like to add one more thing concerning when to use --fake and --fake-initial.
Suppose you have a database from production and you want to use it for development and apply migrations without destroying the data. In that case --fake-initial comes in handy.
The --fake-initial will force Django to look at your migration files and basically skip the creation of tables that are already in your database. Do note, though, that any migrations that don’t create tables (but rather modify existing tables) will be run.
Conversely, if you have an existing project with migration files and you want to reset the history of existing migrations, then --fake is usually used.
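A minimal sketch of both scenarios ("myapp" is a placeholder app label, and this is untested against any particular project; back up first):

```shell
# Scenario 1: fresh code checkout pointed at a pre-existing database.
# Skips initial migrations whose tables already exist, runs the rest.
python manage.py migrate --fake-initial

# Scenario 2: reset migration history for an app whose schema is already
# up to date. Mark the old migrations unapplied without touching the
# schema, delete the old migration files, regenerate, then record the
# new initial migration as applied.
python manage.py migrate myapp zero --fake
rm myapp/migrations/0*.py
python manage.py makemigrations myapp
python manage.py migrate myapp --fake-initial
```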
Short answer
--fake does not apply the migration
--fake-initial might, or might not apply the migration
Longer answer:
--fake: Django keeps a table called django_migrations to know which migrations it has applied in the past, to prevent you from accidentally applying them again. All --fake does is insert the migration filename into that table, without actually running the migration. This is useful if you manually changed the database schema first, and the models later, and want to bypass Django's actions. However, during that step you are on your own, so take care that you don't end up in an inconsistent state.
--fake-initial: depends on the state of the database
all of the tables already exist in the database: in that case, it works like --fake. Only the names of the tables are checked, not their actual schema, so, again, take care
none of the tables already exist in the database: in that case, it works like a normal migration
some of the tables already exist: you get an error. That's not supposed to happen: either you take care of the database, or Django does.
Note that --fake-initial is only taken into account if the migration file has initial = True in its Migration class; otherwise the flag is ignored. This is also the only documented usage of initial = True in migrations.
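Since --fake only touches that bookkeeping table, its effect can be illustrated directly. A sketch against a throwaway SQLite file: the table layout mirrors the django_migrations table Django creates, and "myapp"/"0001_initial" are placeholder values.

```shell
# What --fake effectively does: add a row to django_migrations without
# running any schema SQL. Demonstrated on a scratch SQLite database.
rm -f /tmp/fake_demo.db
sqlite3 /tmp/fake_demo.db <<'SQL'
CREATE TABLE django_migrations (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    app VARCHAR(255) NOT NULL,
    name VARCHAR(255) NOT NULL,
    applied DATETIME NOT NULL
);
-- this row is the whole effect of: manage.py migrate myapp 0001 --fake
INSERT INTO django_migrations (app, name, applied)
VALUES ('myapp', '0001_initial', datetime('now'));
SELECT app || '.' || name || ' recorded as applied' FROM django_migrations;
SQL
```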
Related
Many moons ago I used commands like ./manage.py reset appname to DROP and then recreate the database tables for a single App. This was handy for when other developers had inadvertently but manually broken something in the database and you wanted to reset things back without affecting other apps (or needing to go through a lengthy dump/load process).
The advent of Django 1.7 and its built-in migrations support seems to have removed and renamed a lot of these commands, and I'm going cross-eyed with all the shared prefixes in the documentation. Can somebody spell this out for me?
How do I reset the tables for a single application (one with migrations)?
If your Django migration subsystem is not broken in itself, the normal way to reset an app is to run manage.py migrate <app> zero.
This will run all of the app's migrations backwards, so a few things are noteworthy:
if some of the app's migrations are not reversible, the process will fail. This should not normally happen, as Django only creates reversible migrations. You can build irreversible ones yourself, though - usually when you create data migrations.
if some other app has a dependency on this app, it will also be migrated backwards up to the last migration that did not depend on it.
You can then run migrate again, so it is run forwards.
In any case, remember migrations introduce a risk for your data, so backup your database before touching anything.
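The procedure above, as a sketch ("appname" is a placeholder):

```shell
# Reverse all of appname's migrations (its tables are dropped)...
python manage.py migrate appname zero
# ...then run them forwards again to recreate the tables.
python manage.py migrate appname
```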
I read somewhere that you should never run syncdb on a database after its initial run.
Is this true?
I don't see what the problem could be. Do you?
running syncdb will not make changes to tables for any models already in the database (even if you have changed them).
for managing changes to models, consider South
Syncdb will create tables that don't exist, but not modify existing tables. So it's fairly safe to run in production. But it's not a reliable way to maintain a database schema. Look at the South package for a way to reliably maintain changes to your database schema between development and production. It should be part of the Django standard, IMHO.
Possible Duplicate:
update django database to reflect changes in existing models
I've used Django in the past and one of the frustrations I've had with it as an ORM tool is the inability to update an existing database with changes in the model. (Hibernate does this very well and makes things really easy for updating and heavily modifying a model and applying this to an existing database.) Is there a way to do this without wiping the database every time? It gets really old having to regenerate admin users and sites after every change in the model which I'd like to play with.
You will want to look into South. It provides a migrations system to migrate both schema changes as well as data from one version to the next.
It's quite powerful and the vast majority of changes can be handled simply by running
manage.py schemamigration <appname> --auto
manage.py migrate
The --auto functionality does have its limits, and especially if the change is eventually going to be run on a production system, you should check the code --auto generated to be sure it's doing what you expect.
South has a great guide to getting started and is well documented. You can find it at http://south.aeracode.org
No.
As the documentation of syncdb command states:
Syncdb will not alter existing tables
syncdb will only create tables
for models which have not yet been installed. It will never issue
ALTER TABLE statements to match changes made to a model class after
installation. Changes to model classes and database schemas often
involve some form of ambiguity and, in those cases, Django would have
to guess at the correct changes to make. There is a risk that critical
data would be lost in the process.
If you have made changes to a model and wish to alter the database
tables to match, use the sql command to display the new SQL structure
and compare that to your existing table schema to work out the
changes.
South seems to be how most people solve this problem, but a really quick and easy way to do this is to change the db directly through your database's interactive shell. Just launch your db shell (usually just manage.py dbshell) and manually alter, add, or drop the fields and tables you need changed using your database's syntax.
You may want to run manage.py sqlall appname to see the sql statements Django would run if it was creating the updated table, and then use those to alter the database tables and fields as required.
The Making Changes to a Database Schema section of the Django book has a few examples of how to do this: http://www.djangobook.com/en/1.0/chapter05/
I manually go into the database - whatever that may be for you: MySQL, PostgreSQL, etc. - to change database info, and then I adjust the models.py accordingly for reference. I know there is Django South, but I didn't want to bother with using another 3rd party application.
I'm no Django expert, but from what I can gather, there is no way to tell syncdb not to try to run ALTER statements to create foreign key constraints on the db.
I recently tried to upgrade my MySQL Cluster installation from 7.0.6 to the latest release 7.1.9a. This revealed a bug in this latest MySQL release in which foreign key constructs are NOT ignored on engines that do not support them as they were in previous versions. This is definitely a MySQL bug which I have submitted to them and they have verified as valid.
In the mean time, however, until that bug is fixed, I'm stuck running a very old version of MySQL and wondered if I could possibly work around it by somehow forcing syncdb NOT to try to actually create foreign keys on the database, just create the tables.
Without getting into detail, in my case the syncdb command is built into some automation that does a lot more than just build a database from models, so I can't very easily work around this manually.
Any input or ideas are appreciated.
South can do this (http://south.aeracode.org/)
You could do a schemamigration and then edit the migration by hand, leaving out the FK constructs. You could also make your own introspection rules in South that automates this, I guess. It might interfere with your syncdb automation, though.
You can run python manage.py sqlall app_name, remove the ALTER FK constraints, and load the SQL manually.
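A runnable sketch of that filtering step: in a real project the input would come from python manage.py sqlall app_name, so here a small sample of such output stands in for it. The grep assumes, as sqlall's output format does, that each FK constraint is emitted as a line starting with ALTER TABLE; the myapp_* names are placeholders.

```shell
# Stand-in for `python manage.py sqlall app_name` output:
cat > /tmp/full.sql <<'SQL'
CREATE TABLE "myapp_book" ("id" integer NOT NULL PRIMARY KEY, "author_id" integer NOT NULL);
ALTER TABLE "myapp_book" ADD CONSTRAINT "author_id_refs_id" FOREIGN KEY ("author_id") REFERENCES "myapp_author" ("id");
SQL
# Drop the FK-adding ALTER TABLE statements, keep everything else:
grep -v '^ALTER TABLE' /tmp/full.sql > /tmp/no_fk.sql
cat /tmp/no_fk.sql   # ready to load manually with your db shell
```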
I've just inherited a Django project for maintenance and continuous development. While I'm a fairly proficient programmer (also in Python), I have next to no experience with Django, therefore I need a bit of sanity-checking for my ideas ;)
The current problem is this: the project contains a custom install.sh file, which does three things:
Creating some non-model databases and importing initial data via SQL
Importing fixtures using manage.py
The usual manage.py syncdb and manage.py migrate.
(install.sh also contained some logic to implement half-baked South dependency management, which I replaced with the native one)
My idea was the following:
Generate models for every non-model database table (manage.py inspectdb for a start, split up in apps afterwards).
Convert all non-south models to south
Convert initial SQL data to south fixtures
Convert database backup routines to manage.py dumpdata (and restoring to manage.py loaddata fixtures).
Never work with raw SQL again
Now the simple question is: is this plan sensible? What are the alternatives?
Sounds sane enough to me, if you're after a pure no-actual-SQL route.
Two small points:
the fixtures in 3) are actually Django fixtures, rather than South ones.
using dumpdata to create JSON/XML Django fixtures and then restoring them is not without risks. Certain django.contrib apps (and many other non-contrib ones) can cause loaddata pain due to FK clashes, round-robin dependencies, etc. So I would recommend dumping to SQL as well as fixtures. A raw SQL dump will be faster for a non-Djangonaut to restore if your server explodes while you're holidaying in the sun, etc.
Generate models for every non-model database table (manage.py inspectdb for a start, split up in apps afterwards).
Sounds good. You may want to proceed on a need-to-use basis on this though. Start with those you need immediately.
Convert all non-south models to south
I don't quite get what a non-south model is. If you mean generating south migrations for all existing models (and then probably --fake them during migration) then yes, this makes sense.
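For reference, a sketch of that first-time conversion with South ("myapp" is a placeholder; since the tables already exist, the initial migration is recorded with --fake rather than executed):

```shell
# One-shot helper South ships for exactly this case:
./manage.py convert_to_south myapp
# which is roughly equivalent to doing it by hand:
./manage.py schemamigration myapp --initial
./manage.py migrate myapp 0001 --fake
```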
Convert initial SQL data to south fixtures
Again, what are South fixtures?
Convert database backup routines to manage.py dumpdata (and restoring to manage.py loaddata fixtures).
I am not so sure about this one. I have seen database specific backup and restore solutions used more frequently than manage.py. Especially if you have legacy triggers/stored procedures etc.
Never work with raw SQL again
Good in theory. Do keep in mind that you will at some point have to dig up SQL. As long as you don't use this as a reason to lose touch with SQL, or use it as an excuse not to get your hands "dirty" with SQL, then yes, go ahead.