I need to know if getting an error while running python manage.py migrate means my database will remain in the same state it was before running the migrate command.
I'm trying to implement migrations as part of a CI system, and it would be good to know whether I need to do some kind of rollback if the migration fails.
As the documentation explains, it depends on the database.
PostgreSQL can run schema alteration operations inside transactions, so Django does so and rolls back in case of failure. MySQL, however, does not support this.
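For illustration, here is a minimal sketch of a migration (the app, model, and field names are hypothetical): on PostgreSQL the whole operations list runs in one transaction by default, so a failure in the second operation also undoes the AddField, while on MySQL the added column may persist.

# Hypothetical migration for an app "our_app"; atomic = True is the default.
from django.db import migrations, models


class Migration(migrations.Migration):

    atomic = True  # wrap the whole migration in a transaction (on backends that support it)

    dependencies = [("our_app", "0001_initial")]

    operations = [
        migrations.AddField(
            model_name="ourmodel",
            name="our_field",
            field=models.CharField(max_length=100, null=True),
        ),
        # If this statement fails on PostgreSQL, the AddField above is rolled back too;
        # on MySQL the new column would remain.
        migrations.RunSQL("UPDATE our_app_ourmodel SET our_field = 'default'"),
    ]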
On our staging server we observed a runtime error stating that a field is missing from the database:
column our_table.our_field does not exist
LINE 1: ...d"."type", "our_table"...
The field was added during a recent update that involved a complicated migration squashing process. It's possible that some errors were made during this process, but the "manage.py showmigrations" command shows the migration as applied and "manage.py makemigrations" does not create any new migrations. As we do not run tests on our staging or production databases, we are trying to figure out the most effective way to identify such errors.
In short, how can we identify mismatches between the database and Django models caused by an incorrect migration like the following?
python manage.py migrate our_app --fake
I suppose I am looking for something like
python manage.py check_database
Edit: Many thanks for the suggestions. However, this is more of a deployment question than a development one, because the problem likely occurred when our devops tried to apply the squashed migrations while retaining data on the staging server (which will be the case in production). It was scary to learn that such an inconsistency can occur even when makemigrations and showmigrations do not show any problem, and can therefore also happen in production.
The bottom line is that we need some way to ensure our database matches our models after deployment.
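There is no built-in check_database command that I know of, but as a rough sketch (relying only on Django's introspection helpers), something like this can be run after a deploy to flag tables or columns that the models expect but the database lacks:

# Rough sketch of a post-deploy consistency check; run it from `python manage.py shell`
# or wrap it in a custom management command. It only checks for missing tables/columns,
# not for type or constraint mismatches.
from django.apps import apps
from django.db import connection

with connection.cursor() as cursor:
    existing_tables = set(connection.introspection.table_names(cursor))
    for model in apps.get_models():
        table = model._meta.db_table
        if table not in existing_tables:
            print("MISSING TABLE: %s" % table)
            continue
        db_columns = {
            col.name
            for col in connection.introspection.get_table_description(cursor, table)
        }
        for field in model._meta.local_fields:
            if field.column not in db_columns:
                print("MISSING COLUMN: %s.%s" % (table, field.column))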
This is a Heroku-specific issue with a Django 1.11.24 project running Python 3.6.5 and a Heroku Postgres database.
While testing two different branches during development, conflicting migration files were deployed to the Heroku server at different times. We recognized this and have now merged the migrations, but the order in which the Heroku Postgres schema was migrated no longer matches the current migration files.
As a result, specific tables already exist, so applying the updated, merged migration files on deploy fails with:
psycopg2.errors.DuplicateTable: relation "table_foo" already exists
In heroku run python manage.py showmigrations -a appname, all of the migrations are shown as having run.
We've followed Heroku's docs and done the following:
Rolled back the app itself to before when the conflicting migrations took place and were run (https://blog.heroku.com/releases-and-rollbacks)
Rolled back the postgres db itself to a datetime before when the conflicting migrations took place and were run (https://devcenter.heroku.com/articles/heroku-postgres-rollback)
However, despite both app and db rollbacks, when we check the db tables themselves in the rolled-back database in a psql shell with \dt, the table causing the DuplicateTable error still exists, so the db rollback doesn't actually seem to affect the django_migrations table.
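For diagnosis, a quick sketch like this (run via heroku run python manage.py shell) prints what Django has recorded in django_migrations next to the tables that actually exist; MigrationRecorder is an internal API, so treat this purely as a debugging aid:

# Compare the migration bookkeeping with the real schema.
from django.db import connection
from django.db.migrations.recorder import MigrationRecorder

recorder = MigrationRecorder(connection)
print("Recorded as applied in django_migrations:")
for app_label, name in sorted(recorder.applied_migrations()):
    print("  %s.%s" % (app_label, name))

with connection.cursor() as cursor:
    print("Tables that actually exist:")
    for table in sorted(connection.introspection.table_names(cursor)):
        print("  %s" % table)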
It's Heroku, so we can't fake the migrations.
We could attempt to drop the specific db tables that already exist (or drop the entire db, since it's a test server), but that seems like bad practice. Is there any other way to address this in Heroku? thanks
I eventually fixed this by manually modifying the migration files to align with the schema dependency order that had already been established. A very unsatisfying fix; I wish Heroku offered a better solution for this (or a longer Postgres database rollback window).
We seem to have gotten our prisma migrations into a bad state. When we get the latest code and run
prisma migrate dev
we get
Migration 20210819161149_some_migration failed to apply cleanly to the shadow database.
Error code: P3018
Error: A migration failed to apply. New migrations cannot be applied before the error is recovered from. Read more about how to resolve migration issues in a production database: https://pris.ly/d/migrate-resolve
Migration name: 20210819161149_some_migration
Database error code: 1065
All the migrations in source control do match the ones in the _prisma_migrations table, so I'm not sure why it thinks 20210819161149_some_migration failed. There is nothing in the logs column in _prisma_migrations for that record. I think what happened is that a developer applied the migration and then changed its migration.sql after the fact.
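One way to test that theory: Prisma records a checksum for each applied migration in _prisma_migrations (as far as I know, a hex-encoded SHA-256 of the migration.sql contents; treat that as an assumption). A small sketch to compute the hash of the file on disk for manual comparison:

# Hash the migration file on disk; if it differs from the `checksum` column in
# _prisma_migrations for this migration, the file was edited after it was applied.
import hashlib
from pathlib import Path

migration_file = Path("prisma/migrations/20210819161149_some_migration/migration.sql")
print(hashlib.sha256(migration_file.read_bytes()).hexdigest())

# Compare against, e.g.:
#   SELECT migration_name, checksum FROM _prisma_migrations
#   WHERE migration_name = '20210819161149_some_migration';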
Anyway, we followed the steps outlined at https://pris.ly/d/migrate-resolve, but they don't seem to resolve the issue. The page first suggests running
prisma migrate resolve --rolled-back "20210819161149_some_migration"
but that results in
Error: P3012
Migration 20210819161149_some_migration cannot be rolled back because it is not in a failed state.
So then we tried to mark it as applied
prisma migrate resolve --applied "20210819161149_some_migration"
but that results in this error
Error: P3008
The migration 20210819161149_some_migration is already recorded as applied in the database.
We also tried running
prisma migrate deploy
Which gives
13 migrations found in prisma/migrations
WARNING The following migrations have been modified since they were applied:
20210819161149_some_migration
but we still get the same issue as above when running prisma migrate dev.
Is there any way to get prisma happy again without deleting all the data?
A possible workaround would involve baselining your development database based on your current schema/migration history. You would need a separate backup database and you will lose your existing migration history, but it should retain your data.
Here's what the process would look like:
Delete all migration history from your prisma folder as well as the _prisma_migrations table in your database (see the sketch after these steps).
Create a new backup database.
Connect the project to your backup database and run prisma migrate dev --name baseline_migration. This will generate a migration matching your current Prisma schema.
Connect back to your main Database and baseline the generated migration by running prisma migrate resolve --applied 20210426141759_baseline_migration (The numbers at the beginning of your migration name will differ).
The reason you'd be creating a backup database and running the initial baseline migrations in that backup database is because you don't want to lose the data in your main database. I realize this is not an ideal solution, but it might work if it's very important for you to keep your data while retaining your existing dev workflow.
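As a concrete illustration of the first step above (paths are the default Prisma layout; the DELETE is plain SQL you would run against your database, not a Prisma command):

# Remove every local migration folder, then clear Prisma's bookkeeping table.
import shutil
from pathlib import Path

for entry in Path("prisma/migrations").iterdir():
    if entry.is_dir():
        shutil.rmtree(entry)  # delete each <timestamp>_<name> migration folder

# Then, against the database (plain SQL, e.g. in psql or your MySQL client):
#   DELETE FROM _prisma_migrations;    -- or DROP TABLE _prisma_migrations;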
This article on Adding Prisma Migrate to an existing project is also worth a read.
You can do it as described in this short video:
https://youtu.be/BIfvmEhbtBE
https://www.prisma.io/docs/reference/api-reference/error-reference lists the correct ways of troubleshooting your problem. I got a migration error (P3009), investigated the generated SQL file, found some syntax issues in it, fixed them, and that resolved the migration issue.
Django migrations behave excellently in terms of single migrations, where, assuming you leave atomic=True, a migration will be all-or-nothing: it will either run to completion or undo everything.
Is there a way to get this all-or-nothing behavior for multiple migrations? That is to say, is there a way to either run multiple migrations inside an enclosing transaction (which admittedly could cause other problems) or to rollback all succeeded migrations on failure?
For context, I'm looking for a single command or setting to do this so that I can include it in a deploy script. Currently, the only part of my deploy that isn't rolled back in the event of a failure are database changes. I know that this can be manually done by running python manage.py migrate APP_NAME MIGRATION_NUMBER in the event of a failure, but this requires knowledge of the last run migration on each app.
This isn't a feature in Django (at least as of 4.1). To do this yourself, your code needs to do the following:
Get a list of current apps that have migrations. This can be done by getting the intersection of apps from settings.INSTALLED_APPS and the apps in the MigrationRecorder.
Get the latest migration for each app.
Run migrations.
If migrations fail, run the rollback command for the latest migration for each app.
Here's an implementation of this as a custom management command: https://github.com/zagaran/django-migrate-or-rollback. This is done as a child-class of the migrate management command so that it exposes all of the command line options of migrate. To use this library, pip install django-migrate-or-rollback and add "django_migrate_or_rollback" to your INSTALLED_APPS in settings.py. Then you can use python manage.py migrate_or_rollback in place of the traditional migrate command.
Disclosure: I am the author of the library referenced (though I made the library based on this StackOverflow answer).
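For reference, a stripped-down sketch of that logic (it leans on Django internals such as MigrationRecorder and on the numeric migration-name prefixes for ordering, so treat it as illustrative rather than production-ready):

# Record the latest applied migration per app, run migrate, and on failure
# migrate each app back to the recorded target.
from django.core.management import call_command
from django.db import connection
from django.db.migrations.recorder import MigrationRecorder


def migrate_or_rollback():
    applied = MigrationRecorder(connection).applied_migrations()
    latest = {}
    for app_label, name in applied:
        if app_label not in latest or name > latest[app_label]:
            latest[app_label] = name  # relies on the 0001_, 0002_, ... prefix ordering
    try:
        call_command("migrate")
    except Exception:
        # Roll each previously migrated app back to where it was before this run.
        # Apps whose very first migration was applied during this run would need
        # `migrate <app_label> zero` instead; omitted here for brevity.
        for app_label, name in latest.items():
            call_command("migrate", app_label, name)
        raise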
I have accidentally deleted one of my migrations folders and have no backup of it.
What are my options?
The DB is Postgres. Right now everything is OK (in its place I have copied over the migrations folder from my dev server, which uses SQLite), so I am just getting a red message on the server that not all migrations have been applied.
But next time I run migrations I will be in trouble.
What is my way out?
Migrations are mainly for backward compatibility and for tracking/versioning the changes to your models/database. If you do not really care about historical changes, then you can just delete the migrations directory and do:
python manage.py makemigrations <app_name>
This creates the initial migrations (like starting from a clean slate), and you can track the migration history from this point forward. More on this can be read here.
Now, when you run the migrations, you can do
python manage.py migrate <app_name> --fake-initial
to fake the initial migration.
Now might be a good time to put your application under version control.
Use version control.
You are not the first developer to delete important files, but recovery often takes less than a second thanks to version control systems (also called revision control systems). Please stop everything else and install and use one of Git, Mercurial, or Subversion.
Don't use FTP
It's totally, and I mean totally, insecure. Always use SFTP.
Don't use sqlite3 for local with other db for production
SQLite doesn't enforce strict type checking. PostgreSQL, on the other hand, is very particular about it. Additionally, SQLite only has a subset of the functionality found in PostgreSQL. Last but not least, different RDBMSs have different intricacies. If you use one locally and another in production, there is always a chance that your code will break when you deploy to live.
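A tiny illustration of the type-checking difference (standard-library sqlite3; PostgreSQL would reject the same insert):

# SQLite happily stores a string in an INTEGER column...
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (age INTEGER)")
conn.execute("INSERT INTO person (age) VALUES (?)", ("not a number",))
print(conn.execute("SELECT age, typeof(age) FROM person").fetchall())
# -> [('not a number', 'text')]
# ...whereas PostgreSQL raises an error for the equivalent INSERT.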
Managing without the migration files
This is not a big loss as long as your database is in sync with your models.
If your database is not in sync with your models, you can use
./manage.py inspectdb
to recreate local models that represent the actual structure in the db. Then you do makemigrations and migrate (as explained by karthik) on the generated models.
Then replace the generated models with your live models and repeat the makemigrations/migrate step.
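For reference, inspectdb emits unmanaged model stubs roughly like the following (hypothetical table and field names; the actual field types come from the live schema):

# Example of what ./manage.py inspectdb might generate for an existing table.
from django.db import models


class OurTable(models.Model):
    our_field = models.CharField(max_length=100, blank=True, null=True)
    created_at = models.DateTimeField(blank=True, null=True)

    class Meta:
        managed = False        # inspectdb marks generated models as unmanaged
        db_table = 'our_table'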