We use Liquibase for source-controlling the database. We started with Postgres and wrote changesets whose column datatypes are specific to Postgres.
For example, we have a changeset that creates a table with fields of type 'JSON'. Now we want to move to another database, and when we run the changeset against it, it fails to create the table. I tried adding 'failOnError=false', but then the later changesets failed because the table does not exist.
Could you please suggest how to refactor the old changeset to make it compatible with the other database as well?
You can make your changesets database-specific. Re-create the changeset so that it works in the new DB and add the "dbms" attribute, set to the new DB, to that new changeset, while adding the same attribute, set to the old DB, to the old changeset.
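As a rough sketch (the ids, author, table name, and the json-to-clob type mapping below are placeholders, not taken from your changelog), the split could look like:

<!-- placeholder changelog fragment; adjust ids, authors, and types to your schema -->
<changeSet id="1-postgres" author="me" dbms="postgresql">
    <createTable tableName="events">
        <column name="payload" type="json"/>
    </createTable>
</changeSet>
<changeSet id="1-other" author="me" dbms="oracle">
    <createTable tableName="events">
        <column name="payload" type="clob"/>
    </createTable>
</changeSet>

Liquibase skips any changeset whose dbms value doesn't match the database it is running against, so each engine only executes the variant written for it.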
I use Flyway for DB schema migrations.
But now I also want to make it possible to dynamically create a new database (for testing), update it to the latest schema, and fill it with test data.
Is it possible to have flyway baseline a new DB and apply ALL schema version scripts sequentially so the DB is updated to the latest state?
I could not find any examples of this. I don't want to have a separate process or scripts for creating a new DB with the right schema.
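For illustration, the workflow I have in mind looks roughly like the sketch below (the database name, credentials, and script location are placeholders; it assumes the Flyway CLI and Postgres' createdb are on the PATH):

import subprocess

db_name = "test_db"  # placeholder throwaway database
subprocess.run(["createdb", db_name], check=True)  # create an empty Postgres DB
subprocess.run([
    "flyway",
    f"-url=jdbc:postgresql://localhost:5432/{db_name}",
    "-user=postgres",             # placeholder credentials
    "-password=secret",
    "-locations=filesystem:sql",  # folder holding the V1__, V2__, ... scripts
    "migrate",                    # on an empty database this should apply every version in order
], check=True)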
I was given access to a database to use on one of my Django projects. I set everything up and went to migrate the new models and I got an error.
"Unable to create the django_migrations table ((1142, "CREATE command denied
to user 'MyDatabaseName' for table 'django_migrations'"))"
After looking into it, I see it's because Django is trying to create a new table in that database.
I don't have write access and I do not want it because I am pretty new at this and do not want to mess anything up. I am also not sure the owner would give it to me.
Is there a way to get to use the legacy database without having to create a new table in it?
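One commonly suggested approach (offered here only as a sketch, not something confirmed in this thread) is to declare the legacy tables as unmanaged so Django never tries to run DDL against them; the model and table names below are hypothetical:

from django.db import models

class LegacyRecord(models.Model):  # hypothetical model mapped onto an existing table
    name = models.CharField(max_length=100)

    class Meta:
        managed = False             # Django reads/writes rows but never creates or alters the table
        db_table = 'legacy_record'  # the table's real name in the legacy database

Note that running migrate against that database would still try to create django_migrations, so with read-only access you would typically not point migrate at it at all.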
I'm working in a development environment on a Flask app with a Postgres 10 database that has ~80 tables. There are lots of relationships and ForeignKeyConstraints networking it all together.
It was working fine with Flask-Migrate; I'd bootstrapped and migrated up to this point with ~80 tables. But I wanted to test out some new scripts to seed the database tables, and thought it would be quickest to just drop the database and bring it back up again using Flask-Migrate.
In this process, the migration folder was deleted, so I just started over fresh with a db init. Then I ran db migrate and manually fixed a few imports in the generated script. Finally, I ran db upgrade.
However, now with all these 80 create_table commands in my migration script, when I run db upgrade, I receive an error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation "items" does not exist
I receive this error for every child table that has a ForeignKeyConstraint and is not placed below its parent table in the migration file.
But the autogenerated script from db migrate has the tables sorted alphabetically by table name.
Referring to the documentation, I don't see the importance of sort order mentioned.
Bottom line: it seems I'm either forced to write a script that sorts all these tables into an order where each parent table is above its child tables, or else to cut and paste them like a jigsaw puzzle until everything is in the required order.
What am I missing? Is there an easier way to do this with Flask-Migrate or Alembic?
After researching this, it seems Flask-Migrate and Alembic have no built-in method to resolve this sort-order issue. I fixed it by cutting and pasting the tables into an order that ensured each parent table was above its child tables in the migration file.
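In outline (the table and column names here are made up), the reordered migration just needs each ForeignKey target created first:

from alembic import op
import sqlalchemy as sa

def upgrade():
    op.create_table(
        'items',  # parent table first
        sa.Column('id', sa.Integer, primary_key=True),
    )
    op.create_table(
        'item_tags',  # child table second, so its FK target already exists
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('item_id', sa.Integer, sa.ForeignKey('items.id')),
    )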
I've just encountered this myself, and could not find a better and/or official answer.
My approach was to separate the table creation from the creation of foreign key constraints (see the sketch after these steps):
1. Edit Alembic's auto-generated migration script: in each table-create operation, remove all lines creating foreign key constraints
2. Run Alembic's upgrade command (tables are created, minus the FK constraints, of course)
3. Run Alembic's migrate command (an additional migration script is created that adds all FK constraints)
4. Run Alembic's upgrade command (FK constraints are added to the tables)
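Roughly, with made-up table names, the two resulting migration files look like this:

from alembic import op
import sqlalchemy as sa

# Migration file 1: tables only, FK lines deleted by hand
def upgrade():
    op.create_table('items', sa.Column('id', sa.Integer, primary_key=True))
    op.create_table(
        'item_tags',
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('item_id', sa.Integer),  # FK constraint removed from this line
    )

# Migration file 2 (autogenerated afterwards): the constraints on their own
def upgrade():
    op.create_foreign_key(
        'fk_item_tags_item_id',  # constraint name
        'item_tags', 'items',    # source and referent tables
        ['item_id'], ['id'],     # local and remote columns
    )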
I've faced some problems when using Flask-SQLAlchemy and Flask-Migrate; I solved them using the Python interactive shell.
>>> from yourapp import db, create_app
>>> db.create_all(app=create_app())  # creates every table straight from the models, bypassing migrations
Happy coding...
I am trying to write a migration which grants read-only permissions on each schema in my multi-tenant Postgres DB.
The migrations run once per schema, so what I would like to do is capture the name of the schema the migration is currently running for, and then use that schema_name in my SQL statement to grant permissions for that schema.
In Django, I can create a migration operation called RunPython, and from within that Python code I can determine which schema the migrations are currently running for (schema_editor.connection.connection_name).
What I want to do is pass that information to the next migration operation, namely RunSQL, so that the SQL I run can be:
"GRANT SELECT ON ALL TABLES IN SCHEMA {schema_name_from_python_code} TO readaccess;"
If anyone can shed any light on this issue it would be greatly appreciated. Cheers!
I was able to figure this out by getting rid of migrations.RunSQL. I just have migrations.RunPython. From within that Python forward function I am able to access the DB and run the SQL there (with the necessary string interpolation).
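Roughly, it ended up looking like the sketch below (the app label and migration dependency are placeholders; connection_name comes from the multi-tenant setup described in the question):

from django.db import migrations

def grant_readonly(apps, schema_editor):
    # Schema the current migration run is targeting
    schema_name = schema_editor.connection.connection_name
    # Identifiers can't be parameterized, and the schema name comes from
    # trusted config rather than user input, so interpolation is used here.
    schema_editor.execute(
        f'GRANT SELECT ON ALL TABLES IN SCHEMA "{schema_name}" TO readaccess;'
    )

class Migration(migrations.Migration):
    dependencies = [('myapp', '0001_initial')]  # placeholder
    operations = [
        migrations.RunPython(grant_readonly, migrations.RunPython.noop),
    ]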
:)
I have two different databases in Django. Initially, I had a table called cdr in my secondary database. I decided to get rid of the second database and just add the cdr table to the first database.
I deleted references (all of them, I think) to the secondary database in the settings file and throughout my app. I deleted all of the migration files and ran makemigrations fresh.
The table that used to be in the secondary database is not created when I run migrate, even though it doesn't exist in my Postgres database.
I simply cannot for the life of me understand this: makemigrations will create the migration file for the table when I add it back into the model definition, and I have verified that it is in the migration file, yet when I run migrate, it tells me there are no migrations to apply.
Why is this so? I have confirmed that I have managed=True. I have confirmed that the model is not in my Postgres database by logging into the first database and running \dt.
Why does Django think that this table still exists, such that it tells me there are no migrations to apply even though the migration file shows a create command? I even dropped the secondary database to make sure it wasn't somehow being referenced.
I suspect code isn't needed to explain this to me but I will post if needed. I figure I am missing something simple here.
"Why does Django still think that this database still exists such that it is telling me no migrations to apply even though it shows a create command in the migrations file"
Because Django maintains a table called django_migrations in your database which lists all the migrations that have been applied. Since you are almost starting afresh, clear out this table and then run the migrations.
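For example, from the Django shell (the app label is a placeholder, and this assumes the default connection points at the database you are resetting):

>>> from django.db.migrations.recorder import MigrationRecorder
>>> MigrationRecorder.Migration.objects.filter(app='myapp').delete()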
If this still doesn't work, and still assuming that you are on a fresh start, it's a simple matter to drop all the tables (or even the database) and run the migrations again. On the other hand, if you have data you want to save, you need to look at the --fake and --fake-initial options to migrate.
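For instance, if the tables already exist and match the initial migration, something like this marks it as applied without touching the schema (app label assumed):

python manage.py migrate myapp --fake-initial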