I'm using the Hasura migration guide to sync two servers, DEV and PROD.
Previously we transferred the changes manually (as in 'using the UI to copy all the changes'), so the databases are now about 90% similar.
We decided to set up proper migrations, but based on my tests, doing an initial sync requires a 'clean slate'.
Example of the problem:
We have a users table on both DEV and PROD. On DEV there is an additional field, age.
We do:
1. hasura migrate create --init (on DEV)
2. hasura migrate apply --endpoint PRODUCTION
We get the error: relation "users" already exists.
The question is - how can we sync the DBs without cleaning PROD first?
You're currently hitting that issue because migrate apply is trying to execute against tables which already exist.
If you use the --skip-execution flag, you can mark all of your existing migrations as completed in the PRODUCTION environment, and then run migrate apply as usual to apply only the new migrations.
More information is available in the CLI documentation:
https://hasura.io/docs/latest/graphql/core/hasura-cli/hasura_migrate_apply.html
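A rough sketch of what that could look like (the endpoint and version number below are placeholders for your own values):

# mark the existing initial migration as already applied on PROD, without running it
hasura migrate apply --endpoint https://PRODUCTION --skip-execution --version 1618309999999
# then apply any genuinely new migrations as usual
hasura migrate apply --endpoint https://PRODUCTION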
After re-reading the question, to clarify - creating the initial migration using create --init will create a snapshot of your database as it is now (it won't diff between STAGING and PRODUCTION).
To migrate this between STAGING and PRODUCTION you'd need to manually edit the initial migration so it matches both staging and prod, and then manually create an incremental migration to bring PRODUCTION in line with STAGING.
After this, if you're working with the Hasura Console through the CLI (using https://hasura.io/docs/latest/graphql/core/hasura-cli/hasura_console.html), it will automatically create future incremental migrations for you in the migrations directory.
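For reference, that just means launching the console from your Hasura project directory, so each change you make in the UI gets recorded as a migration file:

hasura console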
As an aside, you can also make migrations resilient manually by using IF NOT EXISTS (Hasura doesn't generate these automatically, but you can edit the SQL files of the generated migrations).
For example:
ALTER TABLE users
ADD COLUMN IF NOT EXISTS age INT;
Edit 2: One other tool which I came across which may be helpful is migra (for Postgres, outside of Hasura). It can help with diffing your dev and production databases to create the initial migration state: https://github.com/djrobstep/migra
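A minimal sketch of how that could be used (the connection URLs are placeholders, and --unsafe is only needed if the diff includes drops):

# print the SQL needed to make the first database match the second
migra --unsafe postgresql://user@host/prod_db postgresql://user@host/dev_db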
It's a bit buried, but the section on migrations covers this scenario (you haven't been tracking/creating migrations and now need to initialize them for the first time):
https://hasura.io/docs/latest/graphql/core/migrations/migrations-setup.html#step-3-initialize-the-migrations-and-metadata-as-per-your-current-state
Hope this helps =)
I am facing a challenge here. I inherited the models from previous developers, and the tables were not properly built. I added some constraints and new tables in order to normalize those tables. Before pushing the application to Heroku I tested it on my local machine, and it actually broke my database.
Now the Heroku site is already in production, so there is user information in it. How should I approach this? Do I need to destroy the existing database, create a new one, and run the migrations?
Be very, very careful. Applying migrations on production servers can cause irreversible damage if you are not careful, and so you should be prepared for every possible situation.
My best recommendation would be to create an entire duplicate copy of your live DB (on Heroku this is as simple as a pg dump/backup). You can then create a new staging site using the same code, load the backup into a new database instance, and test against that. Live environments are not always the same as local ones. You can then run your migrations on the staging site and see if there are any unexpected effects (the best way to check is by utilizing Django test cases). If there are any issues, be sure to understand how the rollback process works with Django migrations.
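On Heroku, the duplicate-and-restore step could look something like this (the app names and the backup id b001 are placeholders):

# snapshot the production database
heroku pg:backups:capture --app my-prod-app
# restore that snapshot into a separate staging app's database
heroku pg:backups:restore my-prod-app::b001 DATABASE_URL --app my-staging-app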
A good tutorial that is fairly recent can be found here: https://realpython.com/django-migrations-a-primer/
I'm using Django and PostgreSQL to develop a web service.
Suppose we have 3-4 branches for different features or old-version bugfix purposes.
Then I met a problem: when I was on branch A, I changed a Django model and ran migrate to update the database on my local test desktop.
When I switch to another branch which doesn't have that migration file, the database becomes inconsistent and Django cannot run, so I have to delete the database and recreate it.
In general, what's the best/common way to deal with this kind of demand in a development environment?
I understand your situation well and have been in the same shoes several times.
Here is what I prefer (and do):
I am on branch bug-fix/surname_degrade.
I changed the user data model (which generated migration 0005 for the users app) and then migrated the DB.
Then my boss came and pointed out that users are not able to log in due to a login degrade.
So I have to switch branches and fix that first.
I can roll back the migration (0005) which I applied a few moments back, with something like python manage.py migrate users 0004.
I switch branches and start working on hot-fix/login_degrade.
When I switch back to my previous task, I can just migrate again and proceed.
With this procedure I don't need to delete all my tables, restore an old database, or anything like that.
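A sketch of that whole flow, assuming the app is called users and the relevant migrations are 0004 and 0005:

git checkout bug-fix/surname_degrade
python manage.py migrate users 0004    # roll back 0005 before leaving the branch
git checkout hot-fix/login_degrade     # fix the login issue here, commit
git checkout bug-fix/surname_degrade
python manage.py migrate users         # re-apply 0005 and continue working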
I am a newbie, will be extremely happy to hear your thoughts.
The major issue here is that your database will change every time you migrate, so either you maintain database consistency among the different branches, or you can do one thing while using/testing (after declaring all the models):
1) Delete all database tables (if you have a backup or dummy data)
2) Delete all existing migration files in your branch
3) Create new migrations
4) Migrate to the new migrations
The above steps can also be done if the models are re-modified; after modification just repeat the steps.
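A rough sketch of those steps for a local PostgreSQL setup (the database and app names are placeholders, and this throws away data, so only do it on a dev machine):

# 1) drop and recreate the dev database
dropdb myproject_dev && createdb myproject_dev
# 2) remove the existing migration files for the app
find myapp/migrations -name "0*.py" -delete
# 3) and 4) regenerate and apply the migrations
python manage.py makemigrations
python manage.py migrate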
Run a different test database in each branch.
When you fork the design, fork the database.
Make a clone of the database and migrate that.
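For a local Postgres dev setup, cloning could be as simple as (names are placeholders):

# create a branch-specific copy of the dev database, using it as a template
createdb -T myproject_dev myproject_branch_a
# then point that branch's Django settings at myproject_branch_a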
Make sure that when you push to git you include your migrations; that way, when someone else pulls the branch and runs migrate, Django knows what changes were made to the database.
I've got an existing production DB, and a development DB with some schema differences. Neither has used Liquibase before. How can I achieve the following:
1. Compute a difference between production and development.
2. Apply the delta to production.
3. End up with both the production (and dev) schemas under the control of Liquibase.
Here is what I ended up with ($LIQUIBASE expands to the Liquibase command line tool configured for the particular DB I was using).
Generate a baseline changelog from the current state of the Production DB:
$LIQUIBASE --url=$PROD_URL --changeLogFile=changeLog.xml generateChangeLog
Record Production as being in sync with the change log:
$LIQUIBASE --url=$PROD_URL --changeLogFile=changeLog.xml changeLogSync
Calculate the differences between Development and Production, and append them to the change log file:
$LIQUIBASE --url=$PROD_URL --referenceUrl=$DEV_URL --changeLogFile=changeLog.xml diffChangeLog
Bring Production in sync with Development:
$LIQUIBASE --url=$PROD_URL --changeLogFile=changeLog.xml update
Record Development as being in sync with the change log:
$LIQUIBASE --url=$DEV_URL --changeLogFile=changeLog.xml changeLogSync
You would start by generating a changelog for the development database, using the generateChangeLog command, documented here: http://www.liquibase.org/documentation/generating_changelogs.html
When you generate the changelog, Liquibase will also create and initialize the two database tables (DATABASECHANGELOG and DATABASECHANGELOGLOCK) that it uses to keep track of what changesets have been applied to the database.
Next, you want to diff the two databases and have Liquibase generate the differences as a separate changelog, using the diffChangeLog command:
http://www.liquibase.org/documentation/diff.html
The diff changelog can be included as is, or copied into an existing change log. If the diff command is passed an existing change log file, the new change sets will be appended to the end of the file.
Finally, you deploy the changelog to the production environment.
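Mirroring the commands from the answer above, that deployment step could look something like this (with $PROD_URL standing in for your production JDBC URL):

liquibase --url=$PROD_URL --changeLogFile=changeLog.xml update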
Section 5.3 of this tutorial answers your question, I think:
http://www.baeldung.com/liquibase-refactor-schema-of-java-app
It uses Maven in Java, but I think plain Liquibase commands can also do it.
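For reference, with the liquibase-maven-plugin configured as in that tutorial, generating the diff is a single goal (a sketch, assuming the plugin's properties already point at your two databases):

mvn liquibase:diff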
I have a few questions about this plugin.
1. What does it do? Is it for exchanging databases between teams, changing their schema, creating tables based on models, or something else?
2. If it is not meant to create tables based on models, where can I find a script that does this?
3. Can it work under Windows?
Thanks
The Migrations plugin allows versioning of your DB changes, much like what is available in other PHP frameworks and in Rails.
You essentially start with your original schema and create the initial migration. Each time you make a change you generate a 'diff' that gets stored in the filesystem.
When you run a migration, the database is updated with the changes you made. Think deployment to a staging or production server where you want the structure to be the same as your code is moved from environment to environment.
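A sketch of that workflow with the modern CakePHP migrations plugin (this assumes CakePHP 3+; older versions of the CakeDC plugin used different shell commands):

# snapshot the current schema as the initial migration
bin/cake bake migration_snapshot Initial
# later, generate a migration for an incremental change
bin/cake bake migration AddAgeToUsers age:integer
# apply pending migrations in any environment
bin/cake migrations migrate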
We are starting to look at this plugin so we can automate our deployments, as the DB changes are done manually right now.
I have a Django site running django-cms, and three environments: local dev (currently a SQLite DB that's committed to the repo), staging (MySQL), and prod (MySQL). There are other Django apps in the project that have their own tables in the DB(s), and schema changes are managed through South migrations.
We do development using a "git flow" process, meaning that features are developed in branches and merged into a "develop" branch when complete. From a deployment standpoint, the develop branch maps to the staging version of the website.
I'd like a way to manage data in these environments that doesn't involve manually crafting data migrations for django-cms, or blowing away the staging/prod databases to loaddata in changes.
What's a good working strategy for this? Is there a quasi-automated way to generate South data migrations? Or a way to have django-cms publish pages to different environments?
I run the exact same setup on multiple projects but almost never look to migrate data between dev, stage or production.
Development environments get messy with test data, and stage environments get messy with development code and data that never makes it to production, meaning that hopefully production stays clean and tidy.
Following on from this, moving data between them should be done carefully, and I'd almost never look to automate it, in case erroneous data makes it to the production database.
If there is important information which you put into your staging environment to demo to a client, or to perform 'final' testing on before deploying to production, then I'd suggest performing a data migration with South on that specific app and deploying with that data migration.
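Creating that data migration with South is a single command (a sketch; the app and migration names are placeholders, and you then fill in the forwards() method in the generated file):

python manage.py datamigration myapp seed_demo_pages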
For any other types of data migration, like CMS page setup etc., I'd recommend setting things up in CMS draft mode as you need them, then publishing.