Is FlywayDB or Liquibase used for Cross DB Migration?

I'm working on a project that is planned to use/manage 3 different databases [MySQL, SQL Server, Oracle] as the back end. I have now finished my database design, and everything is set up in a MySQL database.
Now I have to migrate/clone it to SQL Server and Oracle as well. I also need the system to replicate automatically whatever changes I make in the MySQL database [structural, not data] to SQL Server and Oracle.
I couldn't find anything relevant in the FlywayDB and Liquibase documentation other than a database source-control mechanism.
So can I use FlywayDB or Liquibase for this, or is there an alternative for this task? Please advise.

Liquibase makes it easier to manage different kinds of databases because it uses XML to describe the structure.
Flyway uses SQL rather than XML to keep the syntax as simple as possible.
In your case, you will probably have to adapt your data structures to be compatible with all three databases; Oracle is one of the most restrictive.
Flyway could be a better solution because you don't care about the history of modifications and because you have better knowledge of SQL.
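For illustration, a minimal Liquibase changeset might look like this (a sketch; the table and column names are placeholders). The same changelog can be run against MySQL, SQL Server, and Oracle, and Liquibase maps the generic column types to each database's dialect:

    <databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
        <!-- example changeset: creates a simple table on whichever database the changelog is run against -->
        <changeSet id="1" author="example">
            <createTable tableName="customer">
                <column name="id" type="int">
                    <constraints primaryKey="true" nullable="false"/>
                </column>
                <column name="name" type="varchar(255)"/>
            </createTable>
        </changeSet>
    </databaseChangeLog>

Flyway, by contrast, expects one plain SQL script per version (e.g. V1__create_customer.sql), which you would have to write in syntax that all three databases accept.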

Related

Python alembic offline mode without alembic_version table

I'd like to use alembic in offline mode and execute the SQL migrations from an external tool.
The external software tracks the DB version, so I don't need the alembic_version table at all.
Do you know if I can generate SQL migrations without creating an alembic_version table? Is this a supported feature?
Thanks
I don't think generating the migrations offline without the alembic_version table is an available feature yet, as #zzzeek wrote in the 'Writing Migration Scripts to Support Script Generation' section of the official tutorial: https://alembic.zzzcomputing.com/en/latest/offline.html. I do hope the feature comes, as my project can't connect to the database server directly, and generating SQL scripts offline would save me time by avoiding writing both SQL DDL and SQLAlchemy objects.
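As a workaround, a sketch along these lines may help: generate the SQL with alembic's offline mode,

    alembic upgrade head --sql > migration_raw.sql

and then strip the alembic_version bookkeeping statements before handing the file to the external software (file names are placeholders; this assumes no semicolons appear inside string literals in the script):

    # filter_version_table.py -- hypothetical post-processing step that drops the
    # CREATE TABLE / INSERT / UPDATE statements touching alembic_version
    with open("migration_raw.sql") as src:
        statements = src.read().split(";")  # naive split on statement boundaries
    kept = [s for s in statements if "alembic_version" not in s]
    with open("migration.sql", "w") as dst:
        dst.write(";".join(kept))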

Zend Framework 2 and Doctrine change database per module

I have an application which uses Zend Framework and Doctrine.
For one module, I want to change the database from the default settings.
I have created an alternative connection for Doctrine.
When creating/updating the tables using
./vendor/bin/doctrine-module orm:schema-tool:update --force
the tables are created in the first configured database.
Basically, what I want is to update the tables of the second configured database.
Can someone help me with a working example?
Thanks,
Bogdan
To my knowledge, the schema-tool binary only works with the orm_default database.
Now, there's certainly nothing stopping you from having modules that add additional named connections. See this documentation for doing that:
https://github.com/doctrine/DoctrineORMModule/blob/master/docs/configuration.md#how-to-use-two-connections
But the tooling around managing those additional databases might be a little "roll your own". The good news is that all the pieces are there (Doctrine's underlying SchemaTool classes); you would just need to wire them up and build a CLI command that acts on multiple schemas.
All that being said, if you find yourself using multiple unique schemas in the same database engine (unique being the key word, to account for things like Doctrine sharding), I worry your application design might be troublesome. It could be that your multiple storage domains should actually live as separate applications.

How do I test a Django site that uses UDFs in the database?

I have a Django project that uses a Postgres DB with a number of UDFs. The UDFs are written in plpythonu. Since plpythonu is an untrusted language, only database superusers can use it to create UDFs. This poses a serious problem in that I have no way of programmatically creating them within the test suite. Right now I see two options:
1. Modify django.db.backends.creation._create_test_db to create the test DB from a template which already has my UDFs loaded. This seems hacky and laborious to maintain.
2. Create a superuser with MD5 authentication and load the UDFs in the test suite with psycopg2. This seems insecure.
Are there less terrible ways I can do this?
Thanks.
I don't know the PG UDF model, only the MySQL and SQLite ones. A few other workarounds might be:
Create a PG instance which you use just for testing, isolated so that potential security problems won't be a concern. Databases are cheap after all.
If the UDFs are simple (or the test data size makes them simple) then recreate them in SQLite and use that database for testing. This assumes that you don't need to test the plpython functionality as part of the project.
If the UDF functionality is much more stable than the rest of the project, then split the UDF code into its own subproject, and have the rest of the code assume the UDFs will be present. At the very least this means most of the code can be tested automatically, and only the UDF code needs manual intervention to specify the password.
I went with #1. It's not ideal but it works ok.
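For what it's worth, newer Django versions expose the template idea from option 1 directly in settings, so no patching of _create_test_db is needed. A sketch, assuming a PostgreSQL template database named template_with_udfs that already has the plpythonu functions installed:

    # settings.py (sketch; database names and credentials are placeholders)
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "myproject",
            "USER": "myproject",
            "TEST": {
                # clone the test database from a template that already contains the UDFs
                "TEMPLATE": "template_with_udfs",
            },
        }
    }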

How to profile an embedded Firebird database

I don't know whether I made a mistake choosing Firebird. It has lots of good features, but I can't figure out why my query (stored procedure) didn't work.
Is there any profiler/monitoring tool for Firebird?
The Firebird database is working standalone, so it is an embedded DB, and it doesn't allow two users to connect at the same time. If there is a profiler, I wonder how it will connect while I'm executing my queries.
IBExpert and Database Workbench have a stored procedure debugger.
There are also many monitoring tools: http://www.firebirdfaq.org/faq95/
I advise you to install the server version if you want to have more than 2 users.
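One more option, assuming Firebird 2.1 or later: the MON$ monitoring tables can be queried from the same single embedded connection that runs your workload, so no second attachment is needed. For example:

    -- sketch: list the statements the engine currently knows about, with their state
    SELECT MON$STATEMENT_ID,
           MON$STATE,
           MON$SQL_TEXT
    FROM MON$STATEMENTS;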

Options for maintaining MySQL databases for a django development team

What are some options to avoid the latency of pointing local django development servers to a remote MySQL database?
If developers use local MySQL databases to avoid the latency, what are some useful tools to sync schema updates of the remote db with the local db and avoid manually creating, downloading, and loading dumps?
Thanks!
One possibility is to configure the remote MySQL database to replicate to each developer's local machine - assuming you have control of the remote database's configuration.
See the MySQL docs for replication notes. Using MySQL replication, the remote node would be the Master and the developer machines would be Slaves. The main advantage of this approach is that your developer machines would always remain synchronized with the Master database. One possible disadvantage (depending on the number of developer machines you are slaving) is a degradation in the remote database's performance due to the extra load introduced by replication.
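A minimal sketch of that setup (host, user, password, and log coordinates are placeholders; take the real binary log file and position from SHOW MASTER STATUS on the Master):

    # Master my.cnf -- enable the binary log and give the server a unique id
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    # developer (Slave) my.cnf
    [mysqld]
    server-id = 2

Then, on each developer machine:

    CHANGE MASTER TO
        MASTER_HOST = 'remote.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'secret',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS  = 4;
    START SLAVE;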
It sounds like you want to do schema migrations. Basically, it's a way to log schema changes so that you can update and even roll back along with your source changes (if you change a model, you also check in a new migration that has up and down commands). While this will likely become an official feature at some point, there are several third-party solutions to choose from. It's really a personal preference; here are some popular ones:
South
Django Evolution
dmigrations
I use a combination of South for schema migrations and JSON fixtures (or SQL dumps) of useful test data stored in the VCS repo for the project. It works pretty seamlessly.
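In practice that workflow is just a handful of commands (app and fixture names are placeholders):

    ./manage.py schemamigration myapp --auto   # South: record the model change as a migration
    ./manage.py migrate myapp                  # apply it to the local MySQL database
    ./manage.py dumpdata myapp --indent=2 > myapp/fixtures/dev_data.json
    ./manage.py loaddata dev_data              # other developers load the shared test data after migrating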