Zend Framework 2 and Doctrine change database per module - doctrine-orm

I have an application which uses Zend Framework and Doctrine.
For one module, I want to use a database different from the default settings.
I have created an alternative connection for doctrine.
When creating/updating the tables using
./vendor/bin/doctrine-module orm:schema-tool:update --force
the tables are created in the first configured database.
Basically, what I want is to update the tables in the second configured database.
Can someone help me with a working example?
Thanks,
Bogdan

To my knowledge, the schema-tool binary only works with the orm_default database.
Now, there's certainly nothing stopping you from having modules that add additional named connections. See this documentation for doing that:
https://github.com/doctrine/DoctrineORMModule/blob/master/docs/configuration.md#how-to-use-two-connections
But the tooling around managing those additional databases might be a little "roll your own". The good news is that all the pieces are there (Doctrine's underlying SchemaTool classes); you would just need to wire them up and build a CLI command that acts on multiple schemas.
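A minimal sketch of that wiring, assuming a second entity manager configured under the name orm_alternative (DoctrineORMModule registers entity managers as doctrine.entitymanager.<name>; the name and script are assumptions, not tested code):

<?php
// update-alternative-schema.php — run from the application root.
require 'vendor/autoload.php';

use Doctrine\ORM\Tools\SchemaTool;

// Bootstrap the ZF2 application to get its service manager.
$application    = Zend\Mvc\Application::init(require 'config/application.config.php');
$serviceManager = $application->getServiceManager();

// "orm_alternative" is the assumed name of the second configured connection.
$entityManager = $serviceManager->get('doctrine.entitymanager.orm_alternative');

// The equivalent of orm:schema-tool:update --force, but run against
// the second connection's database.
$schemaTool = new SchemaTool($entityManager);
$schemaTool->updateSchema($entityManager->getMetadataFactory()->getAllMetadata());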
All that being said, if you find yourself using multiple unique schemas in the same database engine (unique being the key word, to account for things like Doctrine sharding), I worry that your application design might be troublesome. It could be that your multiple storage domains should actually live as separate applications.

Related

Adding users via flyway DB migration

I've read in some articles that it's best practice NOT to add DB users via a Flyway DB migration. It's not very clear to me why it's not a good practice. One thing we thought about is that it might be good to have the user configuration automatically documented in the code.
One article mentioned that you might want different user configurations for different environments, but you could also control that in Flyway.
When/why would you not want to add DB users using a Flyway DB migration?
If I'm deploying a new user for the database that will be common across all environments, I would absolutely make the creation of that user a part of the Flyway deployment scripts. It fundamentally makes sense. "Version 43.43 is where we added the login snarglegrass to the app."
On the other hand, if you are setting up different environments with varying permissions, I would probably make that part of the flow-control commands in pre/post-deployment scripts instead of using Flyway. The reason is that it can be challenging to write such scripts so that they're repeatable and safe. You could still do it that way, though.
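For illustration, the common-across-all-environments case might be a migration like this (MySQL 5.7+ syntax; the login, password, and grants are placeholders, and the file name follows Flyway's V<version>__<description>.sql convention):

-- V43_43__add_snarglegrass_login.sql
-- IF NOT EXISTS keeps the script safe if it is ever re-applied.
CREATE USER IF NOT EXISTS 'snarglegrass'@'%' IDENTIFIED BY 'change_me';
GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'snarglegrass'@'%';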

Dynamic database connection in Symfony 4

I am setting up a multi-tenant Symfony 4 application where each tenant has its own database.
I've set up two database connections in the doctrine.yaml config. One of the connections is static based on an env variable. The other one should have a dynamic URL based on a credential provider service.
doctrine:
    dbal:
        connections:
            default:
                url: "@=service('provider.db.credentials').getUrl()"
The above expression "@=service('provider.db.credentials').getUrl()" is not being parsed, though.
When I inject "@=service('provider.db.credentials').getUrl()" as an argument into another service, the result of getUrl() on the provider.db.credentials service is injected. But when I use it in the Doctrine connection configuration, the expression is not parsed.
Does anyone have an idea how to solve this?
You're relying on the ability of Symfony service definitions to use expressions for defining certain aspects of services. However, you need to remember that this functionality is part of the Dependency Injection component, which is able (but not limited) to use configuration files for services. To be more precise, this functionality is provided by the configuration loaders; you can take a look, for example, at how it is handled by the Yaml configuration loader.
On the other hand, the Doctrine bundle configuration you're trying to use is processed by the Config component. The fact that the Dependency Injection component uses the same file formats as the Config component may give the impression that both cases are handled in the same way, but they're actually completely different.
To sum it up: the expression inside the Doctrine configuration does not work as you expect because the Doctrine bundle's configuration processor doesn't expect to get an Expression Language expression and has no support for handling one.
While the explanation above hopefully answers your question, you're probably also expecting some information about how to actually solve your problem.
There are at least two possible ways to do it, but choosing the correct one may require additional information that is out of the scope of this question.
If you know which connection to choose at the time the container is built (your code assumes that this is the case, though you may not be aware of it), then you should use the compiler pass mechanism to update the Doctrine DBAL service definitions (which may be quite tricky). The reason the process is non-trivial is that configurations are loaded at an early stage of the container build and provide no extension points; you can take a look into the sources if necessary. Anyway, while this is possible, I would not recommend going this way, and most likely you will not need it, because (I suppose) you need to select the connection at runtime rather than at container build time.
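For completeness, a compiler pass for that first option might look roughly like this (the service id follows DoctrineBundle's doctrine.dbal.<name>_connection naming; the env-variable lookup is purely illustrative):

<?php
namespace App\DependencyInjection\Compiler;

use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface;
use Symfony\Component\DependencyInjection\ContainerBuilder;

class TenantConnectionPass implements CompilerPassInterface
{
    public function process(ContainerBuilder $container)
    {
        // The first constructor argument of a DBAL connection service
        // is the connection parameters array.
        $definition = $container->getDefinition('doctrine.dbal.default_connection');

        $params = $definition->getArgument(0);
        $params['url'] = getenv('TENANT_DATABASE_URL'); // illustrative: resolved once, at build time
        $definition->replaceArgument(0, $params);
    }
}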
The probably more correct approach is to create your own wrapper for the DBAL Connection class that maintains a list of actual connections and provides the required one depending on your application's logic. You can refer to the implementation details of the DBAL sharding feature as an example. The wrapper class can be defined directly through the Doctrine bundle configuration by using the wrapper_class key of the dbal configuration, as in the sketch below.
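A rough sketch of such a wrapper, with illustrative names (the exact reconnection details depend on your DBAL version, so treat this as a starting point rather than a drop-in implementation):

<?php
namespace App\Database;

use Doctrine\DBAL\Connection;

// Registered via doctrine.dbal.connections.default.wrapper_class in doctrine.yaml.
class TenantConnection extends Connection
{
    // Switch to another tenant's database at runtime: close the current
    // connection and re-initialise with the new parameters; the next query
    // will lazily reconnect.
    public function changeTenant(array $params): void
    {
        if ($this->isConnected()) {
            $this->close();
        }

        parent::__construct($params, $this->getDriver(), $this->getConfiguration(), $this->getEventManager());
    }
}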

Is FlywayDB or Liquibase used for Cross DB Migration?

I'm working on a project which is planned to use/manage three different databases [MySQL, SQL Server, Oracle] as the back end. I have now finished my database design, and I have set everything up in the MySQL database.
Now I have to migrate/clone to SQL Server & Oracle as well. I also need whatever changes I make in the MySQL database [structural, not data] to be replicated automatically in SQL Server & Oracle.
I couldn't find anything useful in the documentation of FlywayDB and Liquibase beyond a database source-control mechanism.
So can I use FlywayDB or Liquibase to do this? Or is there an alternative for this task? Please advise.
Liquibase makes it easier to manage different kinds of databases because it uses XML to describe the structure.
Flyway uses SQL rather than XML, to keep the syntax as simple as possible.
In your case, you will probably have to adapt your data structure to be compatible with all three databases; Oracle is one of the most restrictive.
Flyway could be a better solution if you don't care about historic modifications and you have better knowledge of SQL.
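To illustrate the difference: a Liquibase changeSet describes the structure abstractly in XML, and Liquibase translates it into each target database's dialect (MySQL AUTO_INCREMENT, SQL Server IDENTITY, Oracle sequences). Table and column names here are just examples:

<changeSet id="1" author="bogdan">
    <createTable tableName="users">
        <column name="id" type="int" autoIncrement="true">
            <constraints primaryKey="true" nullable="false"/>
        </column>
        <column name="email" type="varchar(255)"/>
    </createTable>
</changeSet>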

How do I test a Django site that uses UDFs in the database?

I have a Django project that uses a Postgres DB with a number of UDFs. The UDFs are written in plpythonu. Since plpythonu is an untrusted language, only database superusers can use it to create UDFs. This poses a serious problem in that I have no way of programmatically creating them within the test suite. Right now I see two options:
Modify django.db.backends.creation._create_test_db to create the test db from a template, which has my UDFs already loaded. This seems hacky and laborious to maintain.
Create a superuser with MD5 authentication and load the UDFs in the test suite with psycopg2. This seems insecure.
Are there less terrible ways I can do this?
Thanks.
I don't know the PG UDF model, only the MySQL and SQLite ones. A few other work-arounds might be:
Create a PG instance which you use just for testing, isolated so that potential security problems won't be a concern. Databases are cheap after all.
If the UDFs are simple (or the test data size makes them simple) then recreate them in SQLite and use that database for testing. This assumes that you don't need to test the plpython functionality as part of the project.
If the UDF functionality is much more stable than the rest of the project, then split the UDF code into its own subproject, and for the rest of the code assume the UDFs will be present. At the very least this will make it so most of the code can be automated, and only the UDF code needs manual intervention to specify the password.
I went with #1. It's not ideal, but it works OK.
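Assuming #1 refers to the template-database option from the question, the idea looks roughly like this (names are hypothetical; the template is created once by a superuser, outside the test suite):

-- One-time setup, run as a superuser:
CREATE DATABASE udf_template;
\c udf_template
CREATE LANGUAGE plpythonu;   -- superuser only
CREATE FUNCTION double_it(x integer) RETURNS integer AS $$
    return x * 2
$$ LANGUAGE plpythonu;

-- What the patched _create_test_db then effectively runs:
CREATE DATABASE test_myproject TEMPLATE udf_template;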

Options for maintaining MySQL databases for a django development team

What are some options to avoid the latency of pointing local Django development servers to a remote MySQL database?
If developers use local MySQL databases to avoid the latency, what are some useful tools to sync schema updates of the remote db with the local db and avoid manually creating, downloading, and loading dumps?
Thanks!
One possibility is to configure the remote MySQL database to replicate to each developer's local machine, assuming you have control of the remote database's configuration.
See the MySQL docs for replication notes. Using MySQL replication the remote node would be the Master and the developer machines would be Slaves. The main advantage of this approach is your developer machines would always remain synchronized to the Master database. One possible disadvantage (depending on the number of developer machines you are slaving) is a degradation in the remote database's performance due to extra load introduced by replication.
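A sketch of that wiring, using the classic (pre-MySQL 8.0.22) statements; host, credentials, and log coordinates are placeholders:

-- On the remote master: a dedicated replication account.
CREATE USER 'repl'@'%' IDENTIFIED BY 'change_me';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On each developer machine (each needs a unique server-id in my.cnf):
CHANGE MASTER TO
    MASTER_HOST='db.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='change_me',
    MASTER_LOG_FILE='mysql-bin.000001',  -- both values come from
    MASTER_LOG_POS=4;                    -- SHOW MASTER STATUS on the master
START SLAVE;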
It sounds like you want to do schema migrations. Basically, it's a way to log schema changes so that you can update and even roll back along with your source changes (if you change a model, you also check in a new migration that has up and down commands). While this will likely become an official Django feature at some point, there are several third-party solutions to choose from. It's really a personal preference; here are some popular ones:
South
Django Evolution
dmigrations
I use a combination of South for schema migrations, and storing JSON fixtures (or SQL dumps) of useful test data in the VCS repo for the project. Works pretty seamlessly.
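For reference, the day-to-day South workflow that pairs with those fixtures (app and fixture names are placeholders):

./manage.py schemamigration myapp --auto    # record the model change as a migration
./manage.py migrate myapp                   # apply it to the local database
./manage.py dumpdata myapp --indent=2 > fixtures/myapp_test_data.json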