Dynamic database connection in Symfony 4 - doctrine-orm

I am setting up a multi-tenant Symfony 4 application where each tenant has its own database.
I've set up two database connections in the doctrine.yaml config. One of the connections is static based on an env variable. The other one should have a dynamic URL based on a credential provider service.
doctrine:
    dbal:
        connections:
            default:
                url: "@=service('provider.db.credentials').getUrl()"
The above expression "@=service('provider.db.credentials').getUrl()" is not being parsed, though.
When injecting "@=service('provider.db.credentials').getUrl()" as an argument into another service, the result of getUrl() on the provider.db.credentials service is injected. But when using it in the Doctrine connection configuration, the expression is not parsed.
Does anyone have an idea how to solve this?

You're trying to rely on the ability of Symfony service definitions to use expressions for defining certain aspects of services. However, you need to remember that this functionality is part of the Dependency Injection component, which is able (but not limited) to use configuration files for services. To be more precise, this functionality is provided by the configuration loaders; you can take a look here for an example of how it is handled by the YAML configuration loader.
On the other hand, the configuration for the Doctrine bundle you're trying to use is processed by the Config component. The fact that the Dependency Injection component uses the same file formats as the Config component may give the impression that these cases are handled in the same way, but they are actually completely different.
To sum it up: the expression inside the Doctrine configuration does not work as you expect because the Doctrine bundle's configuration processor doesn't expect to get an Expression Language expression and has no support for handling one.
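For contrast, here is the kind of place where such an expression is parsed: a service argument, which is handled by the Dependency Injection configuration loader (the consumer class name below is hypothetical; the service id mirrors the question):

```yaml
# services.yaml -- the DI loader recognizes the "@=" prefix and compiles
# the expression into the container, so it is evaluated when the
# consuming service is instantiated.
services:
    App\SomeConsumer:
        arguments:
            - "@=service('provider.db.credentials').getUrl()"
```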
While the explanation above hopefully answers your question, you're probably also expecting some information about how to actually solve your problem.
There are at least two possible ways to do it, but choosing the correct one may require additional information which is out of scope of this question.
If you know which connection to choose at container build time (your code assumes that this is the case, although you may not be aware of it), then you should use the compiler pass mechanism to update the Doctrine DBAL service definitions (which may be quite tricky). The reason this process is non-trivial is that configurations are loaded at an early stage of the container build and provide no extension points. You can take a look into the sources if necessary. Anyway, while possible, I would not recommend going this way, and most likely you will not need it, because (I suppose) you need to select the connection at runtime rather than at container build time.
The probably more correct approach is to create your own wrapper around the DBAL Connection class that maintains a list of actual connections and provides the required one depending on your application's logic. You can refer to the implementation details of the DBAL sharding feature as an example. The wrapper class can be set directly through the Doctrine bundle configuration by using the wrapper_class key in the dbal configuration.
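As a rough sketch of that last point (the wrapper_class key is real Doctrine bundle configuration; the class name is an assumption, and the class itself is yours to write):

```yaml
# doctrine.yaml -- DBAL instantiates wrapper_class instead of the plain
# Doctrine\DBAL\Connection; it receives the same constructor arguments
# and can route calls to the per-tenant connection chosen at runtime.
# The wrapper must extend Doctrine\DBAL\Connection.
doctrine:
    dbal:
        connections:
            default:
                wrapper_class: App\Doctrine\TenantConnection
```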

Related

Spring Cloud Function - Manual Bean Registration and Loading Configuration Classes

I am currently using Spring Cloud Function 3.07.RELEASE with the AWS adapter for Lambda.
We are using a limited-scope functional bean registration and understand that this does not include full Spring Boot autoconfiguration. We are okay with this, as we value the speed and significantly reduced cold start times.
However, we do have configuration classes that we want to utilize and assume that this needs to be done manually. What is the best practice on importing these classes?
We tried searching, but failed to find documentation on the differences in behavior of the limited scope context vs spring boot application context.
If I understand your question correctly, all you need to do is register those configuration classes manually and the rest will be autowired. There was a little issue with it which may or may not affect you. In any event, it was fixed and will be available in the 3.0.9 release next week.

Zend Framework 2 and Doctrine change database per module

I have an application which uses Zend Framework and Doctrine.
For one module, I want to change the database from the default settings.
I have created an alternative connection for doctrine.
When creating/updating the tables using
./vendor/bin/doctrine-module orm:schema-tool:update --force
the tables are created in the first configured database.
Basically, what I want is to update the tables of the second configured database.
Can someone help me with a working example?
Thanks,
Bogdan
To my knowledge, the schema-tool binary only works with the orm_default database.
Now, there's certainly nothing stopping you from having modules that add additional named connections. See this documentation for doing that:
https://github.com/doctrine/DoctrineORMModule/blob/master/docs/configuration.md#how-to-use-two-connections
But the tooling around managing those additional databases might be a little "roll your own". The good news is that all the pieces are there (Doctrine's underlying SchemaTool classes); you would just need to wire them up and build a CLI command that acts on multiple schemas.
All that being said, if you find yourself using multiple unique schemas in the same database engine (unique being the key word, to account for things like Doctrine sharding), I worry that your application design might be troublesome. It could be that your multiple storage domains should actually live as separate applications.

Cross application transaction scoped persistence context injection

I have a project broken down into 2 parts: persistence.jar and webapp.war. I don't package them in a single EAR because I want to re-deploy the webapp and run Arquillian tests without re-deploying persistence, for quick turnaround.
With this kind of setup, how can one use a transaction-scoped @PersistenceContext defined in persistence.jar from beans defined in webapp.war? Are there any other ways to achieve my goal?
There is no spec-defined way to achieve this. The only option that comes to mind is to manage the transaction-scoped EntityManager yourself using TransactionSynchronizationRegistry.getResource, .putResource, and .registerInterposedSynchronization (basically the same as what the JPA container normally does on your behalf). It's also very likely that you'll need to somehow configure class loading in your application server to ensure that both applications have visibility of the same entity classes.
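To make that getResource/putResource/registerInterposedSynchronization bookkeeping concrete, here is a plain-Java sketch of the pattern. TxRegistry is a stand-in for the real javax.transaction.TransactionSynchronizationRegistry, and a String stands in for the EntityManager so the sketch is runnable on its own:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in for javax.transaction.TransactionSynchronizationRegistry:
// a per-transaction resource map plus interposed synchronizations.
class TxRegistry {
    private final Map<Object, Object> resources = new HashMap<>();
    private final List<Runnable> afterCompletion = new ArrayList<>();

    Object getResource(Object key) { return resources.get(key); }
    void putResource(Object key, Object value) { resources.put(key, value); }
    void registerInterposedSynchronization(Runnable sync) { afterCompletion.add(sync); }

    // Invoked by the "transaction manager" when the transaction completes.
    void complete() {
        afterCompletion.forEach(Runnable::run);
        resources.clear();
    }
}

public class TxScopedEmSketch {
    static final Object EM_KEY = new Object();

    // The container's trick: bind one EntityManager (a String here, for
    // the sake of a self-contained sketch) to the transaction on first
    // use, and register cleanup for when the transaction ends.
    static String entityManagerFor(TxRegistry tx) {
        String em = (String) tx.getResource(EM_KEY);
        if (em == null) {
            em = "EntityManager#1"; // would be emf.createEntityManager()
            tx.putResource(EM_KEY, em);
            tx.registerInterposedSynchronization(
                    () -> System.out.println("closing EntityManager")); // would be em.close()
        }
        return em;
    }

    public static void main(String[] args) {
        TxRegistry tx = new TxRegistry();
        // Two lookups inside the same transaction yield the same instance.
        System.out.println(entityManagerFor(tx) == entityManagerFor(tx));
        tx.complete();
    }
}
```

The real registry keys resources by the current transaction implicitly; here a single TxRegistry instance plays that role.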

Common Data Model with wso2bps and wso2greg

I am evaluating wso2 and came across the following issue: imagine that my company already has a well defined Common Data Model for its business. Those schemas (and even service definitions - WSDLs) live in a repository, and references between files are done using relative paths. Now, what I want is to import all of these XSDs and WSDLs into the wso2 Governance Registry (wso2greg) and have it manage them. What's more (and this is where I start to lose my grip on wso2), I want to reference wso2greg's resources in wso2bps's BPEL workflows. I want to say: "Hey, workflow! Forget all about your auto-generated interface. Your interface will be this one {wsdl_from_wso2greg}."
I am trying to accomplish this by creating a Registry Resources Project inside my "main" Carbon Application Project (alongside my BPEL workflow) and then creating PartnerLinks on the workflow, but after configuring everything I get the following error: "The import location of ../TestGreg/TestServiceWsdl.wsdl is not supported by this implementation. Import artifacts must be contained within the folder hierarchy that has the deployment descriptor at the root".
Now, am I doing something really stupid, or does wso2 really not support what I am trying to do? If so, how do you guys usually deal with these issues?
Thanks,
Leandro Nunes
When you refer to WSDLs/XSDs from a BPEL process, you need to package them with the BPEL. You cannot refer to the ones stored in the registry.
Referring to the external registry works only for security policies and endpoint references. Any other resources need to be packaged with the BPEL.

How do you configure WorkManagers in WebLogic 10.3?

I would like to use a WorkManager to schedule some parallel jobs on a WebLogic 10.3 app server.
http://java.sun.com/javaee/5/docs/api/javax/resource/spi/work/WorkManager.html
I'm finding the Oracle/BEA documentation a bit fragmented and hard to follow, and it does not have good examples of using WorkManagers from EJB 3.0.
Specifically, I'd like to know:
1) What exactly, if anything, do I need to put in my deployment descriptors (ejb-jar.xml and friends)?
2) I'd like to use the @Resource annotation to inject the WorkManager into my EJB 3 session bean. What "name" do I use for the resource?
3) How do I configure the number of threads and other parameters for the WorkManager?
My understanding is that the underlying implementation on WebLogic is CommonJ, but I'd prefer to use a non-proprietary approach if possible.
First, you'll find the documentation of CommonJ, an implementation of the Timer and Work Manager API developed by BEA (now Oracle) and IBM, in the Timer and Work Manager API (CommonJ) Programmer's Guide. It provides a Work Manager Example, but injection is not covered in that document.
1) What exactly, if anything, do I need to put in my deployment descriptors (ejb-jar.xml and friends)?
According to the Work Manager Deployment section:
Work Managers are defined at the server level via a resource-ref in the appropriate deployment descriptor. This can be web.xml or ejb-jar.xml among others.
The following deployment descriptor fragment demonstrates how to configure a WorkManager:
...
<resource-ref>
    <res-ref-name>wm/MyWorkManager</res-ref-name>
    <res-type>commonj.work.WorkManager</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>
...
Note: The recommended prefix for the JNDI namespace for WorkManager objects is java:comp/env/wm.
Check the WorkManager javadocs for more details (e.g. "The res-auth and res-sharing scopes are ignored in this version of the specification. The EJB or servlet can then use the WorkManager as it needs to.").
2) I'd like to use the @Resource annotation to inject the WorkManager into my EJB 3 session bean. What "name" do I use for the resource?
I'd say something like this (not tested):
@ResourceRef(jndiName="java:comp/env/wm/MyWorkManager",
             auth=ResourceRef.Auth.CONTAINER,
             type="commonj.work.WorkManager",
             name="MyWorkManager")
3) How do I configure the number of threads and other parameters for the WorkManager.
See the description of the <work-manager> element and Using Work Managers to Optimize Scheduled Work for detailed information on Work Managers.
My understanding is that the underlying implementation on WebLogic is CommonJ, but I'd prefer to use a non-proprietary approach if possible.
I don't have any other suggestion (and, as long as this implementation follows the standards, I wouldn't mind using it).
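On the non-proprietary angle: inside an EJB container the spec discourages managing your own threads, which is exactly the gap CommonJ Work Managers fill, but where that constraint doesn't apply, the same submit-jobs-and-wait pattern is plain java.util.concurrent. A generic sketch, not WebLogic-specific:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelJobs {
    public static void main(String[] args) throws Exception {
        // A fixed pool plays the role of a Work Manager's thread constraints.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        List<Callable<Integer>> jobs = List.of(
                () -> 1 + 1,
                () -> 2 * 21,
                () -> 7 - 4);

        // invokeAll blocks until every job has completed and returns
        // the futures in submission order.
        int sum = 0;
        for (Future<Integer> f : pool.invokeAll(jobs)) {
            sum += f.get();
        }
        pool.shutdown();
        System.out.println(sum); // 2 + 42 + 3
    }
}
```

In a container, the equivalent would be scheduling commonj.work.Work instances on the injected WorkManager and waiting on the returned WorkItems.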
The Weblogic documentation will answer your questions.
Using Work Managers to Optimize Scheduled Work