Cross-application transaction-scoped persistence context injection - jpa-2.0

I have a project broken down into two parts: persistence.jar and webapp.war. I don't package them in a single EAR because I want to re-deploy webapp / run Arquillian tests without re-deploying persistence, for quick turnaround.
With this kind of setup, how can one use a transaction-scoped @PersistenceContext defined in persistence.jar from beans defined in webapp.war? Are there any other ways to achieve my goal?

There is no spec-defined way to achieve this. The only option that comes to mind is to manage the transaction-scoped EntityManager yourself using TransactionSynchronizationRegistry.getResource, .putResource, and .registerInterposedSynchronization (basically the same thing the JPA container normally does on your behalf). It's also very likely that you'll need to configure class loading in your application server somehow to ensure that both applications have visibility of the same entity classes.
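A minimal sketch of that approach, assuming the registry is looked up under its standard JNDI name and that you obtain the EntityManagerFactory yourself; the helper class and its name are hypothetical:

import javax.naming.InitialContext;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.transaction.Synchronization;
import javax.transaction.TransactionSynchronizationRegistry;

// Hypothetical helper that hands out an EntityManager bound to the current
// JTA transaction, creating and registering one on first access.
public class TransactionScopedEntityManagers {

    private static final Object KEY = TransactionScopedEntityManagers.class;

    public static EntityManager get(EntityManagerFactory emf) throws Exception {
        TransactionSynchronizationRegistry tsr = (TransactionSynchronizationRegistry)
                new InitialContext().lookup("java:comp/TransactionSynchronizationRegistry");

        // Reuse the EntityManager already bound to this transaction, if any
        EntityManager em = (EntityManager) tsr.getResource(KEY);
        if (em == null) {
            final EntityManager created = emf.createEntityManager();
            created.joinTransaction();
            tsr.putResource(KEY, created);
            // Close the EntityManager when the transaction completes,
            // mimicking the container-managed transaction-scoped lifecycle
            tsr.registerInterposedSynchronization(new Synchronization() {
                public void beforeCompletion() { }
                public void afterCompletion(int status) { created.close(); }
            });
            em = created;
        }
        return em;
    }
}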

Related

Spring Cloud Function - Manual Bean Registration and Loading Configuration Classes

I am currently using Spring Cloud Function 3.0.7.RELEASE with the AWS adapter for Lambda.
We are using a limited-scope functional bean registration and understand that this does not include full Spring Boot autoconfiguration. We are okay with this, as we value the speed and the significant reduction in cold start times.
However, we do have configuration classes that we want to utilize, and we assume this needs to be done manually. What is the best practice for importing these classes?
We tried searching but failed to find documentation on the differences in behavior between the limited-scope context and the Spring Boot application context.
If I understand your question correctly, all you need to do is register those configuration classes manually and the rest will be autowired. There was a little issue with this which may or may not affect you. In any event, it was fixed and will be available in the 3.0.9 release next week.
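For illustration, manual registration could look something like the following sketch, assuming a functional-style setup; MyConfiguration and the initializer class name are hypothetical placeholders for your own configuration classes:

import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.annotation.AnnotatedBeanDefinitionReader;
import org.springframework.context.support.GenericApplicationContext;

// Hypothetical initializer that registers a @Configuration class by hand,
// so its beans become available for autowiring in the limited functional
// context, without pulling in full Spring Boot autoconfiguration.
public class ManualConfigInitializer
        implements ApplicationContextInitializer<GenericApplicationContext> {

    @Override
    public void initialize(GenericApplicationContext context) {
        new AnnotatedBeanDefinitionReader(context).register(MyConfiguration.class);
    }
}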

Dynamic database connection in Symfony 4

I am setting up a multi-tenant Symfony 4 application where each tenant has its own database.
I've set up two database connections in the doctrine.yaml config. One of the connections is static, based on an env variable. The other one should have a dynamic URL based on a credential provider service.
doctrine:
    dbal:
        connections:
            default:
                url: "@=service('provider.db.credentials').getUrl()"
The above expression "@=service('provider.db.credentials').getUrl()" is not being parsed, though.
When injecting "@=service('provider.db.credentials').getUrl()" as an argument into another service, the result of getUrl() on the provider.db.credentials service is injected correctly. But when using it in the connection configuration for Doctrine, the expression is not parsed.
Does anyone have an idea how to solve this?
You're trying to rely on the ability of Symfony service definitions to use expressions for defining certain aspects of services. However, you need to remember that this functionality is part of the Dependency Injection component, which is able (but not limited) to use configuration files for services. To be more precise, this functionality is provided by the configuration loaders; you can take a look here for an example of how it is handled by the Yaml configuration loader.
On the other hand, the configuration for the Doctrine bundle you're trying to use is provided by the Config component. The fact that the Dependency Injection component uses the same file formats as the Config component may give the impression that these cases are handled in the same way, but actually they're completely different.
To sum it up: the expression inside the Doctrine configuration does not work as you expect because the Doctrine bundle's configuration processor doesn't expect to get an Expression Language expression and has no support for handling one.
While the explanations given above hopefully answer your question, you're probably also expecting some information about how to actually solve your problem.
There are at least two possible ways to do it, but choosing the correct one may require some additional information which is out of the scope of this question.
If you know which connection to choose at container build time (your code assumes that this is the case, but you may not be aware of it), then you should use the compiler pass mechanism to update the Doctrine DBAL service definitions (which may be quite tricky). The reason this process is non-trivial is that configurations are loaded at the early stages of the container build process and provide no extension points. You can take a look into the sources if necessary. Anyway, while possible, I would not recommend you go this way, and most likely you will not need it, because (I suppose) you need to select the connection at runtime rather than at container build time.
The probably more correct approach is to create your own wrapper for the DBAL Connection class that maintains a list of the actual connections and provides the required one depending on your application's logic. You can refer to the implementation details of the DBAL sharding feature as an example. The wrapper class can be defined directly through the Doctrine bundle configuration by using the wrapper_class key of the dbal configuration, as sketched below.
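A minimal configuration sketch, assuming a hypothetical App\Doctrine\TenantConnection class that extends Doctrine\DBAL\Connection and implements the tenant-switching logic:

doctrine:
    dbal:
        connections:
            default:
                # hypothetical wrapper; it must extend Doctrine\DBAL\Connection
                wrapper_class: App\Doctrine\TenantConnection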

How to unit test kafka streams dsl when using schema registry

Let's say I want to write a unit test for the example shown here:
https://github.com/confluentinc/kafka-streams-examples/blob/5.1.2-post/src/main/java/io/confluent/examples/streams/WikipediaFeedAvroLambdaExample.java
I tried the following methods, both of which did not work out for me:
1) Use TopologyTestDriver.
This class is pretty useful as long as the schema registry is not involved.
I tried making use of MockSchemaRegistryClient, but it didn't work out (see the sketch below for the kind of wiring this involves).
And even if it did work out, it would require me to create my own serializers, which kind of defeats the purpose of the schema registry.
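For reference, the MockSchemaRegistryClient wiring referred to above typically looks roughly like the following sketch; the registry URL is a placeholder required by the serde config but never contacted, since the mock client is passed in directly:

import io.confluent.kafka.schemaregistry.client.MockSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig;
import io.confluent.kafka.streams.serdes.avro.GenericAvroSerde;

import java.util.Collections;
import java.util.Map;

public class MockRegistrySerdes {

    // Builds a value serde backed by an in-memory mock schema registry
    static GenericAvroSerde valueSerde() {
        SchemaRegistryClient schemaRegistry = new MockSchemaRegistryClient();
        Map<String, String> serdeConfig = Collections.singletonMap(
                AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG,
                "http://unused:8081"); // placeholder, never contacted
        GenericAvroSerde serde = new GenericAvroSerde(schemaRegistry);
        serde.configure(serdeConfig, false); // false = value serde, not key
        return serde;
    }
}

A common pitfall is that serdes the topology creates internally from the streams configuration still point at the real registry URL rather than the mock client, which may be why this approach didn't work out.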
2) Use EmbeddedSingleNodeKafkaCluster defined in the same project.
https://github.com/confluentinc/kafka-streams-examples/blob/5.1.2-post/src/test/java/io/confluent/examples/streams/kafka/EmbeddedSingleNodeKafkaCluster.java
Now this class is really handy and seems to provide an embedded Kafka cluster and schema registry. But it does not seem to be available in any artifact, so I tried copying the class, but then ran into further import issues.
Unable to download this particular artifact: io.confluent:kafka-schema-registry-client:5.0.0:tests
Has anyone been able to make progress with the above-mentioned options? Or is there a completely different solution?
For this I ended up writing a small test library based on Testcontainers: https://github.com/vspiliop/embedded-kafka-cluster. It starts a fully configurable Docker-based Kafka cluster (broker, ZooKeeper and Confluent Schema Registry) as part of your tests. Check out the example unit and Cucumber tests.

JUnit with Glassfish and JPA

I have a web application that runs on GlassFish v3. It's built with JSF v2 and JPA (so there's a persistence.xml in which a JTA data source is declared).
If I try to test my repositories with JUnit, the lookup fails and gives me this error:
javax.naming.NamingException: Lookup failed for 'java:comp/env/persistence/em' in SerialContext[myEnv=
java.naming.factory.initial=com.sun.enterprise.naming.impl.SerialInitContextFactory,
java.naming.factory.url.pkgs=com.sun.enterprise.naming,
java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl}
[Root exception is javax.naming.NamingException: Invocation exception: Got null ComponentInvocation ]
It seems to require transaction-type="RESOURCE_LOCAL", which I can't provide, since it would conflict with GlassFish's transaction-type="JTA".
So, what I'd like to ask is whether it's possible to find a way to run JUnit without [strongly] changing my webapp's configuration.
Thanks,
AN
For real in-container tests you should have a look at Arquillian. It allows you to run your unit tests within the container.
You should have a look at the documentation at http://arquillian.org/guides/ and the showcases on GitHub at https://github.com/arquillian/arquillian-showcase/. There is also a JSF-related showcase.
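A minimal sketch of such an in-container test; MyRepository and test-persistence.xml are hypothetical placeholders for your own repository bean and test configuration:

import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class MyRepositoryIT {

    @Deployment
    public static WebArchive createDeployment() {
        // package only what the test needs into a throwaway web archive
        return ShrinkWrap.create(WebArchive.class)
                .addClass(MyRepository.class)
                .addAsWebInfResource("test-persistence.xml",
                        "classes/META-INF/persistence.xml")
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    private MyRepository repository;

    @Test
    public void repositoryIsInjected() {
        // runs inside the container, so JTA and @PersistenceContext work
        Assert.assertNotNull(repository);
    }
}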
Regarding your configuration: I would strongly suggest configuring your project in such a way that you can use a different configuration in tests than in production.
If you only need a working JPA environment for your tests, then you should do the following (see the sketch after this list):
Create a second JPA configuration with transaction-type="RESOURCE_LOCAL".
Add a setter for the entity manager to your beans.
Create the entity manager within your test setup as you would in a standalone Java application.
Inject the entity manager manually into the beans.
Try to use a mocking framework like Mockito to mock all other parts of the application which are not part of the current test but are required for it.
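A minimal sketch of these steps, assuming a second persistence unit named "testPU" with transaction-type="RESOURCE_LOCAL" and a hypothetical MyRepository bean with an entity manager setter:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class MyRepositoryTest {

    private EntityManagerFactory emf;
    private EntityManager em;
    private MyRepository repository;

    @Before
    public void setUp() {
        // bootstrap JPA as in a standalone Java application
        emf = Persistence.createEntityManagerFactory("testPU");
        em = emf.createEntityManager();
        repository = new MyRepository();
        repository.setEntityManager(em); // manual injection via the setter
    }

    @After
    public void tearDown() {
        em.close();
        emf.close();
    }

    @Test
    public void storesAndLoadsAnEntity() {
        em.getTransaction().begin();
        // ... exercise the repository ...
        em.getTransaction().rollback(); // keep the test database clean
    }
}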
The second approach depends on your architecture and the possibilities it offers to you. It allows you to write very fine-grained unit tests. The first approach is very useful for testing the real behaviour of your application in the container.

How do you configure WorkManagers in WebLogic 10.3?

I would like to use a WorkManager to schedule some parallel jobs on a WebLogic 10.3 app server.
http://java.sun.com/javaee/5/docs/api/javax/resource/spi/work/WorkManager.html
I'm finding the Oracle/BEA documentation a bit fragmented and hard to follow, and it does not have good examples of using WorkManagers from EJB 3.0.
Specifically, I'd like to know:
1) What exactly, if anything, do I need to put in my deployment descriptors (ejb-jar.xml and friends)?
2) I'd like to use the @Resource annotation to inject the WorkManager into my EJB 3 session bean. What "name" do I use for the resource?
3) How do I configure the number of threads and other parameters for the WorkManager?
My understanding is that the underlying implementation on WebLogic is CommonJ, but I'd prefer to use a non-proprietary approach if possible.
First, you'll find the documentation of CommonJ, an implementation of the Timer and Work Manager API developed by BEA (now Oracle) and IBM, in the Timer and Work Manager API (CommonJ) Programmer's Guide. It provides a Work Manager Example, but injection isn't covered in that document.
1) What exactly, if anything, do I need to put in my deployment descriptors (ejb-jar.xml and friends)?
According to the Work Manager Deployment section:
Work Managers are defined at the server level via a resource-ref in the appropriate deployment descriptor. This can be web.xml or ejb-jar.xml among others.
The following deployment descriptor fragment demonstrates how to configure a WorkManager:
...
<resource-ref>
  <res-ref-name>wm/MyWorkManager</res-ref-name>
  <res-type>commonj.work.WorkManager</res-type>
  <res-auth>Container</res-auth>
  <res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>
...
Note: The recommended prefix for the JNDI namespace for WorkManager objects is java:comp/env/wm.
Check the WorkManager javadocs for more details (e.g. "The res-auth and res-sharing scopes are ignored in this version of the specification. The EJB or servlet can then use the WorkManager as it needs to.").
2) I'd like to use the @Resource annotation to inject the WorkManager into my EJB 3 session bean. What "name" do I use for the resource?
I'd say something like this (not tested):
@ResourceRef(jndiName="java:comp/env/wm/MyWorkManager",
             auth=ResourceRef.Auth.CONTAINER,
             type="commonj.work.WorkManager",
             name="MyWorkManager")
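Alternatively, here is a sketch using the standard @Resource annotation (equally untested; the bean name is hypothetical, and the JNDI name assumes the resource-ref shown above is declared in your ejb-jar.xml):

import javax.annotation.Resource;
import javax.ejb.Stateless;

import commonj.work.Work;
import commonj.work.WorkException;
import commonj.work.WorkManager;

@Stateless
public class ParallelJobBean {

    // resolved against java:comp/env/wm/MyWorkManager via the resource-ref
    @Resource(name = "wm/MyWorkManager")
    private WorkManager workManager;

    public void submit(final Runnable job) throws WorkException {
        workManager.schedule(new Work() {
            public void run() { job.run(); }
            public boolean isDaemon() { return false; } // short-lived work
            public void release() { }                   // nothing to cancel
        });
    }
}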
3) How do I configure the number of threads and other parameters for the WorkManager?
See the description of the <work-manager> element and Using Work Managers to Optimize Scheduled Work for detailed information on Work Managers.
My understanding is that the underlying implementation on WebLogic is CommonJ, but I'd prefer to use a non-proprietary approach if possible.
I don't have any other suggestion (and, as long as this implementation follows the standards, I wouldn't mind using it).
The WebLogic documentation will answer your questions:
Using Work Managers to Optimize Scheduled Work