I'm currently developing a REST service to replace an existing solution. I'm using plain Payara/JEE7/JAX-RS. I am not using Spring and I do not intend to.
The problem I'm facing is that we want to reuse as much of the original configuration as possible (deployment on multiple nodes in a cluster, with Puppet controlling the configuration files).
Usually in Glassfish/Payara, you'd have a domain.xml file that has some content like this:
<jdbc-connection-pool driver-classname="" pool-resize-quantity="10" datasource-classname="org.postgresql.ds.PGSimpleDataSource" max-pool-size="20" res-type="javax.sql.DataSource" steady-pool-size="10" description="" name="pgsqlPool">
  <property name="User" value="some_user"/>
  <property name="DatabaseName" value="myDatabase"/>
  <property name="LogLevel" value="0"/>
  <property name="Password" value="some_password"/>
  <!-- bla -->
</jdbc-connection-pool>
<jdbc-resource pool-name="pgsqlPool" description="" jndi-name="jdbc/pgsql"/>
Additionally you'd have a persistence.xml file in your archive like this:
<persistence-unit name="myDatabase">
  <provider>org.hibernate.ejb.HibernatePersistence</provider>
  <jta-data-source>jdbc/pgsql</jta-data-source>
  <properties>
    <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
    <!-- bla -->
  </properties>
</persistence-unit>
I need to replace both of these configuration files with a programmatic solution, so I can read from the existing legacy configuration files and (if needed) create the connection pools and persistence units on the server's startup.
Do you have any idea how to accomplish that?
Actually, you do not need to edit each domain.xml by hand. Just create a glassfish-resources.xml file like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 Resource Definitions//EN" "http://glassfish.org/dtds/glassfish-resources_1_5.dtd">
<resources>
  <jdbc-connection-pool driver-classname="" pool-resize-quantity="10" datasource-classname="org.postgresql.ds.PGSimpleDataSource" max-pool-size="20" res-type="javax.sql.DataSource" steady-pool-size="10" description="" name="pgsqlPool">
    <property name="User" value="some_user"/>
    <property name="DatabaseName" value="myDatabase"/>
    <property name="LogLevel" value="0"/>
    <property name="Password" value="some_password"/>
    <!-- bla -->
  </jdbc-connection-pool>
  <jdbc-resource pool-name="pgsqlPool" description="" jndi-name="jdbc/pgsql"/>
</resources>
Then either use
$PAYARA_HOME/bin/asadmin add-resources glassfish-resources.xml
on each node once, or put it under WEB-INF/ of your war (note that in this case the jndi-name SHOULD be java:app/jdbc/pgsql, because you do not have access to the global scope in this context).
Note that your persistence.xml should be under META-INF/ of any jar in your classpath.
If you do not like this, you may inject the factory
@PersistenceUnit(unitName = "myDatabase")
EntityManagerFactory emf;
and create an EntityManager on the fly via
createEntityManager(java.util.Map properties).
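For instance, a minimal sketch (the property keys shown are standard Hibernate keys, used purely for illustration; which keys apply depends on your provider and your legacy file format):
// Sketch: translate legacy settings into JPA properties at runtime.
Map<String, Object> props = new HashMap<>();
props.put("hibernate.dialect", "org.hibernate.dialect.PostgreSQLDialect");
EntityManager em = emf.createEntityManager(props);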
By the way, using Payara you can share configuration with JCache across your cluster.
Since the goal is to have a dockerized server that runs a single application, I can very well use an embedded server.
Using an embedded server, the solution to my problem looks roughly like this:
For the server project, declare a Maven dependency:
<dependencies>
  <dependency>
    <groupId>fish.payara.extras</groupId>
    <artifactId>payara-embedded-all</artifactId>
    <version>4.1.1.163.0.1</version>
  </dependency>
</dependencies>
Start your server like this:
final BootstrapProperties bootstrapProperties = new BootstrapProperties();
final GlassFishRuntime runtime = GlassFishRuntime.bootstrap(bootstrapProperties);
final GlassFishProperties glassfishProperties = new GlassFishProperties();
final GlassFish glassfish = runtime.newGlassFish(glassfishProperties);
glassfish.start();
Add your connection pools to the started instance (the CommandRunner comes from the GlassFish instance itself):
final CommandRunner commandRunner = glassfish.getCommandRunner();
final CommandResult createPoolCommandResult = commandRunner.run("create-jdbc-connection-pool",
"--datasourceclassname=org.postgresql.ds.PGConnectionPoolDataSource", "--restype=javax.sql.ConnectionPoolDataSource", //
"--property=DatabaseName=mydb"//
+ ":ServerName=127.0.0.1"//
+ ":PortNumber=5432"//
+ ":User=myUser"//
+ ":Password=myPassword"//
//other properties
, "Mydb"); //the pool name
Add a corresponding jdbc resource:
final CommandResult createResourceCommandResult = commandRunner.run("create-jdbc-resource", "--connectionpoolid=Mydb", "jdbc__Mydb");
(In the real world you would get the data from some external configuration file)
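It is worth checking the command results before continuing. A minimal sketch using the embedded CommandResult API:
if (createPoolCommandResult.getExitStatus() != CommandResult.ExitStatus.SUCCESS) {
    throw new IllegalStateException("create-jdbc-connection-pool failed: "
            + createPoolCommandResult.getOutput(), createPoolCommandResult.getFailureCause());
}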
Now deploy your application:
glassfish.getDeployer().deploy(new File(pathToWarFile));
(Usually you would read your applications from some deployment directory)
In the application itself you can just refer to the configured pools like this:
@PersistenceContext(unitName = "mydb")
EntityManager mydbEm;
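For completeness, a minimal persistence.xml tying that unit name to the resource created above could look like this (the provider element is omitted; the names are the ones used in this example):
<persistence-unit name="mydb">
  <jta-data-source>jdbc__Mydb</jta-data-source>
</persistence-unit>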
Done.
A glassfish-resources.xml would have been possible too, but with a catch: my configuration file is external, shared by several applications (so the file format is not mine) and created by external tools on deployment. I would need to XSLT the file into a glassfish-resources.xml file and run a script that does the asadmin calls.
Running an embedded server is an all-java solution that I can easily build on a CI server and my application's test suite could spin up the same embedded server build to run some integration tests.
Related
We are connecting to BigTable using the HBase API, and we are using hbase-site.xml.
Is there any way we can use impersonation with the HBase API to connect to BigTable?
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <property>
    <name>hbase.client.connection.impl</name>
    <value>com.google.cloud.bigtable.hbase1_x.BigtableConnection</value>
  </property>
  <property>
    <name>google.bigtable.project.id</name>
    <value></value>
  </property>
  <property>
    <name>google.bigtable.instance.id</name>
    <value></value>
  </property>
  <property>
    <name>google.bigtable.auth.json.keyfile</name>
    <value></value>
  </property>
</configuration>
The source code (the BigTable implementation of the HBase API, i.e. com.google.cloud.bigtable.hbase1_x.BigtableConnection) doesn't have any functionality related to impersonation: https://github.com/googleapis/java-bigtable-hbase
About user impersonation in HBase: it appears to be supported through the use of an Apache Thrift server, which I think acts a bit like an upstream proxy. Per the comments in the post here, CBT does support Thrift, with this provided example (note that this should be set up on a GCE instance). This additional guide shows the process of setting up this gateway and using it for requests coming from App Engine. If I misunderstood your intention, you can come back with additional details on your use case, so that I can work on your question.
We didn't find any way to configure the impersonated user in hbase-site.xml; the source code does not seem to expose any parameter for this: https://github.com/googleapis/java-bigtable-hbase
The best way to impersonate via the HBase API when connecting to BigTable is to create the BigTable connection using impersonated credentials and use that connection object in the existing HBase API implementation. Here is the code snippet for obtaining the connection:
import java.io.FileInputStream;
import java.util.Arrays;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.auth.oauth2.ImpersonatedCredentials;
import com.google.cloud.bigtable.hbase.BigtableConfiguration;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Connection;

public Connection getConnection() throws Exception {
    // Load the base service account key and impersonate the target service account.
    GoogleCredentials credentials = GoogleCredentials.fromStream(new FileInputStream("credentials_key.json"));
    ImpersonatedCredentials targetCredentials = ImpersonatedCredentials.create(credentials,
            "your-service-account@gcp-test-project.iam.gserviceaccount.com", null,
            Arrays.asList("https://www.googleapis.com/auth/bigtable.data"), 3600);
    // Use your GCP project name and BigTable instance name.
    Configuration config = BigtableConfiguration.configure("gcp-test-project", "big-table-instance");
    // withCredentials returns the configuration with the impersonated credentials attached.
    config = BigtableConfiguration.withCredentials(config, targetCredentials);
    Connection connection = BigtableConfiguration.connect(config);
    return connection;
}
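As a quick usage sketch (the table name and row key here are hypothetical), the returned connection behaves like any other HBase connection:
try (Connection connection = getConnection();
     Table table = connection.getTable(TableName.valueOf("my-table"))) {
    Result row = table.get(new Get(Bytes.toBytes("row-key")));
    // process the row as usual
}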
Using this approach, with minimal changes, one can use existing HBase API implementations to connect to BigTable with impersonation. Note that if impersonation is not required (i.e. the JSON key account you are using already has read/write permissions), then no changes to your existing code base are required. Ref: https://cloud.google.com/bigtable/docs/hbase-bigtable
I need to create parallel running service tasks in my process.
I tried to create the simplest flow using the async property:
With loop cardinality = 5 (for example)
I found that in the activiti.xml configuration it's required to add this property:
<property name="asyncExecutorActivate" value="true" />
But the flow still runs in a single thread.
What am I missing?
How do I activate async correctly?
To activate async parallel execution in the example above, you need to set async on the Call Service task, not on the Sub Process.
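For illustration, a multi-instance service task with async enabled might look like this in the BPMN XML (namespace declarations omitted; the id, name, and delegate class are placeholders):
<serviceTask id="callService" name="Call Service" activiti:async="true"
             activiti:class="com.example.MyDelegate">
  <multiInstanceLoopCharacteristics isSequential="false">
    <loopCardinality>5</loopCardinality>
  </multiInstanceLoopCharacteristics>
</serviceTask>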
As soon as we use async, we have to configure the process engine to be async as well,
otherwise you will meet this kind of exception:
org.activiti.engine.ActivitiOptimisticLockingException: VariableInstanceEntity[id=15317, name=nrOfActiveInstances, type=integer, longValue=1, textValue=1] was updated by another transaction concurrently
The parameters of the Activiti engine on WSO2 BPS are stored in conf/activiti.xml.
Just add the following properties to the bean with id="processEngineConfiguration":
<bean id="processEngineConfiguration" class="org.activiti.engine.impl.cfg.StandaloneProcessEngineConfiguration">
...
<property name="asyncExecutorActivate" value="true" />
<property name="asyncExecutorEnabled" value="true" />
...
</bean>
A warning, though I don't know if it's a feature or a bug: the subprocess will correctly catch all thread endings only if you set async on the end events of the subprocess.
After those changes, the process from the question works great in multithreaded mode.
I am getting a connection timeout when I try to invoke a method of a deployed CXF web service from a WS client. Both use custom interceptors, and the service is overloaded due to multiple invocations.
Caused by: java.net.ConnectException: ConnectException invoking http://xxx.xx.xx.xx:12005/myservice/repository?wsdl: Connection timed out
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.mapException(HTTPConduit.java:1338)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1322)
at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:56)
at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:622)
at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
... 36 more
I tried multiple solutions to disable the timeout or to increase it, but all failed.
First, I tried to create a CXF configuration file like the following:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:http-conf="http://cxf.apache.org/transports/http/configuration"
xsi:schemaLocation="http://cxf.apache.org/transports/http/configuration
http://cxf.apache.org/schemas/configuration/http-conf.xsd
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<http-conf:conduit name="*.http-conduit">
<http-conf:client CacheControl="no-cache"
ConnectionTimeout="0" ReceiveTimeout="0" AllowChunking="false" />
</http-conf:conduit>
</beans>
Then, I forced my application to load it by using the Java system property -Dcxf.config.file=/home/test/resources/cxf.xml
In the logs I can see that the configuration is read and thus probably applied:
INFO: Loaded configuration file /home/test/resources/cxf.xml.
Unfortunately, the connection timeout still occurs.
The second solution I tried consists of setting the policy programmatically on all the clients by using the following piece of code:
public static void setHTTPPolicy(Client client) {
HTTPConduit http = (HTTPConduit) client.getConduit();
HTTPClientPolicy httpClientPolicy = new HTTPClientPolicy();
httpClientPolicy.setConnectionTimeout(0);
httpClientPolicy.setReceiveTimeout(0);
httpClientPolicy.setAsyncExecuteTimeout(0);
http.setClient(httpClientPolicy);
}
but again the connection timeout occurs.
Am I missing something? Are there other timeouts to configure? Any help is welcome.
CXF allows you to configure thread pooling for your web service endpoint. This way, you can cater for timeouts occurring as a result of scarce request-processing resources. Below is a sample config using the <jaxws:endpoint/> option in CXF:
<jaxws:endpoint id="serviceBean" implementor="#referenceToServiceBeanDefinition" address="/MyEndpointAddress">
<jaxws:executor>
<bean id="threadPool" class="java.util.concurrent.ThreadPoolExecutor">
<!-- Minimum number of waiting threads in the pool -->
<constructor-arg index="0" value="2"/>
<!-- Maximum number of working threads in the pool -->
<constructor-arg index="1" value="5"/>
<!-- Maximum wait time for a thread to complete execution -->
<constructor-arg index="2" value="400000"/>
<!-- Unit of wait time -->
<constructor-arg index="3" value="#{T(java.util.concurrent.TimeUnit).MILLISECONDS}"/>
<!-- Storage data structure for waiting thread tasks -->
<constructor-arg index="4" ref="taskQueue"/>
</bean>
</jaxws:executor>
</jaxws:endpoint>
<!-- Basic data structure to temporarily hold waiting tasks-->
<bean id="taskQueue" class="java.util.concurrent.LinkedBlockingQueue"/>
I have a JPA mapping to HSQLDB, and persistence.xml reads as below:
<persistence-unit name="HMC">
<jta-data-source>java:hmc</jta-data-source>
<class>org.hmc.jpa.models.BloodGroup</class>
<class>org.hmc.jpa.models.ContactInfo</class>
<properties>
<property
name="hibernate.transaction.manager_lookup_class"
value="org.hibernate.transaction.JBossTransactionManagerLookup"/>
<property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect" />
</properties>
</persistence-unit>
But whenever the application is deployed, JBoss throws a RuntimeException saying:
Specification violation [EJB3 JPA 6.2.1.2] - You have not defined a non-jta-data-source for a RESOURCE_LOCAL enabled persistence context named: ABC
I also have the datasource defined in JBoss. Is there anything I am missing in the configuration?
Regards,
Satya
If the transaction type of the persistence unit is JTA, the jta-data-source element is used to declare the JNDI name of the JTA data source that will be used to obtain connections. This is the common case.
If the transaction type of the persistence unit is resource-local, the non-jta-data-source element should be used to declare the JNDI name of a non-JTA data source.
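Concretely, the two variants look like this (the JNDI names are illustrative):
<persistence-unit name="HMC" transaction-type="JTA">
  <jta-data-source>java:/hmc</jta-data-source>
</persistence-unit>

<persistence-unit name="HMC" transaction-type="RESOURCE_LOCAL">
  <non-jta-data-source>java:/hmcNonJta</non-jta-data-source>
</persistence-unit>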
What is happening is that JBoss automatically scans and validates files named persistence.xml. Since you are using Spring to manage your beans, I guess you are not using EJB3.
What needs to be understood is whether you want JBoss to control JTA transactions for you with the JBoss transaction manager, or whether you just want plain JPA transactions without JTA transaction control.
If you want to use just the JPA transactions and skip the JBoss TransactionManager, you can rename your persistence.xml file to spring-persistence.xml (or whatever you like), and in the spring-context.xml file change your entityManagerFactory to this:
<!-- JPA primary EntityManagerFactory -->
<bean id="entityManagerFactory" lazy-init="true"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
p:persistenceUnitName="ExamplePU"
p:persistenceXmlLocation="classpath:/META-INF/spring-persistence.xml"
p:jpaVendorAdapter-ref="jpaVendorAdapter"
p:jpaDialect-ref="jpaDialect"
p:dataSource-ref="dataSource"/>
What happens is that by renaming the file, JBoss won't validate it. Since you are working outside of the EJB spec and not using any EJB beans, JBoss shouldn't be scanning this file anyway. And since you renamed it, you need to tell Spring where it is and under what name.
I got it working by removing transaction-type="RESOURCE_LOCAL" and changing java:hmc to java:/hmc. But now my application has another problem whenever I try to persist.
It throws: java.lang.IllegalStateException: A JTA EntityManager cannot use getTransaction()
Can anybody tell me how to get a connection and start a transaction in JTA mode?
Regards,
Satya
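In JTA mode, transactions are typically driven through an injected UserTransaction (or container-managed boundaries) rather than em.getTransaction(). A minimal sketch, reusing the unit name from the question:
@Resource
UserTransaction utx;

@PersistenceContext(unitName = "HMC")
EntityManager em;

public void save(Object entity) throws Exception {
    utx.begin();
    try {
        em.joinTransaction(); // associate the EntityManager with the JTA transaction
        em.persist(entity);
        utx.commit();
    } catch (Exception e) {
        utx.rollback();
        throw e;
    }
}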
What does your Spring configuration for integration tests look like using an embedded h2 datasource and, optionally, JUnit?
My first try with a SingleConnectionDataSource basically worked, but failed on more complicated tests where you need several connections at the same time or suspended transactions. I think h2 in tcp based server mode might work as well, but this is probably not the fastest communication mode for a temporary embedded database in memory.
What are the possibilities and their advantages / disadvantages? Also, how do you create the tables / populate the database?
Update: Let's specify some concrete requirements that are important for such tests.
The database should be temporary and in memory
The connection should probably not use tcp, for speed requirements
It would be nice if I could use a database tool to inspect the content of the database during debugging
We have to define a datasource since we can't use the application server's datasource in unit tests
With the reservation that I do not know if there is any tool that can inspect the database, I think that a simple solution would be to use the Spring embedded database (3.1.x docs, current docs) which supports HSQL, H2, and Derby.
Using H2, your xml configuration would look like the following:
<jdbc:embedded-database id="dataSource" type="H2">
  <jdbc:script location="classpath:db-schema.sql"/>
  <jdbc:script location="classpath:db-test-data.sql"/>
</jdbc:embedded-database>
If you prefer Java-based configuration, you can instantiate a DataSource like this (note that EmbeddedDatabase extends DataSource):
@Bean(destroyMethod = "shutdown")
public EmbeddedDatabase dataSource() {
    return new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.H2)
            .addScript("db-schema.sql")
            .addScript("db-test-data.sql")
            .build();
}
The database tables are created by the db-schema.sql script and they are populated with test data from the db-test-data.sql script.
Don't forget to add the H2 database driver to your classpath.
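For example, with Maven (pick the version you need; these are the standard coordinates):
<dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
  <version>1.4.200</version>
  <scope>test</scope>
</dependency>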
I currently include, in a test-only Spring config file, the following datasource:
<bean id="database.dataSource" class="org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy">
<constructor-arg>
<bean class="org.springframework.jdbc.datasource.SimpleDriverDataSource">
<property name="driverClass" value="org.h2.Driver" />
<property name="url"
value="jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;MODE=Oracle;TRACE_LEVEL_SYSTEM_OUT=2" />
</bean>
</constructor-arg>
</bean>
<!-- provides an H2 console to look into the db if necessary -->
<bean id="org.h2.tools.Server-WebServer" class="org.h2.tools.Server"
      factory-method="createWebServer" depends-on="database.dataSource"
      init-method="start" lazy-init="false">
  <constructor-arg value="-web,-webPort,11111" />
</bean>
Creating / dropping the tables can be done by using executeSqlScript when overriding AbstractAnnotationAwareTransactionalTests.onSetUpBeforeTransaction, or with SimpleJdbcTestUtils.executeSqlScript in an appropriate place.
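With current Spring versions the same can be done with ScriptUtils; a minimal sketch, assuming a dataSource bean and the script on the classpath:
// org.springframework.jdbc.datasource.init.ScriptUtils
try (Connection conn = dataSource.getConnection()) {
    ScriptUtils.executeSqlScript(conn, new ClassPathResource("db-schema.sql"));
}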
Compare also this posting.
H2 is bundled with a built-in connection pool implementation. The following XML provides an example of using it as a DataSource bean without the need to introduce additional dependencies on DBCP or C3P0:
<bean id="dataSource" class="org.h2.jdbcx.JdbcConnectionPool" destroy-method="dispose">
<constructor-arg>
<bean class="org.h2.jdbcx.JdbcDataSource">
<property name="URL" value="jdbc:h2:dbname"/>
<property name="user" value="user"/>
<property name="password" value="password"/>
</bean>
</constructor-arg>
</bean>
The database will be shut down by a call to the dispose method when the Spring application context closes.
I think it's best to use your production DataSource implementation (only with a different connection string) for the unit tests.
Anyway, "failed on more complicated tests" doesn't give enough information for a more detailed answer.
(Self-ad: check this)