Arquillian remote integration tests on AWS

I want to run some integration tests at build time, so I've configured arquillian.xml as:
<container qualifier="widlfly-remote" default="true">
<configuration>
<property name="managementAddress">88.88.88.88</property>
<property name="managementPort">9990</property>
<property name="username">blabla</property>
<property name="password">blabla</property>
<property name="host">88.88.88.88</property>
<property name="port">8080</property>
</configuration>
</container>
I have a good degree of automation, so I don't know the exact IPs of the instances I'm deploying to.
The Arquillian docs are not clear about what I can do here.
I'm expecting to do something like:
<container qualifier="widlfly-remote" default="true">
<configuration>
<property name="managementAddress">${envOrWhateverProp}</property>
<property name="managementPort">${envOrWhateverProp}</property>
<property name="username">${envOrWhateverProp}</property>
<property name="password">${envOrWhateverProp}</property>
<property name="host">${envOrWhateverProp}</property>
<property name="port">${envOrWhateverProp}</property>
</configuration>
</container>
It doesn't matter whether the values come from environment variables or Maven flags (-DmyPropHere).
I need some way to fill those values in from outside while keeping the configuration identical across all environments.
Any help?
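A sketch of one possible approach (not verified against every Arquillian version): newer Arquillian releases resolve ${...} placeholders in arquillian.xml against Java system properties, so the container section can stay generic. The property names (wildfly.mgmt.address and friends) are made up for this example:
<container qualifier="widlfly-remote" default="true">
  <configuration>
    <property name="managementAddress">${wildfly.mgmt.address}</property>
    <property name="managementPort">${wildfly.mgmt.port}</property>
    <property name="username">${wildfly.username}</property>
    <property name="password">${wildfly.password}</property>
    <property name="host">${wildfly.host}</property>
    <property name="port">${wildfly.http.port}</property>
  </configuration>
</container>
The values can then be supplied from outside, either directly on the command line (-Dwildfly.mgmt.address=...) or via the maven-surefire-plugin, which can in turn read environment variables with Maven's ${env.NAME} syntax:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <systemPropertyVariables>
      <!-- hypothetical property names; keep them in sync with arquillian.xml -->
      <wildfly.mgmt.address>${env.WILDFLY_ADDRESS}</wildfly.mgmt.address>
      <wildfly.host>${env.WILDFLY_ADDRESS}</wildfly.host>
      <wildfly.mgmt.port>${env.WILDFLY_MGMT_PORT}</wildfly.mgmt.port>
      <wildfly.http.port>${env.WILDFLY_HTTP_PORT}</wildfly.http.port>
      <!-- username/password can be passed the same way or with -D flags -->
    </systemPropertyVariables>
  </configuration>
</plugin>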

Related

Apache Ignite autoscaling in AWS (Linux)

Currently we host an Apache Ignite node in AWS using the Apache Ignite image with 16 GB of RAM.
We want to dynamically add new nodes as the load on the cache increases.
For this purpose we need some way to detect that a node will soon run out of memory so that we can add an additional node. Is there any way to track that?
I've tried loading the cache with random data; the cache failed with an OutOfMemoryException when the Java process took over 30-40% of RAM.
Here is default-config.xml from {IGNITE_HOME}\config:
<?xml version="1.0" encoding="UTF-8"?>
...
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<list>
<!-- Partitioned cache example configuration (Atomic mode). -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="default"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="backups" value="1"/>
</bean>
</list>
</property>
<!-- Enabling Apache Ignite Persistent Store. -->
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="false"/>
<property name="metricsEnabled" value="true"/>
<property name="maxSize" value="#{10L * 1024 * 1024 * 1024}"/>
</bean>
</property>
</bean>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
<property name="awsCredentialsProvider" ref="aws.creds"/>
<property name="bucketName" value="dev-apache-ignite"/>
</bean>
</property>
</bean>
</property>
</bean>
<!-- AWS credentials. Provide your access key ID and secret access key. -->
<bean id="aws.creds" class="com.amazonaws.auth.InstanceProfileCredentialsProvider">
<constructor-arg value="false" />
</bean>
</beans>
Sorry if this question was already answered in the documentation.
Are there any predefined guidelines for configuring AWS autoscaling for Ignite?
The amount of RAM taken by the JVM is not relevant. You will see an IgniteOutOfMemoryException when you run out of data region space - in your case, 10 GB.
You may divide DataRegionMetrics.getOffheapUsedSize() by the data region size to understand how much runway you have left; for example, with the 10 GB region above, 8 GB used means the region is 80% full.
Then you can probably use GridGain K8S Operator to scale your cluster:
https://www.gridgain.com/docs/latest/installation-guide/operator/how-tos

Is it possible to set expiry time using Apache Ignite in C++?

I am using the C++ thin client API and I want the data to be deleted from the cache after 5 minutes. I am connecting to Ignite through Docker and using persistent storage. In the documentation for the C++ libraries, I cannot find anything relating to "expiry", and I tried adding the expiry option to the config XML file that my Docker container reads in, but that didn't seem to work either. I put data into the cache and checked for the data after 5 minutes (I also checked 10, 20, 30 minutes later), and the data was still there.
Here is my config xml file:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Enabling Apache Ignite Persistent Store. -->
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
<property name="name" value="Default_Region" />
</bean>
</property>
</bean>
</property>
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>127.0.0.1:47500..47502</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="MINUTES"/>
<constructor-arg value="5"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
</beans>
The C++ thin client doesn't support this feature at the moment.
I think you could either define a cache with expiration completely on the server side, or define only a cache template (https://apacheignite.readme.io/docs/cache-template) with an expiry policy and use it from the client. A sketch of the server-side option follows.
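For the server-side option, a minimal sketch: the CacheConfiguration carrying the expiryPolicyFactory must be wired into the cacheConfiguration list of the ignite.cfg bean (a standalone CacheConfiguration bean, as in the config above, is not picked up by Ignite). The cache name fiveMinuteCache is made up for the example; entries the thin client puts into that cache should then expire roughly five minutes after creation, assuming the Ignite version in use supports expiry for persistent caches:
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
  <!-- dataStorageConfiguration and discoverySpi as in the question -->
  <property name="cacheConfiguration">
    <list>
      <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <property name="name" value="fiveMinuteCache"/>
        <property name="expiryPolicyFactory">
          <bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
            <constructor-arg>
              <bean class="javax.cache.expiry.Duration">
                <constructor-arg value="MINUTES"/>
                <constructor-arg value="5"/>
              </bean>
            </constructor-arg>
          </bean>
        </property>
      </bean>
    </list>
  </property>
</bean>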

In WSO2 EI / ESB how do I process array/object based query parameters? (i.e. with square brackets)

I have an API that uses query parameters as follows:
/events/search
?title=royal
&area=southeast
&maint[date]=20180823
&maint[user]=oscar
&maint[action]=release
(line breaks added for readability)
Processing the simple query params within WSO2 EI is straightforward. There are a few ways to do this using the property mediator:
<property name="title" expression="get-property('query.param.title')"/>
<property name="title" expression="$url:title"/>
<property name="title" expression="$ctx:query.param.title"/>
However, I have been unable to process the array/object-based query params. I have tried to use the property mediator in various ways, none of which work:
<property name="maintDate" expression="get-property('query.param.maint[date]')"/>
<property name="maintDate" expression="get-property('query.param.maint.date')"/>
<property name="maintDate" expression="$url:maint[date]"/>
<property name="maintDate" expression="$url:maint.date"/>
<property name="maintDate" expression="$url:maint[date]"/>
<property name="maintDate" expression="$url:maint%5Bdate%5D"/>
Has anyone had any experience and success in this area?
Your ESB API URI template will look like the one below:
uri-template="/test?title={t}&amp;area={a}&amp;maint[date]={date}&amp;maint[user]={user}&amp;maint[action]={action}"
You can then access the square-bracket query params as below:
<property name="date" expression="$ctx:uri.var.date"/>
<property name="user" expression="$ctx:uri.var.user"/>
<property name="action" expression="$ctx:uri.var.action"/>

How to work with WSO2 CEP 3.0.0 and ActiveMQ 5.8.0

I am using WSO2 CEP 3.0.0 and ActiveMQ 5.8.0.
As per the CEP documentation, I wish to publish events using CEP.
For that I started ActiveMQ with two defined queues: jmsProxy for incoming messages and JmsProxy for outgoing messages. I added the required jars to the CEP lib directory: activemq-broker-5.8.0.jar, activemq-client-5.8.0.jar, axiom.jar, geronimo-j2ee-management_1.1_spec-1.0.1.jar, geronimo-jms_1.1_spec-1.1.1.jar, hawtbuf-1.2.jar, xpp3-1.1.4c.jar, xstream-1.4.4.jar.
My configuration is like this:
InputEventAdaptor
<?xml version="1.0" encoding="UTF-8"?>
<inputEventAdaptor name="jmsProxy" statistics="disable" trace="enable"
type="jms" xmlns="http://wso2.org/carbon/eventadaptormanager">
<property name="java.naming.provider.url">tcp://localhost:61616</property>
<property name="transport.jms.SubscriptionDurable">true</property>
<property name="transport.jms.DurableSubscriberName">jmsProxy</property>
<property name="transport.jms.UserName">admin</property>
<property name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</property>
<property name="transport.jms.Password">admin</property>
<property name="transport.jms.ConnectionFactoryJNDIName">QueueConnectionFactory</property>
<property name="transport.jms.DestinationType">queue</property>
</inputEventAdaptor>
The above is for incoming messages, which should be picked up from the jmsProxy queue.
But it is unable to pick up messages from the jmsProxy queue. How would I set this up so the messages get into CEP?
outputEventAdaptor
<?xml version="1.0" encoding="UTF-8"?>
<outputEventAdaptor name="JmsProxy" statistics="disable" trace="disable"
type="jms" xmlns="http://wso2.org/carbon/eventadaptormanager">
<property name="java.naming.security.principal">admin</property>
<property name="java.naming.provider.url">tcp://localhost:61616</property>
<property name="java.naming.security.credentials">admin</property>
<property name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</property>
<property name="transport.jms.ConnectionFactoryJNDIName">QueueConnectionFactory</property>
<property name="transport.jms.DestinationType">queue</property>
</outputEventAdaptor>
The event builder configuration is like this:
<?xml version="1.0" encoding="UTF-8"?>
<eventBuilder name="ReadingsDtoBuilder" statistics="disable"
trace="disable" xmlns="http://wso2.org/carbon/eventbuilder">
<from eventAdaptorName="jmsProxy" eventAdaptorType="jmsProxy">
<property name="transport.jms.Destination">JmsProxy</property>
</from>
<mapping customMapping="disable"
parentXpath="//ReadingsLiteTaildtos" type="xml">
<property>
<from xpath="//ReadingsLiteTaildto/ParameterId"/>
<to name="meta_parameterId" type="string"/>
</property>
<property>
<from xpath="//ReadingsLiteTaildto/Slno"/>
<to name="meta_slno" type="string"/>
</property>
<property>
<from xpath="//ReadingsLiteTaildto/FinalValue"/>
<to name="finalValue" type="int"/>
</property>
<property>
<from xpath="//ReadingsLiteTaildto/InputText"/>
<to name="inputText" type="string"/>
</property>
<property>
<from xpath="//ReadingsLiteTaildto/InputValue"/>
<to name="inputValue" type="double"/>
</property>
</mapping>
<to streamName="org.sample.readings.dto.stream" version="1.0.0"/>
</eventBuilder>
The execution plan is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<executionPlan name="ReadingsAnalyzer" statistics="disable"
trace="disable" xmlns="http://wso2.org/carbon/eventprocessor">
<description>This execution plan analyzes readings and triggers notifications based on threshold.</description>
<siddhiConfiguration>
<property name="siddhi.enable.distributed.processing">false</property>
<property name="siddhi.persistence.snapshot.time.interval.minutes">0</property>
</siddhiConfiguration>
<importedStreams>
<stream as="readings" name="org.sample.readings.dto.stream" version="1.0.0"/>
</importedStreams>
<queryExpressions><![CDATA[from readings[finalValue > 100]
select *
insert into notificationStream;]]></queryExpressions>
<exportedStreams>
<stream name="notificationStream" valueOf="notificationStream" version="1.0.0"/>
</exportedStreams>
</executionPlan>
I defined streams inside stream-manager-config.xml similar to the following.
<streamDefinition name="org.sample.readings.dto.stream" version="1.0.0">
<metaData>
<property name="parameterId" type="STRING"/>
<property name="slno" type="STRING"/>
</metaData>
<payloadData>
<property name="finalValue" type="INT"/>
<property name="inputText" type="STRING"/>
<property name="inputValue" type="DOUBLE"/>
</payloadData>
</streamDefinition>
<streamDefinition name="notificationStream" version="1.0.0">
<metaData>
<property name="parameterId" type="STRING"/>
<property name="slno" type="STRING"/>
</metaData>
<payloadData>
<property name="finalValue" type="INT"/>
<property name="inputText" type="STRING"/>
<property name="inputValue" type="DOUBLE"/>
</payloadData>
</streamDefinition>
It looks like everything is in place, but when I send a message to my jmsProxy queue, the message does not show up in CEP as an event, and I am getting these messages in the CEP log:
[2014-02-18 11:57:53,159] INFO - {EventBuilderDeployer} Event Builder undeployed successfully : ReadingsDtoBuilder.xml
[2014-02-18 11:57:53,160] INFO - {EventBuilderDeployer} Event builder deployment held back and in inactive state :ReadingsDtoBuilder, Waiting for Input Event Adaptor dependency :jmsProxy
[2014-02-18 12:03:58,006] INFO - {InputEventAdaptorConfigurationFilesystemInvoker} Input Event Adaptor configuration deleted from file system : jmsProxy.xml
[2014-02-18 12:03:58,006] INFO - {InputEventAdaptorDeployer} Input Event Adaptor undeployed successfully : jmsProxy.xml
[2014-02-18 12:03:58,008] INFO - {InputEventAdaptorConfigurationFilesystemInvoker} Input Event Adaptor configuration saved in th filesystem : jmsProxy
[2014-02-18 12:03:58,009] INFO - {InputEventAdaptorDeployer} Input Event Adaptor deployed successfully and in active state : jmsProxy
[2014-02-18 12:03:58,009] INFO - {EventBuilderDeployer} Event Builder undeployed successfully : ReadingsDtoBuilder.xml
[2014-02-18 12:03:58,009] INFO - {EventBuilderDeployer} Event builder deployment held back and in inactive state :ReadingsDtoBuilder, Waiting for Input Event Adaptor dependency :jmsProxy
Looking at your configuration, it seems that the event builder configuration is incorrect: the event adaptor type should be 'jms'.
<from eventAdaptorName="jmsProxy" eventAdaptorType="jms">
Please correct this as above to get it working. You can enable tracing [1] to see if messages come to a particular component of CEP.
[1] https://docs.wso2.org/display/CEP300/CEP+Event+Tracer

Testing Spring-JPA

I am developing a web application using Spring (3.1.x), JSF 2, and JPA 2 (Hibernate provider) for Tomcat 6.x.
I want to test my DAO classes.
In my DAO class I do this:
@PersistenceContext
private EntityManager entityManager;
In the Spring configuration:
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean">
<property name="persistenceUnitName" value="OpenPU" />
</bean>
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
In persistence.xml
<persistence-unit name="OpenPU" transaction-type="RESOURCE_LOCAL">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<non-jta-data-source>java:comp/env/jdbc/mysql_open</non-jta-data-source>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.MySQLInnoDBDialect" />
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.transaction.flush_before_completion" value="true"/>
<property name="hibernate.cache.provider_class" value="org.hibernate.cache.HashtableCacheProvider"/>
<property name="hibernate.connection.zeroDateTimeBehavior" value="convertToNull"/>
</properties>
</persistence-unit>
This is the first time I am writing tests, and when testing I don't want to use the same persistence unit. I heard about DbUnit for loading XML data, but I don't understand how to change the persistence unit during the tests.
Can you help me or point me to an example or tutorial?
Thank you.
Maybe this tutorial will help.
BTW, there is one interesting Spring feature that fits your needs - embedded database support. I usually just use the following construction to create an in-memory H2 database, create the schema from schema.sql, and fill it with some data from test-data.sql:
<jdbc:embedded-database id="dataSource" type="H2">
<jdbc:script location="classpath:schema.sql"/>
<jdbc:script location="classpath:test-data.sql"/>
</jdbc:embedded-database>
Then you can use this bean as the data source for your EntityManagerFactory bean:
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
p:dataSource-ref="dataSource"
p:persistence-xml-location="classpath:META-INF/persistence.xml">
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="databasePlatform" value="org.hibernate.dialect.MySQLInnoDBDialect"/>
<property name="showSql" value="true" />
<!-- other properties -->
</bean>
</property>
<property name="persistenceUnitName" value="OpenPU" />
</bean>
This is a very convenient and concise way to create an in-memory database for tests with Spring.
(Don't forget to add H2 to your classpath; see the dependency sketch below.)
See the Spring documentation for details, chapter "13.8 Embedded database support".
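For the classpath note above, the Maven dependency is (the version is only an example; use whichever H2 release fits your build):
<dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
  <!-- example version -->
  <version>1.4.200</version>
  <scope>test</scope>
</dependency>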