The WSO2 Business Rules Manager fails on deploy. I'm using Docker to connect the WSO2 dashboard and WSO2 worker containers.
The error log shows the following:
ERROR {org.wso2.carbon.business.rules.core.services.TemplateManagerService} - Failed to update the deployed artifact for business rule myRule org.wso2.carbon.business.rules.core.exceptions.SiddhiAppsApiHelperException: Failed to update the siddhi app '#App:name('MyApp')
#App:description('MyDescription')
.
.
.
Siddhi template code
.
.
.'
on node 'wso2sp-worker:9443' due to a validation error occurred when updating the siddhi app
at org.wso2.carbon.business.rules.core.deployer.SiddhiAppApiHelper.update(SiddhiAppApiHelper.java:139)
at org.wso2.carbon.business.rules.core.services.TemplateManagerService.updateDeployedSiddhiApp(TemplateManagerService.java:1400)
at org.wso2.carbon.business.rules.core.services.TemplateManagerService.updateDeployedArtifacts(TemplateManagerService.java:1388)
at org.wso2.carbon.business.rules.core.services.TemplateManagerService.redeployBusinessRule(TemplateManagerService.java:663)
at org.wso2.carbon.business.rules.core.api.impl.BusinessRulesApiServiceImpl.redeployBusinessRule(BusinessRulesApiServiceImpl.java:412)
at org.wso2.carbon.business.rules.core.api.BusinessRulesApi.redeployBusinessRule(BusinessRulesApi.java:235)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invokeResource(HttpMethodInfo.java:187)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invoke(HttpMethodInfo.java:143)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.dispatchMethod(MSF4JHttpConnectorListener.java:218)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.lambda$onMessage$57(MSF4JHttpConnectorListener.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
This can occur if the Siddhi app created by the Business Rules Manager is invalid.
One possible reason is that the Siddhi app template used to create the business rule is itself invalid.
Therefore, can you check the following?
1. Create a Siddhi app by filling in the templated fields of your Siddhi app template.
2. Copy that Siddhi file to the $SP_HOME/wso2/worker/deployment/siddhi-files directory.
3. Start the worker runtime.
If there is any issue with the template, the worker will fail to deploy that Siddhi app and will log the relevant error. A sketch of what such a filled-in app might look like is given below.
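For illustration only, a minimal sketch of a filled-in Siddhi app (the app name, stream definitions, and filter value are hypothetical placeholders, not taken from your actual template):
@App:name('MyApp')
@App:description('MyDescription')

-- Hypothetical input stream; replace with the streams your template defines.
define stream SweetProductionStream (name string, amount double);

-- Log sink so the deployment can be verified from the worker log.
@sink(type = 'log')
define stream FilteredProductionStream (name string, amount double);

-- Templated filter condition filled in with a concrete value.
from SweetProductionStream[amount > 100]
select name, amount
insert into FilteredProductionStream;
If the worker deploys such a file without errors, the problem is more likely in the values filled in by the business rule than in the template itself.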
I've tried to set up a datasource connected to MySQL in the deployment.yaml of WSO2 SI so that I can use it in my Siddhi apps.
It worked fine at first, but when I restart MySQL the datasource cannot reconnect to the database, so my Siddhi app gets the following errors.
How can I configure the datasource so that it reconnects automatically after a database restart?
Thank you,
Luong.
[2020-11-12 19:32:43,627] ERROR {io.siddhi.extension.io.cdc.source.polling.strategies.DefaultPollingStrategy} - Error occurred while processing records in table SweetProductionTable. {mode=polling, app=CDCWithPollingMode, stream=insertSweetProductionStream} java.sql.SQLException: Connection is closed
at com.zaxxer.hikari.pool.ProxyConnection$ClosedConnection.lambda$getClosedConnection$0(ProxyConnection.java:493)
at com.sun.proxy.$Proxy73.prepareStatement(Unknown Source)
at com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:315)
at com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
at io.siddhi.extension.io.cdc.source.polling.strategies.DefaultPollingStrategy.printEvent(DefaultPollingStrategy.java:142)
at io.siddhi.extension.io.cdc.source.polling.strategies.DefaultPollingStrategy.poll(DefaultPollingStrategy.java:86)
at io.siddhi.extension.io.cdc.source.polling.CDCPoller.run(CDCPoller.java:202)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
This is a bug in siddhi-io-cdc.
A Siddhi app using CDC polling mode gets errors when the source database restarts.
These failures are caused by a bug in the DefaultPollingStrategy class: printEvent(connection) does not verify the connection before using it; it should call getConnection() first.
As a workaround, can you add the following at the end of the MySQL connection URL?
&autoReconnect=true
For example:
jdbc:mysql://localhost:3306/wso2_api_stat_alt?useSSL=false&autoReconnect=true
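For context, in WSO2 SI that URL goes on the jdbcUrl of the datasource definition in deployment.yaml. A minimal sketch, assuming a hypothetical ANALYTICS_DB datasource and a local MySQL instance (the name, database, and credentials are placeholders):
wso2.datasources:
  dataSources:
    - name: ANALYTICS_DB                # hypothetical datasource name
      description: Datasource used by the Siddhi apps
      definition:
        type: RDBMS
        configuration:
          # autoReconnect appended to the JDBC URL so the pool recovers after a MySQL restart
          jdbcUrl: 'jdbc:mysql://localhost:3306/analytics_db?useSSL=false&autoReconnect=true'
          username: root
          password: root
          driverClassName: com.mysql.jdbc.Driver
          maxPoolSize: 10
          connectionTestQuery: SELECT 1   # lets the pool validate connections before handing them out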
I am setting up WSO2 APIM with Analytics running on Docker. I am getting the following error in the worker, and no data is being published to the console. I'm using a MySQL database.
I am using the Docker images from https://github.com/wso2/docker-apim/tree/v2.6.0.3/dockerfiles/centos with:
OpenJDK8U-jdk_x64_linux_hotspot_8u222b10
mysql-connector-java-5.1.47-bin.jar
[2019-10-05 04:58:27,208] ERROR {org.wso2.carbon.databridge.core.internal.queue.QueueWorker} - Dropping wrongly formatted event sent org.wso2.carbon.databridge.core.exception.EventConversionException: Error when converting org.wso2.metrics.stream.Gauge:1.0.0 of event bundle with events 4
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:188)
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.toEventList(ThriftEventConverter.java:90)
at org.wso2.carbon.databridge.core.internal.queue.QueueWorker.run(QueueWorker.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.databridge.core.exception.EventConversionException: No StreamDefinition for streamId org.wso2.metrics.stream.Gauge:1.0.0 present in cache
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:171)
... 7 more
This issue occurs when the particular stream, "org.wso2.metrics.stream.Gauge:1.0.0", hasn't been deployed properly from the CApp. The steps below can be followed to resolve this (a command sketch of the first two steps follows the list).
1. Remove the /tmp directory inside the APIM_ANALYTICS_HOME directory.
2. Restart the server.
3. If any issue is observed while deploying the CApp, please back up the CApp and remove it from the Carbon console (you can find it under the faulty applications section; this clears any cached data related to the CApp). Then follow steps 1 and 2 again.
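A rough command sketch of steps 1 and 2, assuming APIM_ANALYTICS_HOME points to the Analytics installation directory and the default worker.sh launch script of the 2.6.x distribution:
# Step 1: remove the tmp directory that holds cached CApp deployment state
rm -rf "$APIM_ANALYTICS_HOME/tmp"

# Step 2: start the Analytics worker again
sh "$APIM_ANALYTICS_HOME/bin/worker.sh"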
I have a WSO2 API Manager 2.5.0 installation, deployed as two nodes in active-active mode. It also has WSO2 Analytics and WSO2 Identity Server as Key Manager.
When each node starts, it logs the same error three times:
2018-11-14 07:56:50,989 [-] [DisruptorInboundEventThread-8] ERROR AndesSubscriptionManager Could not add subscription: subscriptionId=3091b9ba-867b-4539-bf25-fee511d1813d,storageQueue=AMQP_Topic_throttledata_NODE:OpenDataApiM1/10.0.0.101,protocolType=AMQP,isActive=true,subscriberConnection=Y29ubmVjdGVkSVA9LzEwLjAuMC40OjMzMTgyLzEsY29ubmVjdGVkTm9kZT1OT0RFOk9wZW5EYXRhQXBpTTEvMTAuMC4wLjQscHJvdG9jb2xDaGFubmVsSUQ9Y2Y3NDI4MDgtZGY3MS00NzJhLWFiMTEtOTY1Nzc2ZTBkNTZl
org.wso2.andes.kernel.subscription.SubscriptionException: StorageQueue: AMQP_Topic_throttledata_NODE:OpenDataApiM1/10.0.0.101 is not registered while creating subscription id=3091b9ba-867b-4539-bf25-fee511d1813d
at org.wso2.andes.kernel.subscription.AndesSubscription.<init>(AndesSubscription.java:136)
at org.wso2.andes.kernel.subscription.AndesSubscriptionManager.reloadSubscriptionsFromStorage(AndesSubscriptionManager.java:921)
at org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent.reloadSubscriptions(InboundDBSyncRequestEvent.java:208)
at org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent.updateState(InboundDBSyncRequestEvent.java:76)
at org.wso2.andes.kernel.disruptor.inbound.InboundEventContainer.updateState(InboundEventContainer.java:268)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:70)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:40)
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-11-14 07:56:50,991 [-] [DisruptorInboundEventThread-8] ERROR AndesSubscriptionManager Could not add subscription: subscriptionId=c9a279eb-6c1f-447c-84bc-c077d33a06e1,storageQueue=AMQP_Topic_throttledata_NODE:OpenDataApiM1/10.0.0.101,protocolType=AMQP,isActive=true,subscriberConnection=Y29ubmVjdGVkSVA9LzEwLjAuMC40OjM0NTA0LzEsY29ubmVjdGVkTm9kZT1OT0RFOk9wZW5EYXRhQXBpTTEvMTAuMC4wLjQscHJvdG9jb2xDaGFubmVsSUQ9NGI5YmVmZmUtNjk1OC00M2Q4LTg3NGYtMzA5YmE5M2IyNzMw
org.wso2.andes.kernel.subscription.SubscriptionException: StorageQueue: AMQP_Topic_throttledata_NODE:OpenDataApiM1/10.0.0.101 is not registered while creating subscription id=c9a279eb-6c1f-447c-84bc-c077d33a06e1
at org.wso2.andes.kernel.subscription.AndesSubscription.<init>(AndesSubscription.java:136)
at org.wso2.andes.kernel.subscription.AndesSubscriptionManager.reloadSubscriptionsFromStorage(AndesSubscriptionManager.java:921)
at org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent.reloadSubscriptions(InboundDBSyncRequestEvent.java:208)
at org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent.updateState(InboundDBSyncRequestEvent.java:76)
at org.wso2.andes.kernel.disruptor.inbound.InboundEventContainer.updateState(InboundEventContainer.java:268)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:70)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:40)
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-11-14 07:56:50,992 [-] [DisruptorInboundEventThread-8] ERROR AndesSubscriptionManager Could not add subscription: subscriptionId=3091b9ba-867b-4539-bf25-fee511d1813d,storageQueue=AMQP_Topic_throttledata_NODE:OpenDataApiM1/10.0.0.101,protocolType=AMQP,isActive=true,subscriberConnection=Y29ubmVjdGVkSVA9LzEwLjAuMC40OjMzMTgyLzEsY29ubmVjdGVkTm9kZT1OT0RFOk9wZW5EYXRhQXBpTTEvMTAuMC4wLjQscHJvdG9jb2xDaGFubmVsSUQ9Y2Y3NDI4MDgtZGY3MS00NzJhLWFiMTEtOTY1Nzc2ZTBkNTZl
org.wso2.andes.kernel.subscription.SubscriptionException: StorageQueue: AMQP_Topic_throttledata_NODE:OpenDataApiM1/10.0.0.101 is not registered while creating subscription id=3091b9ba-867b-4539-bf25-fee511d1813d
at org.wso2.andes.kernel.subscription.AndesSubscription.<init>(AndesSubscription.java:136)
at org.wso2.andes.kernel.subscription.AndesSubscriptionManager.reloadSubscriptionsFromStorage(AndesSubscriptionManager.java:972)
at org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent.reloadSubscriptions(InboundDBSyncRequestEvent.java:208)
at org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent.updateState(InboundDBSyncRequestEvent.java:76)
at org.wso2.andes.kernel.disruptor.inbound.InboundEventContainer.updateState(InboundEventContainer.java:268)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:70)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:40)
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I have no idea what the problem is. I have looked for the error on the internet, but there are no similar cases.
Does anyone have any ideas?
You have probably found the answer to this error already, but a possible cause is that APIM cannot identify the correct topic connection factory for its own node because the 'WSO2_MB_STORE_DB' is shared between the nodes.
The Installing and Configuring Databases documentation mentions that each Traffic Manager node must have its own Message Broker database (WSO2_MB_STORE_DB).
Therefore, please check the database configurations in the <APIM_Home>/repository/conf/datasources/master-datasources.xml file on both nodes and see whether you have shared the 'WSO2_MB_STORE_DB' between them. If so, point each node to its own fresh database to get rid of this error; a sketch of such a node-specific datasource entry is shown below.
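For illustration only, a node-specific WSO2_MB_STORE_DB entry might look roughly like this (host, database names, and credentials are placeholders; the second node would point at its own, separate database):
<datasource>
    <name>WSO2_MB_STORE_DB</name>
    <description>Message Broker store database for this node only</description>
    <jndiConfig>
        <name>WSO2MBStoreDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- Each Traffic Manager node gets its own database, e.g. mb_store_node1 / mb_store_node2 -->
            <url>jdbc:mysql://db-host:3306/mb_store_node1</url>
            <username>wso2carbon</username>
            <password>wso2carbon</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
        </configuration>
    </definition>
</datasource>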
I haven't been able to find much on the web about this problem, but...
I am setting up a fresh Bigtable cluster on Google's Cloud services. I've gone through the whole process you do with most Google APIs (creating a service account, knowing your project ID, authenticating with the gcloud tool, setting the Google environment variable, etc.).
I am having a problem after going through the setup, though. I get this error that I can't find anything about on the web:
Caused by: com.google.bigtable.repackaged.com.google.common.util.concurrent.UncheckedExecutionException: io.grpc.StatusRuntimeException:
NOT_FOUND: Error listing tables for cluster projects/bigtable-1127/zones/us-central1-c/clusters/bigdatastats : Failed to read Tables in cluster: bigdatastats
Here is the complete output that includes the error. Note that I get the same error when trying to create a table as well:
./bin/hbase com.google.cloud.bigtable.hbase.CheckConfig
User Agent: bigtable-hbase-1.0-0.2.1
Project ID: bigtable-1127
Cluster Id: bigdatastats
ZoneId: us-central1-c
Cluster admin host: bigtableclusteradmin.googleapis.com
Table admin host: bigtabletableadmin.googleapis.com
Data host: bigtable.googleapis.com
Attempting credential refresh...
HBase Connection Class = com.google.cloud.bigtable.hbase1_0.BigtableConnection (OK)
Opening table admin connection...
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/Michael/bigtable/hbase-1.0.1.1/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/Cellar/hadoop/2.7.1/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-11-12 01:30:31,552 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-11-12 01:30:32,619 INFO [main] grpc.BigtableSession: Opening connection for projectId bigtable-1127, zoneId us-central1-c, clusterId bigdatastats, on data host bigtable.googleapis.com, table admin host bigtabletableadmin.googleapis.com.
Tables in cluster bigdatastats:
Exception in thread "main" java.io.IOException: Failed to listTables
at org.apache.hadoop.hbase.client.AbstractBigtableAdmin.requestTableList(AbstractBigtableAdmin.java:221)
at org.apache.hadoop.hbase.client.AbstractBigtableAdmin.listTableNames(AbstractBigtableAdmin.java:208)
at com.google.cloud.bigtable.hbase.CheckConfig.main(CheckConfig.java:99)
Caused by: com.google.bigtable.repackaged.com.google.common.util.concurrent.UncheckedExecutionException: io.grpc.StatusRuntimeException: NOT_FOUND: Error listing tables for cluster projects/bigtable-1127/zones/us-central1-c/clusters/bigdatastats : Failed to read Tables in cluster: bigdatastats
at io.grpc.stub.Calls.getUnchecked(Calls.java:117)
at io.grpc.stub.Calls.blockingUnaryCall(Calls.java:129)
at com.google.bigtable.admin.table.v1.BigtableTableServiceGrpc$BigtableTableServiceBlockingStub.listTables(BigtableTableServiceGrpc.java:338)
at com.google.cloud.bigtable.grpc.BigtableTableAdminGrpcClient.listTables(BigtableTableAdminGrpcClient.java:44)
at org.apache.hadoop.hbase.client.AbstractBigtableAdmin.requestTableList(AbstractBigtableAdmin.java:219)
... 2 more
Caused by: io.grpc.StatusRuntimeException: NOT_FOUND: Error listing tables for cluster projects/bigtable-1127/zones/us-central1-c/clusters/bigdatastats : Failed to read Tables in cluster: bigdatastats
at io.grpc.Status.asRuntimeException(Status.java:428)
at io.grpc.stub.Calls$UnaryStreamToFuture.onClose(Calls.java:324)
at io.grpc.ChannelImpl$CallImpl$ClientStreamListenerImpl$3.run(ChannelImpl.java:402)
at io.grpc.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
It would be amazing if someone could help with this. I am not sure what to do and I can't find anything out there. Obviously it's around authentication; my key file is fresh and in the right place, I've run the gcloud auth, and I'm not sure what else to check.
Please let me know if I can provide any more information to help answer.
As noted in the comments, this was unlikely to have been an authentication issue.
You would receive NOT_FOUND as an error if the resource you are trying to query does not exist in your project. So it's likely that you needed to switch your default project using gcloud config set project, as recommended by Les.
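Assuming the project ID shown in the CheckConfig output above is the one that actually owns the cluster:
# Point gcloud (and tools that read its configuration) at the project that owns the Bigtable cluster
gcloud config set project bigtable-1127

# Verify the active configuration
gcloud config list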
I am trying to integrate an external Cassandra with BAM. I have changed cassandra-component.xml.
1) I want to know how keyspaces are created on the external Cassandra, because when I run BAM I get the error "Unknown keyspace EVENT_KS".
2) I am getting the following error in my WSO2 logs:
TID: [0] [BAM] [2014-02-11 15:28:30,905] WARN {org.apache.hadoop.mapred.JobClient} - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. {org.apache.hadoop.mapred.JobClient}
TID: [0] [BAM] [2014-02-11 15:37:04,393] ERROR {org.apache.hadoop.hive.ql.exec.ExecDriver} - Job Submission failed with exception 'java.lang.RuntimeException(org.apache.thrift.transport.TTransportException)'
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getRangeMap(ColumnFamilyInputFormat.java:297)
at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:105)
at org.apache.hadoop.hive.cassandra.input.HiveCassandraStandardColumnInputFormat.getSplits(HiveCassandraStandardColumnInputFormat.java:291)
at org.apache.hadoop.hive.cassandra.input.HiveCassandraStandardColumnInputFormat.getSplits(HiveCassandraStandardColumnInputFormat.java:216)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:302)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:292)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:933)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:925)
at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:839)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:792)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1123)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:792)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:766)
at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:460)
at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:733)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.thrift.transport.TTransportException
As far as I remember, EVENT_KS is created only after the very first event is published to BAM. If you try to access it before it has been created, errors may arise.
In BAM 2.4.0, EVENT_KS is created when you run BAM for the first time (but in previous versions, EVENT_KS was created when the very first event was published to BAM). Please make sure your cassandra-component.xml looks similar to the example below. Also tell us which Cassandra version you are using.
<Cassandra>
    <Cluster>
        <Name>Test Cluster</Name>
        <DefaultPort>9160</DefaultPort>
        <Nodes>localhost:9160</Nodes>
        <AutoDiscovery disable="false" delay="1000"/>
    </Cluster>
</Cassandra>
First, you need to check the following:
Have you pointed cassandra-component.xml correctly to the external Cassandra? With this, your published data will be stored in the intended external Cassandra database.
Have you installed a toolbox with the intended stream definition inside, or otherwise triggered data to be published to BAM? In both cases EVENT_KS will be created, with a column family named after the stream.
Have you modified $BAM_HOME/repository/conf/datasources/master-datasources.xml to point to the external Cassandra database? You need to validate the Cassandra configuration provided in the WSO2BAM_CASSANDRA_DATASOURCE datasource. For the default toolboxes this is the Cassandra datasource being used, and by default it points to localhost. If you are using it in your Hive scripts, you need to change this configuration (see the sketch after this list).
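For the third point, a rough sketch of a WSO2BAM_CASSANDRA_DATASOURCE entry pointed at an external node (host, keyspace, and credentials are placeholders; the stock configuration points at localhost):
<datasource>
    <name>WSO2BAM_CASSANDRA_DATASOURCE</name>
    <description>Cassandra datasource used by the analytics (Hive) scripts</description>
    <definition type="RDBMS">
        <configuration>
            <!-- Point the URL at the external Cassandra node instead of localhost -->
            <url>jdbc:cassandra://external-cassandra-host:9160/EVENT_KS</url>
            <username>admin</username>
            <password>admin</password>
        </configuration>
    </definition>
</datasource>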
After putting in a lot of effort, I found that after changing the data directory in cassandra.yaml of the external Cassandra to repository/database/cassandra/data, everything works fine with the external Cassandra (version 1.1.3, for what it's worth). I want to know whether there is any other workaround for this external Cassandra configuration.