I have performed the mitigation steps from the WSO2 documentation to avoid Cross-Site Scripting (XSS) attacks in WSO2 EI 6.4.0.
I made the required changes in the files below, as described there:
Edited <PRODUCT_HOME>/repository/conf/carbon.xml.
Added the configuration within the <Host> element of the <PRODUCT_HOME>/repository/conf/tomcat/catalina-server.xml file. (A sketch of both changes follows.)
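For reference, the two changes look roughly like this; the exact element values are assumptions based on the referenced documentation and on the XSSValve class visible in the stack trace below, so treat this as a sketch rather than the authoritative configuration. In carbon.xml:

<XSSPreventionConfig>
    <Enabled>true</Enabled>
    <Rule>allow</Rule>
    <Patterns>
        <!-- empty: no paths whitelisted, so all console requests are validated -->
    </Patterns>
</XSSPreventionConfig>

and inside the <Host> element of catalina-server.xml:

<!-- registers the XSS validation valve; class name taken from the stack trace below -->
<Valve className="org.wso2.carbon.ui.valve.XSSValve"/>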
After restarting the server, I noticed a couple of things:
We are unable to modify artifacts such as APIs, Sequences, and Scheduled Tasks via the WSO2 management console, but we can still upload CARs, Connectors, etc.
Whenever I try to edit files in the management console, I get the following ERROR in the logs:
[2022-10-18 07:38:59,904] [-1234] [] [http-nio-9443-exec-34] ERROR {org.wso2.carbon.tomcat.ext.valves.CompositeValve} - Could not handle request: /carbon/sequences/save_sequence.jsp
javax.servlet.ServletException: Possible XSS Attack. Suspicious code : eval($)
at org.wso2.carbon.ui.valve.XSSValve.validateParameters(XSSValve.java:110)
at org.wso2.carbon.ui.valve.XSSValve.invoke(XSSValve.java:86)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:57)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:159)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:962)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:445)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1115)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:637)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1775)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1734)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
In DSS, we are able to edit in the management console (e.g., adding a new operation), but the newly added operation is not available when I use the 'Try it' option.
My conclusion is that whatever we need can still be done via a CAR, but this mitigation no longer allows us to edit configurations via the management console.
Is my understanding correct, or am I missing something? I just need to know the product-level impacts of this mitigation.
Kindly clarify my doubt or share your thoughts on the same.
We need to skip the content validation of the artifact files. We can whitelist the resource paths used to modify artifacts such as APIs, Sequences, etc. To do this, add the following patterns under the <Patterns> element of the <XSSPreventionConfig> element in the carbon.xml config file:
<XSSPreventionConfig>
    <Enabled>true</Enabled>
    <Rule>allow</Rule>
    <Patterns>
        <Pattern>carbon/sequences</Pattern>
        <Pattern>carbon/configadmin</Pattern>
        <Pattern>carbon/localentries</Pattern>
        <Pattern>carbon/api</Pattern>
        <Pattern>carbon/proxyservices</Pattern>
        <Pattern>carbon/resources</Pattern>
        <Pattern>carbon/task</Pattern>
    </Patterns>
</XSSPreventionConfig>
Reference: http://ravindraranwala.blogspot.com/2015/10/preventing-xss-and-csrf-vulnerabilities.html
I'm using Spring Boot 2.0.9.RELEASE and am trying to figure out how to configure CloudWatch monitoring for my application running on an EC2 instance.
What I did can be seen in my answer to this question, but I'm stuck with the following exception:
ERROR Oct 23, 2019 12:20:06.881 [pool-2-thread-30] {} io.micrometer.cloudwatch.CloudWatchMeterRegistry:134 - error sending metric data.
com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [com.amazonaws.auth.profile.ProfileCredentialsProvider#32ee6fee: profile file cannot be null]
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:136) ~[aws-java-sdk-core-1.11.641.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1225) ~[aws-java-sdk-core-1.11.641.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:801) ~[aws-java-sdk-core-1.11.641.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:751) ~[aws-java-sdk-core-1.11.641.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744) ~[aws-java-sdk-core-1.11.641.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726) ~[aws-java-sdk-core-1.11.641.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686) ~[aws-java-sdk-core-1.11.641.jar!/:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668) ~[aws-java-sdk-core-1.11.641.jar!/:?]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532) ~[aws-java-sdk-core-1.11.641.jar!/:?]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512) ~[aws-java-sdk-core-1.11.641.jar!/:?]
at com.amazonaws.services.cloudwatch.AmazonCloudWatchClient.doInvoke(AmazonCloudWatchClient.java:2027) ~[aws-java-sdk-cloudwatch-1.11.641.jar!/:?]
at com.amazonaws.services.cloudwatch.AmazonCloudWatchClient.invoke(AmazonCloudWatchClient.java:1994) ~[aws-java-sdk-cloudwatch-1.11.641.jar!/:?]
at com.amazonaws.services.cloudwatch.AmazonCloudWatchClient.invoke(AmazonCloudWatchClient.java:1983) ~[aws-java-sdk-cloudwatch-1.11.641.jar!/:?]
at com.amazonaws.services.cloudwatch.AmazonCloudWatchClient.executePutMetricData(AmazonCloudWatchClient.java:1754) ~[aws-java-sdk-cloudwatch-1.11.641.jar!/:?]
at com.amazonaws.services.cloudwatch.AmazonCloudWatchAsyncClient$20.call(AmazonCloudWatchAsyncClient.java:972) [aws-java-sdk-cloudwatch-1.11.641.jar!/:?]
at com.amazonaws.services.cloudwatch.AmazonCloudWatchAsyncClient$20.call(AmazonCloudWatchAsyncClient.java:966) [aws-java-sdk-cloudwatch-1.11.641.jar!/:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
The strange thing is that I had already managed to push metrics to CloudWatch in an earlier attempt, but I do not know what I broke. I did not dream it; I can still see the metrics.
From what I read, I'd have to do something with the IAM role of my EC2 instance, but I'm lost here.
It turns out that the culprit was the following property:
cloud.aws.credentials.instanceProfile=false
I had it in my configuration because I read somewhere that it is needed to run the application locally. However, that wasn't true in my case.
Somehow it slipped into my production configuration. And as the name of the property suggests, setting it to false makes Spring Boot not use the instance profile of the EC2 instance the application is running on. So unless you provide credentials in another way, the application will not be able to push metrics to CloudWatch.
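For illustration, a minimal sketch of how the property could be kept out of production by splitting it across Spring profiles; the file names and the use of static keys locally are assumptions, not from the original setup:

# application-local.properties (assumed name) — local runs outside EC2 use explicit credentials
cloud.aws.credentials.instanceProfile=false
cloud.aws.credentials.accessKey=YOUR_ACCESS_KEY
cloud.aws.credentials.secretKey=YOUR_SECRET_KEY

# application-production.properties (assumed name) — on EC2, let the instance profile supply credentials
cloud.aws.credentials.instanceProfile=true

This way the EC2 instance's IAM role (which needs permission to put metric data) supplies credentials in production, matching the provider chain shown in the stack trace above.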
I am setting up WSO2 APIM with Analytics running on Docker. I am getting the following error in the worker, and no data is being published to the console. I'm using a MySQL database.
I am using the Docker images at https://github.com/wso2/docker-apim/tree/v2.6.0.3/dockerfiles/centos with:
OpenJDK8U-jdk_x64_linux_hotspot_8u222b10
mysql-connector-java-5.1.47-bin.jar
[2019-10-05 04:58:27,208] ERROR {org.wso2.carbon.databridge.core.internal.queue.QueueWorker} - Dropping wrongly formatted event sent org.wso2.carbon.databridge.core.exception.EventConversionException: Error when converting org.wso2.metrics.stream.Gauge:1.0.0 of event bundle with events 4
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:188)
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.toEventList(ThriftEventConverter.java:90)
at org.wso2.carbon.databridge.core.internal.queue.QueueWorker.run(QueueWorker.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.databridge.core.exception.EventConversionException: No StreamDefinition for streamId org.wso2.metrics.stream.Gauge:1.0.0 present in cache
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:171)
... 7 more
This issue occurs when the particular stream, "org.wso2.metrics.stream.Gauge:1.0.0", hasn't been deployed properly from the CApp. The steps below can be followed to resolve this (see the sketch after the list):
1. Remove the tmp directory inside the APIM_ANALYTICS_HOME directory.
2. Restart the server.
3. If any issue is observed while deploying the CApp, back up the CApp and remove it from the Carbon console. (You can find it under the faulty apps section; this removes any cached data related to the CApp.)
4. Then follow steps 1 and 2 again.
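A minimal sketch of steps 1 and 2 on the worker node; the path variable and startup script name are assumptions for a standard APIM Analytics 2.6.x distribution, so adjust them for your Docker setup:

# Clear cached deployment state (step 1), then restart the analytics worker (step 2)
rm -rf ${APIM_ANALYTICS_HOME}/tmp
${APIM_ANALYTICS_HOME}/bin/worker.sh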
I have a WSO2 API Manager 2.5.0 installation, deployed on two nodes as active-active. It also has WSO2 Analytics and WSO2 Identity Server as Key Manager.
When each node starts, it logs the same error three times:
2018-11-14 07:56:50,989 [-] [DisruptorInboundEventThread-8] ERROR AndesSubscriptionManager Could not add subscription: subscriptionId=3091b9ba-867b-4539-bf25-fee511d1813d,storageQueue=AMQP_Topic_throttledata_NODE:OpenDataApiM1/10.0.0.101,protocolType=AMQP,isActive=true,subscriberConnection=Y29ubmVjdGVkSVA9LzEwLjAuMC40OjMzMTgyLzEsY29ubmVjdGVkTm9kZT1OT0RFOk9wZW5EYXRhQXBpTTEvMTAuMC4wLjQscHJvdG9jb2xDaGFubmVsSUQ9Y2Y3NDI4MDgtZGY3MS00NzJhLWFiMTEtOTY1Nzc2ZTBkNTZl
org.wso2.andes.kernel.subscription.SubscriptionException: StorageQueue: AMQP_Topic_throttledata_NODE:OpenDataApiM1/10.0.0.101 is not registered while creating subscription id=3091b9ba-867b-4539-bf25-fee511d1813d
at org.wso2.andes.kernel.subscription.AndesSubscription.<init>(AndesSubscription.java:136)
at org.wso2.andes.kernel.subscription.AndesSubscriptionManager.reloadSubscriptionsFromStorage(AndesSubscriptionManager.java:921)
at org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent.reloadSubscriptions(InboundDBSyncRequestEvent.java:208)
at org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent.updateState(InboundDBSyncRequestEvent.java:76)
at org.wso2.andes.kernel.disruptor.inbound.InboundEventContainer.updateState(InboundEventContainer.java:268)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:70)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:40)
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-11-14 07:56:50,991 [-] [DisruptorInboundEventThread-8] ERROR AndesSubscriptionManager Could not add subscription: subscriptionId=c9a279eb-6c1f-447c-84bc-c077d33a06e1,storageQueue=AMQP_Topic_throttledata_NODE:OpenDataApiM1/10.0.0.101,protocolType=AMQP,isActive=true,subscriberConnection=Y29ubmVjdGVkSVA9LzEwLjAuMC40OjM0NTA0LzEsY29ubmVjdGVkTm9kZT1OT0RFOk9wZW5EYXRhQXBpTTEvMTAuMC4wLjQscHJvdG9jb2xDaGFubmVsSUQ9NGI5YmVmZmUtNjk1OC00M2Q4LTg3NGYtMzA5YmE5M2IyNzMw
org.wso2.andes.kernel.subscription.SubscriptionException: StorageQueue: AMQP_Topic_throttledata_NODE:OpenDataApiM1/10.0.0.101 is not registered while creating subscription id=c9a279eb-6c1f-447c-84bc-c077d33a06e1
at org.wso2.andes.kernel.subscription.AndesSubscription.<init>(AndesSubscription.java:136)
at org.wso2.andes.kernel.subscription.AndesSubscriptionManager.reloadSubscriptionsFromStorage(AndesSubscriptionManager.java:921)
at org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent.reloadSubscriptions(InboundDBSyncRequestEvent.java:208)
at org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent.updateState(InboundDBSyncRequestEvent.java:76)
at org.wso2.andes.kernel.disruptor.inbound.InboundEventContainer.updateState(InboundEventContainer.java:268)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:70)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:40)
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-11-14 07:56:50,992 [-] [DisruptorInboundEventThread-8] ERROR AndesSubscriptionManager Could not add subscription: subscriptionId=3091b9ba-867b-4539-bf25-fee511d1813d,storageQueue=AMQP_Topic_throttledata_NODE:OpenDataApiM1/10.0.0.101,protocolType=AMQP,isActive=true,subscriberConnection=Y29ubmVjdGVkSVA9LzEwLjAuMC40OjMzMTgyLzEsY29ubmVjdGVkTm9kZT1OT0RFOk9wZW5EYXRhQXBpTTEvMTAuMC4wLjQscHJvdG9jb2xDaGFubmVsSUQ9Y2Y3NDI4MDgtZGY3MS00NzJhLWFiMTEtOTY1Nzc2ZTBkNTZl
org.wso2.andes.kernel.subscription.SubscriptionException: StorageQueue: AMQP_Topic_throttledata_NODE:OpenDataApiM1/10.0.0.101 is not registered while creating subscription id=3091b9ba-867b-4539-bf25-fee511d1813d
at org.wso2.andes.kernel.subscription.AndesSubscription.<init>(AndesSubscription.java:136)
at org.wso2.andes.kernel.subscription.AndesSubscriptionManager.reloadSubscriptionsFromStorage(AndesSubscriptionManager.java:972)
at org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent.reloadSubscriptions(InboundDBSyncRequestEvent.java:208)
at org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent.updateState(InboundDBSyncRequestEvent.java:76)
at org.wso2.andes.kernel.disruptor.inbound.InboundEventContainer.updateState(InboundEventContainer.java:268)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:70)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:40)
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I have no idea what the problem is. I have searched for the error online, but there are no similar cases.
Does anyone have any ideas?
You have probably already found the answer to this error, but a possible cause is that APIM cannot identify the correct topic connection factory for its own node because the 'WSO2_MB_STORE_DB' database is shared between the nodes.
If you refer to the Installing and configuring databases documentation, it mentions that each Traffic Manager node must have its own Message Broker database (WSO2_MB_STORE_DB).
Therefore, please check the database configurations in the <APIM_Home>/repository/conf/datasources/master-datasources.xml file on both nodes to see whether you have shared the 'WSO2_MB_STORE_DB' between them. If so, point them to two separate fresh databases to get rid of this error.
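As a hedged sketch, the relevant datasource entry in each node's master-datasources.xml might look like the following, with each node pointing at its own schema; the host, schema names, and credentials here are placeholders, not values from the original deployment:

<datasource>
    <name>WSO2_MB_STORE_DB</name>
    <jndiConfig>
        <name>WSO2MBStoreDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- Node 1; node 2 would point at its own fresh database, e.g. mb_store_node2 -->
            <url>jdbc:mysql://db-host:3306/mb_store_node1</url>
            <username>wso2user</username>
            <password>wso2pass</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>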
I am running WSO2 IS 5.0.0, with the Service Pack for IS 5.0.0 applied along with all the other security patches issued for that version of Identity Server and Carbon 4.2.0. My environment consists of 4 machines forming a cluster (using the WKA membership scheme and a load balancer (AWS ELB) with sticky sessions enabled). I am using MySQL (not the default H2 database). The machines on which the IS is deployed are Windows Server 2012 R2 (AWS EC2 machines).
I was constantly seeing "Deployment synchronization commit for tenant -1234 failed" in the console log files. I then applied the changes that @ycr proposed in order to disable Dep-Sync:
<DeploymentSynchronizer>
    <Enabled>false</Enabled>
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>false</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://svnrepo.example.com/repos/</SvnUrl>
    <SvnUser>username</SvnUser>
    <SvnPassword>password</SvnPassword>
    <SvnUrlAppendTenantId>false</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
and restarted all of my machines. I had no issues for about 2 weeks. Then suddenly, today (29.09.2016), I received the same error, with an additional stack trace, on two of my machines:
[2016-09-29 04:42:24,000] ERROR {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask} - Deployment synchronization commit for tenant -1234 failed
com.hazelcast.core.OperationTimeoutException: No response for 120000 ms. Aborting invocation! InvocationFuture{invocation=BasicInvocation{ serviceName='hz:impl:mapService', op=GetOperation{}, partitionId=247, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=60000, target=Address[172.31.2.242]:4000}, done=false} No response has been send backups-expected: 0 backups-completed: 0 reinvocations: 0
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.newOperationTimeoutException(BasicInvocation.java:782)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.waitForResponse(BasicInvocation.java:760)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.get(BasicInvocation.java:697)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.get(BasicInvocation.java:676)
at com.hazelcast.map.proxy.MapProxySupport.invokeOperation(MapProxySupport.java:257)
at com.hazelcast.map.proxy.MapProxySupport.getInternal(MapProxySupport.java:161)
at com.hazelcast.map.proxy.MapProxyImpl.get(MapProxyImpl.java:53)
at org.wso2.carbon.core.clustering.hazelcast.HazelcastDistributedMapProvider$DistMap.get(HazelcastDistributedMapProvider.java:130)
at org.wso2.carbon.caching.impl.CacheImpl.get(CacheImpl.java:182)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCPathCache.getPathID(JDBCPathCache.java:299)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCResourceDAO.getResourceID(JDBCResourceDAO.java:81)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCResourceDAO.resourceExists(JDBCResourceDAO.java:151)
at org.wso2.carbon.registry.core.jdbc.Repository.resourceExists(Repository.java:134)
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.resourceExists(EmbeddedRegistry.java:644)
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.resourceExists(CacheBackedRegistry.java:293)
at org.wso2.carbon.registry.core.session.UserRegistry.resourceExistsInternal(UserRegistry.java:777)
at org.wso2.carbon.registry.core.session.UserRegistry.access$800(UserRegistry.java:60)
at org.wso2.carbon.registry.core.session.UserRegistry$9.run(UserRegistry.java:760)
at org.wso2.carbon.registry.core.session.UserRegistry$9.run(UserRegistry.java:757)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.carbon.registry.core.session.UserRegistry.resourceExists(UserRegistry.java:757)
at org.wso2.carbon.deployment.synchronizer.internal.repository.CarbonRepositoryUtils.getDeploymentSyncConfigurationFromRegistry(CarbonRepositoryUtils.java:262)
at org.wso2.carbon.deployment.synchronizer.internal.repository.CarbonRepositoryUtils.getActiveSynchronizerConfiguration(CarbonRepositoryUtils.java:108)
at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.commit(DeploymentSynchronizerServiceImpl.java:96)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.deploymentSyncCommit(CarbonDeploymentSchedulerTask.java:207)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ------ End remote and begin local stack-trace ------.(Unknown Source)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.resolveResponse(BasicInvocation.java:862)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.resolveResponseOrThrowException(BasicInvocation.java:795)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.get(BasicInvocation.java:698)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.get(BasicInvocation.java:676)
at com.hazelcast.map.proxy.MapProxySupport.invokeOperation(MapProxySupport.java:257)
at com.hazelcast.map.proxy.MapProxySupport.getInternal(MapProxySupport.java:161)
at com.hazelcast.map.proxy.MapProxyImpl.get(MapProxyImpl.java:53)
at org.wso2.carbon.core.clustering.hazelcast.HazelcastDistributedMapProvider$DistMap.get(HazelcastDistributedMapProvider.java:130)
at org.wso2.carbon.caching.impl.CacheImpl.get(CacheImpl.java:182)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCPathCache.getPathID(JDBCPathCache.java:299)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCResourceDAO.getResourceID(JDBCResourceDAO.java:81)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCResourceDAO.resourceExists(JDBCResourceDAO.java:151)
at org.wso2.carbon.registry.core.jdbc.Repository.resourceExists(Repository.java:134)
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.resourceExists(EmbeddedRegistry.java:644)
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.resourceExists(CacheBackedRegistry.java:293)
at org.wso2.carbon.registry.core.session.UserRegistry.resourceExistsInternal(UserRegistry.java:777)
at org.wso2.carbon.registry.core.session.UserRegistry.access$800(UserRegistry.java:60)
at org.wso2.carbon.registry.core.session.UserRegistry$9.run(UserRegistry.java:760)
at org.wso2.carbon.registry.core.session.UserRegistry$9.run(UserRegistry.java:757)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.carbon.registry.core.session.UserRegistry.resourceExists(UserRegistry.java:757)
at org.wso2.carbon.deployment.synchronizer.internal.repository.CarbonRepositoryUtils.getDeploymentSyncConfigurationFromRegistry(CarbonRepositoryUtils.java:262)
at org.wso2.carbon.deployment.synchronizer.internal.repository.CarbonRepositoryUtils.getActiveSynchronizerConfiguration(CarbonRepositoryUtils.java:108)
at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.commit(DeploymentSynchronizerServiceImpl.java:96)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.deploymentSyncCommit(CarbonDeploymentSchedulerTask.java:207)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Are there any other configuration files that I need to tweak?
Thanks in advance.
Are you using any secondary user stores? If not, you do not need Dep-Sync; you can simply disable it. If you are using secondary user stores, the best option is still to avoid Dep-Sync and copy the secondary user store related configurations manually.
I am trying to integrate an external Cassandra with BAM. I have changed cassandra-component.xml.
1) I want to know how keyspaces are created on the external Cassandra, because when I run BAM I get the error "Unknown keyspace EVENT_KS".
2) I am getting the following error in my WSO2 logs:
TID: [0] [BAM] [2014-02-11 15:28:30,905] WARN {org.apache.hadoop.mapred.JobClient} - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. {org.apache.hadoop.mapred.JobClient}
TID: [0] [BAM] [2014-02-11 15:37:04,393] ERROR {org.apache.hadoop.hive.ql.exec.ExecDriver} - Job Submission failed with exception 'java.lang.RuntimeException(org.apache.thrift.transport.TTransportException)'
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getRangeMap(ColumnFamilyInputFormat.java:297)
at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:105)
at org.apache.hadoop.hive.cassandra.input.HiveCassandraStandardColumnInputFormat.getSplits(HiveCassandraStandardColumnInputFormat.java:291)
at org.apache.hadoop.hive.cassandra.input.HiveCassandraStandardColumnInputFormat.getSplits(HiveCassandraStandardColumnInputFormat.java:216)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:302)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:292)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:933)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:925)
at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:839)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:792)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1123)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:792)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:766)
at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:460)
at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:733)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.thrift.transport.TTransportException
As I remember, EVENT_KS is created only after the very first event is published to BAM. If you try to access it before it is created, errors may arise.
In BAM 2.4.0, EVENT_KS is created when you run BAM for the first time. (In previous versions, EVENT_KS was created when the very first event was published to BAM.) Please make sure your cassandra-component.xml looks similar to the snippet below, and also tell us which Cassandra version you are using.
<Cassandra>
    <Cluster>
        <Name>Test Cluster</Name>
        <DefaultPort>9160</DefaultPort>
        <Nodes>localhost:9160</Nodes>
        <AutoDiscovery disable="false" delay="1000"/>
    </Cluster>
</Cassandra>
First, you need to check the following:
1. Have you pointed cassandra-component.xml correctly to the external Cassandra? With this, your published data will be stored in the intended external Cassandra database.
2. Have you installed a toolbox with the intended stream definition inside, or otherwise triggered data to be published to BAM? In both cases, EVENT_KS will be created with a column family named after the stream.
3. Have you modified $BAM_HOME/repository/conf/datasources/master-datasources.xml to point to the external Cassandra database? You need to validate the Cassandra configuration provided in the WSO2BAM_CASSANDRA_DATASOURCE datasource. For the default toolboxes, this is the default Cassandra datasource being used, and by default it points to localhost. If you are using it in your Hive script, you need to change this configuration (a sketch follows this list).
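For reference, a hedged sketch of what that datasource entry might look like when pointed at an external node; the host and credentials are placeholders, and the JDBC URL format is an assumption based on the default BAM configuration:

<datasource>
    <name>WSO2BAM_CASSANDRA_DATASOURCE</name>
    <definition type="RDBMS">
        <configuration>
            <!-- Replace localhost with the external Cassandra host -->
            <url>jdbc:cassandra://cassandra-host:9160/EVENT_KS</url>
            <username>admin</username>
            <password>admin</password>
        </configuration>
    </definition>
</datasource>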
After much effort, I figured out that after changing the data directory in cassandra.yaml of the external Cassandra (version 1.1.3) to repository/database/cassandra/data, everything works fine with the external Cassandra. I want to know whether there is any other workaround for this external Cassandra configuration.
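For completeness, a sketch of the cassandra.yaml change described above; the absolute path prefix is an assumption, since the original only gives the relative directory:

# cassandra.yaml on the external Cassandra 1.1.3 node
data_file_directories:
    - /path/to/BAM_HOME/repository/database/cassandra/data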