EOFException in dataflow job writing to Spanner - google-cloud-platform

I have a fairly simple Dataflow job that reads from BigQuery and writes to a Spanner instance. All of the reading has completed, but for some reason it consistently fails with an EOFException when starting the write to Spanner, inside the class MutationGroupEncoder. There are about 5.8M rows and 1.36 GB of data that have been partitioned, but it fails after writing about 3,500 of them.
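The pipeline is essentially the following shape (a minimal sketch, not our exact code; the table names, columns, and instance/database ids are placeholders):

import com.google.api.services.bigquery.model.TableRow;
import com.google.cloud.spanner.Mutation;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.spanner.SpannerIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;

public class BigQueryToSpanner {
  // Placeholder mapping: converts one BigQuery row into one Spanner mutation.
  static class ToMutationFn extends DoFn<TableRow, Mutation> {
    @ProcessElement
    public void processElement(ProcessContext c) {
      TableRow row = c.element();
      c.output(Mutation.newInsertOrUpdateBuilder("my_table") // placeholder table
          .set("id").to((String) row.get("id"))              // placeholder column
          .build());
    }
  }

  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
    p.apply("ReadFromBigQuery",
            BigQueryIO.readTableRows().from("my-project:dataset.table")) // placeholder
     .apply("ToMutations", ParDo.of(new ToMutationFn()))
     .apply("WriteToSpanner",
            SpannerIO.write()
                .withInstanceId("my-instance")    // placeholder
                .withDatabaseId("my-database"));  // placeholder
    p.run();
  }
}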
The exception trace is below. None of our code is involved in the stack trace; it is all library code that is marshaling the output data around. I haven't been able to find anyone else reporting a similar bug. We are on version 2.5.0 of the Google Cloud Apache Beam SDK.
java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.lang.RuntimeException: java.io.EOFException
at com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn$1.output(GroupAlsoByWindowsParDoFn.java:183)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner$1.outputWindowedValue(GroupAlsoByWindowFnRunner.java:102)
at com.google.cloud.dataflow.worker.util.BatchGroupAlsoByWindowViaIteratorsFn.processElement(BatchGroupAlsoByWindowViaIteratorsFn.java:124)
at com.google.cloud.dataflow.worker.util.BatchGroupAlsoByWindowViaIteratorsFn.processElement(BatchGroupAlsoByWindowViaIteratorsFn.java:53)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.invokeProcessElement(GroupAlsoByWindowFnRunner.java:115)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.processElement(GroupAlsoByWindowFnRunner.java:73)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn.processElement(GroupAlsoByWindowsParDoFn.java:113)
at com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
at com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:200)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
at com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:391)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:360)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:288)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:134)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:114)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:101)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: java.lang.RuntimeException: java.io.EOFException
at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:36)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$BatchFn$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:185)
at org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:146)
at com.google.cloud.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:323)
at com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
at com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn$1.output(GroupAlsoByWindowsParDoFn.java:181)
... 21 more
Caused by: java.lang.RuntimeException: java.io.EOFException
at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decode(MutationGroupEncoder.java:271)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$BatchFn.processElement(SpannerIO.java:1030)
Caused by: java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:197)
at java.io.DataInputStream.readFully(DataInputStream.java:169)
at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.readBytes(MutationGroupEncoder.java:475)
at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decodePrimitive(MutationGroupEncoder.java:434)
at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decodeModification(MutationGroupEncoder.java:326)
at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decodeMutation(MutationGroupEncoder.java:280)
at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decode(MutationGroupEncoder.java:264)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$BatchFn.processElement(SpannerIO.java:1030)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$BatchFn$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:185)
at org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:146)
at com.google.cloud.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:323)
at com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
at com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn$1.output(GroupAlsoByWindowsParDoFn.java:181)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner$1.outputWindowedValue(GroupAlsoByWindowFnRunner.java:102)
at com.google.cloud.dataflow.worker.util.BatchGroupAlsoByWindowViaIteratorsFn.processElement(BatchGroupAlsoByWindowViaIteratorsFn.java:124)
at com.google.cloud.dataflow.worker.util.BatchGroupAlsoByWindowViaIteratorsFn.processElement(BatchGroupAlsoByWindowViaIteratorsFn.java:53)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.invokeProcessElement(GroupAlsoByWindowFnRunner.java:115)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.processElement(GroupAlsoByWindowFnRunner.java:73)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn.processElement(GroupAlsoByWindowsParDoFn.java:113)
at com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
at com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:200)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
at com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:391)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:360)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:288)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:134)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:114)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:101)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

I had the same issue, and it was resolved after changing to 2.6.0; I was then able to upload 100 GB (350 million records). However, I observed that Dataflow jobs are slower compared with 2.4.0.
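For anyone hitting the same thing: the change amounts to bumping the SDK dependency to 2.6.0, e.g. in Maven (coordinates shown for the Beam GCP IO module; adjust if your build uses the google-cloud-dataflow-java-sdk-all distribution instead):

<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-sdks-java-io-google-cloud-platform</artifactId>
  <version>2.6.0</version>
</dependency>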

Related

grpc StatusRuntimeException on Dataflow

I have a Dataflow pipeline in which I consume Pub/Sub messages, process them, and then publish back to Pub/Sub.
Whenever there is too much computation (i.e. when I increase the amount of processing for each message), I get an exception:
java.util.concurrent.ExecutionException: org.apache.beam.vendor.grpc.v1p13p1.io.grpc.StatusRuntimeException: CANCELLED: cancelled before receiving half close
What causes this error? How can I avoid it?
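The pipeline has roughly this shape (a minimal sketch; the subscription, topic, and the processing step are placeholders for the actual treatment logic):

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class PubSubRoundTrip {
  // Placeholder standing in for the real per-message treatment; the error
  // shows up when this step becomes computationally heavy.
  static String process(String msg) {
    return msg.toUpperCase();
  }

  public static void main(String[] args) {
    // Run as a streaming job on Dataflow.
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
    p.apply("ReadMessages", PubsubIO.readStrings()
            .fromSubscription("projects/my-project/subscriptions/in")) // placeholder
     .apply("Process", MapElements.into(TypeDescriptors.strings())
            .via(PubSubRoundTrip::process))
     .apply("PublishResults", PubsubIO.writeStrings()
            .to("projects/my-project/topics/out")); // placeholder
    p.run();
  }
}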
Full stacktrace:
org.apache.beam.vendor.grpc.v1p13p1.io.grpc.StatusRuntimeException: CANCELLED: call already cancelled
org.apache.beam.vendor.grpc.v1p13p1.io.grpc.Status.asRuntimeException(Status.java:517)
org.apache.beam.vendor.grpc.v1p13p1.io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl.onNext(ServerCalls.java:335)
org.apache.beam.sdk.fn.stream.DirectStreamObserver.onNext(DirectStreamObserver.java:98)
org.apache.beam.sdk.fn.stream.SynchronizedStreamObserver.onNext(SynchronizedStreamObserver.java:46)
org.apache.beam.runners.fnexecution.control.FnApiControlClient.handle(FnApiControlClient.java:84)
org.apache.beam.runners.dataflow.worker.fn.control.RegisterAndProcessBundleOperation.start(RegisterAndProcessBundleOperation.java:254)
org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
org.apache.beam.runners.dataflow.worker.fn.control.BeamFnMapTaskExecutor.execute(BeamFnMapTaskExecutor.java:125)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1283)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:147)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:1020)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)

Camunda Receive Task

I have defined a process with a receive task in Camunda, and deployed a servlet in the same application server that allows me to send signals to the deployed process easily.
However, every time I call the signal method using the process instance id, I get an exception:
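For reference, the servlet call boils down to runtimeService.signal(...), as visible in the trace below. A hedged sketch of the pattern that avoids this error, since signal() must target an execution that is actually sitting in a wait state ("waitState" is a placeholder for the receive task's id in the BPMN model):

import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.runtime.Execution;

public class SignalExample {
  public static void signalReceiveTask(RuntimeService runtimeService,
                                       String processInstanceId) {
    // Look up the child execution currently waiting in the receive task;
    // signaling an execution that has no current activity fails with
    // "cannot signal execution ...: it has no current activity".
    Execution execution = runtimeService.createExecutionQuery()
        .processInstanceId(processInstanceId)
        .activityId("waitState") // placeholder: the receive task's id
        .singleResult();
    if (execution != null) {
      runtimeService.signal(execution.getId());
    }
  }
}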
03-Feb-2019 19:36:18.365 ERROR [o.c.b.engine.context] : ENGINE-16004 Exception while closing command context: cannot signal execution 9c3b0cbc-27da-11e9-ae94-0242ac140004: it has no current activity
org.camunda.bpm.engine.impl.pvm.PvmException: cannot signal execution 9c3b0cbc-27da-11e9-ae94-0242ac140004: it has no current activity
at org.camunda.bpm.engine.impl.pvm.runtime.PvmExecutionImpl.signal(PvmExecutionImpl.java:716)
at org.camunda.bpm.engine.impl.cmd.SignalCmd.execute(SignalCmd.java:63)
at org.camunda.bpm.engine.impl.interceptor.CommandExecutorImpl.execute(CommandExecutorImpl.java:24)
at org.camunda.bpm.engine.impl.interceptor.CommandContextInterceptor.execute(CommandContextInterceptor.java:104)
at org.camunda.bpm.engine.impl.interceptor.ProcessApplicationContextInterceptor.execute(ProcessApplicationContextInterceptor.java:66)
at org.camunda.bpm.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:30)
at org.camunda.bpm.engine.impl.RuntimeServiceImpl.signal(RuntimeServiceImpl.java:423)
at systems.appollo.ams.camunda.signaler.SignalerServlet.doGet(SignalerServlet.java:26)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:622)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1132)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1539)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1495)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)

wso2am-2.2.0 distributed setup - not publishing events

Using wso2am-2.2.0 updated with WUM, we are running gateway instances in a cluster with the -Dprofile=gateway-manager profile.
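That is, each gateway is started roughly like this (a sketch; <APIM_HOME> is a placeholder for the product home directory):

sh <APIM_HOME>/bin/wso2server.sh -Dprofile=gateway-manager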
In the logs after startup I see this exception:
Caused by: java.lang.NoClassDefFoundError: org/wso2/carbon/apimgt/micro/gateway/usage/publisher/util/UsagePublisherException
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getConstructor0(Class.java:3075)
at java.lang.Class.getDeclaredConstructor(Class.java:2178)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:78)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1032)
... 135 more
Caused by: java.lang.ClassNotFoundException: org.wso2.carbon.apimgt.micro.gateway.usage.publisher.util.UsagePublisherException
at org.wso2.carbon.webapp.mgt.l
When the gateway is used (an API is invoked), I see the following exception:
TID: [-1234] [] ERROR {org.wso2.carbon.apimgt.gateway.throttling.publisher.ThrottleDataPublisher} - Error while publishing throttling events to global policy server {org.wso2.carbon.apimgt.gateway.throttling.publisher.ThrottleDataPublisher}
java.lang.NullPointerException
at org.wso2.carbon.apimgt.gateway.throttling.publisher.ThrottleDataPublisher.publishNonThrottledEvent(ThrottleDataPublisher.java:121)
at org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler.doRoleBasedAccessThrottlingWithCEP(ThrottleHandler.java:349)
at org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler.doThrottle(ThrottleHandler.java:512)
at org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler.handleRequest(ThrottleHandler.java:460)
at org.apache.synapse.rest.API.process(API.java:325)
at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:90)
at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:69)
at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:303)
at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:92)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:337)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:158)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Also, it seems the throttling data is not propagated (I don't see it being processed by the analytics).

How can this WSO2 AM 2.2.0 database error be fixed?

I am using WSO2 AM 2.2.0 with a local database. I changed the H2 database to my local database, and every table was created and works correctly. When this error happens the system isn't down and continues to work, but I keep getting the error continuously.
Here is the error:
[2018-05-20 07:50:24,935] ERROR - JDBCReporter Error when reporting timers
com.microsoft.sqlserver.jdbc.SQLServerException: The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Parameter 6 (""): The supplied value is not a valid instance of data type float. Check the source data for invalid values. An example of an invalid value is data of numeric type with scale greater than precision.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:259)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1547)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatementBatch(SQLServerPreparedStatement.java:2678)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtBatchExecCmd.doExecute(SQLServerPreparedStatement.java:2547)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7347)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2713)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:224)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:204)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:2460)
at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114)
at com.sun.proxy.$Proxy19.executeBatch(Unknown Source)
at org.wso2.carbon.metrics.jdbc.reporter.JDBCReporter.reportTimers(JDBCReporter.java:389)
at org.wso2.carbon.metrics.jdbc.reporter.JDBCReporter.report(JDBCReporter.java:200)
at com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162)
at com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Looks like a metrics DB issue. If you're not using metrics, you can disable them in the <APIM_HOME>/repository/conf/metrics.xml file.
Ref: https://docs.wso2.com/display/AM220/Enabling+Metrics+and+Storage+Types
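A rough sketch of that change (element names as I recall them for this version's metrics.xml; verify against your actual file, and note that everything else in it can stay as-is):

<Metrics xmlns="http://wso2.org/projects/carbon/metrics.xml">
    <!-- Disables metrics collection entirely -->
    <Enabled>false</Enabled>
    <!-- ... other elements unchanged ... -->
</Metrics>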
Maybe you are running into this bug:
https://support.microsoft.com/de-at/help/970519/you-receive-a-the-incoming-tabular-data-stream-tds-remote-procedure-ca
There are workarounds available on that page.

Deployment synchronization commit for tenant -1234 failed WSO2 IS 5.0.0

I am running WSO2 IS 5.0.0. I have the SP for IS 5.0.0 applied, along with all the other security patches issued for that version of Identity Server and for Carbon 4.2.0. My environment consists of 4 machines forming a cluster (using the WKA membership scheme and a load balancer (AWS ELB) with sticky sessions enabled). I am using MySQL (not the default H2 database). The machines on which the IS is deployed are Windows Server 2012 R2 (AWS EC2 machines).
I was constantly seeing "Deployment synchronization commit for tenant -1234 failed" in the console log files. After I applied the changes that #ycr proposed in order to disable Dep-Sync:
<DeploymentSynchronizer>
    <Enabled>false</Enabled>
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>false</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://svnrepo.example.com/repos/</SvnUrl>
    <SvnUser>username</SvnUser>
    <SvnPassword>password</SvnPassword>
    <SvnUrlAppendTenantId>false</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
and restarted all of my machines, I had no issues for about two weeks. Then suddenly I received the same error, but with an additional stack trace, on two of my machines today (29.09.2016):
[2016-09-29 04:42:24,000] ERROR {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask} - Deployment synchronization commit for tenant -1234 failed
com.hazelcast.core.OperationTimeoutException: No response for 120000 ms. Aborting invocation! InvocationFuture{invocation=BasicInvocation{ serviceName='hz:impl:mapService', op=GetOperation{}, partitionId=247, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=60000, target=Address[172.31.2.242]:4000}, done=false} No response has been send backups-expected: 0 backups-completed: 0 reinvocations: 0
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.newOperationTimeoutException(BasicInvocation.java:782)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.waitForResponse(BasicInvocation.java:760)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.get(BasicInvocation.java:697)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.get(BasicInvocation.java:676)
at com.hazelcast.map.proxy.MapProxySupport.invokeOperation(MapProxySupport.java:257)
at com.hazelcast.map.proxy.MapProxySupport.getInternal(MapProxySupport.java:161)
at com.hazelcast.map.proxy.MapProxyImpl.get(MapProxyImpl.java:53)
at org.wso2.carbon.core.clustering.hazelcast.HazelcastDistributedMapProvider$DistMap.get(HazelcastDistributedMapProvider.java:130)
at org.wso2.carbon.caching.impl.CacheImpl.get(CacheImpl.java:182)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCPathCache.getPathID(JDBCPathCache.java:299)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCResourceDAO.getResourceID(JDBCResourceDAO.java:81)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCResourceDAO.resourceExists(JDBCResourceDAO.java:151)
at org.wso2.carbon.registry.core.jdbc.Repository.resourceExists(Repository.java:134)
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.resourceExists(EmbeddedRegistry.java:644)
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.resourceExists(CacheBackedRegistry.java:293)
at org.wso2.carbon.registry.core.session.UserRegistry.resourceExistsInternal(UserRegistry.java:777)
at org.wso2.carbon.registry.core.session.UserRegistry.access$800(UserRegistry.java:60)
at org.wso2.carbon.registry.core.session.UserRegistry$9.run(UserRegistry.java:760)
at org.wso2.carbon.registry.core.session.UserRegistry$9.run(UserRegistry.java:757)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.carbon.registry.core.session.UserRegistry.resourceExists(UserRegistry.java:757)
at org.wso2.carbon.deployment.synchronizer.internal.repository.CarbonRepositoryUtils.getDeploymentSyncConfigurationFromRegistry(CarbonRepositoryUtils.java:262)
at org.wso2.carbon.deployment.synchronizer.internal.repository.CarbonRepositoryUtils.getActiveSynchronizerConfiguration(CarbonRepositoryUtils.java:108)
at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.commit(DeploymentSynchronizerServiceImpl.java:96)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.deploymentSyncCommit(CarbonDeploymentSchedulerTask.java:207)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ------ End remote and begin local stack-trace ------.(Unknown Source)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.resolveResponse(BasicInvocation.java:862)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.resolveResponseOrThrowException(BasicInvocation.java:795)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.get(BasicInvocation.java:698)
at com.hazelcast.spi.impl.BasicInvocation$InvocationFuture.get(BasicInvocation.java:676)
at com.hazelcast.map.proxy.MapProxySupport.invokeOperation(MapProxySupport.java:257)
at com.hazelcast.map.proxy.MapProxySupport.getInternal(MapProxySupport.java:161)
at com.hazelcast.map.proxy.MapProxyImpl.get(MapProxyImpl.java:53)
at org.wso2.carbon.core.clustering.hazelcast.HazelcastDistributedMapProvider$DistMap.get(HazelcastDistributedMapProvider.java:130)
at org.wso2.carbon.caching.impl.CacheImpl.get(CacheImpl.java:182)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCPathCache.getPathID(JDBCPathCache.java:299)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCResourceDAO.getResourceID(JDBCResourceDAO.java:81)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCResourceDAO.resourceExists(JDBCResourceDAO.java:151)
at org.wso2.carbon.registry.core.jdbc.Repository.resourceExists(Repository.java:134)
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.resourceExists(EmbeddedRegistry.java:644)
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.resourceExists(CacheBackedRegistry.java:293)
at org.wso2.carbon.registry.core.session.UserRegistry.resourceExistsInternal(UserRegistry.java:777)
at org.wso2.carbon.registry.core.session.UserRegistry.access$800(UserRegistry.java:60)
at org.wso2.carbon.registry.core.session.UserRegistry$9.run(UserRegistry.java:760)
at org.wso2.carbon.registry.core.session.UserRegistry$9.run(UserRegistry.java:757)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.carbon.registry.core.session.UserRegistry.resourceExists(UserRegistry.java:757)
at org.wso2.carbon.deployment.synchronizer.internal.repository.CarbonRepositoryUtils.getDeploymentSyncConfigurationFromRegistry(CarbonRepositoryUtils.java:262)
at org.wso2.carbon.deployment.synchronizer.internal.repository.CarbonRepositoryUtils.getActiveSynchronizerConfiguration(CarbonRepositoryUtils.java:108)
at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.commit(DeploymentSynchronizerServiceImpl.java:96)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.deploymentSyncCommit(CarbonDeploymentSchedulerTask.java:207)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Are there any other configuration files that I need to tweak?
Thanks in advance.
Are you using any secondary user stores? If not, you do not need Dep-Sync and can simply disable it. If you are using secondary user stores, the best option is still to avoid Dep-Sync and to copy the secondary user store configurations to each node manually.
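A minimal sketch of that change, i.e. the block already shown in the question reduced to the entries that turn Dep-Sync off, in <IS_HOME>/repository/conf/carbon.xml:

<DeploymentSynchronizer>
    <Enabled>false</Enabled>
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>false</AutoCheckout>
</DeploymentSynchronizer>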