Camunda Receive Task

I have a question: I have defined the following process in Camunda and deployed a servlet in the same application server that allows me to easily send signals to the deployed process.
However, every time I call the signal method using the process instance id, I get an exception:
03-Feb-2019 19:36:18.365 ERROR [o.c.b.engine.context] : ENGINE-16004 Exception while closing command context: cannot signal execution 9c3b0cbc-27da-11e9-ae94-0242ac140004: it has no current activity
org.camunda.bpm.engine.impl.pvm.PvmException: cannot signal execution 9c3b0cbc-27da-11e9-ae94-0242ac140004: it has no current activity
at org.camunda.bpm.engine.impl.pvm.runtime.PvmExecutionImpl.signal(PvmExecutionImpl.java:716)
at org.camunda.bpm.engine.impl.cmd.SignalCmd.execute(SignalCmd.java:63)
at org.camunda.bpm.engine.impl.interceptor.CommandExecutorImpl.execute(CommandExecutorImpl.java:24)
at org.camunda.bpm.engine.impl.interceptor.CommandContextInterceptor.execute(CommandContextInterceptor.java:104)
at org.camunda.bpm.engine.impl.interceptor.ProcessApplicationContextInterceptor.execute(ProcessApplicationContextInterceptor.java:66)
at org.camunda.bpm.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:30)
at org.camunda.bpm.engine.impl.RuntimeServiceImpl.signal(RuntimeServiceImpl.java:423)
at systems.appollo.ams.camunda.signaler.SignalerServlet.doGet(SignalerServlet.java:26)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:622)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1132)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1539)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1495)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
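For what it's worth, this exception typically means the id passed to signal() does not refer to an execution that is currently sitting in a wait state. Below is a minimal sketch (the activity id "waitForSignal" and the class wiring are assumptions, not the original servlet) of looking up the execution that is parked at the receive task and signalling that one instead of the process instance id:

import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.runtime.Execution;

public class ReceiveTaskSignaler {

    public void signalReceiveTask(String processInstanceId) {
        RuntimeService runtimeService =
                ProcessEngines.getDefaultProcessEngine().getRuntimeService();

        // Find the execution currently waiting at the receive task.
        // "waitForSignal" stands in for the receive task's id in the BPMN model.
        Execution execution = runtimeService.createExecutionQuery()
                .processInstanceId(processInstanceId)
                .activityId("waitForSignal")
                .singleResult();

        if (execution == null) {
            // The instance is not (or no longer) waiting at the receive task,
            // which is the situation that produces ENGINE-16004.
            throw new IllegalStateException("No execution is waiting at the receive task");
        }

        // Signal that specific execution rather than the process instance id.
        runtimeService.signal(execution.getId());
    }
}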

Related

EOFException in dataflow job writing to Spanner

I have a fairly simple dataflow job that reads from BigQuery and writes to a Spanner instance. All the reading completes, but for some reason the job consistently fails with an EOFException when starting the write to Spanner, inside the class MutationGroupEncoder. There are about 5.8M rows and 1.36GB of data that have been partitioned, but it fails after writing about 3500 of them.
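For context, a rough sketch of the shape of the pipeline (the project, table, instance, and column names here are placeholders, not our actual job):

import com.google.api.services.bigquery.model.TableRow;
import com.google.cloud.spanner.Mutation;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.spanner.SpannerIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.SimpleFunction;

public class BigQueryToSpanner {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply("ReadFromBigQuery",
                BigQueryIO.readTableRows().from("my-project:my_dataset.my_table"))
         .apply("RowToMutation", MapElements.via(new SimpleFunction<TableRow, Mutation>() {
             @Override
             public Mutation apply(TableRow row) {
                 // One Spanner mutation per BigQuery row.
                 return Mutation.newInsertOrUpdateBuilder("my_spanner_table")
                         .set("id").to((String) row.get("id"))
                         .set("value").to((String) row.get("value"))
                         .build();
             }
         }))
         .apply("WriteToSpanner", SpannerIO.write()
                 .withInstanceId("my-instance")
                 .withDatabaseId("my-database"));

        p.run();
    }
}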
The exception trace is below; none of our code is involved in the stack trace, it is all library code marshaling the output data around. I haven't been able to find anyone else reporting a similar bug. We are on version 2.5.0 of the Google Cloud Apache Beam SDK.
java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.lang.RuntimeException: java.io.EOFException
at com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn$1.output(GroupAlsoByWindowsParDoFn.java:183)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner$1.outputWindowedValue(GroupAlsoByWindowFnRunner.java:102)
at com.google.cloud.dataflow.worker.util.BatchGroupAlsoByWindowViaIteratorsFn.processElement(BatchGroupAlsoByWindowViaIteratorsFn.java:124)
at com.google.cloud.dataflow.worker.util.BatchGroupAlsoByWindowViaIteratorsFn.processElement(BatchGroupAlsoByWindowViaIteratorsFn.java:53)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.invokeProcessElement(GroupAlsoByWindowFnRunner.java:115)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.processElement(GroupAlsoByWindowFnRunner.java:73)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn.processElement(GroupAlsoByWindowsParDoFn.java:113)
at com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
at com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:200)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
at com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:391)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:360)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:288)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:134)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:114)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:101)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: java.lang.RuntimeException: java.io.EOFException
at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:36)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$BatchFn$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:185)
at org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:146)
at com.google.cloud.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:323)
at com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
at com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn$1.output(GroupAlsoByWindowsParDoFn.java:181)
... 21 more
Caused by: java.lang.RuntimeException: java.io.EOFException
at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decode(MutationGroupEncoder.java:271)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$BatchFn.processElement(SpannerIO.java:1030)
Caused by: java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:197)
at java.io.DataInputStream.readFully(DataInputStream.java:169)
at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.readBytes(MutationGroupEncoder.java:475)
at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decodePrimitive(MutationGroupEncoder.java:434)
at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decodeModification(MutationGroupEncoder.java:326)
at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decodeMutation(MutationGroupEncoder.java:280)
at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decode(MutationGroupEncoder.java:264)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$BatchFn.processElement(SpannerIO.java:1030)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$BatchFn$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:185)
at org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:146)
at com.google.cloud.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:323)
at com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
at com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn$1.output(GroupAlsoByWindowsParDoFn.java:181)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner$1.outputWindowedValue(GroupAlsoByWindowFnRunner.java:102)
at com.google.cloud.dataflow.worker.util.BatchGroupAlsoByWindowViaIteratorsFn.processElement(BatchGroupAlsoByWindowViaIteratorsFn.java:124)
at com.google.cloud.dataflow.worker.util.BatchGroupAlsoByWindowViaIteratorsFn.processElement(BatchGroupAlsoByWindowViaIteratorsFn.java:53)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.invokeProcessElement(GroupAlsoByWindowFnRunner.java:115)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.processElement(GroupAlsoByWindowFnRunner.java:73)
at com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn.processElement(GroupAlsoByWindowsParDoFn.java:113)
at com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
at com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:200)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
at com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:391)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:360)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:288)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:134)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:114)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:101)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I had the same issue and it was resolved after changing to 2.6.0; I was then able to upload 100 GB (350 million records). However, I observed that Dataflow jobs are slower compared with 2.4.0.

wso2am-2.2.0 distributed setup - not publishing events

We are using wso2am-2.2.0 (updated with WUM) and running gateway instances in a cluster as -Dprofile=gateway-manager.
In the logs after startup I see an exception:
Caused by: java.lang.NoClassDefFoundError: org/wso2/carbon/apimgt/micro/gateway/usage/publisher/util/UsagePublisherException
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getConstructor0(Class.java:3075)
at java.lang.Class.getDeclaredConstructor(Class.java:2178)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:78)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1032)
... 135 more
Caused by: java.lang.ClassNotFoundException: org.wso2.carbon.apimgt.micro.gateway.usage.publisher.util.UsagePublisherException
at org.wso2.carbon.webapp.mgt.l
When the gateway is used (an API is invoked), I see the following exception:
TID: [-1234] [] ERROR {org.wso2.carbon.apimgt.gateway.throttling.publisher.ThrottleDataPublisher} - Error while publishing throttling events to global policy server {org.wso2.carbon.apimgt.gateway.throttling.publisher.ThrottleDataPublisher}
java.lang.NullPointerException
at org.wso2.carbon.apimgt.gateway.throttling.publisher.ThrottleDataPublisher.publishNonThrottledEvent(ThrottleDataPublisher.java:121)
at org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler.doRoleBasedAccessThrottlingWithCEP(ThrottleHandler.java:349)
at org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler.doThrottle(ThrottleHandler.java:512)
at org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler.handleRequest(ThrottleHandler.java:460)
at org.apache.synapse.rest.API.process(API.java:325)
at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:90)
at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:69)
at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:303)
at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:92)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:337)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:158)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
It also seems that the throttling data are not propagated (I don't see them being processed by the analytics).

Aborting a web service call from the server side

We have a web service where the clients sometimes get stuck in a loop and hammer the server, using up threads and causing high CPU. We would like to force the client to hit a timeout, but without tying up a thread in our thread pool (by sleeping, for example). We're using Jetty to serve the web services:
at sun.reflect.GeneratedMethodAccessor102.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.sun.xml.ws.api.server.InstanceResolver$1.invoke(InstanceResolver.java:250)
at com.sun.xml.ws.server.InvokerTube$2.invoke(InvokerTube.java:149)
at com.sun.xml.ws.server.sei.SEIInvokerTube.processRequest(SEIInvokerTube.java:88)
at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:1063)
at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:979)
at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:950)
at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:825)
at com.sun.xml.ws.server.WSEndpointImpl$2.process(WSEndpointImpl.java:380)
at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:651)
at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:264)
at com.sun.xml.ws.transport.http.servlet.ServletAdapter.invokeAsync(ServletAdapter.java:218)
at com.sun.xml.ws.transport.http.servlet.WSServletDelegate.doGet(WSServletDelegate.java:159)
at com.sun.xml.ws.transport.http.servlet.WSServletDelegate.doPost(WSServletDelegate.java:194)
at com.sun.xml.ws.transport.http.servlet.WSServlet.doPost(WSServlet.java:80)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at org.eclipse.jetty.server.handler.DebugHandler.handle(DebugHandler.java:81)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:485)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:290)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:606)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:535)
at java.lang.Thread.run(Thread.java:745)
Any way to have the web server abort the work but NOT return any response to the client?
UPDATE: For clarification - we control ONLY the server. We do not control the clients.
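Not from the original thread, but a hedged sketch of one direction at the plain servlet level (not wired into the JAX-WS stack shown in the trace): the Servlet 3.0 async API lets the server park a request without occupying a worker thread while it waits out a server-imposed delay. As the comments note, completing the async context still flushes a response, so returning literally nothing to the client would additionally need container-specific connection handling.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet: parks abusive requests for 30s without holding a container pool thread.
@WebServlet(urlPatterns = "/ws/*", asyncSupported = true)
public class AbusiveClientDelayServlet extends HttpServlet {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        // startAsync() returns the container thread to the pool immediately.
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(0); // manage the lifetime ourselves instead of the container default

        scheduler.schedule(() -> {
            try {
                // Completing without writing a body still sends status/headers to the client.
                // Dropping the connection with no response at all would require closing the
                // underlying endpoint, which is container-specific and not shown here.
                ctx.complete();
            } catch (IllegalStateException ignored) {
                // the client may already have disconnected
            }
        }, 30, TimeUnit.SECONDS);
    }
}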
You may find this link informative!

Websockets on Atmosphere.js keep closing, possibly bringing the server down

We have a GWT application deployed on AWS ELB.
We're using Atmosphere.js with a websocket setup to push the number of live users to the UI every second.
I keep seeing this error on the UI:
Websocket closed, reason: Connection was closed abnormally (that is, with no close frame being sent). - wasClean: false
and on the server side I see
02-Feb-2017 09:27:08.486 SEVERE [http-nio-8080-exec-5] org.atmosphere.container.JSR356Endpoint.onError
java.io.EOFException
at org.apache.coyote.http11.upgrade.NioServletInputStream.doRead(NioServletInputStream.java:97)
at org.apache.coyote.http11.upgrade.AbstractServletInputStream.read(AbstractServletInputStream.java:124)
at org.apache.tomcat.websocket.server.WsFrameServer.onDataAvailable(WsFrameServer.java:60)
at org.apache.tomcat.websocket.server.WsHttpUpgradeHandler$WsReadListener.onDataAvailable(WsHttpUpgradeHandler.java:185)
at org.apache.coyote.http11.upgrade.AbstractServletInputStream.onDataAvailable(AbstractServletInputStream.java:198)
at org.apache.coyote.http11.upgrade.AbstractProcessor.upgradeDispatch(AbstractProcessor.java:96)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:647)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1520)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1476)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
EDIT:
Listener configuration:
At times when there is a lot of activity on this page, the server eventually crashes.
What could be the cause for the exception I see on the server side?

WSO2 BPS creating a BPEL human task throws 411 Error: Length Required

I have a BPEL process that creates a human task; creating the RemoteTask throws the following exception (note that the BPEL process runs fine up to the human task, and I can create the human task by invoking its WS endpoint directly):
Error sending message to Axis2 for ODE mex
{PartnerRoleMex#hqejbhcnphrbsqc3x2nneo [PID {http://b2bg2.imtech.realdolmen.com/bps/sample}HumanTaskTest-17] calling org.apache.ode.bpel.epr.WSAEndpoint#4e72a575.ErrorHandling(...) Status REQUEST}
org.apache.axis2.AxisFault: Transport error: 411 Error: Length Required
at org.apache.axis2.transport.http.HTTPSender.handleResponse(HTTPSender.java:331)
at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:196)
at org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:77)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:451)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:278)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutOnlyAxisOperationClient.executeImpl(OutOnlyAxisOperation.java:297)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.wso2.carbon.bpel.core.ode.integration.utils.AxisServiceUtils.invokeService(AxisServiceUtils.java:316)
at org.wso2.carbon.bpel.core.ode.integration.PartnerService.invoke(PartnerService.java:324)
at org.wso2.carbon.bpel.core.ode.integration.BPELMessageExchangeContextImpl.invokePartner(BPELMessageExchangeContextImpl.java:43)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.invoke(BpelRuntimeContextImpl.java:897)
at org.apache.ode.bpel.runtime.INVOKE.run(INVOKE.java:130)
at sun.reflect.GeneratedMethodAccessor65.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.ode.jacob.vpu.JacobVPU$JacobThreadImpl.run(JacobVPU.java:451)
at org.apache.ode.jacob.vpu.JacobVPU.execute(JacobVPU.java:139)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.execute(BpelRuntimeContextImpl.java:1002)
at org.apache.ode.bpel.engine.PartnerLinkMyRoleImpl.invokeNewInstance(PartnerLinkMyRoleImpl.java:208)
at org.apache.ode.bpel.engine.BpelProcess$1.invoke(BpelProcess.java:283)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:224)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:279)
at org.apache.ode.bpel.engine.BpelProcess.handleJobDetails(BpelProcess.java:434)
at org.apache.ode.bpel.engine.BpelEngineImpl.onScheduledJob(BpelEngineImpl.java:558)
at org.apache.ode.bpel.engine.BpelServerImpl.onScheduledJob(BpelServerImpl.java:467)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:633)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:627)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:298)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:253)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:627)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:611)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
To me it looks like neither the Content-Length header nor the "chunked" transfer encoding is present when the web services are called internally between the BPEL process and the HT implementation.
Apparently the WSDL of the HT (human task) must have a valid service endpoint address in the service declaration. The default WSDL created by Eclipse (WSO2 DEV) has the endpoint set to www.example.org.
I assumed the internal transport would be used (since the call is between processes on the same server), but apparently that is not the case :(